Graphics Processing Unit
[[File:Nvidia@12nm@Turing@TU104@GeForce RTX 2080@S TAIWAN 1841A1 PKYN44.000 TU104-400-A1 DSC06154-DSC06272 - ZS-retouched (50914918427).jpg|thumb|Massive graphical performance. One tiny chip.]]
A GPU is the common term for the piece of a computer's or console's hardware that is dedicated to drawing things, i.e. graphics. The term "GPU" was coined by nVidia upon the launch of its GeForce line of hardware. This was largely a marketing stunt, though the GeForce did have some fairly advanced processing features. However, "GPU" has since become the accepted shorthand for ''any'' graphics processing chip, even pre-GeForce ones.
Both consoles and regular computers have had various kinds of GPUs. The two platforms developed two divergent kinds of 2D GPUs, which converged with the advent of 3D rendering.
This kind of GPU, pioneered by the TMS 9918/9928 (see below) and popularized by the [[NES]], [[Sega Master System]] and [[Sega Genesis]], forces a particular kind of look onto the games that use it. You know this look: everything is composed of a series of small images, or tiles, that are used in various configurations to build the world.
In this GPU, the tilemaps and the sprites are all built up into the final image by the GPU hardware itself. This drastically reduces the amount of processing power needed -- all the CPU needs to do is upload new parts of the tilemaps as the user scrolls around, adjust the scroll position of the tilemaps, and say where the sprites go.
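That division of labor can be sketched in code. The following is a minimal, hypothetical model of a tile-based GPU (the sizes, names, and sprite format are illustrative, not any real chip's): the "hardware" walks the tilemap with a scroll offset and overlays sprites, while the CPU only ever touches the small tilemap, scroll, and sprite tables.

```python
# Sketch of how a tile-based 2D GPU composes a frame (hypothetical sizes).
TILE = 8              # tile width/height in pixels
MAP_W, MAP_H = 4, 4   # tilemap size in tiles (tiny, for illustration)

def render_frame(tilemap, tiles, sprites, scroll_x, scroll_y):
    """tilemap: 2D list of tile indices; tiles: index -> 8x8 pixel block;
    sprites: list of (x, y, 8x8 pixel block, transparent_value)."""
    w, h = MAP_W * TILE, MAP_H * TILE
    frame = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # wrap-around scrolling, as on most tile hardware
            sx, sy = (x + scroll_x) % w, (y + scroll_y) % h
            tile = tiles[tilemap[sy // TILE][sx // TILE]]
            frame[y][x] = tile[sy % TILE][sx % TILE]
    for (px, py, pixels, transparent) in sprites:
        for y in range(TILE):
            for x in range(TILE):
                if pixels[y][x] != transparent and 0 <= py + y < h and 0 <= px + x < w:
                    frame[py + y][px + x] = pixels[y][x]
    return frame
```

Note that the per-pixel loop lives entirely inside the "GPU" function; a game's CPU-side code would only edit `tilemap`, `sprites`, and the scroll values between frames.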
Computers had different needs. Computer 2D rendering was driven by the needs of applications more than games, so rendering needed to be fairly generic. Such hardware had a framebuffer -- an image representing what the user sees -- and [[Video RAM|video memory]] to store extra images for later use.
Such hardware had fast routines for drawing colored rectangles and lines. But the most useful operation was the blit, or BitBlt: a fast video memory copy. Combined with video memory, the user could store an image in VRAM and copy it to the framebuffer as needed. Some advanced 2D hardware had scaled blits (so the destination rectangle could be larger or smaller than the source image) and other special blit features.
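A blit is simple enough to sketch in full. This is a software stand-in for what the accelerator did in hardware; the optional color key (for skipping "transparent" pixels, a common sprite trick) and the function name are illustrative assumptions, not a real API.

```python
# A minimal software "BitBlt": copy a rectangle of source pixels into a
# destination framebuffer, clipping at the edges.
def blit(dst, src, dx, dy, key=None):
    """Copy 2D pixel list `src` into `dst` at (dx, dy); pixels equal to
    `key` are skipped (a colour-keyed blit, useful for sprites)."""
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            if key is not None and pixel == key:
                continue  # "transparent" pixel: leave the background alone
            if 0 <= dy + y < len(dst) and 0 <= dx + x < len(dst[0]):
                dst[dy + y][dx + x] = pixel
    return dst
```

The "static background plus a few sprites" pattern described below is just this function called once for the background image and once per sprite (with a color key), every frame.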
The CPU's effort is more involved in this case. Every element must be explicitly drawn by a CPU command, and the background was generally the most complicated. This is why many early computer games used a static background: a single background image in video memory that they blitted to the framebuffer each frame, followed by a few sprites on top of it. Later PC games before the 3D era managed to equal or exceed the best contemporary consoles like the [[Super NES]], both through raw power (the 80486DX2/66, a common gaming processor of the early '90s, ran at 66 MHz, almost 10 times the clock speed of the [[Sega Genesis]], and could run 32-bit code as an "extension" to 16-bit DOS) and through various programming tricks that took advantage of quirks in the way early PCs and VGA worked.
Before the rise of Windows in the mid-1990s, most PC games couldn't take advantage of newer graphics cards with hardware blitting support; the CPU had to do all the work, and this made both a fast CPU and a fast path to the video RAM essential. PCs with local-bus video and 80486 processors were a must for games like ''[[Doom]]'' and ''[[Heretic]]''; playing them on an old 386 with ISA video was possible, but wouldn't be very fun.
The basic 3D GPU is much more complicated, but it isn't as limiting as the NES-style 2D GPU.
The early forms of this GPU were just triangle/texture renderers. The CPU had to position each triangle properly each frame. Later forms, like the first GeForce chip, incorporated triangle transform and lighting into the hardware. This allowed the CPU to say, "here's a bunch of triangles; render them," and then go do something else while they were rendered.
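The "transform" half of hardware transform and lighting is, at its core, one matrix multiply per vertex. The sketch below shows that loop (with made-up names and a toy translation matrix); moving exactly this kind of loop off the CPU is what the first GeForce's hardware T&L amounted to.

```python
# Sketch of vertex transform: every (x, y, z, 1) vertex is multiplied by
# a single 4x4 matrix to move it from model space toward screen space.
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_vertices(matrix, vertices):
    """Transform every homogeneous vertex by `matrix` -- the per-frame
    work that hardware T&L took over from the CPU."""
    return [mat_vec(matrix, v) for v in vertices]
```

Because each vertex is independent of every other, the hardware can grind through them in parallel while the CPU goes and does something else.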
Around the time of the GeForce 3, something fundamental changed in GPU design.
Take the application of textures to a polygon. The earliest of these GPUs used a very simple function for each pixel of a triangle:
{{quote|1=final color = texture color}}
A simple equation. But then developers wanted to apply two textures to a triangle, so the function became more complex:
{{quote|1=final color = texture 1 color × texture 2 color}}
Interesting though this may be, developers wanted more say in how the textures were combined. That is, developers wanted to insert more general math into the process. So GPU makers added a few more switches and complications to the process.
The GeForce 3 basically decided to say "Screw that!" and let the developers do arbitrary stuff:
{{quote|1=final color = (whatever program the developer writes, given the texture colors)}}
What used to be a simple function had now become a user-written ''program''. The program took texture colors and could do fairly arbitrary computations with them.
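The shift can be written out as plain functions. This is an illustrative analogy, not real GPU code: the fixed-function path bakes in one combine operation, while the programmable path accepts any developer-supplied function of the sampled texture colors.

```python
# Fixed-function era: the hardware offered one hard-wired combine,
# e.g. "modulate" (component-wise multiply of two RGB colors).
def fixed_combine(tex1, tex2):
    return [a * b for a, b in zip(tex1, tex2)]

# Programmable era (GeForce 3 onward): the "shader" argument is
# arbitrary developer-written code run for each pixel.
def programmable_combine(shader, tex1, tex2):
    return shader(tex1, tex2)
```

The point of the change is visible in the signatures: before, developers could only flip switches that selected among functions like `fixed_combine`; after, they supplied the function itself.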
In the early days, "fairly arbitrary computations" were quite limited. Nowadays, not so much. These GPU programs, called ''shaders'', commonly do things like video decompression and other sundry activities. Modern GPUs can also serve as a General Purpose GPU (GPGPU), where people harness the GPU's massive calculation performance to do work that would take a CPU much longer.
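The GPGPU idea in miniature: express the work as one small function applied independently to every element of a large data set, so that thousands of GPU threads can each take one element. This pure-Python stand-in (the names are illustrative) runs serially, but the shape of the computation is what matters.

```python
# GPGPU-style work: a "kernel" applied independently to every element.
# No element depends on another, which is exactly what lets a real GPU
# hand each element to a separate hardware thread.
def gpu_style_map(kernel, data):
    return [kernel(x) for x in data]
```

Workloads that fit this shape (image filters, physics on many particles, matrix math) fly on a GPU; workloads full of branches and cross-element dependencies do not, which leads directly to the CPU/GPU trade-offs below.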
GPUs and CPUs are built around some of the same general components, but they're put together in very different ways. A chip only has a limited amount of space to put circuits on, and GPUs and CPUs use the available space in different ways. The differences can be briefly summarized as follows:
In the end, CPUs can execute a wide variety of programs at acceptable speed. GPUs can execute some special types of programs far faster than a CPU, but anything else they execute much more slowly, if they can execute it at all.
GPUs today can execute a lot of programs that formerly only CPUs could, but with radically different performance characteristics. A typical home GPU can run hundreds of threads at once, while a typical mid-range home CPU can run only a dozen or so.
=== 1970s ===
'''Motorola 6845''' (1977)
The first programmable home-computer GPU. ANTIC was ahead of its time; it was a full microprocessor with its own instruction set and direct access to system memory, much like the blitter in the [[Amiga]] 6 years later (which, not coincidentally, was designed by the same person). By tweaking its "display list" or instruction queue, some very wild special effects were possible, including smooth animation and 3D effects. CTIA and GTIA provided up to 128 or 256 colors, respectively, a huge number for the time.
=== 1980s ===
'''IBM Monochrome Display Adapter''' and '''Color Graphics Adapter''' (1981)
=== 1990s ===
'''S3 86C911''' (1991)
'''SGI Reality Co-Processor''' (1996)
Developed for the [[Nintendo 64]], this GPU brought hardware texture filtering and anti-aliasing to consoles.
----
=== 2000s ===
'''3dfx Voodoo5''' (2000)
'''ATi Flipper''' (2001)
This was the GPU for the [[Nintendo GameCube]].
----
'''ATi Radeon 9700''' (2002)
What was actually stunning about this graphics card was that it supported the new [[Direct X]] 9.0 ''before it was officially released.'' Not only that: due to nVidia making a critical error (see below), it was a [[Curb Stomp Battle]] against the GeForce FX in any game using [[Direct X]] 9.
----
'''nVidia GeForce FX''' (2003)
After an unimpressive launch with the overheating, under-performing FX 5800 model, the succeeding FX 5900 was on par with ATi's cards in DirectX 7 and 8 games, but nVidia made some ill-advised decisions in implementing the shader processor across the series. Direct3D 9 required a minimum of 24-bit accuracy in computations, but nVidia's design was optimized around 16-bit math. It could do 32-bit, but only at ''half'' performance. nVidia had assumed that developers would write code specifically for its hardware. They didn't, and the card performed barely half as well as the competing Radeons in shader-heavy [[Direct X]] 9 games.
The aforementioned FX 5800 introduced the idea of GPU coolers that took up a whole expansion slot all by themselves, now standard on anything above an entry-level card. Unfortunately, nVidia got the execution of ''that'' wrong as well, using an undersized fan that constantly ran at full speed and made the card ridiculously loud. This eventually gave way to a more reasonable cooler in the FX 5900, and some fondly remembered [[Self-Deprecation]] videos from nVidia. In a bit of irony, the GeForce FX was developed by the team that came from 3dfx, which nVidia had bought a few years earlier.
'''Intel Larrabee'''
In 2008, Intel announced it would try its hand in the dedicated graphics market once more, with a radical approach. Traditionally, lighting is estimated using shading techniques applied to each pixel; Intel's approach was to use real-time ray tracing instead.
nVidia tried its hand at "real time" ray tracing with the GeForce GTX 480 using a proprietary API. However, ray tracing would not see adoption until 2018, when the GeForce 20 series introduced RTX hardware-accelerated ray tracing.
----
[[Category:How Video Game Specs Work]]
[[Category:Graphics Processing Unit]]