Graphics Processing Unit

{{Useful Notes}}
[[File:Nvidia@12nm@Turing@TU104@GeForce RTX 2080@S TAIWAN 1841A1 PKYN44.000 TU104-400-A1 DSC06154-DSC06272 - ZS-retouched (50914918427).jpg|thumb|Massive graphical performance. One tiny chip.]]
A GPU is the common term for the piece of a computer's or console's hardware that is dedicated to drawing things, i.e. graphics. The term "GPU" was coined by nVidia upon the launch of their GeForce line of hardware. This was largely a marketing stunt, though the GeForce did have some fairly advanced processing features in it. However, the term GPU has since become the accepted shorthand for ''any'' graphics processing chip, even pre-GeForce ones.
 
== The Future ==
 
GPUs today can execute a lot of programs that formerly only CPUs could, but with radically different performance characteristics. A typical home GPU can run hundreds of threads at once, while a typical mid-range home CPU can run 4-16 threads. On the other hand, each GPU thread progresses far more slowly than a CPU thread. Thus if you have thousands of almost identical tasks that need to run at once, like the many pixels in a graphical scene or the many objects in a game with physics, a GPU might be able to do the work a hundred times faster than a CPU. But if you only have a few things to do and they have to happen in sequence, a CPU-style architecture will give vastly better performance. As general-purpose GPU programming progresses, GPUs might get used for more and more things until they're nearly as indispensable as CPUs. Indeed, for some tasks a strong GPU is already required: from the late 2010s on, more and more consumer devices include powerful GPUs for things other than gaming, often related to machine learning.
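
To make the "thousands of almost identical tasks" idea concrete, here is a minimal CUDA sketch (the kernel and buffer names are purely illustrative, not taken from any real engine) in which every pixel of an image gets its own GPU thread performing the same tiny operation:

<syntaxhighlight lang="cuda">
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one element -- thousands of near-identical
// jobs running side by side, which is exactly where a GPU outpaces a CPU.
__global__ void scale_pixels(float* pixels, float gain, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pixels[i] *= gain;  // the same tiny job, repeated across the whole image
    }
}

int main() {
    const int n = 1 << 20;  // roughly a million "pixels"
    float* pixels;
    cudaMallocManaged(&pixels, n * sizeof(float));
    for (int i = 0; i < n; ++i) pixels[i] = 0.5f;

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_pixels<<<blocks, threads>>>(pixels, 2.0f, n);
    cudaDeviceSynchronize();

    printf("pixel[0] = %f\n", pixels[0]);  // expect 1.0
    cudaFree(pixels);
    return 0;
}
</syntaxhighlight>

On a CPU the same work would be one loop marching through a million elements in sequence; on a GPU the loop body becomes the kernel, and the hardware spreads the million iterations across its thread blocks.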
 
== Some notable GPUs over the years ==
In 2008, Intel announced they would try their hand in the dedicated graphics market once more with a radical approach. Traditionally, lighting is estimated using shading techniques run on each pixel; Intel's approach was to use [[wikipedia:Ray tracing (graphics)|ray tracing]], which at the time was a hugely computationally expensive operation. Intel's design was to take the Pentium architecture, shrink it down to modern process sizes, and modify it for graphics-related instructions. A special version of ''[[Enemy Territory: Quake Wars|Enemy Territory Quake Wars]]'' was used to demonstrate it. The project was axed in late 2009.
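
As a rough illustration of why ray tracing was considered so expensive, the sketch below (written as CUDA purely for illustration, with made-up names and a deliberately simplified orthographic camera) launches one thread per pixel and tests that pixel's ray against every sphere in the scene. A real ray tracer adds secondary rays for shadows, reflections, and refraction on top of this per-pixel cost, which is what made real-time performance out of reach in 2008:

<syntaxhighlight lang="cuda">
#include <cstdio>
#include <cuda_runtime.h>

struct Sphere { float x, y, z, r; };

// One thread per pixel: shoot a ray straight along the z axis and test it
// against every sphere. With an orthographic camera the hit test reduces to
// a 2D distance check; the sphere's z would only matter for depth sorting,
// which this toy ignores.
__global__ void trace(unsigned char* image, int width, int height,
                      const Sphere* spheres, int num_spheres) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    float ox = px - width * 0.5f;   // ray origin, centred on the image
    float oy = py - height * 0.5f;
    unsigned char shade = 0;
    for (int s = 0; s < num_spheres; ++s) {
        float dx = ox - spheres[s].x, dy = oy - spheres[s].y;
        if (dx * dx + dy * dy < spheres[s].r * spheres[s].r) shade = 255;  // hit
    }
    image[py * width + px] = shade;
}

int main() {
    const int width = 256, height = 256;
    unsigned char* image;
    Sphere* spheres;
    cudaMallocManaged(&image, width * height);
    cudaMallocManaged(&spheres, sizeof(Sphere));
    spheres[0] = {0.0f, 0.0f, 50.0f, 40.0f};  // one sphere in front of the camera

    dim3 threads(16, 16);
    dim3 blocks((width + 15) / 16, (height + 15) / 16);
    trace<<<blocks, threads>>>(image, width, height, spheres, 1);
    cudaDeviceSynchronize();

    printf("center pixel hit: %d\n", image[(height / 2) * width + width / 2]);
    cudaFree(image);
    cudaFree(spheres);
    return 0;
}
</syntaxhighlight>

Even in this stripped-down form, every pixel has to loop over every object in the scene, and the cost grows again with each extra bounce a real renderer adds.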
 
nVidia tried their hand at "real time" ray tracing with the GeForce GTX 480 using a proprietary API. However, nVidia's efforts would not see wide adoption until 2018, with the release of the GeForce 20 series and its RTX hardware-accelerated ray tracing.
 
----