Graphics processing unit

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. In certain CPUs, they are embedded on the CPU die.[1]

The term "GPU" was coined by Sony in reference to the PlayStation console's Toshiba-designed Sony GPU in 1994.[2] The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU".[3] It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines".[4] Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002.[5]

History

1970s

Arcade system boards have been using specialized graphics chips since the 1970s. In early video game hardware, the RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.[6]

Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight (1975), Sea Wolf (1976) and Space Invaders (1978).[7][8][9] The Namco Galaxian arcade system in 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds.[10] The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega and Taito.[11][12]

In the home market, the Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor.[13] The Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer).[14] 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction.[15] ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU.[16]

1980s

The NEC µPD7220 was the first implementation of a PC graphics display processor as a single Large Scale Integration (LSI) integrated circuit chip, enabling the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology. It became the best-known GPU up until the mid-1980s.[17] It was the first fully integrated VLSI (very large-scale integration) metal-oxide-semiconductor (NMOS) graphics display processor for PCs, supported up to 1024×1024 resolution, and laid the foundations for the emerging PC graphics market. It was used in a number of graphics cards, and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units.[18] The Williams Electronics arcade games Robotron 2084, Joust, Sinistar, and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps.[19][20]

In 1984, Hitachi released the ARTC HD63484, the first major CMOS graphics processor for PCs. The ARTC was capable of displaying up to 4K resolution in monochrome mode, and it was used in a number of PC graphics cards and terminals during the late 1980s.[21] In 1985, the Commodore Amiga featured a custom graphics chip with a blitter unit accelerating bitmap manipulation, line drawing, and area fill functions. It also included a coprocessor with its own simple instruction set, capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches, sprite multiplexing, and hardware windowing) or of driving the blitter. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor.[22] It could run general-purpose code, but it had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards.

In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in electronic hardware. Sharp's X68000, released in 1987, used a custom graphics chipset[23] with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields,[24] eventually serving as a development machine for Capcom's CP System arcade board. Fujitsu later competed with the FM Towns computer, released in 1989 with support for a full 16,777,216 color palette.[25] In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21[26] and Taito Air System.[27]

IBM's proprietary Video Graphics Array (VGA) display standard was introduced in 1987, with a maximum resolution of 640×480 pixels. In November 1988, NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA) to develop and promote a Super VGA (SVGA) computer display standard as a successor to IBM's proprietary VGA display standard. Super VGA enabled graphics display resolutions up to 800×600 pixels, a 56% increase in pixel count.[28]

1990s

In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised.[29] The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips.[30][31] By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and these coprocessors faded away from the PC market.

Throughout the 1990s, 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games within Windows 95 and later.

In the early and mid-1990s, real-time 3D graphics were becoming increasingly common in arcade, computer and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22, and Sega Model 2, and the fifth-generation video game consoles such as the Saturn, PlayStation and Nintendo 64. Arcade systems such as the Sega Model 2 and Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before it appeared in consumer graphics cards.[32][33] Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system,[34] began working on integrating T&L into a single LSI solution for use in home computers in 1995;[35][36] the result was the Fujitsu Pinolite, the first 3D geometry processor for personal computers, released in 1997.[37] The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996.[38] In 1997, Mitsubishi released the 3Dpro/2MP, a fully featured GPU capable of transformation and lighting, for workstations and Windows NT desktops;[39] ATI utilized it for their FireGL 4000 graphics card, released in 1997.[40]

The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994.[2]

In the PC world, notable failed first tries for low-cost 3D graphics chips were the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were even pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, performance 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D GUI acceleration entirely) such as the PowerVR and the 3dfx Voodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were among the first to do this well enough to be worthy of note. In 1997, Rendition went a step further by collaborating with Hercules and Fujitsu on a "Thriller Conspiracy" project which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia's GeForce 256. This card, designed to reduce the load placed upon the system's CPU, never made it to market.

OpenGL appeared in the early '90s as a professional graphics API, but it originally suffered from performance issues that allowed the Glide API to step in and become a dominant force on the PC in the late '90s.[41] However, these issues were quickly overcome and the Glide API fell by the wayside. Software implementations of OpenGL were common during this time, although the influence of OpenGL eventually led to widespread hardware support. Over time, a parity emerged between features offered in hardware and those offered in OpenGL. DirectX became popular among Windows game developers during the late '90s. Unlike OpenGL, Microsoft insisted that DirectX provide strict one-to-one support of hardware. This approach made DirectX less popular as a standalone graphics API initially, since many GPUs provided their own specific features, which existing OpenGL applications were already able to benefit from, leaving DirectX often one generation behind. (See: Comparison of OpenGL and Direct3D.)

Over time, Microsoft began to work more closely with hardware developers, and started to target the releases of DirectX to coincide with those of the supporting graphics hardware. Direct3D 5.0 was the first version of the burgeoning API to gain widespread adoption in the gaming market, and it competed directly with many more-hardware-specific, often proprietary graphics libraries, while OpenGL maintained a strong following. Direct3D 7.0 introduced support for hardware-accelerated transform and lighting (T&L) for Direct3D, while OpenGL had this capability already exposed from its inception. 3D accelerator cards moved beyond being just simple rasterizers to add another significant hardware stage to the 3D rendering pipeline. The Nvidia GeForce 256 (also known as NV10) was the first consumer-level card released on the market with hardware-accelerated T&L, while professional 3D cards already had this capability. Hardware transform and lighting, both already existing features of OpenGL, came to consumer-level hardware in the '90s and set the precedent for later pixel shader and vertex shader units which were far more flexible and programmable.

2000 to 2010

Nvidia was first to produce a chip capable of programmable shading: the GeForce 3 (code-named NV20). Each pixel could now be processed by a short "program" that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in the Xbox console, it competed with the PlayStation 2 (which used a custom vector DSP for hardware-accelerated vertex processing, commonly referred to as VU0/VU1). The earliest incarnations of shader execution engines used in the Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units which had their own resources, with pixel shaders having much tighter constraints (since they are executed at much higher frequencies than vertices). Pixel shading engines were actually more akin to a highly customizable function block and didn't really "run" a program. Many of these disparities between vertex and pixel shading would not be addressed until much later, with the Unified Shader Model.

By October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating-point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.[42]

With the introduction of Nvidia's GeForce 8 series, and then new generic stream processing units, GPUs became more generalized computing devices. Today, parallel GPUs have begun making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU (general-purpose computing on GPUs), has found its way into fields as diverse as machine learning,[43] oil exploration, scientific image processing, linear algebra,[44] statistics,[45] 3D reconstruction and even stock options pricing determination. GPGPU at the time was the precursor to what we now call compute shaders (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree, treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader. This entails some overhead, since units like the scan converter are involved where they are not really needed (nor are the triangles of any interest, except to invoke the pixel shader). Over the years, the energy consumption of GPUs has increased, and several techniques have been proposed to manage it.[46]

Nvidia's CUDA platform, first introduced in 2007,[47] was the earliest widely adopted programming model for GPU computing. More recently, OpenCL has become broadly supported. OpenCL is an open standard defined by the Khronos Group which allows for the development of code for both GPUs and CPUs with an emphasis on portability.[48] OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report by Evans Data, OpenCL is the GPGPU development platform most widely used by developers in both the US and Asia Pacific.

2010 to present

In 2010, Nvidia began a partnership with Audi to power their cars' dashboards, using Tegra GPUs to provide increased functionality for the cars' navigation and entertainment systems.[49] Advancements in GPU technology in cars have helped push self-driving technology.[50] AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M series discrete GPUs for use in mobile devices.[51] The Kepler line of graphics cards by Nvidia came out in 2012 and was used in Nvidia's 600 and 700 series cards. A new feature in this GPU microarchitecture was GPU Boost, a technology that adjusts the clock speed of a video card up or down according to its power draw.[52] The Kepler microarchitecture was manufactured on the 28 nm process.

The PS4 and Xbox One were released in 2013; they both use GPUs based on AMD's Radeon HD 7850 and 7790.[53] Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC, the Taiwan Semiconductor Manufacturing Company, which was manufacturing using the 28 nm process at the time. Compared to the 40 nm technology of the past, this manufacturing process allowed a 20 percent boost in performance while drawing less power.[54][55] Virtual reality headsets have very high system requirements; VR headset manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release.[56][57] Pascal is the next generation of consumer graphics cards by Nvidia, released in 2016. The GeForce 10 series of cards belongs to this generation. They are made using the 16 nm manufacturing process, which improves upon the 28 nm process of previous microarchitectures.[58] Nvidia has released one non-consumer card under the new Volta architecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, and HBM2. Tensor cores are cores specially designed for deep learning, while high-bandwidth memory is stacked, lower-clocked memory that offers an extremely wide memory bus, useful for the Titan V's intended purpose. To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" prefix it adds to consumer gaming cards.

On August 20, 2018, Nvidia launched the RTX 20 series GPUs, which add ray-tracing cores to GPUs, improving their performance on lighting effects.[59] Polaris 11 and Polaris 10 GPUs from AMD are fabricated on a 14-nanometer process. Their release resulted in a substantial increase in the performance per watt of AMD video cards.[60] AMD has also released the Vega GPU series for the high-end market as a competitor to Nvidia's high-end Pascal cards, also featuring HBM2 like the Titan V.

GPU companies

Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia and AMD/ATI were the market share leaders, with 49.4%, 27.8% and 20.6% market share respectively. However, those numbers include Intel's integrated graphics solutions as GPUs. Not counting those, Nvidia and AMD controlled nearly 100% of the market as of 2018, with respective market shares of 66% and 33%.[61] In addition, S3 Graphics[62] and Matrox[63] produce GPUs. Modern smartphones mostly use Adreno GPUs from Qualcomm, PowerVR GPUs from Imagination Technologies and Mali GPUs from ARM.

Computational functions

Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000-HD7000 even lack 2D acceleration; it has to be emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations; they are especially suited to other embarrassingly parallel problems.

With the emergence of deep learning, the importance of GPUs has increased. In research done by Indigo, it was found that while training deep learning neural networks, GPUs can be 250 times faster than CPUs. The explosive growth of deep learning in recent years has been attributed to the emergence of general-purpose GPUs.[64] There has been some level of competition in this area from ASICs, most prominently the Tensor Processing Unit (TPU) made by Google. However, ASICs require changes to existing code, and GPUs remain very popular.

GPU accelerated video decoding

Most GPUs made since 1995 support the YUV color space and hardware overlays, important for digital video playback, and many GPUs made since 2000 also support MPEG primitives such as motion compensation and iDCT. This process of hardware accelerated video decoding, where portions of the video decoding process and video post-processing are offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding" or "GPU hardware assisted video decoding".

More recent graphics cards even decode high-definition video on the card, offloading the central processing unit. The most common APIs for GPU accelerated video decoding are DxVA for Microsoft Windows operating system and VDPAU, VAAPI, XvMC, and XvBA for Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded with MPEG-1, MPEG-2, MPEG-4 ASP (MPEG-4 Part 2), MPEG-4 AVC (H.264 / DivX 6), VC-1, WMV3/WMV9, Xvid / OpenDivX (DivX 4), and DivX 5 codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2.

Video decoding processes that can be accelerated

The video decoding processes that can be accelerated by today's modern GPU hardware are:

  • Motion compensation (mocomp)

  • Inverse discrete cosine transform (iDCT)

  • Inverse telecine 3:2 and 2:2 pull-down correction

  • Inverse modified discrete cosine transform (iMDCT)

  • In-loop deblocking filter

  • Intra-frame prediction

  • Inverse quantization (IQ)

  • Variable-length decoding (VLD), more commonly known as slice-level acceleration

  • Spatial-temporal deinterlacing and automatic interlace/progressive source detection

  • Bitstream processing (Context-adaptive variable-length coding/Context-adaptive binary arithmetic coding) and perfect pixel positioning.

GPU forms


In personal computers, there are two main forms of GPUs. Each has many synonyms:[65]

  • Dedicated graphics card - also called discrete.

  • Integrated graphics - also called: shared graphics solutions, integrated graphics processors (IGP), or unified memory architecture (UMA).

Usage specific GPU

Most GPUs are designed for a specific usage, real-time 3D graphics or other mass calculations:

  1. Gaming: GeForce GTX, RTX; Nvidia Titan X; AMD Radeon HD; AMD Radeon R5, R7, R9, RX, Vega and Navi series

  2. Cloud gaming: Nvidia Grid; AMD Radeon Sky

  3. Workstation: Nvidia Quadro; Nvidia Titan X; AMD FirePro; AMD Radeon Pro; AMD Radeon VII

  4. Cloud workstation: Nvidia Tesla; AMD FireStream

  5. Artificial intelligence cloud: Nvidia Tesla; AMD Radeon Instinct

  6. Automated/driverless car: Nvidia Drive PX

Dedicated graphics cards

The GPUs of the most powerful class typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP) and can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available.

A dedicated GPU is not necessarily removable, nor does it necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that dedicated graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Further, this RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes, systems with dedicated, discrete GPUs were called "DIS" systems,[66] as opposed to "UMA" systems (see next section). Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.

Technologies such as SLI by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics.

Integrated graphics processing unit

Integrated graphics processing units (IGPU), also called integrated graphics, shared graphics solutions, integrated graphics processors (IGP) or unified memory architecture (UMA), utilize a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto the motherboard as part of the chipset, or on the same die as the CPU (like AMD APU or Intel HD Graphics). On certain motherboards,[67] AMD's IGPs can use dedicated sideport memory: a separate fixed block of high-performance memory that is dedicated for use by the GPU. In early 2007, computers with integrated graphics accounted for about 90% of all PC shipments.[68] They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit to play 3D games or run graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004.[69] However, modern integrated graphics processors such as AMD Accelerated Processing Units and Intel HD Graphics are more than capable of handling 2D graphics or low-stress 3D graphics.

Since GPU computations are extremely memory-intensive, integrated processing may find itself competing with the CPU for the relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs can have up to 29.856 GB/s of memory bandwidth from system RAM (a figure consistent with, for example, dual-channel DDR3-1866: 2 channels × 8 bytes × 1866 MT/s ≈ 29.9 GB/s), whereas a graphics card may have up to 264 GB/s of bandwidth between its RAM and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency.[70] Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.[71][72]

Hybrid graphics processing

This newer class of GPUs competes with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's HyperMemory and Nvidia's TurboCache.

Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. These share memory with the system and have a small dedicated memory cache, to make up for the high latency of the system RAM. Technologies within PCI Express can make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory.

Stream processing and general purpose GPUs (GPGPU)

It is becoming increasingly common to use a general purpose graphics processing unit (GPGPU) as a modified form of stream processor (or a vector processor), running compute kernels. This concept turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power, as opposed to being hard wired solely to do graphical operations. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics cards" above) GPU designers, AMD and Nvidia, are beginning to pursue this approach with an array of applications. Both Nvidia and AMD have teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project, for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.[73][74]

GPGPU can be used for many types of embarrassingly parallel tasks, including ray tracing. It is generally suited to high-throughput computations that exhibit data parallelism, exploiting the wide-vector SIMD architecture of the GPU.

Furthermore, GPU-based high performance computers are starting to play a significant role in large-scale modelling. Three of the 10 most powerful supercomputers in the world take advantage of GPU acceleration.[75]

GPUs support API extensions to the C programming language such as OpenCL and OpenMP. Furthermore, each GPU vendor introduced its own API which works only with its cards: AMD APP SDK and CUDA from AMD and Nvidia, respectively. These technologies allow specified functions called compute kernels from a normal C program to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was also the first API to allow CPU-based applications to directly access the resources of a GPU for more general-purpose computing without the limitations of using a graphics API.
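The listing below is a minimal sketch of this model using CUDA (one of the vendor APIs named above): a compute kernel, here an invented saxpy function, operates on a large buffer in parallel on the GPU, while ordinary C/C++ host code allocates device memory, copies data, launches the kernel and copies the result back. The names, array size and block size are illustrative assumptions, not taken from any particular source.

#include <cstdio>
#include <cuda_runtime.h>

// Compute kernel: each GPU thread updates one element of y (y = a*x + y).
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard against the last, partially filled block
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;           // about one million elements
    size_t bytes = n * sizeof(float);

    // Ordinary host (CPU) buffers.
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Device (GPU) buffers and host-to-device copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel across enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Copy the result back; the CPU is free for other work while the GPU runs.
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", y[0]);

    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}

Conceptually the same structure, a kernel plus host code for buffer management and launch, applies to OpenCL and DirectCompute as well; only the API calls differ.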

Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically the performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture.[76][77] However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there.[78][79] Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can readily simultaneously interpret hundreds of thousands of very small programs.
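As a hedged sketch of the interpretation approach described above (not any specific published system), the CUDA kernel below assigns one GPU thread to each pair of candidate program and test case and interprets a tiny, invented stack-based instruction set; the opcodes, array layouts and error measure are illustrative assumptions.

#include <cuda_runtime.h>
#include <math.h>

// Invented opcodes for a tiny stack machine; real GP systems define their own.
enum Op { PUSH_X = 0, PUSH_1 = 1, ADD = 2, MUL = 3 };

// One thread interprets one (candidate program, test case) pair.
// progs:   num_progs rows of prog_len opcodes each
// inputs:  num_cases input values
// targets: num_cases expected outputs
// errors:  num_progs * num_cases absolute errors, reduced to per-program fitness later
__global__ void eval_programs(const int *progs, int prog_len, int num_progs,
                              const float *inputs, const float *targets,
                              int num_cases, float *errors) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= num_progs * num_cases) return;

    int p = tid / num_cases;        // which candidate program
    int c = tid % num_cases;        // which test case
    float x = inputs[c];

    float stack[16];
    int sp = 0;
    for (int i = 0; i < prog_len; ++i) {
        switch (progs[p * prog_len + i]) {
            case PUSH_X: if (sp < 16) stack[sp++] = x;    break;
            case PUSH_1: if (sp < 16) stack[sp++] = 1.0f; break;
            case ADD: if (sp >= 2) { stack[sp - 2] += stack[sp - 1]; --sp; } break;
            case MUL: if (sp >= 2) { stack[sp - 2] *= stack[sp - 1]; --sp; } break;
        }
    }
    float result = (sp > 0) ? stack[sp - 1] : 0.0f;
    errors[tid] = fabsf(result - targets[c]);
}

A host program would launch this kernel with num_progs × num_cases threads and then reduce the per-case errors into a fitness value for each candidate program, either on the CPU or in a second kernel.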

Some modern workstation GPUs, such as the Nvidia Quadro workstation cards using the Volta and Turing architectures, feature dedicated processing cores for tensor-based deep learning applications. In Nvidia's current series of GPUs these cores are called Tensor Cores.[80] These GPUs usually have significant FLOPS performance increases, utilizing 4×4 matrix multiply-accumulate operations, resulting in hardware performance up to 128 TFLOPS in some applications.[81] These tensor cores are also supposed to appear in consumer cards running the Turing architecture, and possibly in the Navi series of consumer cards from AMD.[82]
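For concreteness, the basic operation a tensor core accelerates is a fused 4×4 matrix multiply-accumulate, D = A × B + C. The plain CUDA function below computes the same result with ordinary scalar arithmetic; it only illustrates the math and is not the hardware path or Nvidia's WMMA API.

// Scalar reference for the 4x4 multiply-accumulate a tensor core performs in hardware:
// D = A * B + C, with all matrices stored row-major in 16-element arrays.
__host__ __device__ void mma4x4(const float A[16], const float B[16],
                                const float C[16], float D[16]) {
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col) {
            float acc = C[row * 4 + col];            // start from the accumulator input
            for (int k = 0; k < 4; ++k)
                acc += A[row * 4 + k] * B[k * 4 + col];
            D[row * 4 + col] = acc;                  // fused multiply-accumulate result
        }
    }
}

Tensor cores perform such an operation on small matrix tiles in a single step, typically with reduced-precision inputs (e.g. FP16) and wider accumulation, which is where the large FLOPS figures come from.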

External GPU (eGPU)

An external GPU is a graphics processor located outside of the housing of the computer. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing the latest games, or for other graphically intensive tasks, such as editing video.

Therefore, it is desirable to be able to attach a GPU to some external bus of a notebook. PCI Express is the only bus commonly used for this purpose. The port may be, for example, an ExpressCard or mPCIe port (PCIe ×1, up to 5 or 2.5 Gbit/s respectively) or a Thunderbolt 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively). Those ports are only available on certain notebook systems.[83][84]

Official vendor support for external GPUs has gained traction recently. One notable milestone was Apple's decision to officially support external GPUs with macOS High Sierra 10.13.4.[85] There are also several major hardware vendors (HP, Alienware, Razer) releasing Thunderbolt 3 eGPU enclosures.[86][87][88] This support has continued to fuel eGPU implementations by enthusiasts.[89]

Sales

In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million.[90]

See also

  • Texture mapping unit (TMU)

  • Render output unit (ROP)

  • Brute force attack

  • Computer hardware

  • Computer monitor

  • GPU cache

  • Manycore

  • Physics processing unit (PPU)

  • Tensor processing unit (TPU)

  • Ray tracing hardware

  • Vision processing unit (VPU)

  • Vector processor

  • Video card

  • Video Display Controller

  • Video game console

  • Virtualized GPU

  • AI accelerator


  • Comparison of AMD graphics processing units

  • Comparison of Nvidia graphics processing units

  • Comparison of Intel graphics processing units

  • Intel GMA

  • Larrabee

  • Nvidia PureVideo - the bit-stream technology from Nvidia used in their graphics chips to accelerate video decoding on hardware GPU with DXVA.

  • SoC

  • UVD (Unified Video Decoder) – the video decoding bit-stream technology from ATI to support hardware (GPU) decode with DXVA


  • OpenGL API

  • DirectX Video Acceleration (DxVA) API for Microsoft Windows operating-system.

  • Mantle (API)

  • Vulkan (API)

  • Video Acceleration API (VA API)

  • VDPAU (Video Decode and Presentation API for Unix)

  • X-Video Bitstream Acceleration (XvBA), the X11 equivalent of DXVA for MPEG-2, H.264, and VC-1

  • X-Video Motion Compensation – the X11 equivalent for MPEG-2 video codec only


  • GPU cluster

  • Mathematica – includes built-in support for CUDA and OpenCL GPU execution

  • Molecular modeling on GPU

  • Deeplearning4j – open-source, distributed deep learning for Java


References

1. Denny Atkin. "Computer Shopper: The Right GPU for You". Archived from the original on 2007-05-06. Retrieved 2007-05-15.
2. https://www.computer.org/publications/tech-news/chasing-pixels/is-it-time-to-rename-the-gpu
3. "NVIDIA Launches the World's First Graphics Processing Unit: GeForce 256". Nvidia. 31 August 1999. Archived from the original on 12 April 2016. Retrieved 28 March 2016.
4. "Graphics Processing Unit (GPU)". Nvidia. Archived from the original on 8 April 2016. Retrieved 29 March 2016.
5. Pabst, Thomas (18 July 2002). "ATi Takes Over 3D Technology Leadership With Radeon 9700". Tom's Hardware. Retrieved 29 March 2016.
6. Hague, James (September 10, 2013). "Why Do Dedicated Game Consoles Exist?". Programming in the 21st Century. Archived from the original on May 4, 2015.
7. "mame/8080bw.c at master · mamedev/mame · GitHub". GitHub. Archived from the original on 2014-11-21.
8. "mame/mw8080bw.c at master · mamedev/mame · GitHub". GitHub. Archived from the original on 2014-11-21.
9. "Arcade/SpaceInvaders – Computer Archeology". computerarcheology.com. Archived from the original on 2014-09-13.
10. "mame/galaxian.c at master · mamedev/mame · GitHub". GitHub. Archived from the original on 2014-11-21.
11. "mame/galaxian.c at master · mamedev/mame · GitHub". GitHub. Archived from the original on 2014-11-21.
12. "MAME - src/mame/drivers/galdrvr.c". archive.org. Archived from the original on 3 January 2014.
13. Springmann, Alessondra. "Atari 2600 Teardown: What's Inside Your Old Console?". The Washington Post. Archived from the original on July 14, 2015. Retrieved July 14, 2015.
14. "What are the 6502, ANTIC, CTIA/GTIA, POKEY, and FREDDIE chips?". Atari8.com. Archived from the original on 2016-03-05.
15. Wiegers, Karl E. (April 1984). "Atari Display List Interrupts". Compute! (47): 161. Archived from the original on 2016-03-04.
16. Wiegers, Karl E. (December 1985). "Atari Fine Scrolling". Compute! (67): 110. Archived from the original on 2006-02-16.
17. F. Robert A. Hopgood, Roger J. Hubbold, David A. Duce, eds. (1986). Advances in Computer Graphics II. Springer. p. 169. ISBN 9783540169109. "Perhaps the best known one is the NEC 7220."
18. "Famous Graphics Chips: NEC µPD7220 Graphics Display Controller". IEEE Computer Society.
19. Riddle, Sean. "Blitter Information". Archived from the original on 2015-12-22.
20. Wolf, Mark J.P. (June 2012). Before the Crash: Early Video Game History. Wayne State University Press. p. 185. ISBN 978-0814337226.