Pixel vs. Texel

    Digital images are composed of dots arranged on a grid. Each of these little dots is called a pixel, a contraction of the term 'picture element'. The same is true of the texture images that get drawn onto polygons in 3D games; a texel ('texture element') is simply a pixel within a texture.
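
    To make the idea concrete, here is a minimal sketch in C of a texture as nothing more than a grid of values, one per texel (the size and grayscale format are made up for the example):

        #define TEX_W 4
        #define TEX_H 4

        /* A tiny 4x4 grayscale "texture": a row-major grid of values,
           one float per texel. */
        static float texels[TEX_W * TEX_H];

        /* Look up one texel by its (x, y) position on the grid. */
        float get_texel(int x, int y)
        {
            return texels[y * TEX_W + x];
        }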

    As part of their performance specifications, graphics cards often describe how many texels they can process each second. This is their texture fillrate: roughly, the number of texels a GPU can fetch and filter from textures in a single second.
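
    The quoted figure is normally a theoretical peak, commonly estimated as one texel per texture unit per clock cycle; real throughput is lower. A rough back-of-the-envelope sketch in C, with made-up specs:

        #include <stdio.h>

        int main(void)
        {
            double core_clock_hz = 500e6;  /* hypothetical 500 MHz core clock */
            int    texture_units = 8;      /* hypothetical 8 texture mapping units */

            /* Peak fillrate: one texel fetched per unit per clock. */
            double peak_texels_per_sec = core_clock_hz * texture_units;

            printf("Peak texture fillrate: %.1f gigatexels/s\n",
                   peak_texels_per_sec / 1e9);   /* 4.0 for these numbers */
            return 0;
        }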

    To access a texture, one usually employs filtering. This is a way to fill in the gaps "between" texels by blending neighboring texels together, so that you do not see hard edges where one texel ends and the next begins.
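
    Bilinear filtering, the most common form, blends the four texels nearest to the sample point. A minimal sketch for a grayscale texture, assuming texel centers at integer coordinates and clamp-to-edge addressing:

        #include <math.h>

        /* Fetch one texel, clamping coordinates to the edge of the texture. */
        float texel(const float *tex, int w, int h, int x, int y)
        {
            if (x < 0) x = 0; if (x >= w) x = w - 1;
            if (y < 0) y = 0; if (y >= h) y = h - 1;
            return tex[y * w + x];
        }

        /* Sample the texture at a fractional position (u, v). */
        float sample_bilinear(const float *tex, int w, int h, float u, float v)
        {
            int   x0 = (int)floorf(u), y0 = (int)floorf(v);
            float fx = u - x0,         fy = v - y0;   /* fractional part */

            float tl = texel(tex, w, h, x0,     y0);  /* four nearest texels */
            float tr = texel(tex, w, h, x0 + 1, y0);
            float bl = texel(tex, w, h, x0,     y0 + 1);
            float br = texel(tex, w, h, x0 + 1, y0 + 1);

            float top    = tl + (tr - tl) * fx;   /* blend horizontally... */
            float bottom = bl + (br - bl) * fx;
            return top + (bottom - top) * fy;     /* ...then vertically */
        }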

    One of the biggest differences between texels and pixels is that pixels are always image data; texels do not have to be. In modern shader-based rendering systems, textures, and thus their component texels, are arbitrary data: they have only whatever meaning the shader gives them. They certainly can be an image, but they can also be a lookup table, or a depth map that tells where to start shadows, or a Fresnel table for computing Cook-Torrance specular reflectance.
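
    As an illustration of texels as arbitrary data, here is a sketch of a one-dimensional lookup-table "texture" holding a gamma curve; the same pattern works for shadow depths or Fresnel terms (the table size and the curve are just examples):

        #include <math.h>

        #define LUT_SIZE 256
        static float gamma_lut[LUT_SIZE];   /* one texel per entry */

        /* Fill the lookup texture with a gamma curve. */
        void build_gamma_lut(float gamma)
        {
            for (int i = 0; i < LUT_SIZE; i++)
                gamma_lut[i] = powf((float)i / (LUT_SIZE - 1), gamma);
        }

        /* "Sample" the lookup texture with a value in [0, 1] instead of a
           screen position (nearest-neighbor for brevity). */
        float apply_gamma(float x)
        {
            int i = (int)(x * (LUT_SIZE - 1) + 0.5f);
            if (i < 0) i = 0;
            if (i >= LUT_SIZE) i = LUT_SIZE - 1;
            return gamma_lut[i];
        }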

    Resolution for both pixels and texels has been growing. Until the 6th generation, most video game systems were stuck with screen resolutions about half of standard definition: a standard TV set has a resolution of roughly 640×480 pixels, while those game systems rendered at 320×240 or less. With 2D systems, this was a happy marriage of saving expensive console memory in the frame buffer and saving ROM space, as high-resolution sprites take up a lot of ROM. With 3D systems, this was done to save memory, as frame buffers had to be small at the time. The Nintendo 64 could go to full standard resolution with a memory upgrade, but even then its pixel fillrate kept many games from running smoothly. Staying at 320×240 also saved performance, as rendering at 640×480 took four times as long as 320×240 on the fillrate-limited renderers of the day.
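
    The arithmetic behind that "four times as long" is simple: doubling both dimensions quadruples the pixel count, and with it the fillrate needed and the frame buffer size. A quick sketch, assuming 16-bit color purely for illustration:

        #include <stdio.h>

        int main(void)
        {
            int lo = 320 * 240;        /*  76,800 pixels */
            int hi = 640 * 480;        /* 307,200 pixels */
            int bytes_per_pixel = 2;   /* assumed 16-bit color */

            printf("Pixel count ratio: %dx\n", hi / lo);                          /* 4x     */
            printf("320x240 frame buffer: %d KB\n", lo * bytes_per_pixel / 1024); /* 150 KB */
            printf("640x480 frame buffer: %d KB\n", hi * bytes_per_pixel / 1024); /* 600 KB */
            return 0;
        }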

    The 6th generation brought 640×480, as well as a lot more video memory for bigger textures. On the Xbox, low-complexity scenes could even be rendered at 720p resolution (1280×720).

    Now we have high-definition systems, which offer both high screen resolutions and high texture resolutions. The latter is important: PC games had been able to run at high screen resolutions for years, but with standard-resolution textures that was just "upscaling", and the shader post-processing used to hide the blur of upscaled textures often looked artificial.

    With the more modern GPUs in the PlayStation 3 and the Xbox 360, texture resolutions have been extremely high, but screen resolutions have sometimes actually been lower than the accepted HD minimum of 720 pixels high. This is primarily due to the rise of programmability in GPUs. Executing a complex shader program can take quite a long time, and complex programs also use more textures to do specialized effects. Lighting, proper shadows, in short, all of the things we expect in modern graphics-intensive games, have a cost to them, and that cost comes primarily out of the pixel fillrate. Reducing the overall pixel count means less work per frame, so those expensive effects can still run at an acceptable frame rate.
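
    As a rough illustration of the trade-off, shading cost scales more or less with pixel count, so rendering below 720p and scaling the result up frees a noticeable amount of fillrate (the 1024×600 target here is only an example, not any particular game's resolution):

        #include <stdio.h>

        int main(void)
        {
            int hd_720p = 1280 * 720;   /* 921,600 pixels */
            int sub_hd  = 1024 * 600;   /* 614,400 pixels */

            printf("Sub-HD target shades %.0f%% of the pixels of 720p\n",
                   100.0 * sub_hd / hd_720p);   /* about 67% */
            return 0;
        }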

    As for fitting all of that texture resolution into memory, that's partly thanks to having more video memory, and partly thanks to Texture Compression.
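
    To give a sense of the savings, here is a sketch using the widely supported S3TC/DXT1 format, which packs each 4×4 block of texels into 8 bytes (the 1024×1024 texture size is just an example):

        #include <stdio.h>

        int main(void)
        {
            long width = 1024, height = 1024;   /* example texture size */

            long uncompressed = width * height * 4;       /* 32-bit RGBA           */
            long dxt1 = (width / 4) * (height / 4) * 8;   /* 8 bytes per 4x4 block */

            printf("Uncompressed: %ld KB\n", uncompressed / 1024);   /* 4096 KB */
            printf("DXT1:         %ld KB\n", dxt1 / 1024);           /*  512 KB */
            return 0;
        }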