6.1. Overview of the graphics pipeline

Figure 6.1 shows the processing flow for a typical CPU and OpenGL ES 2.0 GPU combination:

Figure 6.1. Processing flow with GPU shaders



The primary function of the vertex shader is to calculate the final position of each vertex of a primitive. This is often called the transformation stage because it typically generates a screen-space position by applying matrix operations to a 3D vertex. It can also perform other per-vertex calculations, for example lighting and generating texture coordinates.
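The transformation stage can be sketched as a 4×4 matrix applied to a homogeneous vertex. The matrix values below are illustrative assumptions (a simple translation), not values from the text; in OpenGL ES 2.0 this computation runs in the vertex shader itself.

```python
def mat4_mul_vec4(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A translation by (2, 3, 4), standing in for a full
# model-view-projection matrix (assumed for illustration).
mvp = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]

vertex = [1.0, 1.0, 1.0, 1.0]  # x, y, z, w (w defaults to 1)
print(mat4_mul_vec4(mvp, vertex))  # [3.0, 4.0, 5.0, 1.0]
```

A real shader would combine the model, viewing, and projection matrices into one product and apply it to every vertex of the object.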

The fragment shader outputs gl_FragColor, the color value for its fragment. The shader might simply assign a fixed color to every fragment in the shape, but it can also compute per-fragment effects such as texturing, fog, and lighting.
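Between the two shaders, the rasterizer interpolates per-vertex outputs (varyings) across the primitive, so each fragment shader invocation sees a blended value. A minimal sketch of that linear interpolation, with hypothetical helper and color names:

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGBA colors, as the rasterizer
    does for varyings between two vertices (t in [0, 1])."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

red  = (1.0, 0.0, 0.0, 1.0)
blue = (0.0, 0.0, 1.0, 1.0)

# A fragment halfway between a red vertex and a blue vertex
# receives the midpoint color.
print(lerp_color(red, blue, 0.5))  # (0.5, 0.0, 0.5, 1.0)
```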

Figure 6.2 shows the data paths between the application, shader programs, and global context:

Figure 6.2. Simplified view of shader data flow



The fragment shader might also use matrix transformations to calculate the fragment color based on all of the lighting sources.
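One common such calculation is the Lambertian diffuse term, where the fragment color scales with the cosine of the angle between the surface normal and the light direction. The function and vector values below are a minimal sketch, not code from the text:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return [x / length for x in v]

def diffuse(normal, light_dir, light_color):
    """Lambertian diffuse lighting for one fragment."""
    n = normalize(normal)
    l = normalize(light_dir)
    # Clamp at zero: surfaces facing away from the light get no light.
    intensity = max(dot(n, l), 0.0)
    return [c * intensity for c in light_color]

# Light shining straight down onto an upward-facing surface: full intensity.
print(diffuse([0, 1, 0], [0, 1, 0], [1.0, 1.0, 1.0]))  # [1.0, 1.0, 1.0]
```

With several light sources, the shader would sum one such term per light.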

To draw a geometric object, for example a chair:

  1. All vertices associated with the chair must be known. For 3D graphics, the vertices are represented by x, y, and z coordinates. (If not explicitly specified, the value of w is assumed to be 1 for points in 3D space.)

  2. The vertices must be grouped into geometric primitives.

  3. The order in which to draw the primitives must be determined. Breaking a complex shape down into a sequence of primitive shapes is called modeling.

    The chair is now modeled in its own 3D coordinate system. The next steps position that chair in a larger environment and determine how it looks from the camera.

  4. Specify where the camera is located in 3D space, where it is pointing, which direction is up for the camera, and its viewing angle.

  5. A viewing transform calculates where the viewing frustum is in world coordinates.

  6. A projection transform maps the vertex to the display space (clip coordinates).

  7. Perspective division (dividing x, y, and z by w) modifies the result to produce the perspective effect.

  8. A viewport transformation maps the transformed and clipped coordinates to the display window.

  9. The fragment shader converts fragments to colored pixels in the frame buffer.
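Steps 6 through 8 can be condensed into a short sketch: project a vertex to clip coordinates, divide by w, then map the result to window pixels. The projection matrix and viewport size here are illustrative assumptions, not values from the text:

```python
def project_to_window(vertex, proj, width, height):
    """Map one homogeneous vertex to window coordinates."""
    # Projection transform: eye space -> clip coordinates (step 6).
    clip = [sum(proj[r][c] * vertex[c] for c in range(4)) for r in range(4)]
    # Perspective division: clip -> normalized device coordinates (step 7).
    ndc = [clip[i] / clip[3] for i in range(3)]
    # Viewport transform: NDC range [-1, 1] -> window pixels (step 8).
    x = (ndc[0] + 1) * 0.5 * width
    y = (ndc[1] + 1) * 0.5 * height
    return x, y

# A toy perspective matrix that copies -z into w (near/far planes ignored;
# assumed for illustration only).
proj = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, -1, 0],
]

# A point 2 units in front of a camera looking down -z, in a 640x480 window.
print(project_to_window([1.0, 1.0, -2.0, 1.0], proj, 640, 480))  # (480.0, 360.0)
```

Note how the division by w makes more distant points land closer to the center of the window, which is the perspective effect of step 7.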

Copyright © 2010 ARM. All rights reserved. ARM DUI 0527A-02a
Non-Confidential - Draft - Beta. ID070710