7.2.3 Vertex Processing
Modern graphics processors perform two main procedures to generate 3D graphics. 
First, vertex geometry information is transformed and lit to create a 2D representation 
in screen space. The transformed and lit vertices are then processed to create 
display lists in memory. The pixel processor then rasterizes these display lists on a 
regional basis to create the final image.
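The display-list format used by the hardware is not described here; the following C 
sketch is only a conceptual illustration of the two-phase, region-based flow described 
above. The tile size, fixed-capacity lists, and all names are illustrative assumptions.

    #include <stdio.h>

    #define SCREEN_W 640
    #define SCREEN_H 480
    #define TILE 16                        /* assumed region (tile) size */
    #define TX (SCREEN_W / TILE)
    #define TY (SCREEN_H / TILE)
    #define MAX_PER_TILE 64                /* assumed fixed list capacity */

    typedef struct { float x0, y0, x1, y1; } BBox;  /* screen-space bounds */

    static int tile_list[TY][TX][MAX_PER_TILE];     /* per-region display lists */
    static int tile_count[TY][TX];

    static int clampi(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Phase 1: append a triangle's index to the list of every region
       its screen-space bounding box touches. */
    static void bin_triangle(int tri, BBox b)
    {
        int tx0 = clampi((int)(b.x0 / TILE), 0, TX - 1);
        int tx1 = clampi((int)(b.x1 / TILE), 0, TX - 1);
        int ty0 = clampi((int)(b.y0 / TILE), 0, TY - 1);
        int ty1 = clampi((int)(b.y1 / TILE), 0, TY - 1);
        for (int ty = ty0; ty <= ty1; ty++)
            for (int tx = tx0; tx <= tx1; tx++)
                if (tile_count[ty][tx] < MAX_PER_TILE)
                    tile_list[ty][tx][tile_count[ty][tx]++] = tri;
    }

    /* Phase 2: walk each region's display list independently. */
    static void rasterize_tiles(void)
    {
        for (int ty = 0; ty < TY; ty++)
            for (int tx = 0; tx < TX; tx++)
                for (int i = 0; i < tile_count[ty][tx]; i++)
                    ;  /* rasterize tile_list[ty][tx][i] within this region */
    }

    int main(void)
    {
        bin_triangle(0, (BBox){ 10.0f, 10.0f, 40.0f, 30.0f });
        rasterize_tiles();
        printf("region (0,0) holds %d triangle(s)\n", tile_count[0][0]);
        return 0;
    }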
The integrated graphics processor supports DMA data accesses from main memory. DMA 
accesses are controlled by a main scheduler and data sequencer engine, which 
coordinates the data and instruction flow for vertex processing, pixel processing, 
and general-purpose operations. 
Transform and lighting operations are performed by the vertex processing pipeline. A 
3D object is usually expressed in terms of triangles, each of which is made up of three 
vertices defined in X-Y-Z coordinate space. The transform and lighting process is 
performed by passing data through the unified shader core, and the results are sent to 
the pixel processing function. The steps to transform and light a triangle or vertex 
are explained below. 
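As an illustration, a vertex and a triangle might be represented as the C structures 
below. These layouts are assumptions used by the sketches in this section, not a 
hardware format.

    typedef struct {
        float pos[3];      /* X, Y, Z position in local (model) space */
        float normal[3];   /* surface normal, used by the lighting stages */
        float color[3];    /* base RGB color */
    } Vertex;

    typedef struct {
        int v[3];          /* indices of the triangle's three vertices */
    } Triangle;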
7.2.3.1 Vertex Transform Stages
• Local space—Relative to the model itself (e.g., using the model center as the 
reference point), before the object is placed into a scene with other objects.
• World space (transform LOCAL to WORLD)—This transform brings all objects in 
the scene together into a common coordinate system.
• Camera space (transform WORLD to CAMERA, also called EYE)—This transform 
aligns the world with the camera view. In OpenGL, the local-to-world and 
world-to-camera transforms are combined into a single matrix, called the 
ModelView matrix.
• Clip space (transform CAMERA to CLIP)—The projection matrix defines the 
viewing frustum onto which the scene is projected. Projection can be 
orthographic or perspective. The name clip is used because clipping occurs in clip space.
• Perspective space (transform CLIP to PERSPECTIVE)—The perspective divide is 
what projects 3D objects onto a 2D plane: dividing each coordinate by the 
homogeneous W component makes distant objects appear smaller on the screen. 
Coordinates in perspective space are called normalized device coordinates 
([-1, 1] on each axis).
• Screen space (transform PERSPECTIVE to SCREEN)—This space is where the final 
2D screen coordinates are computed, by scaling and biasing the normalized 
device coordinates according to the required render resolution. A code sketch 
of the complete chain follows this list.
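The transform chain above can be illustrated with a minimal C sketch. The column-major 
matrix layout follows OpenGL convention; the identity ModelView, the projection 
parameters, and the 640x480 render resolution are placeholder assumptions, not values 
the hardware prescribes.

    #include <stdio.h>

    typedef struct { float v[4]; } Vec4;
    typedef struct { float m[16]; } Mat4;   /* column-major, as in OpenGL */

    static Vec4 mat4_mul_vec4(const Mat4 *a, Vec4 p)
    {
        Vec4 r;
        for (int i = 0; i < 4; i++)
            r.v[i] = a->m[i]      * p.v[0] + a->m[4 + i]  * p.v[1] +
                     a->m[8 + i]  * p.v[2] + a->m[12 + i] * p.v[3];
        return r;
    }

    int main(void)
    {
        /* LOCAL -> WORLD -> CAMERA: combined ModelView (identity here). */
        Mat4 modelview = {{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}};

        /* CAMERA -> CLIP: perspective projection (90-degree vertical
           field of view, aspect 1, near plane 1, far plane 100). */
        float n = 1.0f, fa = 100.0f, f = 1.0f;  /* f = 1/tan(45 deg) */
        Mat4 proj = {{0}};
        proj.m[0]  = f;
        proj.m[5]  = f;
        proj.m[10] = (fa + n) / (n - fa);
        proj.m[11] = -1.0f;
        proj.m[14] = 2.0f * fa * n / (n - fa);

        Vec4 local = {{0.5f, 0.25f, -2.0f, 1.0f}};     /* a vertex, w = 1 */
        Vec4 eye   = mat4_mul_vec4(&modelview, local); /* camera space    */
        Vec4 clip  = mat4_mul_vec4(&proj, eye);        /* clip space      */

        /* CLIP -> PERSPECTIVE: the perspective divide yields normalized
           device coordinates in [-1, 1] on each axis. */
        float ndc_x = clip.v[0] / clip.v[3];
        float ndc_y = clip.v[1] / clip.v[3];

        /* PERSPECTIVE -> SCREEN: scale and bias NDC to the render
           resolution (640x480 assumed, y pointing down). */
        float sx = (ndc_x * 0.5f + 0.5f) * 640.0f;
        float sy = (1.0f - (ndc_y * 0.5f + 0.5f)) * 480.0f;

        printf("screen = (%.1f, %.1f)\n", sx, sy);
        return 0;
    }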
7.2.3.2 Lighting Stages
Lighting modifies the base color and texture of vertices; examples of different 
types of lighting are:
• Ambient lighting is constant in all directions and contributes the same color to 
all pixels of an object. Ambient lighting calculations are fast, but objects appear 
flat and unrealistic.
• Diffuse lighting takes into account the light direction relative to the normal vector 
of the object’s surface. Calculating diffuse lighting effects takes more time because 
the light changes for each object vertex, but objects appear shaded with more 
three-dimensional depth.
• Specular lighting identifies the bright reflected highlights that occur when light 
hits an object surface and reflects toward the camera. It is more intense than 
diffuse light and falls off more rapidly across the object surface. Although it 
takes longer to calculate, specular lighting makes objects appear shiny and more 
realistic.
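The three lighting terms can be sketched as a classic Phong-style computation in C. 
The ambient constant, light and view vectors, and shininess exponent below are 
illustrative assumptions; per the text above, the actual evaluation is performed per 
vertex in the unified shader core.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } V3;

    static float dotv(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static V3 normv(V3 a)
    {
        float l = sqrtf(dotv(a, a));
        return (V3){ a.x / l, a.y / l, a.z / l };
    }

    /* ambient + diffuse + specular intensity for one vertex */
    static float light(V3 n, V3 to_light, V3 to_eye, float shininess)
    {
        float ambient = 0.1f;                            /* constant term */
        float ndl     = dotv(n, to_light);
        float diffuse = fmaxf(0.0f, ndl);                /* N . L         */

        /* Reflect L about N: R = 2(N.L)N - L, then compare with the
           view direction; the exponent makes the highlight fall off
           rapidly across the surface. */
        V3 r = { 2*ndl*n.x - to_light.x,
                 2*ndl*n.y - to_light.y,
                 2*ndl*n.z - to_light.z };
        float specular = powf(fmaxf(0.0f, dotv(r, to_eye)), shininess);

        return ambient + diffuse + specular;
    }

    int main(void)
    {
        V3 n = normv((V3){ 0, 0, 1 });   /* surface facing the viewer   */
        V3 l = normv((V3){ 1, 0, 1 });   /* light at 45 degrees         */
        V3 e = normv((V3){ 0, 0, 1 });   /* view direction              */
        printf("intensity = %.3f\n", light(n, l, e, 32.0f));
        return 0;
    }

For this geometry the diffuse term dominates while the specular term is nearly zero, 
consistent with the description above of specular highlights falling off rapidly away 
from the mirror reflection direction.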