Your homework for Thursday May 3 is to finish the zbuffer assignment.

If you have already gotten to the point where you can scan convert a triangle by interpolating its red, green, blue vertex values down to the pixel level (the assignment that was due this week), then you are more than halfway there. In these notes, I'm going to assume that you have completed that part of the algorithm.

I am also going to assume that you have added a surface normal vector to each vertex of your primitive untransformed shapes. As we discussed in class on April 19, here are the notes for how to compute the surface normals for a shape.

The complete steps of the zbuffer algorithm in a frame of animation are as follows:

  1. Set all of the pixels of your zbuffer to zero (which is essentially 1/z for an infinitely far away background distance). A short code sketch of this step and the next appears after these steps.

  2. Set all of the pixels of your framebuffer to some background r,g,b color.

  3. Render your geometry just as you did in the earlier assignments, traversing the tree of nested transformations, with the difference that you will be doing the zbuffer algorithm rather than drawing the edges of your transformed shapes.

  4. After you have computed the transformation matrix M for any shape, transform each of its vertices by applying the matrix to the vertex point x and also to the vertex normal n:

    1. Transform the point x into Mx as you did before.

    2. Transform the normal vector n into (M⁻¹)ᵀ n. That is, transform the surface normal vector by the transpose of the inverse of your matrix M.

      After you have transformed the surface normal vector, you must then renormalize it (scale it back to unit length) before using it in the Phong shading algorithm. There is a code sketch of this transformation after these steps.

  5. Perform the Phong shading algorithm on the transformed vertex to produce a color at that vertex. You have now replaced (x,y,z,nx,ny,nz) by (x,y,z,r,g,b). A sketch of this shading step appears after these steps.

  6. Perform the perspective computation. As we discussed in the previous week's notes, you can do this in various ways, depending on where you place your camera.

    If your camera is at the origin looking into positive z (the convention we adopted for ray tracing), then this computation is (x,y,z) → (fx/z,fy/z,1/z).

    If your camera is at some positive z value z=f, looking back toward the origin (the convention we adopted earlier in the semester), this computation is (x,y,z) → (fx/(f-z),fy/(f-z),1/(f-z)). Both conventions are sketched in code after these steps.

  7. Now loop through all of the faces of your shape. If a face contains more than three vertices, split it up into triangles.

    Scan convert the triangle. As you interpolate the projective z value pz of the triangle down to each pixel, compare this interpolated pz with the value stored in the z-buffer at that pixel. If pz is closer (that is, greater than the stored value, since the buffer holds projective depths and was cleared to zero for an infinitely far background), then replace both the value in the z-buffer and the value in the rgb framebuffer with the triangle's interpolated pz and its interpolated r,g,b. The per-pixel test is sketched in code after these steps.
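
Here is a minimal Java sketch of steps 1 and 2, the per-frame clearing of the two buffers. The names and sizes used here (zbuffer, frame, W, H) are just illustrative, not anything required by the assignment; use whatever buffers you already have.

    public class ZBufferFrame {
        static final int W = 640, H = 480;             // illustrative image size
        static double[] zbuffer = new double[W * H];   // one projective z (1/z style) value per pixel
        static int[] frame = new int[W * H];           // one packed r,g,b value per pixel

        // Steps 1 and 2: clear both buffers before rendering any geometry for a frame.
        static void clearBuffers(int backgroundRgb) {
            for (int i = 0; i < W * H; i++) {
                zbuffer[i] = 0.0;          // zero is the 1/z of an infinitely far away background
                frame[i] = backgroundRgb;  // background r,g,b
            }
        }
    }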
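
Here is a sketch of step 4, assuming the transformation matrix M is stored as a 4x4 row-major double[][] whose bottom row is 0 0 0 1. Because the normal is a direction, only the upper-left 3x3 part of M affects it, so the sketch inverts and transposes just that block, then renormalizes. The method and variable names are only for illustration.

    public class TransformVertex {

        // Transform the point (x, y, z, 1) by the 4x4 matrix M.
        static double[] transformPoint(double[][] M, double[] p) {
            double[] out = new double[3];
            for (int r = 0; r < 3; r++)
                out[r] = M[r][0] * p[0] + M[r][1] * p[1] + M[r][2] * p[2] + M[r][3];
            return out;
        }

        // Transform the normal n by the transpose of the inverse of M's upper-left 3x3,
        // then scale it back to unit length.
        static double[] transformNormal(double[][] M, double[] n) {
            double a = M[0][0], b = M[0][1], c = M[0][2];
            double d = M[1][0], e = M[1][1], f = M[1][2];
            double g = M[2][0], h = M[2][1], i = M[2][2];
            double det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);

            // Inverse of the 3x3 block, computed from its adjugate.
            double[][] inv = {
                { (e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det },
                { (f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det },
                { (d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det },
            };

            // Multiply by the transpose of the inverse: out[j] is the sum over k of inv[k][j] * n[k].
            double[] out = new double[3];
            for (int j = 0; j < 3; j++)
                out[j] = inv[0][j] * n[0] + inv[1][j] * n[1] + inv[2][j] * n[2];

            // Renormalize before using the normal in Phong shading.
            double len = Math.sqrt(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
            for (int j = 0; j < 3; j++) out[j] /= len;
            return out;
        }
    }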
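
Here is a minimal sketch of the Phong computation at one transformed vertex (step 5). The material constants, light direction, and eye direction below are made-up illustrative values, and only a single directional light is shown; substitute whatever lights and materials your scene actually uses.

    public class PhongVertex {

        // Return an {r, g, b} color for a vertex whose unit surface normal is n.
        static double[] shade(double[] n) {
            double[] ambient  = {0.1, 0.1, 0.1};   // assumed material values, for illustration only
            double[] diffuse  = {0.7, 0.2, 0.2};
            double[] specular = {0.6, 0.6, 0.6};
            double power = 20.0;

            double[] L = normalize(new double[]{1, 1, -1});  // assumed direction toward the light
            double[] E = {0, 0, -1};   // roughly toward a camera at the origin looking into positive z

            double nDotL = Math.max(0, dot(n, L));

            double spec = 0;
            if (nDotL > 0) {
                // Reflect L about n:  R = 2 (n . L) n - L
                double[] R = new double[3];
                for (int k = 0; k < 3; k++) R[k] = 2 * nDotL * n[k] - L[k];
                spec = Math.pow(Math.max(0, dot(R, E)), power);
            }

            double[] rgb = new double[3];
            for (int k = 0; k < 3; k++)
                rgb[k] = ambient[k] + diffuse[k] * nDotL + specular[k] * spec;
            return rgb;
        }

        static double dot(double[] a, double[] b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

        static double[] normalize(double[] v) {
            double len = Math.sqrt(dot(v, v));
            return new double[]{ v[0] / len, v[1] / len, v[2] / len };
        }
    }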
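
Here is a sketch of the two perspective conventions from step 6; f is the focal length, and which method you use depends on where your camera sits.

    public class Perspective {

        // Camera at the origin looking into positive z (the ray tracing convention):
        // (x, y, z) -> (f x / z, f y / z, 1 / z)
        static double[] projectCameraAtOrigin(double f, double x, double y, double z) {
            return new double[]{ f * x / z, f * y / z, 1 / z };
        }

        // Camera at z = f looking back toward the origin (the earlier convention):
        // (x, y, z) -> (f x / (f - z), f y / (f - z), 1 / (f - z))
        static double[] projectCameraAtF(double f, double x, double y, double z) {
            return new double[]{ f * x / (f - z), f * y / (f - z), 1 / (f - z) };
        }
    }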
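
Here is a sketch of the per-pixel test inside step 7, assuming buffers like the ones in the clearing sketch, and assuming your scan converter has already interpolated pz, r, g, and b at pixel (px, py).

    public class ZTest {
        static void plot(double[] zbuffer, int[] frame, int W,
                         int px, int py, double pz, int r, int g, int b) {
            int i = px + py * W;
            // The buffer stores projective z (a 1/z style value), so larger means nearer.
            if (pz > zbuffer[i]) {
                zbuffer[i] = pz;                      // remember the nearest depth seen so far
                frame[i] = (r << 16) | (g << 8) | b;  // and store that triangle's interpolated color
            }
        }
    }

Note that nothing in this sketch is clipped; if your triangles can fall partly off screen, you will also want to check that px and py lie inside the image before writing.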