Your class should have these abstract methods:

double x(double u, double v);
double y(double u, double v);
double z(double u, double v);

as well as a render method, fragments of which are given here:
// allocate an m by n parametric grid for vertex locations and normals
double[][][] P = new double[m][n][6];

// compute the location of each vertex
for (int i = 0 ; i < m ; i++) {
   double u = (double)i / (m-1);
   for (int j = 0 ; j < n ; j++) {
      double v = (double)j / (n-1);
      P[i][j][0] = x(u,v);
      P[i][j][1] = y(u,v);
      P[i][j][2] = z(u,v);
   }
}
// compute the normal at each vertex
for (int i = 0 ; i < m ; i++)
   for (int j = 0 ; j < n ; j++) {

      // find the next lower neighbor, and check for wrap-around
      int i0 = i > 0 ? i-1 : samePoint(P[i][j],P[m-1][j]) ? m-2 : i;
      int j0 = j > 0 ? j-1 : samePoint(P[i][j],P[i][n-1]) ? n-2 : j;

      // find the next higher neighbor, and check for wrap-around
      int i1 = i < m-1 ? i+1 : samePoint(P[i][j],P[0][j]) ? 1 : i;
      int j1 = j < n-1 ? j+1 : samePoint(P[i][j],P[i][0]) ? 1 : j;

      // Fill in P[i][j][3..5] based on the cross product of vectors
      // connecting neighboring vertices in the parametric grid.
      // (Remember to normalize the normal vector inside computeNormal.)
      computeNormal(P[i][j], P[i0][j],P[i1][j], P[i][j0],P[i][j1]);
   }
// z-buffer each polygon
for (int i = 1 ; i < m ; i++)
   for (int j = 1 ; j < n ; j++) {
      zbufferTriangle(P[i-1][j-1], P[i][j-1], P[i][j]);
      zbufferTriangle(P[i-1][j-1], P[i][j], P[i-1][j]);
   }
In your z-buffer algorithm, you should interpolate the normals
across the image, and use the interpolated normal at each pixel to do shading.
At minimum, apply the same lighting model you already implemented
for Gouraud shading, assuming a single light source.
Don't forget to normalize each interpolated normal vector
before using it at a pixel. Note that you only need to do this
normalization when you are actually going to write values into
the z-buffer (that is, when the pixel passes the depth test).
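As a concrete illustration, here is a sketch of that per-pixel step, assuming an interpolated normal (nx,ny,nz), a unit light direction (lx,ly,lz), and ambient and diffuse coefficients of your choosing (all of these names and values are assumptions):

// called only for pixels that pass the depth test
double shadePixel(double nx, double ny, double nz,
                  double lx, double ly, double lz) {
   // renormalize the interpolated normal, since interpolation shortens it
   double length = Math.sqrt(nx*nx + ny*ny + nz*nz);
   if (length > 0) { nx /= length; ny /= length; nz /= length; }

   // single light source: ambient + diffuse * max(0, N.L)
   double ambient = 0.1, diffuse = 0.9;
   return ambient + diffuse * Math.max(0, nx*lx + ny*ly + nz*lz);
}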
Note: in order to do the z-buffer computation, you need to apply the perspective transformation (Fx/z, Fy/z, 1/z) to each surface point, as well as the viewport transformation (converting x and y to pixel coordinates). But you should not apply those transformations to the normal vector.
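A minimal sketch of those two transformations, assuming a focal length F, an image W pixels wide and H pixels high, and x,y in the range [-1,1] (all parameter names and conventions here are assumptions):

// perspective: (x,y,z) -> (F*x/z, F*y/z, 1/z), assuming z > 0 in front of the camera;
// the normal in p[3..5] is carried along untransformed
double[] project(double[] p, double F) {
   return new double[] {
      F * p[0] / p[2], F * p[1] / p[2], 1 / p[2],
      p[3], p[4], p[5]
   };
}

// viewport: map x,y to pixel coordinates, flipping y so +y points up on screen
int pixelX(double x, int W, int H) { return (int)(0.5*W + 0.5*H * x); }
int pixelY(double y, int H)        { return (int)(0.5*H - 0.5*H * y); }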
For a ParametricSphere:

x(u,v) = r * cos(theta) * cos(phi) + cx
y(u,v) = r * sin(phi) + cy
z(u,v) = r * sin(theta) * cos(phi) + cz

where

theta = 2*PI*u
phi = PI*v - PI/2

and where r, cx, cy, cz are parameters that are declared when a ParametricSphere object is instantiated.
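Those equations translate directly into a subclass; a sketch, assuming the abstract superclass is named ParametricSurface and the constructor signature shown (neither is specified above):

class ParametricSphere extends ParametricSurface {
   double r, cx, cy, cz;

   ParametricSphere(double r, double cx, double cy, double cz) {
      this.r = r; this.cx = cx; this.cy = cy; this.cz = cz;
   }

   double theta(double u) { return 2 * Math.PI * u; }
   double phi  (double v) { return Math.PI * v - Math.PI / 2; }

   double x(double u, double v) { return r * Math.cos(theta(u)) * Math.cos(phi(v)) + cx; }
   double y(double u, double v) { return r * Math.sin(phi(v)) + cy; }
   double z(double u, double v) { return r * Math.sin(theta(u)) * Math.cos(phi(v)) + cz; }
}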
Render some images to show that your implementations work.