Homework 10, due by the end of the semester.

As I said in class, I'm going to open things up a bit between now and the end of the semester. I'm going to be going over people's progress on this last part every week, so please do *not* wait until the end of the semester to work on your last project.

As I said in class, you can focus on one or more directions for your last project: (i) texturing, (ii) animation, (iii) some other topic with my approval.

As always, your job is to implement the required technology, and then to create interesting, cool, exciting, original content that shows that technology working.

I need to hand in your grades by Dec 23. I'd like to give you as much time as possible between now and then to get your projects completed.

Right now I'm posting on-line notes about textures.

Soon I will also post more extensive on-line notes about advanced animation topics, so watch this space. Meanwhile, if you're interested in doing an animation project, go ahead and start to play with animation. All you really need in order to do that is to build a model that has rotatable joints (such as the swinging arm example we looked at in class earlier this semester), and then start using the cubic spline curves that you have already implemented in order to create time-varying joint angles for rotating the joints in your animatable figure. For example, you might want to try to build a simple two-legged human figure and implement a walk cycle.
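
For example, a minimal sketch in Java of driving a single joint angle from a spline of key values might look like the following. The evalSpline() method here is just a stand-in (a Catmull-Rom evaluation) for the spline code you have already written, and the key values and the choice of a hip joint are purely illustrative:

double[] hipKeys = { 0, 30, 0, -30, 0 };   // key joint angles (degrees) over one walk cycle

// Catmull-Rom evaluation of keys[] at parameter t in [0,1), looping around.
double evalSpline(double[] keys, double t) {
   int n = keys.length;
   double x = t * n;
   int i = (int) x;
   double f = x - i;
   double a = keys[(i - 1 + n) % n], b = keys[i % n],
          c = keys[(i + 1) % n],     d = keys[(i + 2) % n];
   return b + f * (.5 * (c - a)
        + f * ((a - 2.5 * b + 2 * c - .5 * d)
        + f * (1.5 * (b - c) + .5 * (d - a))));
}

// Each frame, turn the current time into a joint angle for the hip,
// then use that angle to rotate the hip joint in your matrix hierarchy
// (e.g. hipMatrix.rotateZ(hipAngle * Math.PI / 180) with your matrix code).
double hipAngle(double time) {
   return evalSpline(hipKeys, time % 1.0);   // one cycle per unit of time
}

Each joint in your figure would get its own set of key values, and the resulting angles feed into the same kind of matrix rotations you used for the swinging arm example.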

Recently in class we went over texturing. First we covered texture mapping and antialiasing, with particular emphasis on MIP mapping. Then we covered procedural texturing, with a mini-lecture on the noise function and how to use it to make interesting textures.

Below are some notes that follow that discussion. If you have any questions about the notes that follow, please be sure to bring them up in this coming Tuesday's class, and we will go over them.

At its most basic, texture mapping is quite simple. When you build a geometric mesh, you place parametric coordinates (u,v) at each vertex, in addition to (x,y,z) and (nx,ny,nz). Note that you have just "fattened" each vertex from six floating point numbers to eight floating point numbers.

For most of the types of geometric meshes that you have been implementing, it is very straightforward to assign a (u,v) value to each mesh vertex - you can just use the parametric coordinates that you used to build the shape (sphere, cylinder, bicubic patch, ...) in the first place.
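
For example, here is a rough sketch, in Java, of generating sphere vertices with eight floats each, (x,y,z, nx,ny,nz, u,v). The array layout and resolution arguments are just illustrative assumptions, not a required format:

double[][] makeSphereVertices(int nu, int nv) {
   double[][] verts = new double[(nu + 1) * (nv + 1)][];
   int k = 0;
   for (int j = 0; j <= nv; j++)
   for (int i = 0; i <= nu; i++) {
      double u = (double) i / nu;                 // 0..1 around the equator
      double v = (double) j / nv;                 // 0..1 from pole to pole
      double theta = 2 * Math.PI * u;
      double phi   = Math.PI * (v - 0.5);
      double x = Math.cos(phi) * Math.cos(theta);
      double y = Math.cos(phi) * Math.sin(theta);
      double z = Math.sin(phi);
      // for a unit sphere the normal equals the position, and the
      // parametric (u,v) is stored directly in the vertex
      verts[k++] = new double[] { x, y, z,  x, y, z,  u, v };
   }
   return verts;
}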

When you move through the rendering pipeline all the way to the pixel, the (u,v) value at each vertex becomes a value of (u,v) at each pixel to be shaded. At that point you can use this interpolated (u,v) to do a look-up into a stored texture image, from which you can retrieve a texture color (rt,gt,bt).
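
For example, the simplest possible look-up (nearest-neighbor, with no antialiasing yet) might look something like the sketch below. The texture is assumed here to be stored as packed 0xRRGGBB ints in scanline order, which is just one illustrative layout:

double[] textureColor(int[] texture, int width, int height, double u, double v) {
   int i = Math.max(0, Math.min(width  - 1, (int) (u * width)));   // clamp to last column
   int j = Math.max(0, Math.min(height - 1, (int) (v * height)));  // clamp to last row
   int texel = texture[j * width + i];
   return new double[] {
      ((texel >> 16) & 255) / 255.0,    // rt
      ((texel >>  8) & 255) / 255.0,    // gt
      ( texel        & 255) / 255.0     // bt
   };
}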

Note that (u,v) are not transformed by the matrix that transforms the location and normal of the vertex. But (u,v) are indeed linearly interpolated during scan conversion, along with all the other geometric data at each vertex.

If you do pure vertex shading (as we have done up until now), then you will only have (r,g,b) information at each pixel once you have done the z-buffer step. In this case, the only thing you can do with the texture color is to use it to modulate the color at that pixel:

r *= rt
g *= gt
b *= bt

On the other hand, you can defer shading until later in the pipeline, by interpolating the surface normal vector coordinates down to the pixel level, and doing the Phong algorithm calculation at each pixel. In this case, you can use the retrieved texture to modify any input parameter to the Phong shading algorithm. This includes any information about the color or location of light sources, as well as any material surface parameter such as the Ambient, Diffuse, or Specular color, or the Specular power, in addition to geometric data such as the surface normal.
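
For example, here is a sketch of one such possibility: a per-pixel Phong computation in which the retrieved texture color stands in for the Diffuse color. The vector and material parameters here are illustrative assumptions; you could just as easily use the texture to vary the Specular color, the Specular power, or the surface normal:

double dot(double[] a, double[] b) {
   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Phong at the pixel level.  N = interpolated surface normal, L = direction
// to the light, E = direction to the eye (all unit length).  The texture
// color textureRGB takes the place of the Diffuse color.
double[] shadePixel(double[] N, double[] L, double[] E,
                    double[] ambient, double[] textureRGB,
                    double[] specular, double specPower) {
   double ndotl = dot(N, L);
   double diff  = Math.max(0, ndotl);                         // Lambert term
   double[] R = { 2 * ndotl * N[0] - L[0],                    // reflect L about N
                  2 * ndotl * N[1] - L[1],
                  2 * ndotl * N[2] - L[2] };
   double spec = ndotl > 0 ? Math.pow(Math.max(0, dot(R, E)), specPower) : 0;
   double[] c = new double[3];
   for (int k = 0; k < 3; k++)
      c[k] = ambient[k] + textureRGB[k] * diff + specular[k] * spec;
   return c;
}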

If you interpolate surface normals between vertices rather than colors, and then do the Phong algorithm at the pixel level, then you are implementing a pixel shader. This can be much slower than doing a vertex shader, but it is also much more flexible and can produce more dramatic and varied surface effects. As I mentioned in class, pixel shaders are extensively used for feature films, where rendering time is not a critical issue, as well as in hardware-accelerated shaders enabled by GPUs (Graphics Processing Units), such as the NVIDIA GeForce and ATI Radeon boards.

Antialiasing

As we discussed in class, you will generally get bad results if you simply use the (u,v) value at each pixel to do a look-up into your source texture image, because you will end up sampling improperly. The problem is that one image pixel can actually cover many texture source pixels, and what you ideally want is to perform an integral over the entire sub-area of the source texture image that is covered by each image pixel.

Doing a brute-force calculation of this area integral can be prohibitively expensive, so people have devised various methods to approximate this integral. The most commonly used even today is MIP mapping (where "MIP" stands for "multum in parvo"), developed by Lance Williams about 23 years ago. Lance adapted the power-of-two image pyramid strategy first developed by Tanimoto in the late 1970s for computer vision. This is one example of many in which computer vision algorithms for scene analysis have a parallel in computer graphics algorithms for scene synthesis.

Conversion to an image pyramid proceeds by recursively halving the resolution of the texture image. At each stage of the recursion, every 2×2 block of pixels is replaced by a single pixel that contains the average value of those four texture pixels. The recursion stops when you are left with only a 1×1 image. It is simplest to begin with a source texture image whose dimensions are a power of two, such as 256×256. This is converted into a pyramid of images (a code sketch of this construction follows the pyramid below):

256×256
128×128
64×64
32×32
16×16
8×8
4×4
2×2
1×1
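
Here is a rough sketch of that construction for a single color channel stored as an n×n array of doubles (an illustrative layout; you would do the same for each of r, g, and b, or for a packed-color array):

// Build one level of the pyramid from the level above it.  src is an n-by-n
// image; the result is (n/2)-by-(n/2), with every 2x2 block averaged down
// to one pixel.
double[] halveImage(double[] src, int n) {
   int m = n / 2;
   double[] dst = new double[m * m];
   for (int j = 0; j < m; j++)
   for (int i = 0; i < m; i++)
      dst[j * m + i] = ( src[(2 * j    ) * n + 2 * i    ]
                       + src[(2 * j    ) * n + 2 * i + 1]
                       + src[(2 * j + 1) * n + 2 * i    ]
                       + src[(2 * j + 1) * n + 2 * i + 1] ) / 4;
   return dst;
}

// Build the whole pyramid: pyramid[0] is the original n-by-n image,
// pyramid[pyramid.length - 1] is the final 1x1 image.  Assumes n is a
// power of two.
double[][] buildPyramid(double[] base, int n) {
   int levels = Integer.numberOfTrailingZeros(n) + 1;
   double[][] pyramid = new double[levels][];
   pyramid[0] = base;
   for (int k = 1; k < levels; k++, n /= 2)
      pyramid[k] = halveImage(pyramid[k - 1], n);
   return pyramid;
}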

For any pixel texture look-up, we look not just at the value of (u,v) at that pixel, but also at the (u,v) at one or more neighboring pixels in the image to be rendered, in order to get an approximation of how much u and v change from one pixel to the next. The magnitude of this variation, s, is used to create a square-shaped extent (u±s/2, v±s/2) over the texture image, which approximates the region of the texture image that is covered by one pixel.
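
One simple way to estimate s (an illustrative choice, not the only one) is to compare the (u,v) at a pixel with the (u,v) at its right-hand and lower neighbors:

// Estimate the side length s of the square extent at a pixel, given the
// interpolated (u,v) at that pixel and at its right-hand and lower
// neighbors in the image being rendered.
double footprint(double u, double v,
                 double uRight, double vRight,
                 double uBelow, double vBelow) {
   double du = Math.max(Math.abs(uRight - u), Math.abs(uBelow - u));
   double dv = Math.max(Math.abs(vRight - v), Math.abs(vBelow - v));
   return Math.max(du, dv);
}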

Now we can use this square-shaped extent to do a tri-linearly interpolated look-up into the image pyramid, in which we linearly interpolate in the three dimensions of u, v, and scale. As I discussed in class, this will involve a total of eight accesses into the image pyramid (four each at two neighboring levels of the pyramid), and a total of 2³ − 1, or seven, linear interpolation computations.
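
Here is a sketch of that look-up, assuming the single-channel pyramid built in the sketch above, and assuming that s has already been estimated from neighboring pixels as described. Each call to sampleLevel() performs a bilinear (three-lerp) look-up at one pyramid level, and mipLookup() blends two such levels with a seventh lerp:

double lerp(double t, double a, double b) { return a + t * (b - a); }

// Bilinear sample of pyramid level k at texture coordinate (u,v).
// baseSize is the resolution of pyramid[0]; level k is (baseSize >> k) square.
double sampleLevel(double[][] pyramid, int baseSize, int k, double u, double v) {
   double[] img = pyramid[k];
   int n = Math.max(1, baseSize >> k);
   if (n == 1)
      return img[0];                                  // 1x1 level: nothing to interpolate
   double x = u * n - 0.5, y = v * n - 0.5;
   int i = Math.max(0, Math.min(n - 2, (int) Math.floor(x)));
   int j = Math.max(0, Math.min(n - 2, (int) Math.floor(y)));
   double fx = Math.max(0, Math.min(1, x - i));
   double fy = Math.max(0, Math.min(1, y - j));
   return lerp(fy, lerp(fx, img[ j      * n + i], img[ j      * n + i + 1]),   // 3 lerps
                   lerp(fx, img[(j + 1) * n + i], img[(j + 1) * n + i + 1]));
}

// The full tri-linear look-up: four accesses (three lerps) at each of two
// neighboring pyramid levels, plus a seventh lerp between the two levels.
double mipLookup(double[][] pyramid, int baseSize, double u, double v, double s) {
   double level = Math.log(Math.max(s * baseSize, 1)) / Math.log(2);   // texels covered -> pyramid level
   int k = Math.max(0, Math.min(pyramid.length - 2, (int) level));
   double f = Math.min(1, level - k);
   return lerp(f, sampleLevel(pyramid, baseSize, k,     u, v),
                  sampleLevel(pyramid, baseSize, k + 1, u, v));
}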

Procedural texture

Noise function

You can grab a copy of the source code for the noise function at: http://mrl.nyu.edu/~perlin/noise.

In class we went over an on-line tutorial about the noise function. You can review that tutorial at http://www.noisemachine.com/talk1/.

Using the noise function to make procedural textures

In addition to the examples in that tutorial, here is an on-line example of the use of the noise function to vary the surface normal: http://mrl.nyu.edu/~perlin/bumpy/. Feel free to look at the source code for class Sphere.java, in order to see how it all works.

The examples at that URL show procedural textures being used to vary the surface normal vector at each pixel, prior to performing the Phong shading algorithm. Please note that this example does not use a real geometric model. Rather, the geometry of the spherical ball is faked by an image-based procedure, just for the purposes of the on-line demo.

In this on-line example, I first approximate the three partial derivatives of the noise function by taking differences in each dimension:

(noise(x+ε,y,z) - noise(x,y,z)) / ε
(noise(x,y+ε,z) - noise(x,y,z)) / ε
(noise(x,y,z+ε) - noise(x,y,z)) / ε

Then I subtract this vector-valued derivative function from the surface normal, to do bump mapping prior to performing the Phong shading algorithm. Bump mapping doesn't actually perturb the geometry of the surface, but rather fools the eye into thinking that the surface is perturbed, by changing the surface normals so that the surface responds to light the way an actual bumpy surface would.
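
Putting those two steps together, the bump-mapping portion of the pixel computation might look roughly like the sketch below. It assumes a noise(x,y,z) method like the one in the posted source code, a surface point (x,y,z), and a unit normal (nx,ny,nz) at the pixel being shaded; the epsilon value, the bump strength, and the final renormalization are my own illustrative choices:

double eps = 0.001, bump = 0.2;                  // step size and bump strength (illustrative)
double n0 = noise(x, y, z);
double dx = (noise(x + eps, y, z) - n0) / eps;   // approximate partial derivatives
double dy = (noise(x, y + eps, z) - n0) / eps;   // of the noise function
double dz = (noise(x, y, z + eps) - n0) / eps;

nx -= bump * dx;                                 // perturb the surface normal...
ny -= bump * dy;
nz -= bump * dz;

double len = Math.sqrt(nx * nx + ny * ny + nz * nz);
nx /= len;  ny /= len;  nz /= len;               // ...and restore it to unit length
// now use the perturbed (nx,ny,nz) in the Phong computation as usual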

If you choose to implement procedural textures, you shouldn't just duplicate the textures that I show in that example. Rather, you should play around with the technique, and try to create your own procedural textures that vary the Phong shading parameters or the surface normal, or some combination of the two, to create something interesting of your own.