CSCI-UA-0480-004
Undergraduate Computer Graphics
719 Broadway, rm 1221
Tuesdays and Thursdays, 11:00am-12:15pm

Office hours: Tuesday, 4-5pm

What we will cover:

There are many courses that can teach you how to use commercial computer graphics packages and APIs. This course, in contrast, will teach you how to build 3D computer graphics from the ground up. This will include 3D modeling, animation, and rendering. At the end of the semester you will have built your own complete working real-time 3D computer graphics system that runs in web browsers.

What you should already know:

If you are already familiar with JavaScript, that's great. If you are already familiar with Java, C++ or any similar high level language, you will not have any trouble picking up enough JavaScript to do this course.

On the other hand, if you are not already an experienced programmer, then I do not suggest you take this course, as there will be weekly programming assignments, and you will not be able to keep up.

Computer graphics uses a lot of matrix math and some calculus. During the semester we will go over all of the matrix and vector math that you will need.

Text:

Class notes (so make sure you come to class!) will be posted on-line after each lecture.

Graders:

To be announced

Discussion list:

http://www.cs.nyu.edu/mailman/listinfo/csci_ua_0480_004_fa14

Rough outline of topics:

Sep 02: Introductory lecture
Sep 04: Working with fragment shaders, part 1
Sep 09: Working with fragment shaders, part 2
Sep 11: Working with fragment shaders, part 3
Sep 16: Ray tracing, part 1
Sep 18: Ray tracing, part 2
Sep 23: A look at Chalktalk
Sep 25: Guest lecture and VR demo
Sep 30: Surface reflectance, part 1
Oct 02: Surface reflectance, part 2
Oct 07: In-class Chalktalk prototypes, part 1
Oct 09: In-class Chalktalk prototypes, part 2
Oct 14: No class: University holiday
Oct 16: Ray reflection + procedural texture
Oct 21: Shadows, refraction, planes and booleans, part 1
Oct 23: Shadows, refraction, planes and booleans, part 2 + intro to matrices
Oct 28: Drawing on the 2D canvas
Nov 04: Using matrices for animation
Nov 06: Modeling 3D parametric shapes, part 1
Nov 11: Modeling 3D parametric shapes, part 2
Nov 13: Modeling and animating 3D parametric shapes
Nov 18: Superquadric cylinders, subdivision spheres + perspective
Nov 20: Even more ways to make a sphere + Vertex as object
Nov 25: Perspective as linear transform, clipping, Object3D and Geometry
Dec 02: Principles of character animation
Dec 04: Face numbering, introduction to WebGL and three.js
Dec 09: Discussion of final projects and advanced topics
Dec 11: Ray tracing to general second order surfaces
Dec 16: Extra class begins at 10am

Setting up a homepage and access to computers:

Most of you have already figured out your homepage and computer access. But just to make sure you have at least one way to show your work on-line, your NYU webpage can be activated and modified as follows:

  • Go to http://nyu.edu
  • Click on the NYUHome Login tab at the top.
  • Sign in with your NetID and Password.
  • Click on the Files tab.
  • Click on the option to activate your website, if you haven't already.
  • Click on Files 2.0 Login.
  • Log in using your password.
  • Double click on the "public" folder to enter it.
  • Download the file "index.html" to your computer.
  • Edit the file to change it.
  • Click on the upload button to replace the old index.html file with your new one.

To post assignments for this class, you should set up a subdirectory of your web site (preferably your NYU web site). Name this subdirectory "graphics". It should have a main "index.html" file, and that file should link to the various homework assignments. After the first class, you will send the grader an email, with subject line "graphics", telling him the URL.


Sep 02: Introductory lecture
 

I warned students that this will be a programming-intensive class, and that you should not take it if you don't feel you are a strong programmer. Although we will be programming mostly in JavaScript, a good facility with any similar high level language -- such as Java, C++ or Python -- should be fine.

Those with a strong math background will find it easier, although we will be going through all the needed math in class. For example, we will review everything you need to understand 4×4 matrices, which we will use to do linear transformations in 3D (such as translation, rotation and scale).

We discussed, at a very high level, the different parts of 3D computer graphics rendering, the fact that we will be using HTML5 (essentially, JavaScript and WebGL), and the different roles of the browser, the JavaScript level, programming the vertex shader (called once per triangle vertex), and programming the fragment shader (called about once per pixel).

We had a brief philosophical discussion about the implications of having augmented reality everywhere (eg: through augmented reality glasses -- and eventually AR contact lenses). This would mean that computer graphics will become a more seamless and integrated part of our everyday lives, with virtual objects becoming just another part of the "built world". But there are also going to be issues of privacy, and just who owns our personal visual space.

We very briefly reviewed the syllabus for the semester. On Thursday of this week we will do a more in-depth overview of the semester's topics.

We saw excerpts from the following historically important films:

1940: the Night on Bald Mountain scene from Fantasia (Walt Disney)
1982: the Light Cycles scene from TRON (MAGI + Walt Disney)

I asked each student to send me an email with the subject line "GRAPHICS". Your email should contain two things:

  1. The URL you will be using for posting your assignments for this class. Ideally this URL should be on an NYU server (if possible), and should be in a subfolder called "graphics/".

  2. A short essay, a few paragraphs in length, describing why you were motivated to take this course, and what you hope to get out of it.
 

Sep 04 lecture: Working with fragment shaders, part 1
 

We went over some of the future topics in a little more detail, including ray tracing, how to approximate curved objects like cylinders using triangles, and how to use 4×4 matrices to do coordinate transformations.

Then we went carefully over how to use the code setup that I created to make your own custom fragment shader, with a simple square as the geometry.

We also watched the MAGI Norelco commercial, which was historically significant because it may have been the first time that computer graphics was mistaken for live action.

For Thursday, September 11, your assignment is to adapt the sample code from class, which you can find as a zip file here, and to modify the fragment shader to do something fun and animated that responds to the user's mouse gestures in some interesting way.


Sep 09 lecture: Working with fragment shaders, part 2
 

We talked in more detail about GLSL. We spent some time looking at the OpenGL Quick Reference Card. The second-to-last page of that card, which lists Built-in Functions and Common Functions, is particularly useful to you right now. You can also look at the more comprehensive complete documentation for OpenGL, which is here.

We also made some improvements to the code base, which you can grab (see below). Those changes gave us a chance to talk about (1) how to create functions in the fragment shader, and (2) the general idea of the Nyquist sampling theorem, and how you might use it to do proper anti-aliasing.

At the end of class, we watched the test that MAGI did for Walt Disney right after TRON, on combining traditional character animation with 3D animation, in the form of a scene from Where the Wild Things Are.

For this Thursday, September 11, your assignment is to adapt the sample code from class, including the improvements we made in class today, which you can find as a zip file here.


Sep 11 lecture: Working with fragment shaders, part 3
 

We went over in more detail how homogeneous coordinates work, and how they allow you to use "points at infinity" as direction vectors.

We also talked a bit about the use of the fourth "alpha" color channel for blending and transparency.

We talked about two different ways to deal with the same function: (1) Evaluating it at many points on its domain, and (2) Solving for where the function's value equals zero (also known as the "roots of the equation").

We began discussing the ideas behind ray tracing, and started to set up the problem of how to trace a ray from a point V = (vx,vy,vz,1) into a direction W = (wx,wy,wz,0), to see where (or whether) it will intersect a sphere centered at (cx,cy,cz) of radius r.

Since the surface of that sphere consists of points where (x-cx) * (x-cx) + (y-cy) * (y-cy) + (z-cz) * (z-cz) - r * r = 0, we will need to substitute the points on the ray into this equation -- which we will do when we next meet on Tuesday, Sep 16.

At the end of the class we watched Carlitopolis by Luis Nieto.

For next Thursday, September 18, your assignment is to create interesting, fun and colorful geometric shapes in your fragment shader. See if you can make triangles, rectangles, diamond shapes, hexagons, ovals, and any other interesting shapes.

Each shape should have a color.

For extra credit, see if you can get shapes to animate when the mouse is over them, or to respond to mouse clicks in some other interesting and fun way.

We are doing this assignment so that you will have more practice using programming constructs like if statements and function calls, to help prepare you for the harder problem of implementing ray tracing in your fragment shader.

All assignments should be completed before the start of Thursday's class.


Sep 16 lecture: Ray tracing, part 1
 

In this lecture we went over the fundamental math for tracing a ray to a sphere (below), and then we watched The Centrifuge Brain Project by Till Nowak.

Rather than working everything through in three dimensions, we worked it through for the 2D case -- tracing a ray in the plane to a circle:

A ray in the plane is given by (V + t W), where ray origin V is the column vector [Vx,Vy,1], and ray direction W is the unit length (that is, "normalized") relative vector [Wx,Wy,0].

So any point at a distance t along the ray has [x,y] coordinates [ Vx + t * Wx , Vy + t * Wy ].

A circle, described by center [Cx,Cy] and radius r, consists of all points [x,y] for which (x - Cx) * (x - Cx) + (y - Cy) * (y - Cy) - r * r = 0.

To get the solution (if there is one) to the intersection of the ray with the circle, we can plug the [x,y] coordinates along the ray into the circle equation. This will give us an equation where everything is constant except for t:

( Vx - Cx + t * Wx ) * ( Vx - Cx + t * Wx ) + ( Vy - Cy + t * Wy ) * ( Vy - Cy + t * Wy ) - r * r = 0.

We can now separate out terms to form a quadratic polynomial in t:

t * t * (Wx * Wx + Wy * Wy) +
t * ( 2 * (Vx - Cx) * Wx + 2 * (Vy - Cy) * Wy ) +
(Vx - Cx) * (Vx - Cx) + (Vy - Cy) * (Vy - Cy) - r * r = 0

Now we can observe several things that make things simpler. For one thing, W is unit length, so (Wx * Wx + Wy * Wy) is just 1.0.

For another thing, all the other products can be expressed as inner products. So our quadratic polynomial can just be expressed as:

t * t + 2 * t * ((V-C)·W) + ((V-C)·(V-C) - r * r) = 0

Solving via the quadratic equation we get:

t = -B ± sqrt(B * B - C)

where B = (V-C)·W and C = (V-C)·(V-C) - r*r.

We now have a way of knowing what will happen if we try to shoot this ray at this circle. The ray will miss the circle when this equation has no real roots. That is, when B * B - C < 0.

If the ray hits the circle, it will enter the circle where t = -B - sqrt(B * B - C) and it will exit the circle where t = -B + sqrt(B * B - C).

Notice that nothing about this equation relies on it being two dimensional. Everything here will work equally well if we are ray tracing a three dimensional ray to a sphere.
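
To make this concrete, here is the same intersection math written out in JavaScript (a sketch only -- in your fragment shader you would write the analogous GLSL, using vec3 and the built-in dot() function; the function name and the choice to return both roots are mine):

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// V: ray origin.  W: unit length ray direction.  C: sphere center.  r: radius.
// Returns [tEnter, tExit], or null if the ray misses.
function raySphere(V, W, C, r) {
   var D = [V[0]-C[0], V[1]-C[1], V[2]-C[2]];   // V - C
   var B = dot(D, W);                           // B = (V-C)·W
   var C2 = dot(D, D) - r*r;                    // the "C" in the quadratic above
   var disc = B*B - C2;
   if (disc < 0)
      return null;                              // no real roots: the ray misses
   return [-B - Math.sqrt(disc), -B + Math.sqrt(disc)];
}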


Sep 18 lecture: Ray tracing, part 2
 

We went over the math for how to form a ray at every pixel (below), and then we saw the historically pivotal Kitchen Scene from Jurassic Park.

Forming a ray at a pixel:

At every pixel we need to form a ray to shoot into the scene.

The origin of the ray will be the "camera", which is located along the positive z axis. The further away the camera is from the x,y plane -- that is, the longer the "focal length" of the camera -- the more telephoto the view. The nearer the camera is to the x,y plane, the more wide-angle the view.

If we set the focal length to some value fl, then the camera is located at point V = (0,0,fl,1).

If we shoot a ray from this camera through a pixel at point (x,y,0,1) on the x,y image plane, we need to calculate the unit length direction W for this ray. We do this in two steps:

  1. Compute a relative vector in the same direction as W by subtraction: D = (x,y,0,1) - (0,0,fl,1) gives us relative vector (x,y,-fl,0).

  2. Normalize D to get the unit length direction vector: W = D / sqrt(D·D).
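
Here is a sketch of those two steps in JavaScript (names are mine; in your fragment shader you would compute the same thing in GLSL, with x and y typically ranging from -1 to 1 across the image):

function pixelRay(x, y, fl) {
   var V = [0, 0, fl];                        // ray origin: the camera at (0,0,fl)
   var D = [x, y, -fl];                       // step 1: D = (x,y,0) - (0,0,fl)
   var len = Math.sqrt(D[0]*D[0] + D[1]*D[1] + D[2]*D[2]);
   var W = [D[0]/len, D[1]/len, D[2]/len];    // step 2: W = D / sqrt(D·D)
   return { V: V, W: W };
}
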
Your assignment, due by class on Thursday September 25, is to implement a very simple ray tracer, and show that you can trace a ray to a scene that contains two spheres. The result should look something like a circular disk.

I recommend encoding each sphere as a vec4, with the first three components of the vec4 storing the center point (cx,cy,cz) of the sphere, and the fourth component storing the sphere's radius r.

I strongly recommend that you implement a function in your fragment shader that takes three arguments: a ray origin V, a ray direction W, and a vec4 containing the cx,cy,cz,r of the sphere.

Your function should return the value of t -- the distance along the ray -- where the nearest intersection occurs between the ray and the sphere.

If your ray misses the sphere entirely, you can return a very large positive number, such as 10000.0.

Each time your program is called, you will need to form a ray for that pixel, then trace the ray to each of the two spheres. If your ray hits both spheres, then the one "in front" is the one with the smaller value of t.

I suggest you color the background one color, and each of the two spheres a different color. Position your spheres so that the rendered scene will show one sphere with a larger value of cz (in other words, nearer to the camera) partly obscuring the other sphere.


Sep 23 lecture: A look at Chalktalk
 

We took a closer look at the Chalktalk research presentation tool that I've been using to teach this class.

[Screenshot of Chalktalk being used to simulate a musical instrument]


Sep 25 lecture: Guest lecture and VR demo
 

We had a guest lecture by Kristofer Schlachter, a Ph.D. student in the NYU Department of Computer Science. He talked about his experiences in the computer game industry, and his research into advanced ray tracing techniques using the latest features of the GPU.

In class, Kris also went over a code example I made showing how to create an initialization and update function in JavaScript, and how to pass arrays from JavaScript into the fragment shader.

We then had a demo of virtual reality, including VR shared between multiple people, by Zhu Wang, who is a research scientist in our lab. The demo you saw was implemented by Zhu.

Your assignment, due by class on Thursday October 2, is to modify your sphere tracing scene so that it starts to make use of arrays, using my code example as a guide. There are a number of ways you can do this. One is to specify your sphere data in your JavaScript, and then pass it into your fragment shader each frame. Note that this will allow you to start doing animation logic in your JavaScript code, which is a good place for it.

The reason we are doing this is to give you practice with more powerful GPU programming features, as we continue to learn more about ray tracing.

Feel free to think of other creative ways to use arrays passed from JavaScript into the fragment shader to make your assignment more interesting.


Sep 30 lecture: Surface reflectance, part 1
 

We went over the basics of the Phong reflectance algorithm, a simple approximation to how surfaces interact with light, originally developed by Bui Tuong Phong. Phong reflectance consists of three components: Ambient, Diffuse and Specular.

The Ambient component uses a single color to approximate a surface's response to the light that is bouncing around the room.

The Diffuse component describes a perfectly diffuse Lambert reflector, which attenuates in brightness as the surface normal tilts away from the direction of a light source. The diffuse component is given by D_rgb * (N · Ldir_i) * Lrgb_i, where Ldir_i is the direction of light source L_i and Lrgb_i is the rgb color of light source L_i.

The Specular component approximates how light bounces off a shiny surface. Because the shiny surface is not quite mirror smooth, the reflected light spreads out. A power term p is used to vary the apparent shininess: the higher the value of p, the shinier the surface appears. The specular component is given by S_rgb * (R · Ldir_i)^p * Lrgb_i, where R is the reflection of the viewer's direction.

In class I described the specular term slightly differently (I used the reflection of the light direction), but this variation will be more useful to you, because this way you will only need to calculate R once, and then you can keep using the same R vector for all of your light sources.
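
Putting the three components together, here is a sketch of the full Phong computation in JavaScript (plain 3-element arrays stand in for rgb colors and vectors; the material and light structures are made-up names for illustration, not code from class):

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// N: unit surface normal.  R: reflection of the viewer's direction.
// mat: { ambient, diffuse, specular, power }.  lights: array of { dir, rgb }.
function phong(N, R, mat, lights) {
   var color = mat.ambient.slice();                   // start with the Ambient term
   for (var i = 0; i < lights.length; i++) {
      var L = lights[i];
      var d = Math.max(0, dot(N, L.dir));             // Lambert term: N · Ldir_i
      var s = Math.pow(Math.max(0, dot(R, L.dir)), mat.power);  // (R · Ldir_i)^p
      for (var k = 0; k < 3; k++)
         color[k] += (mat.diffuse[k] * d + mat.specular[k] * s) * L.rgb[k];
   }
   return color;
}

Note that R is computed once and then reused for every light source, as described above.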


Oct 02 lecture: Surface reflectance, part 2
 

In class we built an example shader that shows part of the Phong reflectance algorithm in action. It implements only the Ambient and Diffuse components, not the Specular component. That code is here.

I also made a slight fix to the support library in that folder, which is now called gl_lib3.js, so that it will print more informative error messages to the JavaScript console.

We also went over how to compute a reflection direction vector, given the vector toward an incoming direction, and the surface normal.

For example, if a view ray is coming in from incident direction I, then the outgoing reflection direction is given by 2 N (N · I) - I.

When you incorporate this into your ray tracer, incident direction I will just be the negative of your ray's W vector.

Finally, we watched Paul Debevec's seminal 1999 computer animation Fiat Lux.

Your assignment, which is due before class on Thursday Oct 16, will consist of two parts.

The first part is to extend your ray tracer so that it implements the full Phong reflectance algorithm. Your scene should have multiple spheres and multiple light sources, and the material of each sphere should have an Ambient, Diffuse and Specular component.

The second part of your assignment will be to participate in a group project in class this coming Tuesday and Thursday (Oct 7 and Oct 9). After those sessions, try to work with the Chalktalk code that was distributed in class, to make a simple working prototype of the ideas you sketched out in class.

In our Thursday Oct 16 class we will spend some of the class time going over these sketches and prototypes.


Oct 16 lecture: Ray reflection + procedural texture
 

Reflecting rays

To create mirror-like reflection in ray tracing, we shoot another ray, starting from the surface point S, and see whether that ray hits another object. Since the ray (V + t W) that is coming into the surface is going in direction W, the direction of the emerging reflected ray is going to be the mirror image of -W:

W' = 2 N (N · -W) - (-W) = -2 N (N · W) + W

The origin of the reflected ray is going to be just outside of the surface. A good way to find such a point is to use a small value ε, such as ε = 0.001, and then use it to move slightly out of the surface:

V' = S + ε W'
When you shoot this reflected ray into the scene, you can mix the resulting color together with the result of your surface's original Phong reflectance color. The more of the reflected ray color that you mix into the final color, the more "mirror-like" will be the final appearance of the object.
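
In JavaScript, those two formulas might look like this (a sketch; the vector names match the notes above):

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// S: surface point.  N: unit surface normal.  W: incoming ray direction.
function reflectRay(S, N, W) {
   var d = dot(N, W);
   var W2 = [W[0] - 2*d*N[0], W[1] - 2*d*N[1], W[2] - 2*d*N[2]];    // W' = -2N(N·W) + W
   var eps = 0.001;
   var V2 = [S[0] + eps*W2[0], S[1] + eps*W2[1], S[2] + eps*W2[2]]; // V' = S + εW'
   return { V: V2, W: W2 };   // the reflected ray (V' + t W')
}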

Background gradient

Of course many rays will end up missing all of the objects, and these are rays that end up flying off into the background. Rather than make the background black, you can compute a color that suggests a more interesting background.

One way to do this is to create a color gradient, using the y component of the ray, so that color appears to gradually change as a function of the latitude of the background direction.
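
For example (a sketch, with made-up colors), you might linearly interpolate between a ground color and a sky color based on the y component of the ray direction:

// W: the unit direction of a ray that missed everything.
function background(W) {
   var t = 0.5 + 0.5 * W[1];                // map y from [-1,1] to [0,1]
   var ground = [0.3, 0.2, 0.1], sky = [0.2, 0.5, 1.0];
   return [ ground[0] + t * (sky[0] - ground[0]),
            ground[1] + t * (sky[1] - ground[1]),
            ground[2] + t * (sky[2] - ground[2]) ];
}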

Procedural texture

You can add procedural texture to any component of the ray tracing algorithm, to make surfaces look more interesting and textured. For example, you can vary the ambient or diffuse components of your surface, based on noise(S) (where S is the surface point), to create a mottled appearance.

You can also try adding noise to vary the surface normal N, to create the appearance of a non-smooth surface.

To generate procedural noise within your fragment shader, you can include this code into your fragment shader to implement noise, as well as a fractal sum of noise and "turbulence", which is a fractal sum of the absolute value of noise.

 

Your assignment, which is due before class on Thursday Oct 23, is to implement ray reflection and a background color gradient, and also to incorporate noise-based procedural texture into your scene.

You can create multiple levels of ray reflection by using a for loop in your fragment shader, but remember that the loop will actually be unrolled by the compiler, so you can only "loop" for an explicitly specified number of steps.


Oct 21 lecture: Shadows, refraction, planes and booleans, part 1
 

We started the class by going over a complete example of Phong reflectance. I've included that version of the code here.

We then went over shadows, refraction, ray tracing to planes and booleans at a high conceptual level.

The essential idea behind ray tracing shadows is that you cast a "query ray" (a ray to find out information) from the surface point S into the direction of each light source Li. If the ray into a given light direction hits any other object, then S is in shadow from that light, and you should not add in either the diffuse or specular components of that light source.
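
Sketched in JavaScript (all names here are mine): assuming you already have a function intersectScene(V, W) that returns the nearest hit distance t, or a very large number on a miss, the shadow test is just:

var EPS = 0.001;

// S: surface point.  Ldir: unit direction toward light source i.
function isInShadow(S, Ldir, intersectScene) {
   var V = [S[0] + EPS*Ldir[0], S[1] + EPS*Ldir[1], S[2] + EPS*Ldir[2]];
   return intersectScene(V, Ldir) < 10000.0;   // hit something on the way to the light?
}

If isInShadow() returns true for a given light, skip that light's diffuse and specular terms.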

Refraction can occur when light enters a transparent object, such as water, glass or plastic. When a ray of light enters a transparent material, it may slow down, and the amount that the light slows down is referred to as that material's refractive index n. For example, if n = 1.5, that means that light is traveling only 2/3 as fast as it travels in a vacuum. If C is the speed of light in a vacuum, then the speed of light in a medium of refractive index n is given by (C / n).

At the surface between two transparent media (such as air and glass), light will bend, or refract. In Thursday's lecture we will go over this in more detail.

Up until now the only shape that we have ray traced to has been a sphere. We can ray trace to any shape whose surface can be described mathematically. For example, we can ray trace to any plane, using the general linear equation for a plane: ax + by + cz + d = 0. Note that this linear equation is described by a vector P with four coefficients (a,b,c,d), and can be thought of as an inner product: P · X, where X = (x,y,z,1).

Given a ray X = (V + t * W), we can find the solution for P · X = 0 the same way we did for spheres: by substituting V+tW into the equation.

This gives us: P · (V + t * W) = (P · V) + t * (P · W) = 0

From this, it is easy to see the solution: t = -(P · V) / (P · W)

In a sense, this equation defines the surface of an infinite half space volume. The set of points X for which P · X is negative is the "inside" of this half space, and the set of points X for which P · X is positive is its "outside".

The surface normal of the plane is the same everywhere, and is given by Normal(P) = normalize(a,b,c).

We can take boolean intersections of half spaces to create finite shapes, such as cubes. For example, a unit cube is defined as the intersection of six half spaces (two to bound x, two more to bound y, and another two to bound z).

If we shoot a ray (V + t * W) at a shape that is defined as the intersection of a set of half spaces Pi, we need to do two things:

  1. Compute the intersection of the ray with each Pi, identifying each plane as an entering plane or an exiting plane for this ray. An entering plane is one where the ray enters the half space. This will occur when the surface normal points toward the ray origin. In other words, when Normal(Pi) · W < 0.

    An exiting plane is one where the ray exits the half space. This will occur when the surface normal points away from the ray origin. In other words, when Normal(Pi) · W > 0.

  2. Find what portion of the ray is inside the intersection volume. To do this, we take the maximum tI of the roots for all the entering planes, and the minimum tO of the roots for all the exiting planes.

    If tI < tO, then the ray has intersected the shape.
    If tI > tO, then the ray has missed the shape.
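
Here is that procedure sketched in JavaScript (each plane P is a 4-element array [a,b,c,d]; rays parallel to a plane, where P · W = 0, are ignored for simplicity):

// V: ray origin.  W: unit ray direction.  planes: array of half spaces.
// Returns [tIn, tOut] if the ray intersects the shape, or null if it misses.
function rayHalfspaces(V, W, planes) {
   var tIn = -10000.0, tOut = 10000.0;
   for (var i = 0; i < planes.length; i++) {
      var P = planes[i];
      var pv = P[0]*V[0] + P[1]*V[1] + P[2]*V[2] + P[3];   // P · V
      var pw = P[0]*W[0] + P[1]*W[1] + P[2]*W[2];          // P · W
      var t = -pv / pw;
      if (pw < 0) tIn  = Math.max(tIn,  t);   // entering plane
      else        tOut = Math.min(tOut, t);   // exiting plane
   }
   return tIn < tOut ? [tIn, tOut] : null;    // tIn < tOut means the ray hit
}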

At the end of this class, we watched Bruce Branit's iconic 2007 short film Worldbuilder.


Oct 23: Shadows, refraction, planes and booleans, part 2 + intro to matrices  

In this class we went over refraction in a bit more detail. In particular, we reviewed Snell's Law, which describes exactly how much light bends when it crosses from a medium with index of refraction n1 to a medium with index of refraction n2. Snell's Law is given by:

n1 * sin(θ1) = n2 * sin(θ2)

where θ1 is the angle of deviation from the surface normal of the entering ray, and θ2 is the angle of deviation from the surface normal of the exiting ray.
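
In practice, your ray tracer needs the refracted direction as a vector. Here is the standard vector form of Snell's Law, sketched in JavaScript (this is the same formula behind GLSL's built-in refract() function; W is the unit incoming ray direction, N the unit surface normal, and eta = n1/n2):

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

function refractDirection(W, N, eta) {
   var c = -dot(N, W);                  // cos(θ1)
   var k = 1 - eta*eta*(1 - c*c);       // 1 - sin²(θ2)
   if (k < 0)
      return null;                      // total internal reflection
   var m = eta*c - Math.sqrt(k);
   return [eta*W[0] + m*N[0], eta*W[1] + m*N[1], eta*W[2] + m*N[2]];
}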

We also looked a bit more closely at booleans of other shapes, such as spheres. For example, if you want to render a flying saucer shape, you can ray trace the intersection of two spheres. Along any given ray, the first sphere will have roots I1 and O1 where it enters and exits, respectively. The second sphere will have roots I2 and O2 where it enters and exits, respectively.

So the segment along the ray which describes the intersection of the two spheres is given by:

tI = max(I1, I2)
tO = min(O1, O2)
From this, you can follow the same rule for determining whether the ray has intersected the shape:

If tI < tO, then the ray has intersected the shape.
If tI > tO, then the ray has missed the shape.

Finally we covered six useful primitive operations for linear transformation in three dimensions: Identity, Translation, X Rotation, Y Rotation, Z Rotation and Scale.

The key to each of these is to define a Matrix class, which stores a 4×4 matrix of values.

For the Identity operation, we set this matrix to:

identity()

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
For each of the other five primitive operations, we first create an internal transformation matrix of values, and then we do a matrix multiply to modify the values in our Matrix object.

Here are the respective transformation matrices:

translate(a,b,c)

1 0 0 a
0 1 0 b
0 0 1 c
0 0 0 1

rotateX(a)

1 0      0       0
0 cos(a) -sin(a) 0
0 sin(a) cos(a)  0
0 0      0       1

rotateY(a)

cos(a)  0 sin(a) 0
0       1 0      0
-sin(a) 0 cos(a) 0
0       0 0      1

rotateZ(a)

cos(a) -sin(a) 0 0
sin(a) cos(a)  0 0
0      0       1 0
0      0       0 1

scale(a,b,c)

a 0 0 0
0 b 0 0
0 0 c 0
0 0 0 1
 
Each of these five transformation matrices is then multiplied into the existing 4×4 matrix, as described above.

You should also be able to call scale(a) with only one argument, to effect uniform scaling. In your implementation, check to see whether the second argument b is undefined. If so, then set both b and c equal to a.

Finally, it is necessary to apply the linear transformation to points in space, which requires implementing a function matrix.transform(point).

Because matrix multiplication is associative, you can apply a sequence of these primitive operations by successively multiplying them into a single matrix, and then use that one matrix to transform all of your points.

At the end of this class, we looked at a very inspiring real time WebGL demo with caustics and physics by Evan Wallace.

Your assignment, which is due before class on Thursday Oct 30, is going to be very easy, in consideration of the fact that you've just gone through mid-terms in all your other classes. Make a first stab at implementing a matrix class. Within this class you should implement identity(), translate(a,b,c), rotateX(a), rotateY(a), rotateZ(a) and scale(a,b,c), and satisfy yourself that those functions all produce correct output.

Doing that much will prepare you properly for what we will be covering next in class.
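
Here is a minimal sketch of what such a class might look like in JavaScript (the column-major storage order, which matches WebGL's convention, and the method names are my choices; rotateY and rotateZ follow the same pattern as rotateX):

function Matrix4x4() {
   this.m = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];   // 16 values, column-major

   this.identity = function() {
      this.m = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
   }

   // this ← this × b, using the row-by-column rule we go over in the next lecture
   this.multiply = function(b) {
      var a = this.m, c = [];
      for (var col = 0 ; col < 4 ; col++)
         for (var row = 0 ; row < 4 ; row++) {
            var sum = 0;
            for (var k = 0 ; k < 4 ; k++)
               sum += a[k*4 + row] * b[col*4 + k];
            c[col*4 + row] = sum;
         }
      this.m = c;
   }

   this.translate = function(a, b, c) {
      this.multiply([1,0,0,0, 0,1,0,0, 0,0,1,0, a,b,c,1]);
   }

   this.rotateX = function(a) {
      var C = Math.cos(a), S = Math.sin(a);
      this.multiply([1,0,0,0, 0,C,S,0, 0,-S,C,0, 0,0,0,1]);
   }

   this.scale = function(a, b, c) {
      if (b === undefined) b = c = a;     // scale(a) means uniform scaling
      this.multiply([a,0,0,0, 0,b,0,0, 0,0,c,0, 0,0,0,1]);
   }

   // apply the transformation to a point p = [x, y, z]
   this.transform = function(p) {
      var m = this.m;
      return [ m[0]*p[0] + m[4]*p[1] + m[ 8]*p[2] + m[12],
               m[1]*p[0] + m[5]*p[1] + m[ 9]*p[2] + m[13],
               m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14] ];
   }
}

A quick sanity check: a new Matrix4x4() followed by translate(1,0,0) should transform the point [0,0,0] to [1,0,0].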


Oct 28: Drawing on the 2D canvas  

The HTML5 Canvas object provides a very handy way to do 2D graphics. We are going to be using it in the next week or two as a way for you to test out your Matrix routines.

Here is the official on-line reference to the HTML5 Canvas object. Feel free to explore any of its functions and capabilities (eg: setting lineWidth), in addition to the ones in the example I showed in class.

Here is the example we did in class.

We then went over matrix multiplication. To multiply two 4×4 matrices A and B, you can think of A as a vertical stack of 4 horizontal vectors, and B as a horizontal sequence of 4 vertical vectors:

A0,0 A1,0 A2,0 A3,0
A0,1 A1,1 A2,1 A3,1
A0,2 A1,2 A2,2 A3,2
A0,3 A1,3 A2,3 A3,3
×
B0,0 B1,0 B2,0 B3,0
B0,1 B1,1 B2,1 B3,1
B0,2 B1,2 B2,2 B3,2
B0,3 B1,3 B2,3 B3,3

The result C = A×B is given by taking the dot product of every combination of the rows of A and the columns of B. There are 4*4 = 16 such combinations, corresponding to the 4 rows and 4 columns of the result matrix C.

At the end of the class, we first saw a selection of scenes from Minority Report, which shows a "vision of the future" from 2002.

Then we saw the desk of the future scene from TRON, which shows an analogous vision twenty years earlier.

Your assignment, due by class on Thursday November 6, is to just have fun with making cool animations using the Canvas object. Go crazy with it, make creatures and houses, science fiction landscapes, words and poetry, pretty much anything you want. The key is to explore and try things out.

Our goal for the following week will be to start using the Canvas element as a way of looking at the results of Matrix transformations and to start to experiment with building shapes out of triangles, and then things will get more serious.

So take this opportunity to just play around and have fun while you can! :-)


Nov 04: Using matrices for animation  

In class we created an animation of two walking legs.

In the version that you can download, I've replaced the matrix library matrix4x4.js with a stub in which the methods translate, rotateX, etc., don't do anything. If you substitute in the fully functional version that you implemented, you should see the walking legs show up, just like in class.

Your assignment, due by class on Thursday November 13, is to use your fully functional implementation of matrices in place of the non-functioning one that is in the folder now.

One note: implementing translate, rotate and scale requires you to multiply two matrices. There are two possible orders for this matrix multiply. Matrix multiplication is not commutative. So in general, the following two matrix operations produce different results:

A ← A × B

A ← B × A

One ordering will produce sensible results, with progressive transformations going from global to local, as we saw in class.

But if you multiply them in the other order, you won't get sensible results. You'll know if you got the order wrong, because you won't see a pair of walking legs on your web page.

Feel free to try it both ways, to see which argument order for matrix multiply works properly.


Nov 06: Modeling 3D parametric shapes, part 1  

In class we explored different ways of generating 3D shapes. Here is the code we ended up with by the end of class.

In the final version of that code, we showed how to create a parametric surface over the two parameters u and v, where 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1.

In particular, we used this technique to create a longitude/latitude globe shape. But we could have used the same technique to create any parametric surface that can be described by two parameters.
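
As a sketch of the idea (function names are mine), the globe comes from a parametric function that maps (u,v) to a point on the unit sphere, with u sweeping around in longitude and v sweeping from pole to pole:

function sphere(u, v) {
   var theta = 2 * Math.PI * u;         // longitude: 0 to 2π
   var phi = Math.PI * (v - 0.5);       // latitude: -π/2 to π/2
   return [ Math.cos(phi) * Math.cos(theta),
            Math.cos(phi) * Math.sin(theta),
            Math.sin(phi) ];
}

// sample any parametric function f(u,v) on an n×n grid of vertices
function meshVertices(f, n) {
   var vertices = [];
   for (var j = 0 ; j <= n ; j++)
      for (var i = 0 ; i <= n ; i++)
         vertices.push(f(i / n, j / n));
   return vertices;
}

Swapping in a different f(u,v) gives a different surface, which is what makes this technique so general.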

In the next class we will continue exploring this technique, and see how it can be used to generate different sorts of shapes.

Between now and Thursday Nov 13, look over that example and familiarize yourself with it.

To get it fully functional, you are going to need to replace the stub matrix library matrix4x4.js with a fully functional one, just as you are already doing for the previous in-class example.


Nov 11: Modeling 3D parametric shapes, part 2  

We experimented further in class with how to create 3D parametric shapes and draw them to a canvas. First we broke the algorithm into two parts: (1) creating a mesh; (2) rendering the mesh.

Then we refined our globe example, and also created a torus.

Finally, we added some time-varying procedural displacement texture. The end result is here.


Nov 13: Modeling and animating 3D parametric shapes  

We spent more time exploring how to make 3D parametric shapes, including superquadrics, and how to make a cylinder as a single parametric surface. We also looked some more at procedural displacement textures.

We also looked at how you might do procedural displacement texturing in a vertex shader, as a look-ahead to what comes next, and saw that we would also need to deal with adjusting the surface normal, by taking the discrete derivative of the function used to displace the surface.
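
Here is a sketch of that normal adjustment in JavaScript (displaced(u,v) is assumed to return a point on the surface after displacement; the helper names are mine):

function cross(a, b) {
   return [ a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0] ];
}
function normalize(a) {
   var s = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
   return [a[0]/s, a[1]/s, a[2]/s];
}

// approximate the surface normal by taking discrete derivatives in u and v
function normalAt(displaced, u, v) {
   var e = 0.001;
   var p0 = displaced(u - e, v), p1 = displaced(u + e, v);
   var q0 = displaced(u, v - e), q1 = displaced(u, v + e);
   var du = [p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]];   // tangent along u
   var dv = [q1[0]-q0[0], q1[1]-q0[1], q1[2]-q0[2]];   // tangent along v
   return normalize(cross(du, dv));                    // normal = du × dv
}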

In class we developed two examples, canvas5.zip and canvas6.zip.

Your assignment, due by class on Thursday November 20, is to put together the previous two assignments to create an interesting animated scene with fun shapes.

For example, you might make a house, or a tree, or a person, or a dog or a car. Try to think of something that tells a little story (eg: the sun rises in the morning and the people wake up).

Scaled globes and cylinders are very good for making limbs of people and animals and trees.

Have fun with it!!


Nov 18: Superquadric cylinders, subdivision spheres + perspective  

In class we showed how a parametric cylinder can also be defined as a superquadric. We also added perspective, changing x,y,z via the following perspective linear transformation:

z' ← fl / (fl - z)       or, equivalently:     1 / (1 - z/fl)
x' ← x * z'
y' ← y * z'

where fl is the "focal length" of our virtual camera -- the distance of the camera from the origin along the positive z axis.

All of this is in canvas7.zip

We also showed several other ways to create a sphere. First we used six meshes to form a cube shape, and then "inflated" the vertices of the meshes to form a sphere shape.

Then we used a subdivision technique. We started with eight equilateral triangles, one for each octant, and subdivided each triangle to add more vertices. This shape was then inflated, so that rather than forming an octahedron it formed a sphere.

The result is in canvas8.zip


Nov 20: Even more ways to make a sphere + Vertex as object  

You can only get so far using arrays for vertices. Eventually you want to make a Vertex be a smart object, with its own access methods and different kinds of data fields.

Using the subdivided sphere as an example, we created a Vertex object type.

The result is in canvas9.zip


Nov 25: Perspective as linear transform, clipping, Object3D and Geometry  

We showed in class that the perspective operations

z' ← 1 / (1 - z/fl)
x' ← x * z'
y' ← y * z'

are actually just the following linear transformation, followed by a projection from (x,y,z,w) down to (x/w, y/w, z/w):

1 0 0 0
0 1 0 0
0 0 0 1
0 0 -1/fl 1
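
You can check that the matrix agrees with the formulas: multiplying (x,y,z,1) by this matrix gives (x, y, 1, 1 - z/fl), and dividing through by w reproduces x' = x * z', y' = y * z' and z' = 1/(1 - z/fl). As a sketch in JavaScript:

function perspective(x, y, z, fl) {
   var w = 1 - z / fl;            // the w produced by the matrix
   return [x / w, y / w, 1 / w];  // (x/w, y/w, z/w)
}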

We also discussed, at a high level, the fact that when the z value of a vertex gets very near to fl, and eventually moves to behind the camera plane z=fl, the projected vertex position can blow up, which is not useful. In order to avoid this, modern GPUs contain triangle z-clipping logic, which clips triangles to just in front of the camera plane. This clipping can result in a triangle turning into a quadrangle. In this case, the resulting shape may be sent through the GPU as two separate triangles.

We also talked about how you can describe any geometric shape as a list of vertices and a corresponding list of faces. Each vertex contains (x,y,z) location plus some extra information that we may need, such as surface normal at that vertex.

Each face is a triangle, which is stored as an array of indices, where each index is just the index of some vertex in the vertices array. When viewed from the outside, the vertices of a face should form a counterclockwise loop.

One thing that's tricky about all this is that we need to distinguish between a vertex on a curved surface, where the surface normal varies continuously, and the vertices across an edge, where there is a discontinuity of surface normals.

The way we do this is by using a single vertex for a curved surface, which is shared between adjacent faces, but using different vertices across an edge.

So, for example, as we go around the curve of a cylinder, we can share vertices across successive faces. But across the edge between the tube of the cylinder and the top or bottom of the cylinder, we should use different vertices.

A cube is a very simple complete example of a shape that can be described by vertices and faces. Because a cube has edges separating its six faces, we don't share vertices across those six faces. Instead, each face has its own distinct vertices. So a cube should have 24 vertices: three vertices at each of its eight corners.

var vertices = [
   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1], [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],
   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1], [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],
   [-1,-1,-1], [ 1,-1,-1], [-1, 1,-1], [ 1, 1,-1], [-1,-1, 1], [ 1,-1, 1], [-1, 1, 1], [ 1, 1, 1],
];
Geometrically, the above vertices are arranged as follows:
       2-------3
      /|      /|
     6-------7 |
     | |     | |
     | 0-----|-1
     |/      |/
     4-------5
If we were storing four-sided faces, we could then describe the six sides as follows:
var faces = [
   [ 0,  4,  6,  2], // negative x face
   [ 1,  3,  7,  5], // positive x face
   [ 8,  9, 13, 12], // negative y face
   [10, 14, 15, 11], // positive y face
   [16, 18, 19, 17], // negative z face
   [20, 21, 23, 22], // positive z face
];

But to make things easier to send to the GPU, we make all faces triangles. So each face of a cube would actually be stored as two triangles:

var faces = [
   [  0,  4,  6 ], [  6,  2,  0],  // [ 0,  4,  6,  2]
   [  1,  3,  7 ], [  7,  5,  1],  // [ 1,  3,  7,  5]
   [  8,  9, 13 ], [ 13, 12,  8],  // [ 8,  9, 13, 12]
   [ 10, 14, 15 ], [ 15, 11, 10],  // [10, 14, 15, 11]
   [ 16, 18, 19 ], [ 19, 17, 16],  // [16, 18, 19, 17]
   [ 20, 21, 23 ], [ 23, 22, 20],  // [20, 21, 23, 22]
];

For Thursday, December 4, your assignment is to figure out how to describe various 3D shapes as a list of vertices and a list of triangular faces.

Shapes you should do this for are: cylinder, sphere, cube (which I already showed you how to do, above), octahedron, torus.

You can try other shapes as well if you are feeling ambitious. For example, can you make a shape that looks like a house? An animal? A tree? 3D letters?

Remember, your triangles all need to be oriented counterclockwise, when viewed from the outside of the shape.

Try making an interesting animated scene using your vertices/faces shapes.
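
To help you get started, here is a hedged sketch for one of these shapes: the tube of a cylinder, with vertices shared around the top and bottom rings (the end caps, which need their own unshared vertices across the edges, are left for you):

function cylinderTube(n) {
   var vertices = [], faces = [];
   for (var i = 0 ; i < n ; i++) {
      var theta = 2 * Math.PI * i / n;
      var x = Math.cos(theta), y = Math.sin(theta);
      vertices.push([x, y, -1]);    // vertex 2i   : bottom ring
      vertices.push([x, y,  1]);    // vertex 2i+1 : top ring
   }
   for (var i = 0 ; i < n ; i++) {
      var a = 2*i, b = 2*i + 1;
      var c = 2*((i+1) % n), d = 2*((i+1) % n) + 1;
      faces.push([a, c, d]);        // two counterclockwise triangles
      faces.push([d, b, a]);        // for each quad of the tube
   }
   return { vertices: vertices, faces: faces };
}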


Dec 02: Principles of character animation  

We created a simple humanoid jointed stick figure that is animated entirely by length constraints and simple forces.

We also looked at this example of a procedurally animated walking character. To run it, you need to add trusted site http://mrl.nyu.edu to your Java preferences. On a Mac, you can do that as follows:

  • Click on the Apple menu and open the System Preferences panel;
  • Click on Java;
  • Click on the Security tab;
  • Click on Edit Site List;
  • Add http://mrl.nyu.edu to the list

Dec 04: Face numbering, introduction to WebGL and three.js  

In class we went through the code to create faces for a parametric mesh.

Then we went through an example of low level code for sending vertices down to WebGL. I will upload that code to this page soon.

Then we showed how to do the same thing using the high level three.js library, which does most of the work for you. You can find three.js in the chalktalk library that I provided for you earlier this semester.
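
As a taste of three.js, here is a minimal sketch of a spinning, lit cube (this assumes three.js is loaded as the global THREE; the exact constructor names can vary slightly between versions of the library):

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// three.js builds the vertex and face lists for us -- compare with the
// hand-built cube in the Nov 25 notes above.
var cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1),
                          new THREE.MeshPhongMaterial({ color: 0x8844aa }));
scene.add(cube);

var light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(1, 1, 1);
scene.add(light);

(function animate() {
   requestAnimationFrame(animate);
   cube.rotation.y += 0.01;
   renderer.render(scene, camera);
})();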

I need to hand in the grades for this class by Wednesday December 24 before noon, so think in terms of a final project that you can complete by noon of Tuesday December 23 (since I'll need time to grade everyone). It's ok for two people to do a final project together, but remember that such a project will need to be more ambitious in scope.

Your assignment for Thursday, Dec 11 is to finish up anything you may still have unfinished from the assignments to date. This is also a good time to make any improvements or extra enhancements that you were meaning to make, but didn't get around to.


Dec 09: Discussion of final projects and advanced topics  

In this class students discussed their final project ideas, and then we had a wide-ranging discussion about various advanced topics.


Dec 11: Ray tracing to general second order surfaces  

In this lecture we went over the math for transforming second order surfaces for purposes of ray tracing. Here is a review of what we did, which also includes a little extra section at the end that shows you how to transform the surface normal (so you can do lighting and shading on your transformed surface).

Also, as requested in class, here is an excellent textbook for those who are interested in more advanced reading on the subject of computer graphics:

Computer Graphics: Principles and Practice, third edition



Dec 16: Extra class begins at 10am  

CLASS ON TUESDAY DECEMBER 16 BEGINS AT 10AM!