Course notes for November 20
Introduction to particle systems:
Examples of uses of particle systems: This week we just scratched the surface of particle systems. Next week we will go into more detail about this rich topic. Meanwhile, here's a high level introduction to the subject. Particle systems are very flexible; they can be used to simulate many natural phenomena, including water, leaves, clouds/fog, snow, dust, and stars. When they are "smeared out" so that they are rendered as trails, rather than as discrete particles, they can be used to render hair, fur, grass, and similar natural objects.

Basic mechanism: Generally speaking, particles in a particle system begin by being emitted from the surface of an "emitter" object. When a particle begins its life, it has an initial trajectory, which is usually normal to the surface of the emitter object. After that, the path of the particle can be influenced by various things, including gravity and other forces, as well as collisions with object surfaces. Particles usually have a lifetime, after which they are removed from the system. A particle can also itself be an emitter of other particles, spawning one or more other particles in the course of its lifetime. In this way, particles can be made to cascade, generating complex patterns such as flamelike shapes. All of the qualities of a particle -- its lifetime, its velocity and mass, how many particles it spawns -- can be randomly chosen values within some range. By controlling the ranges from which these various properties are chosen, artists can control the look and feel of a particle system (see the vertex shader sketch below).

History: Particle systems were first developed by Bill Reeves in the early 1980s at Lucasfilm, in the computer graphics group that later became Pixar. Their first public use was for the Genesis Effect in Star Trek II: The Wrath of Khan (1982). Since then, they have become a mainstay of computer graphics films and games.
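To make the mechanism concrete, here is a minimal sketch (not the code from class) of how a vertex shader might evaluate one particle's position from its emission parameters; the names aBirthPosition, aBirthVelocity, aBirthTime, uTime, uLifetime and uGravity are hypothetical:

   attribute vec3  aBirthPosition;  // point on the emitter surface
   attribute vec3  aBirthVelocity;  // initial trajectory, usually along the surface normal
   attribute float aBirthTime;      // when this particle was emitted

   uniform float uTime;             // current time
   uniform float uLifetime;         // age at which the particle is removed
   uniform vec3  uGravity;          // e.g. vec3(0., -9.8, 0.)
   uniform mat4  uMatrix;
   uniform mat4  uPMatrix;

   void main(void) {
      float age = uTime - aBirthTime;

      // Ballistic path: initial trajectory, bent over time by gravity.
      vec3 p = aBirthPosition + age * aBirthVelocity + .5 * age * age * uGravity;

      gl_Position = uPMatrix * uMatrix * vec4(p, 1.);
      gl_PointSize = 4.;

      // Past its lifetime, push the particle outside the clip volume so it disappears.
      if (age > uLifetime)
         gl_Position = vec4(0., 0., 2., 1.);
   }

Randomizing aBirthVelocity and uLifetime per particle, within artist-chosen ranges, is what gives the system its look and feel.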
Rendering: One nice thing about particle systems is that they are not that difficult to implement in vertex shaders. In addition to their behavior, their appearance can also be hardware accelerated. One common technique is to render each particle as a "billboard": a polygon that always faces the camera (that is, stays perpendicular to the viewing direction). This polygon is textured with a translucent image of a fuzzy spot. The effect is to make the particle look like a small gaseous sphere, at fairly low computational cost.
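As a sketch of the billboard idea (again with hypothetical attribute and uniform names): if each particle is drawn as a small quad, the vertex shader can offset each corner after transforming the particle's center into eye space, so the quad always faces the camera:

   attribute vec3 aCenter;   // particle center, shared by the quad's four corners
   attribute vec2 aCorner;   // per-corner offset: (-1,-1), (1,-1), (1,1) or (-1,1)

   uniform mat4  uPMatrix;   // projection matrix
   uniform mat4  uMatrix;    // modelview matrix
   uniform float uSize;      // billboard radius

   varying vec2 vUV;         // used to sample the fuzzy-spot texture

   void main(void) {
      // Transform only the center into eye space...
      vec4 eye = uMatrix * vec4(aCenter, 1.);

      // ...then offset the corner in eye space, so the quad faces the camera.
      eye.xy += uSize * aCorner;

      gl_Position = uPMatrix * eye;
      vUV = .5 * aCorner + .5;   // maps the corners to [0,1] texture coordinates
   }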
Linear blend skinning: In class we discussed a cheap approximation for animating the soft skin of game characters, one that can be implemented very easily in vertex shaders. In an animated character, the rigid bones of the character's articulating skeleton are generally covered by some sort of soft skin. A fairly accurate way to model this skin would be to think of each point on its surface (approximated by the vertices of a polygon mesh) as being influenced by the various rigid matrix transformations of nearby bones in the skeleton. To do this properly, one would compute a composite transformation matrix influenced by all of those individual bone matrices. In practice, however, this is a more expensive operation than can be accommodated in the real-time rendering budget of game engines. So most games instead do a kind of cheat called linear blend skinning. The basic idea is to transform each vertex by each nearby bone's matrix, as though the vertex were rigidly attached to that bone; this results in a different candidate position for each bone. These positions are then blended together into a weighted average to find the final position of the vertex. To make this work, each vertex maintains a list of [bone,weight] pairs, where the weights sum to 1.0. This technique is very fast, and very easy to implement efficiently in hardware accelerated vertex shaders, but it has some practical deficiencies. For example, twisting between the two ends of a limb can cause the middle of the limb to appear to collapse. To handle cases like this, linear blend skinned skeletons are rigged with extra bones to mitigate the effects of such problems.
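In shader form, linear blend skinning is just the weighted sum v' = w1*M1*v + w2*M2*v + ... with the weights summing to 1.0. Here is a minimal sketch for up to four bones per vertex; the uBones array and the aBoneIndex/aBoneWeight attributes are hypothetical (real engines often fetch the bone matrices from a texture instead of a uniform array):

   const int N_BONES = 32;
   uniform mat4 uBones[N_BONES];   // one rigid transformation per bone
   uniform mat4 uMatrix;
   uniform mat4 uPMatrix;

   attribute vec3 aVertexPosition;
   attribute vec4 aBoneIndex;      // up to four influencing bones
   attribute vec4 aBoneWeight;     // the [bone,weight] pairs; weights sum to 1.0

   void main(void) {
      // Blend the four bone matrices. By linearity, this is the same as
      // transforming the vertex by each bone and averaging the positions.
      mat4 blended =
          aBoneWeight.x * uBones[int(aBoneIndex.x)] +
          aBoneWeight.y * uBones[int(aBoneIndex.y)] +
          aBoneWeight.z * uBones[int(aBoneIndex.z)] +
          aBoneWeight.w * uBones[int(aBoneIndex.w)];

      gl_Position = uPMatrix * uMatrix * blended * vec4(aVertexPosition, 1.);
   }

The vertex normal would be blended the same way, using the same weights.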
Marching cubes:

Marching Squares (2D case):
Marching Tetrahedra (simpler to implement, less efficient): To avoid the big table look-up of Marching Cubes, a technique I've used is to split up each voxel into six tetrahedra. Given the same corner numbering we used for Marching Cubes, we can partition the voxel cube by "turning on" the binary bits of the numbered corners one at a time, in each of the six possible orders, giving the six tetrahedra: [0,1,3,7], [0,1,5,7], [0,2,3,7], [0,2,6,7], [0,4,5,7], [0,4,6,7]. Since a tetrahedron has only four vertices, there are only two non-trivial boundary cases: (1) the boundary is a single triangle (one or three corners inside the surface), or (2) the boundary is a four sided shape (two corners inside), which can be split into two triangles. This algorithm is less efficient than Marching Cubes, because it generally produces more triangles for each boundary cube. However, it requires much less code, and is therefore easier to program, to debug, and to port to a vertex shader.
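As a sketch of why the per-tetrahedron logic is so small (a hypothetical helper, not the code from class), the four corner densities can be packed into a 4-bit mask, and only the count of set bits determines the shape of the boundary:

   // Classify one tetrahedron by which of its four corners lie inside
   // the surface (density > 0.). Masks 0 and 15 produce no geometry;
   // one or three corners inside produce a single triangle; two corners
   // inside produce the four sided shape, split into two triangles.
   int tetCase(float d0, float d1, float d2, float d3) {
      int mask = 0;
      if (d0 > 0.) mask += 1;
      if (d1 > 0.) mask += 2;
      if (d2 > 0.) mask += 4;
      if (d3 > 0.) mask += 8;
      return mask;
   }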
Fun with vertex shaders: A vertex shader allows you to algorithmically displace each vertex of a triangle or triangle mesh any way you want. Since vertex shaders run on the GPU, they can be very fast (much faster than computations done on the CPU), so it can be very advantageous to move modeling operations off the CPU and down to vertex shaders where possible. In commercial computer games, linear blend skinning and other procedural mesh animations (such as the one I showed for the fish) are often done in vertex shaders.

Start with a simple vertex shader: In class we illustrated this with a vertex shader; a piece of that code is shown here. The code started out looking like this:

   vec3 vp = aVertexPosition;
   vec3 vn = aVertexNormal;
   gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
   vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

Then we displaced the surface, creating a ripple pattern by adding cosine wave functions:

   float amp = .05;
   float freq = 10.;
   vec3 vp = aVertexPosition;
   vec3 vn = aVertexNormal;
   float f = amp * cos(freq * vp.x);
   vp += vec3(f, 0., 0.);
   gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
   vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

This still won't be shaded properly, because we have not modified the surface normal. In class we did this by explicitly computing the analytic derivative of our displacement function, and adding that derivative to the surface normal:

   float amp = .05;
   float freq = 10.;
   vec3 vp = aVertexPosition;
   vec3 vn = aVertexNormal;
   float f = amp * cos(freq * vp.x);
   float df = amp * freq * -sin(freq * vp.x);
   vp += vec3(f, 0., 0.);
   vn += vec3(df, 0., 0.);
   gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
   vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

Computing the change in normal by finite differences: In the above case, we were able to compute the derivative directly, because our displacement function was so simple. In general, it is often too difficult to explicitly compute the derivative. For this reason, people often use finite differences to compute an approximation to the function's derivative. This can be done by evaluating the displacement function four times: first f0 at vp, then f1, f2 and f3 at (vp + ε·x), (vp + ε·y) and (vp + ε·z) respectively, where x, y and z are the unit coordinate directions and ε is some small distance.
The displacement to add to the normal vector is then the finite-difference gradient ( (f1 - f0)/ε , (f2 - f0)/ε , (f3 - f0)/ε ), which plays the same role as the analytic derivative df in the example above.
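In shader code, the finite-difference version might look like this sketch, where the helper function f and the constant eps are hypothetical, standing in for whatever displacement function is being used (here, the same ripple as above):

   const float eps = .001;

   float f(vec3 p) {
      return .05 * cos(10. * p.x);   // any displacement function works here
   }

and then, inside the body of the shader:

   vec3 vp = aVertexPosition;
   vec3 vn = aVertexNormal;

   float f0 = f(vp);
   float f1 = f(vp + vec3(eps, 0., 0.));
   float f2 = f(vp + vec3(0., eps, 0.));
   float f3 = f(vp + vec3(0., 0., eps));

   vp += vec3(f0, 0., 0.);
   vn += vec3(f1 - f0, f2 - f0, f3 - f0) / eps;   // finite-difference gradient

   gl_Position = uPMatrix * uMatrix * vec4(vp, 1.);
   vNormal = normalize((uNMatrix * vec4(vn, 0.)).xyz);

Note that f is now evaluated four times per vertex, the price paid for not needing an analytic derivative.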
Homework, due November 27
As with last week's homework, feel free to pick and choose from among the above directions for this week's homework, which is due by class on Wednesday November 27.