Course notes for Oct 2 class
A note on matrix inversion
In order to solve the general second order
equation, you will need inverse matrices.
There is a well-known algorithm for inverting
an n×n matrix by Gaussian elimination,
but alas it is not a good fit for hardware accelerated shaders.
One way to deal with this is by building
the inverse matrix in stages.
Every time you do an operation that
would transform the forward matrix,
such as translation, rotation or scale,
perform the equivalent operation to transform the
corresponding inverse matrix.
For example, suppose you want to first
translate by (tx,ty,tz),
then rotate by θ about the x axis,
then scale by (sx,sy,sz):
| 1 0 0 tx |   | 1 0  0 0 |   | sx 0  0  0 |
| 0 1 0 ty | • | 0 c -s 0 | • | 0  sy 0  0 |
| 0 0 1 tz |   | 0 s  c 0 |   | 0  0  sz 0 |
| 0 0 0 1  |   | 0 0  0 1 |   | 0  0  0  1 |
In the second matrix above,
c = cos(θ)
and
s = sin(θ).
In shader code, this might look something like:
mat4 m = identityMatrix();
m = m * translationMatrix(tx,ty,tz);
m = m * xRotationMatrix(theta);
m = m * scaleMatrix(sx,sy,sz);
You can compute the inverse of the resulting
matrix by applying the inverse of each
successive step, each time
multiplying on the left
rather than multiplying on the right:
mat4 mInv = identityMatrix();
mInv = translationMatrix(-tx,-ty,-tz) * mInv;
mInv = xRotationMatrix(-theta) * mInv;
mInv = scaleMatrix(1./sx,1./sy,1./sz) * mInv;
The product m*mInv
is guaranteed to be the identity matrix
(up to floating point round-off),
so this method does indeed compute the inverse.
Using this approach, you can even, if you like, implement functions
translate(x,y,z), rotateX(theta), rotateY(theta), rotateZ(theta) and scale(x,y,z)
that each update both a forward and an inverse transformation.
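For example, here is a minimal sketch of what such paired functions might look like, using the same hypothetical matrix helpers as the code above:

mat4 m, mInv;      // forward and inverse transforms, kept in sync

void initTransform() {
   m    = identityMatrix();
   mInv = identityMatrix();
}

void translate(float x, float y, float z) {
   m    = m * translationMatrix(x, y, z);          // forward: multiply on the right
   mInv = translationMatrix(-x, -y, -z) * mInv;    // inverse: multiply on the left
}

void rotateX(float theta) {
   m    = m * xRotationMatrix(theta);
   mInv = xRotationMatrix(-theta) * mInv;
}

void scale(float x, float y, float z) {
   m    = m * scaleMatrix(x, y, z);
   mInv = scaleMatrix(1./x, 1./y, 1./z) * mInv;
}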
Note also that when doing rotations,
the following three rotation primitives are convenient
building blocks:
Rot about x:       Rot about y:       Rot about z:

| 1 0  0 0 |       |  c 0 s 0 |       | c -s 0 0 |
| 0 c -s 0 |       |  0 1 0 0 |       | s  c 0 0 |
| 0 s  c 0 |       | -s 0 c 0 |       | 0  0 1 0 |
| 0 0  0 1 |       |  0 0 0 1 |       | 0  0 0 1 |
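For reference, here is a minimal sketch of how the first of these primitives might be implemented as a shader function (the y and z rotations follow the same pattern). Note that GLSL mat4 constructors take their sixteen arguments in column order:

mat4 xRotationMatrix(float theta) {
   float c = cos(theta);
   float s = sin(theta);
   return mat4(1., 0., 0., 0.,    // column 0
               0.,  c,  s, 0.,    // column 1
               0., -s,  c, 0.,    // column 2
               0., 0., 0., 1.);   // column 3
}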
Procedural texture
Procedural textures provide a powerful and flexible
way to create naturalistic and/or interesting
variations in surface appearance.
One nice thing about them is that they allow you
to use the (x,y,z) position of a surface point itself
to tell you where to place the texture on the surface.
This is particularly convenient for ray tracing.
As we showed in class, you can create a great variety
of naturalistic surface textures by using the
band limited noise
function (see below), which has the nice
property that it contains variations only
around a specific scale.
This property allows you to combine
calls to noise at different scales to
create a great variety of texture effects.
For example, a procedural stripe pattern
that would look very artificial by itself:
f(x,y,z) = sin(x)
can be made to look more natural by varying
the position of the stripes through the use of noise:
f(p) = sin(px + a * noise(f*p))
In the above example, you can use
f to vary the frequency of the noise pattern,
and a to vary its amplitude.
You can create variations in the stripe pattern
at many different scales by adding sums of scaled noise:
f(p) = ∑i sin(px + a * noise(2^i * f * p) / 2^i)
Also, as we discussed in class, you can vary this
to create more realistic marble patterns by adding
"creases" to the noise at each scale. You can
do this by using the absolute value
|noise(p)|
rather than
noise(p):
f(p) = ∑i sin(px + a * |noise(2^i * f * p)| / 2^i)
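A minimal sketch of this creased pattern in shader code, assuming some band-limited noise(vec3) function is available (and choosing, arbitrarily, five scales):

// Creased "marble" stripes: sum of sine stripes, each perturbed by
// absolute-value noise at scale 2^i, as in the formula above.
float marbleStripes(vec3 p, float f, float a) {
   float t = 0.;
   for (int i = 0 ; i < 5 ; i++) {
      float s = pow(2., float(i));                      // 2^i
      t += sin(p.x + a * abs(noise(s * f * p)) / s);    // creased noise at this scale
   }
   return t;
}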
These are just a few examples.
I encourage you to try your hand at making
your own custom procedural textures.
More on the noise function:
Procedural texture and Phong reflectance:
You can use procedural textures to vary any
aspect of the Phong reflectance computation.
For example, you can use it to vary the
ambient or diffuse or specular color,
or the specular power (which creates variations in surface shininess).
You can also use it to vary the surface normal,
to create the appearance of bumpy surfaces.
To do this, you can first add a procedural texture
to each of the x,y and z components of your
surface normal vector,
and then renormalize the surface normal vector to unit length.
You can also use a similar technique for
mirror-like surfaces.
Here you would use the same technique as above to
vary the surface normal,
and then use the resulting normal vector to
compute your reflection vector.
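A minimal sketch of this normal perturbation in shader code, again assuming a band-limited noise(vec3) function; the domain offsets used to get three different noise values are an arbitrary choice:

// Perturb a unit surface normal with noise to fake bumps,
// then renormalize to unit length.
vec3 bumpNormal(vec3 N, vec3 p, float freq, float amp) {
   N.x += amp * noise(freq * p);
   N.y += amp * noise(freq * p + vec3(100., 0., 0.));   // shifted domains so the
   N.z += amp * noise(freq * p + vec3(0., 100., 0.));   // three offsets differ
   return normalize(N);
}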
Refraction
Snell's Law
determines how much the light that
enters a transparent material (such as glass, water or clear plastic) will
change direction when it crosses the material's surface.
Every transparent material has an index
of refraction n,
which measures how slowly light travels
through that material.
The larger the value of n, the slower light travels through the material.
When light crosses the boundary between two materials,
its direction changes according to the rule:
n1 sin θ1 = n2 sin θ2
where θ1 and θ2 are the angles between the ray and the surface normal in each material.
The refractive index of air
is essentially 1.0, since light travels
pretty much as fast in air as it does in a vacuum.
The refractive index
of water is around 1.333,
of alcohol is around 1.36,
and of glass is around 1.5.
Diamond, at 2.42, has one of the highest
refractive indices of any transparent material.
Light passing from a less dense medium
into a more dense medium
bends toward the normal direction.
Light passing from a more dense medium
into a less dense medium
bends away from the normal direction.
When going from a more dense medium
to a less dense medium at progressively
larger angles from the normal,
at some point the critical angle is reached.
Beyond this angle the ray undergoes total internal reflection:
it simply reflects back into the more dense medium.
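In shader code, the refracted direction can be computed directly from this rule. GLSL has a built-in refract(I, N, eta) that does exactly this (with eta = n1/n2); a minimal sketch of the computation itself, assuming W and N are unit vectors, might look like:

// Refract unit direction W crossing a surface with unit normal N,
// going from a medium with index n1 into a medium with index n2.
// Returns vec3(0.) past the critical angle (total internal reflection),
// just as the built-in refract() does.
vec3 refractDir(vec3 W, vec3 N, float n1, float n2) {
   float eta  = n1 / n2;
   float cosI = -dot(N, W);                          // cosine of the incident angle
   float k = 1. - eta * eta * (1. - cosI * cosI);    // cos^2 of the refracted angle, via Snell's law
   if (k < 0.)
      return vec3(0.);                               // total internal reflection
   return eta * W + (eta * cosI - sqrt(k)) * N;
}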
Layered fog
If fog is purely a function of y,
then it is non-uniform,
and therefore visually more interesting
than uniform fog,
yet also easy to integrate along a ray.
Here is an example of a layered fog density function, as a function of y (height):
d(y) =   cos(y) + 1   if -π ≤ y ≤ π
         0            otherwise
The integral of this function with respect to y is:
dI(y) =   0                  if y < -π
          π + sin(y) + y     if -π ≤ y ≤ π
          2π                 if y > π
To compute the total density of a section of ray through this fog,
we need to integrate the density along the ray
(it doesn't matter which way -- we will be taking
the absolute value of the result).
Because the density varies only in y,
the integral along the slanted ray is the same
as the integral along a perfectly vertical ray,
if it were "stretched" in length.
The effect of this stretching is to multiply the
total density by 1/cosθ, where θ is
the angle by which the ray deviates from the vertical.
A special case occurs when the two ends
of the ray segment have the same value of y.
In this special case, we can treat the fog the same way we would treat uniform fog.
Overview of algorithm:
- Given ray origin V and surface point S:
- Compute the density integral d = |dI(Vy) - dI(Sy)| over the vertical interval Sy...Vy
- Compute the stretch due to the horizontal component: s = |V-S| / |Sy-Vy|
- The transparency of the fog from V to S is then 0.5^(s·d) (see the sketch below)
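Putting these pieces together, here is a minimal sketch of the algorithm in shader code, assuming the cos(y)+1 layered density defined above; the function names are hypothetical:

const float PI = 3.14159265;

// Fog density as a function of height y (the layered density defined above).
float fogDensity(float y) {
   return (y >= -PI && y <= PI) ? cos(y) + 1. : 0.;
}

// Its integral dI(y) with respect to y, as given above.
float dI(float y) {
   if (y < -PI) return 0.;
   if (y <= PI) return PI + sin(y) + y;
   return 2. * PI;
}

// Transparency of the fog along the ray segment from V to S.
float fogTransparency(vec3 V, vec3 S) {
   float dy = abs(V.y - S.y);
   float density;
   if (dy < 0.0001)                                 // special case: both ends at (nearly) the same height,
      density = fogDensity(V.y) * length(V - S);    // so treat the fog as uniform along the segment
   else
      density = abs(dI(V.y) - dI(S.y)) * length(V - S) / dy;   // stretch the vertical integral by 1/cos(theta)
   return pow(0.5, density);
}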
Cone tracing
The basic idea of cone tracing is to replace the ray (V,W)
by the cone (V,W,α),
where α is the radial angle at which the cone spreads out
as it moves away from its origin point V.
For any distance t along the cone's medial ray,
we consider not just the single point V+tW,
but rather the entire disk around that point of radius t tan(α).
Using a cone instead of a ray accounts for the fact that the extent of a
pixel is not a single point, but rather an area.
Cone tracing is also useful for modeling diffuse reflection and refraction, such as
reflection/refraction for non-smooth surfaces.
Tracing a cone to a sphere is slightly more complex than
tracing a ray to a sphere. In addition to either hitting or missing
the sphere, a cone can graze the sphere, in which case
we need to approximate the fraction of the cone that intersects the sphere.
As John Amanatides described in his paper
Ray Tracing with Cones,
we can quickly compute whether a cone intersects a sphere as follows:
- Given a cone with radial spread angle α and a sphere centered at C with radius R, compute the point P where the ray V+tW most closely approaches C.
- Consider the disk Dp = {P, t tan(α)} representing the cross section of the cone at distance t. This disk is centered at P and has radius t tan(α).
- Consider also the disk Dc = {C, R/cos(α)} representing the visible extent of the sphere at distance t along the cone. This disk is centered at C and has radius R/cos(α).
- The cone will not intersect the sphere if the sum of the two disk radii, t tan(α) + R/cos(α), is less than the distance |P-C| from the cone to the sphere. If the cone does intersect the sphere, then we obtain the fraction of the cone that has intersected the sphere by computing the area of intersection of disk Dp and disk Dc and comparing this value with the entire area of disk Dp. (A sketch in shader code follows below.)
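Here is a minimal sketch of that test in shader code. The quick rejection and full-containment cases follow directly from the steps above; the grazing case uses a crude linear approximation of the disk/disk overlap rather than the exact intersection area, so treat it as a placeholder:

// Cone: origin V, unit axis direction W, spread angle alpha.
// Sphere: center C, radius R.
// Returns an approximate fraction of the cone covered by the sphere
// (0. means the cone misses the sphere).
float coneSphereCoverage(vec3 V, vec3 W, float alpha, vec3 C, float R) {
   float t  = dot(C - V, W);                 // distance along the medial ray of closest approach to C
   vec3  P  = V + t * W;                     // the point where the ray most closely approaches C
   float rp = t * tan(alpha);                // radius of the cone's cross-section disk Dp at distance t
   float rc = R / cos(alpha);                // radius of the sphere's disk Dc
   float d  = length(P - C);
   if (d >= rp + rc) return 0.;              // disks don't overlap: no intersection
   if (d + rp <= rc) return 1.;              // cone disk entirely inside the sphere disk
   if (d + rc <= rp) return (rc*rc)/(rp*rp); // sphere disk entirely inside the cone disk
   // Grazing case: crude linear approximation of the overlap fraction.
   float overlap = (rp + rc - d) / (2. * min(rp, rc));
   return clamp(overlap * min(1., (rc*rc)/(rp*rp)), 0., 1.);
}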
Other kinds of implicit surfaces (advanced topic)
It is possible to create more sophisticated
types of implicit surface functions, which
allow more general shapes to be ray traced.
The basic idea is to create individual
functions that form simple shapes, such as spheres,
but which, when summed,
form surfaces that "melt" together into
more complex continuous shapes.
As we discussed in class, the first person
to publish a description of such a technique
was Jim Blinn in his 1982 paper
A Generalization of Algebraic Surface Drawing.
Unfortunately,
the technique described by Blinn could be
computationally expensive for ray tracing.
In contrast, the metaball technique, developed
by a team of graduate students under the direction of
Koichi Omura at the University of Osaka
in the 1980s,
is much less computationally expensive to render by ray tracing.
Given a point p, consider procedure D(p):
r = |p|
if r < 1/3
   1 - 3*r²
else if r < 1
   1.5 * (1-r)²
else
   0
This procedure describes a smooth function
with both value and gradient of zero
at a radius of 1, and which is zero everywhere that r > 1.
We can create very complex
smooth shapes by summing together many metaballs:
∑i ( ai D(Mi • p) ) - ε
where ai and Mi
are the amplitude and transformation matrix of each metaball,
respectively.
Subtracting a small ε
from the sum (e.g., ε = 0.1)
guarantees a smooth transition between
shapes and a non-zero gradient at the surface.
Note that unless you want to
stretch the metaballs, you don't really need to do a full matrix transformation
for each metaball, since a metaball has spherical symmetry,
and therefore rotation is not needed.
Instead, you can just translate and scale, which is computationally less expensive.
To ray trace through such a sum of metaballs,
you can find the quadratic boundaries along the ray of each
transformed metaball. Each metaball will
contain two such boundaries -- an inner one and an
outer one, reflecting the three cases of the metaball procedure.
Finding such a boundary is essentially just ray tracing to a sphere.
Within each of the regions formed between successive boundaries along a ray,
the ray encounters only a single smooth function,
which is a sum of smooth functions from individual metaballs.
Finding the surface (if there is one)
between two successive boundaries
requires finding the roots of this function along the ray.
To shade a metaball surface you need to find its surface normal.
To do this,
first compute the gradient of
the function at the ray/surface intersection.
The surface normal can then be found
by normalizing this gradient to unit length.
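Here is a minimal sketch of how this might look in shader code. The metaball data (a count NUM_BALLS, amplitudes a[i], and matrices Minv[i] assumed to map a world point into each metaball's unit-radius space) is hypothetical, as is the use of a numerical gradient; with an analytic falloff you could also differentiate directly.

const int NUM_BALLS = 10;        // hypothetical scene data
uniform float a[NUM_BALLS];      // amplitude of each metaball
uniform mat4  Minv[NUM_BALLS];   // maps a world point into each metaball's unit-radius space
const float EPSILON = 0.1;

// The metaball falloff procedure D(p) defined above.
float D(vec3 p) {
   float r = length(p);
   if (r < 1./3.) return 1. - 3. * r * r;
   if (r < 1.)    return 1.5 * (1. - r) * (1. - r);
   return 0.;
}

// The summed metaball field, including the -epsilon term.
float field(vec3 p) {
   float sum = -EPSILON;
   for (int i = 0 ; i < NUM_BALLS ; i++)
      sum += a[i] * D((Minv[i] * vec4(p, 1.)).xyz);
   return sum;
}

// Surface normal: normalize the gradient of the field at the
// ray/surface intersection point, here estimated by central differences.
vec3 surfaceNormal(vec3 p) {
   float e = 0.001;
   vec3 grad = vec3(field(p + vec3(e,0.,0.)) - field(p - vec3(e,0.,0.)),
                    field(p + vec3(0.,e,0.)) - field(p - vec3(0.,e,0.)),
                    field(p + vec3(0.,0.,e)) - field(p - vec3(0.,0.,e)));
   return normalize(grad);
}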
Homework due October 16
- Homework due Oct 16 -- add whatever features to your ray tracer you'd like to try.
- These two weeks will round out the ray tracing portion of our class.
- You need to try interesting things to get an A in this class.
- Doing just a technically correct bare minimum all the way through will not get you an A.