The OpenGL Application Programmer's Interface (API) was originally developed by Silicon Graphics, Inc (SGI). It is now an open standard that is widely available on most platforms. As an ``API'', it hides platform dependence from the user.
OpenGL is a graphics API. It is important to understand the computational model behind this API. This model is called the Graphics Pipeline Model, which is used for producing images of geometric models. The API provides geometric primitives and programming constructs for defining the models. The model is then rendered via a sequence of pipeline stages (vertex processing, clipping and primitive assembly, rasterization, and fragment processing):
The stages can be affected by various state variables. E.g., when we say "create a vertex", the attributes for the new vertex are taken from the current values of state variables such as COLOR, SHADING, TRANSFORMATION, etc. These state variables can be read and modified.
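As a small illustration of this state-machine idea (a sketch only; it assumes a GL context is already current, and the function name is our own), setting the current color once affects every vertex created afterwards:

```c
#include <GL/glut.h>   /* OpenGL/GLUT header (assumed available) */

/* Sketch: the current color is a state variable.  Both vertices
 * below inherit the red set by glColor3f; no color is attached
 * to the vertices explicitly. */
void draw_red_segment(void)
{
    glColor3f(1.0f, 0.0f, 0.0f);   /* modify the COLOR state      */
    glBegin(GL_LINES);
    glVertex2f(0.0f, 0.0f);        /* created with current color  */
    glVertex2f(1.0f, 1.0f);        /* same: still red             */
    glEnd();
}
```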
PROS of a state machine: flexibility in modeling. Each stage is controlled by its own state variables, which express different objectives. The concept of a display processor is built in. CONS of a state machine: it is harder to understand a command locally, since its effect depends on the current state.
Primitive Number Types. The primitive number types are denoted GLint, GLfloat and GLdouble, corresponding to the similar types in C/C++. Note the prefix ``GL'' in these names, indicating an OpenGL primitive type.
Vertices: the atoms for geometric modeling. In OpenGL, the most basic geometric object is a vertex. This is basically a point, but it can be 2 or 3 dimensional. The types of the coordinates might be int or float. The obvious way to specify such points might be:
Vertex(int x, int y); // 2-dimensional
Vertex(float x, float y, float z); // 3-dimensional
The actual OpenGL syntax is similar:
glVertex2i(GLint x, GLint y);
glVertex3f(GLfloat x, GLfloat y, GLfloat z);
The name glVertex2i can be parsed to yield insights into the data type. First, the prefix gl indicates an OpenGL function, just as the prefix GL indicates an OpenGL type. The suffixes 2i and 3f indicate the dimension and number type: ``2-dimensional integer'' and ``3-dimensional float'', respectively. If vec2 is an array of two double values, and vec3 is an array of three int values, we could also define vertices in the following two ways:
glVertex2dv( vec2 );
glVertex3iv( vec3 );
In general, a glVertex call has a suffix of two characters ``nt'' or three characters ``ntv'', where n is 2, 3 or 4, and t is i (int), f (float) or d (double).
Geometric Primitives.
Next, we can group vertices to form more complex geometric
shapes:
glBegin(GL_LINES); // GL_LINES is a defined constant
glVertex2f(x1, y1);
glVertex2f(x2, y2);
glVertex2f(x3, y3);
glVertex2f(x4, y4);
glEnd();
The above construct actually defines 2 lines. In general, GL_LINES takes a list of 2n vertices, and pairs them up to form n lines.
This vertex grouping construct can be used to define
other geometric primitives, and has the following
general syntax:
glBegin(<GLtype>); // GLtype is some defined constant
glVertex*(x1, y1); // the first vertex
glVertex*(x2, y2); // the second vertex
...
glVertex*(xn, yn); // the n-th vertex
glEnd();
where GLtype tells OpenGL how to interpret the list of vertices. Here are some choices for GLtype:
GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP
The first type, GL_POINTS, is basically a sequence of unrelated points. The last two types correspond to a polygonal line and a closed polygonal loop, respectively. In addition, there are also true ``polygon'' types:
GL_POLYGON, GL_TRIANGLES, GL_QUADS, GL_TRIANGLE_STRIP, GL_QUAD_STRIP, GL_TRIANGLE_FAN
The difference between closed polygonal loops and polygons is that the latter have an interior (provided they are defined properly), while closed loops have no notion of interior. We can choose to fill the interior with some color or texture/pattern, and we can choose to display or not display the edges of the polygon.
If we had to explicitly list all the vertices between glBegin() and glEnd(), this could be painful for large models. Fortunately, OpenGL allows ordinary iterative constructs between these calls. For instance, suppose you want to graph an integer function f(x) for integers 0 ≤ x ≤ 99. You can use the following construct:
glBegin(GL_POINTS); // Plotting the function y=f(x)
for (int x=0; x<100; x++)
glVertex2i((GLint) x, (GLint) f(x));
glEnd();
This construct is illustrated by our program simple.cc below.
Attributes.
Each of our geometric objects has a fixed set of attributes depending on its type. Each attribute is bound to some value, which the user can set. These attributes determine the display properties of the object.
Thus a point has a color attribute and a size attribute. For example, we might say
glPointSize(2.0);
and this means that each point will be rendered as 2 pixels across (the default is 1.0).
Lines have color, thickness and type (solid, dashed, dotted) attributes.
For instance, to get a line thickness that is twice the default, do
glLineWidth(2.0)
Polygons have even more attributes.
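For instance (a sketch, assuming a current GL context; the wrapper function is our own), the classical pipeline lets you choose between filled and outlined rendering of polygons via glPolygonMode:

```c
#include <GL/glut.h>   /* OpenGL/GLUT header (assumed available) */

/* Draw subsequent polygons as outlines only (wireframe),
 * then restore the default filled mode. */
void set_wireframe(int on)
{
    if (on)
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  /* edges only   */
    else
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  /* default fill */
}
```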
If you need to find out the current value of an attribute, there are various query functions. The generic mechanism is the glGet* family (glGetIntegerv, glGetFloatv, etc.). Thus, querying GL_LINE_WIDTH will tell you the current line width.
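A concrete sketch of such a query (it assumes a GL context is current; the function name is our own):

```c
#include <GL/glut.h>   /* OpenGL/GLUT header (assumed available) */
#include <stdio.h>

/* Query the current line width from the OpenGL state machine.
 * glGetFloatv writes the value of the named state variable into
 * the supplied buffer; GL_LINE_WIDTH holds a single float. */
void print_line_width(void)
{
    GLfloat width;
    glGetFloatv(GL_LINE_WIDTH, &width);
    printf("current line width: %f\n", width);  /* 1.0 by default */
}
```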
Display Modes.
When we construct geometric objects, we can ask for the objects to be displayed immediately. This is called immediate mode. A more sophisticated model assumes that there is a special display processor whose job is to display data on the screen, and which has its own special display memory.
Geometric objects can be grouped into display lists, which are stored in display memory. Each display list is associated with a unique identifier. The graphics program can now issue display instructions to the display processor just by specifying the identifier of the desired display list. Thus, there is no need to resend all the details of the display list. This is called retained mode, and it is obviously an important advantage in thinwire situations.
In OpenGL, we can define and manipulate display lists. They are defined similarly to geometric objects, but enclosed in glNewList and glEndList instead of glBegin and glEnd. Each list must have a unique identifier (an integer). It also has a compilation flag. Suppose we want to define a box. The format is therefore:
glNewList( BOX, GL_COMPILE ); // BOX is a unique integer identifier
glBegin(GL_POLYGON);
glColor3f(1.0, 0.0, 0.2);
glVertex2f(-1.0, -1.0);
...
glVertex2f(-1.0, 1.0);
glEnd();
glEndList();
Instead of GL_COMPILE, we could have GL_COMPILE_AND_EXECUTE. To use the above, we execute the function
glCallList(BOX);
Note that, as usual, the present state determines what other transformations will apply to the BOX. If these transformations change, the BOX will appear to move.
2 Event Model Programming
In OpenGL, the GUI aspects and the windowing support are found in additional support libraries. Utility routines are collected in GLU (the GL Utility Library), while windowing and interaction support is found in GLUT (the GL Utility Toolkit).
Traditional interaction between a computer program and the outside world (or the user) is essentially predetermined by the program - the program prints an output, or requests an input. Of course the user's input can change the course of events in the program, but the order of interaction is predictable.
With the development of GUI interfaces, we are faced with a completely different interaction model. The program interacts with a set of logical input devices - these may be the traditional keyboard, but could also be the mouse and various window widgets. Moreover, the order of the interactions is quite unpredictable - the user can request a window to be closed at any time, and every mouse motion is potentially a request for interaction. This readiness to serve is characteristic of GUI-based programs. In broad outline, this is also typical of how operating systems provide services.
This model of interaction is often called an event-based model or event-driven interaction. The idea is that the program does not dictate the order of interactions; instead, it registers handlers for the events it is interested in, and each incoming event triggers the corresponding handler.
Let us now see how to construct
programs based on an event-driven interaction.
We will be specifically discussing OpenGL's version
of this model.
Assume that the mouse generates various events: a move event (when the mouse is moved with some mouse button depressed), a passive move event (when the mouse is moved without any depressed mouse buttons), and a mouse event (when a mouse button is depressed or released). Note that in this model, when we first depress a mouse button we generate an event, but holding the button down generates no further events. The next event that can happen is one of two things: the mouse moves, or the mouse button is released.
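In GLUT, the two kinds of motion map onto two separate callbacks (a sketch; the handler names are our own, but glutMotionFunc and glutPassiveMotionFunc are the actual registration calls):

```c
#include <GL/glut.h>   /* GLUT header (assumed available) */
#include <stdio.h>

/* Called when the mouse moves with a button held down. */
void on_motion(int x, int y)         { printf("drag at (%d,%d)\n", x, y); }

/* Called when the mouse moves with no button held down. */
void on_passive_motion(int x, int y) { printf("move at (%d,%d)\n", x, y); }

/* Registration (done once, after the window is created): */
void register_motion_callbacks(void)
{
    glutMotionFunc(on_motion);
    glutPassiveMotionFunc(on_passive_motion);
}
```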
When a mouse event is generated, it reports its position (among other information that depends on the event) - corresponding to some position on the screen. Of course this ``position'' is a logical concept, since the mouse is not literally on the screen. But as feedback to the user, we display this ``position'' on the screen using some mouse cursor.
But who is supposed to handle this event (i.e., the information such as ``position'')? The answer lies in callback functions. In the GLUT model, we need to write a mouse event function whose prototype is:
void mouse_callback( int button, int state, int x, int y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
exit(0);
}
All that this callback does is exit the program when the depressed mouse button is GLUT_LEFT_BUTTON.
This function must then be registered for handling mouse
events:
glutMouseFunc( mouse_callback );
What other minimal callback functions do we need to
construct and register?
For instance, here is the main program of a minimal GLUT application, which registers its callbacks before entering the event loop (a display callback is also required, since GLUT needs to know how to redraw the window):
int main(int argc, char **argv) {
glutInit(&argc, argv);
glutInitDisplayMode( GLUT_SINGLE | GLUT_RGB );
glutCreateWindow("square");
myinit();
glutReshapeFunc( myReshape ); // called when window is resized
glutMouseFunc( mouse ); // called on mouse events
glutDisplayFunc( display ); // called when window must be redrawn
glutMainLoop(); // enter the event loop; never returns
}//main
3 First Examples
We now walk through a rudimentary working C++ program called simple.cc. All that it does is set up a window (using GLUT) and display the graph of a function. Note the use of a for-loop in setting up the GL_POINTS geometric primitive.
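The heart of such a program is its display callback. A minimal sketch (the function f and the plot range are our own illustrative choices, not necessarily those of simple.cc):

```c
#include <GL/glut.h>   /* OpenGL/GLUT header (assumed available) */

/* Hypothetical integer function to plot. */
int f(int x) { return (x * x) / 100; }

/* Display callback: clear the window, then plot f as GL_POINTS. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
    for (int x = 0; x < 100; x++)
        glVertex2i(x, f(x));
    glEnd();
    glFlush();   /* force the buffered commands to the screen */
}
```

The callback is registered in main with glutDisplayFunc(display).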
Makefile Tool.
Another tool that we expect you to use to organize all your programming projects in this class is the make program. Here is the Makefile used for compiling and running our simple.cc program.
Second Example: rotate.c.
This is a slightly more elaborate example;
it is written in C instead of C++.
The program
rotate.c
and the associated Makefile
Makefile
This program shows two cubes, one inside the other, both rotating independently. You can control the rotations using the mouse. There are toggles to stop the rotation and to change the lighting and shading. This program illustrates several basic concepts:
(1) a simple geometric model (cubes),
(2) animation (rotation),
(3) lighting effects,
(4) use of mouse callback function.
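A common way to drive such an animation in GLUT (a sketch; the angle variable and increment are our own, not necessarily what rotate.c uses) is an idle callback that advances the rotation angle and requests a redraw:

```c
#include <GL/glut.h>   /* GLUT header (assumed available) */

static GLfloat angle = 0.0f;   /* current rotation angle, in degrees */

/* Idle callback: runs whenever no other events are pending. */
void spin(void)
{
    angle += 0.5f;                     /* advance the animation      */
    if (angle > 360.0f) angle -= 360.0f;
    glutPostRedisplay();               /* schedule a redraw          */
}
```

In the display callback one would apply glRotatef(angle, 0.0, 1.0, 0.0) before drawing the cube, and register the callback in main with glutIdleFunc(spin).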
To download the above files, click the
following links:
[simple/simple.cc]
[simple/Makefile]
[rotate/rotate.c]
[rotate/Makefile]