Sample Exam Problems from 2nd Half of Course

The final exam will be given Monday, May 7 from 10:00 to 11:50 in Warren Weaver, room 101. It is closed book and closed notes.

The exam is cumulative, covering the material from the entire course. Roughly 1/3 of the exam will be drawn from the first half of the course, and the other 2/3 will be drawn from the second half. Some sample problems from the second half of the course are given below.

Topics covered in the second half of the course: game-tree search (MIN-MAX evaluation and alpha-beta pruning); machine learning, including decision trees and ID3, Naive Bayes, feed-forward back-propagation neural networks, evaluation with training and test sets, and minimum description length; logic programming and Datalog interpreters; and computer vision (edge detection, thresholding, and texture).

Problem 1:

What is the result of applying alpha-beta pruning to the game tree shown below?
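
For reference when working through this kind of problem, here is a minimal Python sketch of minimax with alpha-beta pruning. The nested-list representation of the tree, and the small tree in the final comment, are assumptions made for illustration; they are not the tree from the problem.

    # Minimax with alpha-beta pruning over a game tree given as nested
    # lists: an internal node is a list of children, a leaf is a number
    # (its static evaluation).
    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):       # leaf: return its value
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:                # cutoff: prune remaining children
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if alpha >= beta:                # cutoff: prune remaining children
                    break
            return value

    # Hypothetical example: a MAX root with two MIN children.
    # alphabeta([[3, 5], [2, 9]], True) returns 3; the leaf 9 is pruned.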

Problem 2:

Name three conditions that a game must satisfy for the technique of MIN-MAX game-tree evaluation to be applicable.

Problem 3:

A. Give an example of a decision tree with two internal nodes (including the root), and explain how it classifies an example.
B. Give a high-level description of the ID3 algorithm to construct decision trees from training data. You need not give the definition of entropy or expected entropy.
C. What kinds of techniques can be used to counter the problem of over-fitting in decision trees?
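
As a study aid for part B, the following is a rough sketch of the ID3 recursion, splitting at each node on the attribute with the lowest expected entropy (equivalently, the highest information gain). The representation of examples as dictionaries with a "class" key is an assumption made only for this sketch.

    from math import log2
    from collections import Counter

    def entropy(examples):
        counts = Counter(e["class"] for e in examples)
        total = len(examples)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def expected_entropy(examples, attr):
        # weighted average of the entropies of the subsets produced by splitting on attr
        total = len(examples)
        return sum(
            (len(subset) / total) * entropy(subset)
            for value in set(e[attr] for e in examples)
            for subset in [[e for e in examples if e[attr] == value]])

    def id3(examples, attributes):
        classes = set(e["class"] for e in examples)
        if len(classes) == 1:                    # pure node: make a leaf
            return classes.pop()
        if not attributes:                       # nothing left to split on: majority leaf
            return Counter(e["class"] for e in examples).most_common(1)[0][0]
        best = min(attributes, key=lambda a: expected_entropy(examples, a))
        branches = {}
        for value in set(e[best] for e in examples):
            subset = [e for e in examples if e[best] == value]
            branches[value] = id3(subset, [a for a in attributes if a != best])
        return (best, branches)                  # internal node: test attribute + subtrees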

Problem 4:

Consider the following data set, with three Boolean predictive attributes W, X, Y and a Boolean classification C.
  W   X   Y   C
----------------
  T   T   T   T
  T   F   T   F
  T   F   F   F
  F   T   T   F
  F   F   F   T
We now encounter a new example: W=F, X=T, Y=F. If we apply the Naive Bayes method, what probabilities are assigned to the two values of C?
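
For checking an answer by hand, the following is one possible sketch of the Naive Bayes calculation on a table like the one above. It uses unsmoothed (maximum-likelihood) estimates of the prior and the conditional probabilities; whether smoothed estimates are intended instead is an assumption this sketch does not settle.

    def naive_bayes(training, query):
        # training: list of (attribute-dict, class) pairs; query: attribute-dict.
        # Returns a dict mapping each class to its normalized probability.
        # (With unsmoothed estimates, an attribute value never seen with a
        # class drives that class's score to zero.)
        classes = set(c for _, c in training)
        scores = {}
        for c in classes:
            rows = [attrs for attrs, cls in training if cls == c]
            score = len(rows) / len(training)              # prior P(C = c)
            for attr, value in query.items():              # times each P(attr = value | C = c)
                score *= sum(1 for attrs in rows if attrs[attr] == value) / len(rows)
            scores[c] = score
        total = sum(scores.values())
        return {c: s / total for c, s in scores.items()}

    # Applied to the table above and the new example:
    data = [({"W": "T", "X": "T", "Y": "T"}, "T"),
            ({"W": "T", "X": "F", "Y": "T"}, "F"),
            ({"W": "T", "X": "F", "Y": "F"}, "F"),
            ({"W": "F", "X": "T", "Y": "T"}, "F"),
            ({"W": "F", "X": "F", "Y": "F"}, "T")]
    print(naive_bayes(data, {"W": "F", "X": "T", "Y": "F"}))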

Problem 5:

"Local minima can cause difficulties for a feed-forward, back-propagation neural network." Explain. Local minima of what function of what arguments? Why do they create difficulties?

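To make the quoted claim concrete, here is a small illustration using a made-up one-variable function rather than an actual network: gradient descent reaches different minima from different starting points. In back-propagation, the analogous function is the network's error on the training set, viewed as a function of all the weights.

    def grad_descent(df, x, rate=0.01, steps=5000):
        for _ in range(steps):
            x -= rate * df(x)                  # step downhill along the derivative
        return x

    # f(x) = x**4 - 3*x**2 + x has a local minimum near x = 1.13 and a
    # lower, global minimum near x = -1.30; df is its derivative.
    df = lambda x: 4 * x**3 - 6 * x + 1

    print(grad_descent(df, x=2.0))    # stuck at the local minimum (about 1.13)
    print(grad_descent(df, x=-2.0))   # reaches the global minimum (about -1.30)
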
Problem 6:

Which of the following describes the process of task execution (classifying an input signal) in a feed-forward, back-propagation neural network? Which describes the process of learning? (One answer is correct for each.)
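
A sketch that may help separate the two processes, using a single sigmoid unit; the unit and its delta-rule update are a standard textbook choice, assumed here only for illustration. Task execution is a forward pass through fixed weights; learning adjusts the weights from the error on a training example.

    from math import exp

    def forward(weights, inputs):
        # task execution: the weights are fixed, the signal flows forward
        return 1 / (1 + exp(-sum(w * x for w, x in zip(weights, inputs))))

    def learn_step(weights, inputs, target, rate=0.5):
        # learning: compare the output to the target and adjust the weights
        out = forward(weights, inputs)
        delta = (target - out) * out * (1 - out)   # error gradient through the sigmoid
        return [w + rate * delta * x for w, x in zip(weights, inputs)]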

Problem 7:

Explain briefly (2 or 3 sentences) the use of a training set and a test set in evaluating learning programs.
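
A short sketch of the protocol in code; the fit/predict interface of the learner and the 70/30 split are assumptions made for illustration.

    import random

    def evaluate(learner, examples, test_fraction=0.3, seed=0):
        # examples: list of (input, label) pairs
        data = examples[:]
        random.Random(seed).shuffle(data)
        cut = int(len(data) * (1 - test_fraction))
        train, test = data[:cut], data[cut:]
        learner.fit(train)                       # learn only from the training set
        correct = sum(learner.predict(x) == y for x, y in test)
        return correct / len(test)               # accuracy on data the learner never saw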

Problem 8:

Explain how minimum description length (MDL) learning theory justifies conjecturing each of the following:
A. perfect classification hypotheses (i.e. classification hypotheses that always give the correct classification, given the values of the predictive attributes) for nominal classifications.
B. imperfect classification hypotheses (i.e. hypotheses that do better than chance) for nominal classifications.
C. approximate classification hypotheses (i.e. hypotheses that give answers that are nearly correct) for numeric classifications.

Problem 9:

Give an example of a Datalog program and a query that never returns an answer if the interpreter uses a depth-first search, but that will return an answer if the interpreter is modified to use an iterative deepening search. (Hint: there is an example that uses one rule and one fact.)
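
Not the requested example, but a sketch of the interpreter modification the problem refers to: a backward chainer whose recursion is bounded by a depth limit and re-run with increasing limits. The clause representation is an assumption, and variables with unification, which real Datalog needs, are omitted.

    def prove(goal, clauses, depth):
        # clauses: list of (head, body) pairs; a fact has an empty body
        if depth < 0:
            return False
        for head, body in clauses:               # try clauses in program order
            if head == goal and all(prove(g, clauses, depth - 1) for g in body):
                return True
        return False

    def iterative_deepening(goal, clauses, max_depth=20):
        for limit in range(max_depth + 1):       # retry with a growing depth bound
            if prove(goal, clauses, limit):
                return True
        return False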

Problem 10:

Describe the use of edge detection and of thresholding in low-level computer vision.
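
A small numpy sketch of the two operations on a grayscale image array; the particular operators used (a single global threshold and a finite-difference gradient) are one common choice, assumed here for illustration.

    import numpy as np

    def threshold(image, t):
        # thresholding: split pixels into "object" (1) and "background" (0)
        return (image > t).astype(np.uint8)

    def edge_strength(image):
        # edge detection: strong brightness gradients mark boundaries
        dy, dx = np.gradient(image.astype(float))
        return np.hypot(dx, dy)                  # gradient magnitude at each pixel

    def edges(image, t):
        return threshold(edge_strength(image), t)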

Problem 11:

Describe briefly two ways in which texture can be used in vision systems.