Sample Exam Problems
The final exam will be given Tuesday, May 8 from 5:00 to 6:55 in
Warren Weaver, room 101. It is
closed book and closed notes.
- Blind search --- R+N chap. 3, appendix B.2, handout on constrained
- Informed search --- R+N secs. 4.1 (except A* search), 4.2, 4.4
- Game playing --- R+N secs. 5.1-5.4
- Automated reasoning --- R+N chaps. 6, 7, 9, handouts.
- Machine Learning
- 1R algorithm --- handout
- Naive Bayes --- handout
- Decision trees --- R+N sec 18.3
- Perceptrons/back-propagation networks --- R+N secs. 19.1-19.4
- Evaluation --- handout
- Minimum description length learning --- handout
Explain the difference between ``pick'' and ``choose'' in a non-deterministic
algorithm. Illustrate with an example. (You may use one of the examples
discussed in class.)
What is the result of doing alpha-beta pruning in the game tree shown?
Name three conditions that must hold on a game for the technique of MIN-MAX
game-tree evaluation to be applicable.
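For reference while studying the two game-playing questions above, here is a minimal sketch of MIN-MAX search with alpha-beta pruning. It assumes a game tree given as nested Python lists whose leaves are static evaluation values from MAX's point of view; the tree in the example is made up, not the one referred to in the question.

```python
# Minimal MIN-MAX with alpha-beta pruning, for study purposes.
# A game tree is a nested list; a leaf is an int giving the static
# evaluation from MAX's point of view.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):              # leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: MIN will never let play reach here
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:              # cutoff: MAX will never let play reach here
                break
        return value

tree = [[3, 5], [2, [9, 1]], [6, 4]]       # hypothetical tree; MAX to move at the root
print(alphabeta(tree, True))               # prints 4; the [9, 1] subtree is pruned
```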
Consider a domain where the individuals are people and languages.
Let L be the first-order language with the following primitives:
s(X,L) --- Person X speaks language L.
c(X,Y) --- Persons X and Y can communicate.
i(W,X,Y) --- Person W can serve as an interpreter between persons X and Y.
j,p,e,f --- Constants: Joe, Pierre, English, and French respectively.
A. Express the following statements in L:
- i. Joe speaks English.
- ii. Pierre speaks French.
- iii. If X and Y both speak L, then X and Y can communicate.
- iv. If W can communicate both with X and with Y, then W can serve as an
interpreter between X and Y.
- v. For any two languages L and M, there is someone who speaks both
L and M.
- vi. There is someone who can interpret between Joe and Pierre.
B. Show that (vi) can be proven from (i)-(v) using backward-chaining
resolution. You must show the Skolemized form of each statement and
every resolution that is used in the final proof. You need not show the
intermediate stages of Skolemization, or show resolutions that are
not used in the final proof.
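As a notation reminder (not the full proof), this is what the Skolemization step of part B looks like when applied to statement (v); the Skolem function name sk is an arbitrary choice.

```latex
% Statement (v) in L, with quantifiers made explicit:
\forall L \,\forall M \,\exists X \; [\, s(X,L) \wedge s(X,M) \,]
% Skolemization replaces the existential X by a function of the enclosing
% universally quantified variables, giving two unit clauses:
s(sk(L,M),\, L) \qquad s(sk(L,M),\, M)
```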
A. Give an example of a decision tree with two internal nodes (including
the root), and explain how it classifies an example.
B. Describe the ID3 algorithm for constructing decision
trees from training data.
C. What is the entropy of a classification C in table T? What is
the expected entropy of classification C if table T is split on
predictive attribute A?
D. What kinds of techniques can be used to counter the problem of over-fitting
in decision trees?
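As a reference for parts B and C, here is a small sketch of the entropy and expected-entropy computations that ID3 uses to pick a splitting attribute. Representing the table as a list of dicts keyed by attribute name is just an assumption for illustration.

```python
import math

def entropy(rows, target="C"):
    """H(C) = -sum over values v of P(C=v) * log2 P(C=v),
    with the probabilities estimated by frequencies in the table."""
    n = len(rows)
    counts = {}
    for row in rows:
        counts[row[target]] = counts.get(row[target], 0) + 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def expected_entropy(rows, attr, target="C"):
    """Expected entropy of the classification after splitting on attr:
    sum over values a of (|T_a| / |T|) * H(C in T_a)."""
    n = len(rows)
    total = 0.0
    for a in set(row[attr] for row in rows):
        sub = [row for row in rows if row[attr] == a]
        total += (len(sub) / n) * entropy(sub, target)
    return total

# ID3, in outline: at each node, split on the attribute with the lowest
# expected entropy (equivalently, the highest information gain,
# entropy(rows) - expected_entropy(rows, attr)), then recurse on each
# subset until the classification is pure or no attributes remain.

# Tiny hypothetical table: three rows, one predictive attribute A.
rows = [{"A": "T", "C": "T"}, {"A": "T", "C": "F"}, {"A": "F", "C": "F"}]
print(entropy(rows))                 # about 0.918 bits (one T, two F)
print(expected_entropy(rows, "A"))   # (2/3)*1.0 + (1/3)*0.0, about 0.667 bits
```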
Consider the following data set with three Boolean predictive attributes
W, X, and Y, and a Boolean classification C.
W X Y C
T T T T
T F T F
T F F F
F T T F
F F F T
We now encounter a new example: W=F, X=T, Y=F. If we apply the Naive
Bayes method, what probability is assigned to each of the two values of C?
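Below is a sketch of how this Naive Bayes computation can be carried out. It uses simple frequency (maximum-likelihood) estimates with no smoothing; whether the handout's method applies smoothing is an assumption you should check against the handout.

```python
# The table above, one tuple (W, X, Y, C) per row.
rows = [
    ("T", "T", "T", "T"),
    ("T", "F", "T", "F"),
    ("T", "F", "F", "F"),
    ("F", "T", "T", "F"),
    ("F", "F", "F", "T"),
]
attrs = ["W", "X", "Y"]
new = {"W": "F", "X": "T", "Y": "F"}        # the new example

def naive_bayes(rows, new):
    n = len(rows)
    scores = {}
    for c in {r[-1] for r in rows}:         # the two values of C
        subset = [r for r in rows if r[-1] == c]
        score = len(subset) / n             # prior P(C=c)
        for i, a in enumerate(attrs):
            match = sum(1 for r in subset if r[i] == new[a])
            score *= match / len(subset)    # P(attribute value | C=c), no smoothing
        scores[c] = score
    z = sum(scores.values())                # normalize over the values of C
    return {c: s / z for c, s in scores.items()}

print(naive_bayes(rows, new))   # roughly {'T': 0.69, 'F': 0.31} under these estimates
```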
"Local minima can cause difficulties for a feed-forward, back-propagation
neural network." Explain. Local minima of what function of what arguments?
Why do they create difficulties?
Which of the following describes the process of task execution
(classifying an input signal) in a feed-forward, back-propagation neural network?
Which describes the process of learning? (One answer is correct for each.)
- a. Activation levels are propagated from the inputs through the hidden
layers to the outputs.
- b. Activation levels are propagated from the outputs through the hidden
layers to the inputs.
- c. Weights on the links are modified based on messages propagated
from input to output.
- d. Weights on the links are modified based on messages propagated
from output to input.
- e. Connections in the network are modified, gradually shortening
the path from input to output.
- f. Weights at the input level are compared to the weights at the
output level, and modified to reduce the discrepancy.
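To make the two directions concrete, here is a toy one-hidden-layer network in numpy (layer sizes and learning rate are arbitrary): task execution propagates activation levels from the inputs toward the output, while learning propagates error signals from the output back toward the inputs and adjusts the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))                # input layer (2 units) -> hidden layer (3 units)
W2 = rng.normal(size=(1, 3))                # hidden layer (3 units) -> output layer (1 unit)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Task execution: activation levels flow input -> hidden -> output."""
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)
    return h, out

def backprop_step(x, target, lr=0.5):
    """Learning: error signals flow output -> hidden -> input; the weights are adjusted."""
    global W1, W2
    h, out = forward(x)
    delta_out = (out - target) * out * (1 - out)    # error signal at the output layer
    delta_h = (W2.T @ delta_out) * h * (1 - h)      # error propagated back to the hidden layer
    W2 -= lr * np.outer(delta_out, h)               # gradient of the squared error w.r.t. W2
    W1 -= lr * np.outer(delta_h, x)                 # gradient w.r.t. W1
    return (0.5 * (out - target) ** 2).item()

x, target = np.array([1.0, 0.0]), np.array([1.0])
for _ in range(20):
    err = backprop_step(x, target)
print(err)                                  # the error shrinks as the weights are adjusted
```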
Explain briefly (2 or 3 sentences) the use of a training
set and a test set in evaluating learning programs.
Explain how the minimum description length (MDL) learning theory
justifies conjecturing:
A. perfect classification hypotheses (i.e. classification
hypotheses that always give the correct classification, given the
values of the predictive attributes) for nominal classifications.
B. imperfect classification hypotheses (i.e. hypotheses that do better
than chance) for nominal classifications.
C. approximate classification hypotheses (i.e. hypotheses that give
answers that are nearly correct) for numeric classifications.
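For orientation, the standard MDL criterion that these parts appeal to is sketched below; whether the handout states it in exactly this form is an assumption. A hypothesis is worth conjecturing when encoding the hypothesis plus the data given the hypothesis is shorter than encoding the classifications directly.

```latex
% Choose the hypothesis minimizing total description length:
H^{*} \;=\; \arg\min_{H} \;\bigl[\, L(H) + L(D \mid H) \,\bigr]
% A hypothesis H is justified when  L(H) + L(D \mid H) < L(D),
% i.e. when it compresses the table of classifications.
```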