Study sheet for Final Exam
The final exam will be given Wednesday, Dec. 17, 10:00-11:50 in WWH 101.
The exam is closed book and closed notes. It will be cumulative, covering
all the course material, but emphasizing the material from the second
half of the course.
The topics on the final exam are:
- I. Propositional Logic: R+N chapter 7 sections 7.1-7.4; the part of
section 7.6 dealing with DPLL; and the handouts on propositional
logic and the Davis-Putnam procedure.
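For concreteness, the core of DPLL can be sketched in a few lines of Python. This is an illustrative sketch, not the version from the handout: clauses are frozensets of nonzero integers, with -n standing for the negation of variable n.

```python
def simplify(clauses, lit):
    """Assert literal lit: drop satisfied clauses, delete -lit elsewhere."""
    out = []
    for c in clauses:
        if lit in c:
            continue              # clause satisfied; discard it
        if -lit in c:
            c = c - {-lit}        # falsified literal; shrink the clause
        out.append(c)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        if any(len(c) == 0 for c in clauses):
            return None           # empty clause: contradiction
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment = {**assignment, abs(lit): lit > 0}
        clauses = simplify(clauses, lit)
    if not clauses:
        return assignment         # every clause satisfied
    # Splitting step: branch on the two values of an unassigned variable.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None
```

(The handout's version also does pure-literal elimination; that step is omitted here for brevity.)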
- II. Predicate Calculus: R+N chapter 8; sections 9.1, 9.3, and 9.4;
and the handout Inference in Datalog.
- III. Probabilistic inference and Bayes' Law. R+N chapter 13 through 13.6.
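As a quick refresher, Bayes' law with normalization over H and not-H can be computed directly; the numbers in the test are made up for illustration.

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    """P(H | e) by Bayes' law with normalization:
    P(H|e) = P(e|H)P(H) / (P(e|H)P(H) + P(e|~H)P(~H))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))
```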
- IV. Machine learning.
- A. Naive Bayes. Handout from Mitchell. The very brief discussion in R+N
(p. 718) is not helpful.
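A minimal Naive Bayes sketch, in the spirit of the Mitchell handout (representation and smoothing choice are illustrative): count class frequencies and per-class feature-value frequencies, then pick the label maximizing the product of prior and conditionals.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """examples: list of (feature_tuple, label) pairs.
    Returns class counts and per-(label, position) feature-value counts."""
    class_counts = Counter()
    feat_counts = defaultdict(Counter)
    for feats, label in examples:
        class_counts[label] += 1
        for i, value in enumerate(feats):
            feat_counts[(label, i)][value] += 1
    return class_counts, feat_counts

def nb_classify(feats, class_counts, feat_counts):
    """Return the label maximizing log P(label) + sum_i log P(feat_i | label).
    Add-one smoothing keeps unseen feature values from zeroing the product."""
    total = sum(class_counts.values())
    best_label, best_score = None, -math.inf
    for label, n in class_counts.items():
        score = math.log(n / total)
        for i, value in enumerate(feats):
            counts = feat_counts[(label, i)]
            score += math.log((counts[value] + 1) / (n + len(counts) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```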
- B. Nearest neighbors. Handout from Mitchell. See also R+N p. 733.
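The 1-nearest-neighbor rule is short enough to state as code (a sketch; the handout also covers k > 1 with majority voting):

```python
import math

def nearest_neighbor(query, examples):
    """1-nearest-neighbor classification: label the query with the label
    of the closest training example under Euclidean distance.
    examples: list of (point_tuple, label) pairs."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    _, label = min(examples, key=lambda ex: dist(ex[0], query))
    return label
```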
- C. Perceptrons and feed-forward, back-propagation neural networks.
R+N section 20.5. I do not expect you to remember the details
of the learning algorithm. What I want you to know is:
- The "feed forward" step: How the perceptron/feed forward network
computes its output from its inputs.
- The nature of learning: Weights on the links are adjusted so as to
reduce the total error over the training set.
- Backpropagation: Learning is carried out in a process that propagates
from the output layer to the input layer. The output cells compare the
computed answer to the correct answer, adjust the weights on their input
links, and then send messages backward on those links. The cells
in the hidden layer add up the messages they get, adjust the weights
on their input links, and then send messages backward.
- The problem of overfitting.
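The feed-forward step for a single-hidden-layer network can be sketched as follows (an illustrative sketch, not the book's notation; each weight row carries a trailing bias weight):

```python
import math

def sigmoid(x):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, hidden_weights, output_weights):
    """One forward pass: each unit takes the weighted sum of its inputs
    plus a bias term, then applies the sigmoid. A weight row for a unit
    with k inputs therefore has k + 1 entries."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs + [1.0])))
              for row in hidden_weights]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden + [1.0])))
            for row in output_weights]
```

Learning then amounts to nudging every weight in the direction that reduces the error of this forward pass on the training set.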
- D. The ID3 algorithm. R+N section 18.3 and handouts. I don't expect
you to know the definition of entropy or average entropy (though, if
you plan to do graduate work in Computer Science, entropy is important
to know about).
I do expect you to know:
- What a decision tree is, and how it is used for classification.
- The top-level ID3 algorithm.
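The top-level recursion, and classification with the resulting tree, look roughly like this (an illustrative sketch; the attribute selector is left as a parameter precisely because the real ID3 picks the attribute with the highest information gain, which you are not expected to compute):

```python
from collections import Counter

def id3(examples, attributes, choose=None):
    """Top-level ID3. examples: list of (attr_dict, label) pairs.
    choose(examples, attributes) picks the split attribute; real ID3
    uses information gain, but the recursion is the same either way."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:       # all examples agree: make a leaf
        return labels[0]
    if not attributes:              # nothing left to split on: majority leaf
        return Counter(labels).most_common(1)[0][0]
    if choose is None:
        choose = lambda exs, attrs: attrs[0]   # placeholder selector
    attr = choose(examples, attributes)
    remaining = [a for a in attributes if a != attr]
    branches = {}
    for value in {ex[attr] for ex, _ in examples}:
        subset = [(ex, lab) for ex, lab in examples if ex[attr] == value]
        branches[value] = id3(subset, remaining, choose)
    return (attr, branches)

def tree_classify(tree, example):
    """Walk from the root, following the branch matching the example's
    value for each split attribute, until a leaf label is reached."""
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[example[attr]]
    return tree
```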
- E. Evaluation. The use of a training set and a test set. R+N p. 660.
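The evaluation step itself is one line: measure accuracy on held-out data only.

```python
def accuracy(classifier, test_set):
    """Fraction of test examples labeled correctly. The test set must be
    disjoint from the training set, or the estimate is biased in favor
    of overfit hypotheses."""
    correct = sum(1 for example, label in test_set
                  if classifier(example) == label)
    return correct / len(test_set)
```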
- F. Minimum description length learning. Handout.
Basic concept, and how it applies to classification learning.
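The basic concept reduces to one comparison, sketched here with the two cost functions left as parameters (the particular encodings are whatever the handout specifies; this sketch only shows the selection rule):

```python
def mdl_choose(hypotheses, model_bits, exception_bits, data):
    """Minimum-description-length selection: prefer the hypothesis
    minimizing (bits to encode the hypothesis itself) plus (bits to
    list the training examples it misclassifies). A complex classifier
    that fits perfectly thus competes against a simpler one plus a
    short exception list."""
    return min(hypotheses,
               key=lambda h: model_bits(h) + exception_bits(h, data))
```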
- V. Natural Language Processing
- A. General structure of natural language interpretation. R+N 22.1
- B. Context-free grammar and chart parser. R+N 22.2, 22.3.
I don't expect you to memorize the whole recursive chart parser
algorithm. I do want you to know the significance of a dotted edge,
and the individual steps in constructing a dotted edge. For example,
you should know that if you have the edge [2,3, VP -> VG * NP]
and the rule NP -> Article Noun, then you construct the edge
[3,3, NP -> * Article Noun]; or that if you have the edge
[2,3,VP -> VG * NP]
and the edge [3,5,NP -> Article Noun *] then you construct the
new edge [2,5,VP -> VG NP *].
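The two edge-building steps in the example above can be written out directly. The edge representation here is illustrative: a tuple (start, end, lhs, symbols-before-the-dot, symbols-after-the-dot).

```python
def predict(edge, rule):
    """Prediction: from edge [i,j, A -> alpha * B beta] and grammar rule
    B -> gamma, build the empty edge [j,j, B -> * gamma]."""
    i, j, lhs, before_dot, after_dot = edge
    rule_lhs, rule_rhs = rule
    if after_dot and after_dot[0] == rule_lhs:
        return (j, j, rule_lhs, (), tuple(rule_rhs))
    return None

def complete(edge, finished):
    """Completion: from edge [i,j, A -> alpha * B beta] and completed
    edge [j,k, B -> gamma *], build [i,k, A -> alpha B * beta]."""
    i, j, lhs, before_dot, after_dot = edge
    j2, k, f_lhs, f_before, f_after = finished
    if after_dot and j2 == j and f_lhs == after_dot[0] and not f_after:
        return (i, k, lhs, before_dot + (after_dot[0],), after_dot[1:])
    return None
```

With these, the two cases in the text are: predict applied to [2,3, VP -> VG * NP] and NP -> Article Noun yields [3,3, NP -> * Article Noun]; complete applied to that VP edge and [3,5, NP -> Article Noun *] yields [2,5, VP -> VG NP *].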
- C. Semantic analysis. Just the general idea. R+N 22.5 has a lot
more detail than I went into. I don't expect you to know that.
- D. Ambiguity and ambiguity resolution. Handout, also R+N 22.6.
I will only ask about material that has been covered in lecture.
I will not ask about: (a) the situation calculus;
(b) the material on advanced SAT engines that Prof. Barrett discussed;
(c) the K-gram model;
(d) the cognitive science readings.