Sample problems from second half of course
Let me emphasize that this is just a collection of sample problems,
not a sample final exam.
Multiple choice problems
Problem 1
Bayes' Law states that
 A. Prob(P|Q) = Prob(P) / Prob(Q).
 B. Prob(P|Q) = Prob(Q|P).
 C. Prob(P|Q) = Prob(Q|P) / Prob(Q).
 D. Prob(P|Q) = Prob(P) * Prob(Q|P) / Prob(Q).
 E. Prob(P|Q) = Prob(Q) * Prob(Q|P) / Prob(P).
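As a study aid, Bayes' Law in its standard form (option D) can be checked with a quick numeric sketch; the probabilities below are made up purely for illustration:

```python
# Hypothetical probabilities, chosen only to illustrate the formula.
p_p = 0.01          # Prob(P)
p_q_given_p = 0.9   # Prob(Q | P)
p_q = 0.05          # Prob(Q)

# Bayes' Law: Prob(P | Q) = Prob(P) * Prob(Q | P) / Prob(Q)
p_p_given_q = p_p * p_q_given_p / p_q
print(round(p_p_given_q, 4))  # 0.18
```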
Problem 2
In Naive Bayes learning, we make the assumption that
 A. The classification attribute is independent of the predictive
attributes.
 B. The classification attribute depends on only one predictive attribute.
 C. The predictive attributes are absolutely independent.
 D. The predictive attributes are conditionally independent given
the classification attribute.
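The conditional-independence assumption is what lets Naive Bayes score a class as the prior times a product of per-attribute conditionals. A minimal unsmoothed sketch (the tiny dataset is made up for illustration; real implementations add smoothing):

```python
from collections import defaultdict

def naive_bayes_predict(train, x):
    """Tiny Naive Bayes sketch. train: list of (attribute_tuple, class_label)."""
    class_counts = defaultdict(int)
    attr_counts = defaultdict(int)   # (class, position, value) -> count
    for attrs, c in train:
        class_counts[c] += 1
        for i, v in enumerate(attrs):
            attr_counts[(c, i, v)] += 1
    n = len(train)
    best, best_score = None, -1.0
    for c, nc in class_counts.items():
        # Conditional independence given the class: Prob(x | c) is
        # approximated by the product of the individual Prob(x_i | c).
        score = nc / n
        for i, v in enumerate(x):
            score *= attr_counts[(c, i, v)] / nc
        if score > best_score:
            best, best_score = c, score
    return best

train = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
         (("rain", "mild"), "yes"), (("rain", "hot"), "yes")]
print(naive_bayes_predict(train, ("rain", "mild")))  # yes
```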
Problem 3
A support vector machine finds a linear separator that maximizes the "margin",
which is:
 A. The number of misclassified data points.
 B. The sum over all misclassified points of the distance from the point
to the separator.
 C. The sum over all misclassified points of the distance from the point
to the separator squared.
 D. The minimum distance from any point to the separator.
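For a fixed linear separator w.x + b = 0, the margin in the sense of option D is the minimum point-to-separator distance. The points and weights below are made up for illustration; an actual SVM would also search for the separator that maximizes this quantity, which the sketch does not do:

```python
import math

# Hypothetical 2-D points and a fixed separator x + y - 1 = 0.
points = [(2.0, 3.0), (0.0, 0.0), (3.0, -1.0)]
w = (1.0, 1.0)
b = -1.0

# Distance from point (x, y) to the separator is |w.x + b| / ||w||;
# the margin is the minimum over all points.
norm = math.hypot(*w)
margin = min(abs(w[0] * x + w[1] * y + b) / norm for x, y in points)
print(round(margin, 4))  # 0.7071, achieved at (0, 0)
```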
Problem 4
In the problem of tagging elements E_{1} ... E_{N} with tags
T_{1} ... T_{N}, the K-gram assumption is the assumption that
 A. E_{I} is independent of E_{I-K}.
 B. T_{I} is independent of T_{I-K}.
 C. E_{I} is conditionally independent of
E_{1} ... E_{I-K} given E_{I-K+1} ... E_{I-1}.
 D. T_{I} is conditionally independent of
T_{1} ... T_{I-K} given T_{I-K+1} ... T_{I-1}.
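Under the K-gram assumption, the probability of a tag sequence factors into local terms that each condition on only the K-1 preceding tags. A bigram (K = 2) sketch, with made-up transition probabilities for illustration:

```python
# P(T_1 ... T_N) is approximated as P(T_1) * product over I of P(T_I | T_{I-1}).
# The start and transition probabilities below are invented for the example.
start = {"N": 0.6, "V": 0.4}
trans = {("N", "V"): 0.7, ("N", "N"): 0.3, ("V", "N"): 0.8, ("V", "V"): 0.2}

def tag_sequence_prob(tags):
    p = start[tags[0]]
    for prev, cur in zip(tags, tags[1:]):
        p *= trans[(prev, cur)]   # each factor looks back only one tag
    return p

print(round(tag_sequence_prob(["N", "V", "N"]), 3))  # 0.6 * 0.7 * 0.8 = 0.336
```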
Problem 5
Learning takes place in a backpropagation network by
 A. Propagating activation levels from the input layer to the output layer.
 B. Propagating activation levels from the output layer to the input layer.
 C. Propagating modification to weights on the arcs from the input layer to the output layer.
 D. Propagating modification to weights on the arcs from the output layer to the input layer.
 E. Adding nodes and links in the hidden layers.
 F. Both adding and deleting nodes and links in the hidden layers.
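The direction of information flow is the key point: activations go forward, weight corrections go backward. A one-step sketch on a minimal 1-1-1 sigmoid network (the initial weights, input, and learning rate are arbitrary choices for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.0, 1.0
w1, w2 = 0.5, 0.5   # input->hidden and hidden->output weights
lr = 0.1

# Forward pass: activations propagate input layer -> output layer.
h = sigmoid(w1 * x)
y = sigmoid(w2 * h)

# Backward pass: the weight modifications propagate output layer -> input layer;
# the hidden-layer delta reuses the output-layer error.
delta_out = (y - target) * y * (1 - y)
delta_hid = delta_out * w2 * h * (1 - h)

w2 -= lr * delta_out * h
w1 -= lr * delta_hid * x
print(w1, w2)  # both weights increase from 0.5, pushing the output toward 1
```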
Long Answer Problems
Problem 6
A. What conditional probabilities are recorded in the above Bayesian
network?
B. For each of the following statements, say whether it is true or false
in the above network:
B and C are independent absolutely.
B and C are independent given A.
B and C are independent given D.
A and D are independent absolutely.
A and D are independent given B.
A and D are independent given B and C.
C. Assuming that all the random variables are Boolean, show how Prob(B=T)
can be calculated in terms of the probabilities
recorded in the above network.
Problem 7
Datasets often contain instances with null values in some of the attributes.
Some classification learning algorithms are able to use such instances
in the training set; other algorithms must discard them.
 A. Can Naive Bayes make use of instances with null values in the
training set? Explain your answer.
 B. Can K-nearest neighbors make use of instances with null values in the
training set? Explain your answer.
Problem 8
The version of the ID3 algorithm in the class handout includes a test
"If AVG_ENTROPY(AS,C,T) is not substantially smaller than ENTROPY(C,T)";
if so, the algorithm constructs a leaf corresponding to the current state of T
and does not recur. "Substantially smaller" here, of course, is rather vague.
Is overfitting more likely to occur if this condition is changed
to require that "AVG_ENTROPY(AS,C,T) is much smaller than ENTROPY(C,T)" or
if the condition is changed to "AVG_ENTROPY(AS,C,T) is at all smaller than
ENTROPY(C,T)"? Explain your answer.
What is the disadvantage of eliminating the test entirely?
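For reference, the two quantities in the test can be sketched directly, assuming ENTROPY(C,T) is the entropy of the class label over table T and AVG_ENTROPY(AS,C,T) is the size-weighted average class entropy over the splits of T induced by attribute AS (the handout's exact definitions may differ in detail):

```python
import math

def entropy(labels):
    """ENTROPY(C, T): entropy of the class label over the table."""
    n = len(labels)
    probs = [labels.count(v) / n for v in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def avg_entropy(values, labels):
    """AVG_ENTROPY(AS, C, T): class entropy averaged over the splits
    induced by attribute values, weighted by split size."""
    n = len(values)
    total = 0.0
    for v in set(values):
        subset = [c for a, c in zip(values, labels) if a == v]
        total += len(subset) / n * entropy(subset)
    return total

labels = ["yes", "yes", "no", "no"]
attr   = ["a", "a", "b", "b"]        # this attribute splits the classes perfectly
print(entropy(labels))               # 1.0
print(avg_entropy(attr, labels))     # 0.0 -- far smaller, so ID3 would split
```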
Problem 9

A. What is the sparse data problem in using Naive Bayes for
classifying text? How is it solved?

B. What is the sparse data problem in using the K-gram model for
tagging text? How is it solved?
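One standard fix for sparse data in Naive Bayes text classification is Laplace (add-one) smoothing: a word never observed with a class still receives a small nonzero probability instead of zeroing out the whole product. A minimal sketch with made-up counts:

```python
# Hypothetical word counts for one class ("spam"), invented for illustration.
vocab = ["cheap", "meeting", "pills"]
counts = {"cheap": 3, "meeting": 0}
total = sum(counts.get(w, 0) for w in vocab)

def smoothed_prob(word):
    # Add 1 to every count; add the vocabulary size to the denominator.
    return (counts.get(word, 0) + 1) / (total + len(vocab))

print(round(smoothed_prob("meeting"), 3))  # 0.167 -- nonzero despite zero count
print(round(smoothed_prob("cheap"), 3))    # 0.667
```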
Problem 10
The most common measure of the quality of a classifier is in terms of the
accuracy of its predictions. Explain why this is not always the best
measure and describe an alternative measure.
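A small illustration of why raw accuracy can mislead on skewed classes (the labels below are invented): a classifier that always predicts the majority class scores well on accuracy yet finds none of the minority instances, which recall exposes.

```python
# 2 positives among 10 instances; the trivial classifier predicts all negative.
actual    = ["pos", "neg", "neg", "neg", "pos", "neg", "neg", "neg", "neg", "neg"]
predicted = ["neg"] * 10

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
true_pos = sum(a == p == "pos" for a, p in zip(actual, predicted))
recall = true_pos / actual.count("pos")

print(accuracy)  # 0.8 -- looks respectable
print(recall)    # 0.0 -- not a single positive instance was found
```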