Midterm: Introduction to Artificial Intelligence

March 7, 2001

Part I: Multiple choice (7 points each)

Problem 1

The hill-climbing algorithm:
A. always terminates at the optimal solution.
B. always terminates at a local maximum.
C. goes into an infinite loop if there is no solution.
D. runs in time B^D, where B is the branching factor and D is the depth.

Answer: B
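To see why (B) is the right answer, consider a minimal hill-climbing sketch. The objective function below is invented for illustration: it has a local maximum at x = 2 and a global maximum at x = 8, so the algorithm's result depends on where it starts.

```python
def hill_climb(start, neighbors, score):
    """Repeatedly move to the best-scoring neighbor; stop when none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current  # local maximum: no neighbor is better
        current = best

# Toy landscape (invented): local maximum at x = 2, global maximum at x = 8.
f = {0: 0, 1: 3, 2: 5, 3: 2, 4: 1, 5: 4, 6: 6, 7: 8, 8: 9}

def neighbors(x):
    return [n for n in (x - 1, x + 1) if n in f]

print(hill_climb(0, neighbors, f.get))  # climbs 0 -> 1 -> 2, then stops at 2
```

Starting from 0 the search terminates at the local maximum 2, not the global optimum 8; it always terminates, but not always at the best solution.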

Problem 2

Consider the following CFG grammar:
S -> NP VP
NP -> NG | NG "and" NG
NG -> pronoun | noun
VP -> verb | verb NP | VP "and" VP

Lexicon:
I : pronoun
cook : noun, verb
eggs : noun
fish : noun, verb
Which of the following parse trees are correct?
 
i.  S ---> NP ---> NG ---> pronoun ---> I
       |
       |-> VP ---> verb ---> cook
               |
               |-> NP ---> NG ---> noun ---> eggs
                       |
                       |-> "and"
                       |
                       |-> NG ---> noun ---> fish


ii. S ---> NP ---> NG ---> pronoun ---> I
       |
       |-> VP ---> verb ---> cook
               |
               |-> NP ---> NG ---> noun ---> eggs
                       |
                       |-> "and"
                       |
                       |-> VP ---> verb ---> fish


iii.S ---> NP ---> NG ---> pronoun ---> I
       |
       |-> VP ---> VP ---> verb ---> cook
               |       |       
               |       |-> NP ---> NG ---> noun ---> eggs
               |               
               |-> "and"
               |
               |-> VP ---> verb ---> fish


iv. S ---> NP ---> NG ---> pronoun ---> I
       |
       |-> VP ---> verb ---> cook
               |       
               |-> NP ---> NG ---> noun ---> eggs
               |       
               |-> "and"
               |
               |-> VP ---> verb ---> fish
A. All four.
B. Only (i)
C. (i), (iii), and (iv).
D. (i) and (iii).
E. (i) and (iv).

Answer: D

Problem 3

Problem 2 illustrates:
A. Anaphoric ambiguity.
B. Lexical ambiguity.
C. Semantic ambiguity.
D. Syntactic ambiguity
E. None of the above.

Answer: D

Problem 4

Suppose that we wish to tag words in a text with their part of speech. Let t1 ... tN be the tags, and let w1 ... wN be the words. In a probabilistic model we wish to:
A. Evaluate Prob(t1 ... tN | w1 ... wN).
B. Evaluate Prob(w1 ... wN | t1 ... tN).
C. Find t1 ... tN for which Prob(t1 ... tN | w1 ... wN) is maximal.
D. Find w1 ... wN for which Prob(t1 ... tN | w1 ... wN) is maximal.
E. Find t1 ... tN for which Prob(w1 ... wN | t1 ... tN) is maximal.
F. Find w1 ... wN for which Prob(w1 ... wN | t1 ... tN) is maximal.

Answer: C
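Option (C) can be illustrated with a brute-force sketch: enumerate every tag sequence and keep the one maximizing Prob(t1 ... tN | w1 ... wN). The probability table below is invented for illustration (treating words as independent); a real tagger would estimate probabilities from a corpus and use dynamic programming rather than enumeration.

```python
from itertools import product

# P(tag | word): invented numbers for the ambiguous word "fish".
p_tag_given_word = {
    ("I", "pronoun"): 1.0,
    ("fish", "noun"): 0.4,
    ("fish", "verb"): 0.6,
}

def best_tagging(words, tagset):
    """Brute-force argmax over tag sequences (words treated independently)."""
    best, best_p = None, -1.0
    for tags in product(tagset, repeat=len(words)):
        p = 1.0
        for w, t in zip(words, tags):
            p *= p_tag_given_word.get((w, t), 0.0)
        if p > best_p:
            best, best_p = tags, p
    return best

print(best_tagging(["I", "fish"], ["pronoun", "noun", "verb"]))
# -> ('pronoun', 'verb')
```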

Problem 5

In a chart parser, the "COMPLETER" module could combine edge [2,4,VP -> VG * NP] with
A. edge [2,4,VG -> modal verb *] to create edge [2,4,VP -> VG NP *]
B. edge [4,6,VG -> modal verb *] to create edge [2,6,VP -> VG NP *]
C. edge [2,6,VG -> modal verb *] to create edge [2,6,VP -> VG * NP]
D. edge [2,4,NP -> determiner noun *] to create edge [2,4,VP -> VG NP *]
E. edge [4,6,NP -> determiner noun *] to create edge [2,6,VP -> VG NP *]
F. edge [2,6,NP -> determiner noun *] to create edge [2,6,VP -> VG * NP]

Answer: E
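The COMPLETER step can be sketched directly: an active edge [i, j, A -> alpha * B beta] combines with a complete edge [j, k, B -> gamma *] whose span starts exactly where the active edge ends, yielding [i, k, A -> alpha B * beta]. The tuple representation of edges below is a hypothetical choice for illustration.

```python
def complete(active, finished):
    """Combine an active edge with an adjacent complete edge, or return None."""
    (i, j, lhs, done, todo) = active     # dot sits between 'done' and 'todo'
    (j2, k, cat, body) = finished        # finished edge spans [j2, k]
    if todo and j == j2 and todo[0] == cat:   # spans must be adjacent
        return (i, k, lhs, done + [cat], todo[1:])
    return None

# Problem 5, option E: [2,4, VP -> VG * NP] plus [4,6, NP -> determiner noun *]
vp_edge = (2, 4, "VP", ["VG"], ["NP"])
np_edge = (4, 6, "NP", ["determiner", "noun"])
print(complete(vp_edge, np_edge))  # -> (2, 6, 'VP', ['VG', 'NP'], [])
```

The other options fail this check: either the completed edge is not an NP (A-C), or its span does not begin at position 4 where the active edge ends (D, F).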

Part II: Long answer questions

Problem 6 (25 points)

The MAP-COLORING problem is defined as follows: Given a map of countries, and a fixed set of colors, assign a color to each region in the map in such a way that no two adjacent regions have the same color. For example, the map below can be colored from the set {Red, White, Blue} by the assignment: A is Red, B is White, C is Blue, D is White, E is Red.
                  A--C--B
                  | / \ |
                  |/   \|
                  D-----E
The following non-deterministic algorithm solves the MAP-COLORING problem:
MC(in : R : set of regions;
        C : set of colors;
   out : O : array[region] of color)

O := empty;
for U in R do
   choose a color X in C such that no region bordering U has color X;
   O[U] := X;
endfor

Suppose that we apply this algorithm to the above problem. Suppose that the non-deterministic search is implemented as a depth-first search; that the regions R are ordered alphabetically; and that the choices of color are considered in the order red, white, blue.

A. What is the first solution that is found?

Answer: A->red; B->white; C->blue; D->white; E->red.

B. Name one failure state that is encountered before the first solution is found.

Answer: A->red; B->red; C->white; D->blue is the first. There are others.

C. Is there any advantage to implementing this as an iterative deepening search rather than as a depth-first search? Explain your answer.

Answer: No, because the depth of the goal state is known from the start. (It is equal to the number of regions.)
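The answers to (A) and (B) can be checked with a backtracking implementation of the algorithm, using the map, region order, and color order from the question:

```python
BORDERS = {
    "A": {"C", "D"}, "B": {"C", "E"}, "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"}, "E": {"B", "C", "D"},
}
REGIONS = sorted(BORDERS)           # alphabetical order
COLORS = ["red", "white", "blue"]   # choice order from the question

def color_map(assignment=None, i=0):
    """Depth-first search over color choices; return the first full coloring."""
    assignment = {} if assignment is None else assignment
    if i == len(REGIONS):
        return dict(assignment)
    region = REGIONS[i]
    for c in COLORS:
        if all(assignment.get(n) != c for n in BORDERS[region]):
            assignment[region] = c
            result = color_map(assignment, i + 1)
            if result:
                return result
            del assignment[region]  # backtrack on failure
    return None

print(color_map())
# -> {'A': 'red', 'B': 'white', 'C': 'blue', 'D': 'white', 'E': 'red'}
```

Tracing the search also exhibits the failure state in (B): the very first branch assigns A->red, B->red, C->white, D->blue, and then finds no legal color for E.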

Problem 7 (20 points)

Consider the following pair of sentences:
A. Joe wore a wool suit. ("suit" = pants and jacket)
B. The suit is in the court. ("suit" = lawsuit).
Explain how the disambiguation techniques of selectional restriction and frequency in context can be applied to these two sentences.

Answer: The meaning "suit" = lawsuit can be excluded in sentence A by the selectional restriction that the object of "wore" must be an item of clothing. The meaning "suit" = lawsuit is preferred in sentence B by frequency in context, as it is the more frequent meaning in the context of courts.
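The selectional-restriction part of the answer can be sketched with a toy lexicon. The sense inventory and the restriction table below are invented for illustration:

```python
SENSES = {"suit": ["clothing", "lawsuit"]}      # invented mini sense inventory
OBJECT_RESTRICTION = {"wear": "clothing"}       # "wear" selects for clothing

def disambiguate(verb, noun):
    """Keep only the noun senses compatible with the verb's object restriction."""
    required = OBJECT_RESTRICTION[verb]
    return [s for s in SENSES[noun] if s == required]

print(disambiguate("wear", "suit"))  # -> ['clothing']
```

Frequency in context works the other way around: rather than eliminating senses outright, it ranks the surviving senses by how often each occurs in similar contexts (here, courtroom contexts favor "lawsuit").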

Problem 8 (20 points)

Consider the following two CFG grammars:

Grammar 1:

S -> NP VP
NP -> determiner noun PP*
VP -> verb  | verb NP
PP -> preposition NP

Grammar 2:

S -> NP VP
NP -> determiner noun {PP}
VP -> verb  | verb NP
PP -> preposition NP

Recall that PP* means "any number --- possibly zero --- of prepositional phrases," and that { PP } means "either zero or one prepositional phrase."

A. Consider the sentence "The man with the beard from the committee talked." Show that this sentence has two parses in grammar 1 and one parse in grammar 2.

Answer: Grammar 1 gives the following two parses:

S ---> NP ---> determiner ---> the
   |       |
   |       |-> noun ---> man
   |       |
   |       |-> PP ---> preposition ---> with
   |               |
   |               |-> NP ---> determiner ---> the
   |                       |
   |                       |-> noun ---> beard
   |                       |
   |                       |-> PP ---> preposition ---> from
   |                               |
   |                               |-> NP ---> determiner ---> the
   |                                       |
   |                                       |-> noun ---> committee
   |               
   |-> VP ---> verb ---> talked.

S ---> NP ---> determiner ---> the
   |       |
   |       |-> noun ---> man
   |       |
   |       |-> PP ---> preposition ---> with
   |       |       |
   |       |       |-> NP ---> determiner ---> the
   |       |               |
   |       |               |-> noun ---> beard
   |       |                
   |       |-> PP ---> preposition ---> from
   |               |
   |               |-> NP ---> determiner ---> the
   |                       |
   |                       |-> noun ---> committee
   |               
   |-> VP ---> verb ---> talked.
Grammar 2 gives only the first of these.

B. It is easily shown that the two grammars can parse the same sentences. That is, if a sentence can be parsed by grammar 1, then it can be parsed by grammar 2, and vice versa. However, as in part (A), grammar 1 often gives two or more different parse trees for sentences where grammar 2 gives only one. Explain why grammar 1 is, in fact, more suitable for a natural language system than grammar 2. Specifically, you should consider how these two grammars will interact with a compositional semantics.

Answer: In a compositional semantics the rule is that the interpretation of the structure

NP ---> determiner
    |  
    |-> noun
    |
    |-> PP1
    |
    ...
    |
    |-> PPk
is that each of the meanings of the PPi applies to the meaning of the noun. Thus, the two different parse trees given by grammar 1 have two different meanings: the first means that the beard is from the committee, and the second means that the man is from the committee. Depending on the sentence, either one of these may be correct; in this case, of course, the second is correct. Grammar 2, by eliminating the second parse tree, also eliminates the correct interpretation of the sentence.
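The compositional rule above can be sketched in Python, with invented meaning representations: each PP meaning is a function that constrains the entity denoted by the noun it attaches to.

```python
def pp(relation, obj):
    """PP meaning: a predicate that adds a constraint to the modified entity."""
    return lambda entity: {**entity, relation: obj}

def np(noun, pps):
    """NP -> determiner noun PP*: apply every PP meaning to the head noun."""
    meaning = {"kind": noun}
    for p in pps:
        meaning = p(meaning)
    return meaning

# Second parse of "the man with the beard from the committee": both PPs
# attach to "man", so it is the man who is from the committee.
man = np("man", [pp("with", "beard"), pp("from", "committee")])
print(man)  # -> {'kind': 'man', 'with': 'beard', 'from': 'committee'}
```

Under grammar 2, the "from the committee" PP could only be built inside the NP "the beard", so the constraint would land on the beard's meaning instead; the flat attachment shown here, which grammar 1 permits, is the one that yields the correct interpretation.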