CSCI-GA.2590 -- Natural Language Processing -- Spring 2013

The final exam

Most questions will be of one of the following types. Many of these correspond directly to questions asked for homework. In addition, there may be a few short answer questions corresponding to major points of a lecture; some of these are included in the list below. I may also ask a short (few-sentence) essay question about an issue we have discussed in the lectures.

  1. English sentence structure: Label the constituents (NP, VP, PP, etc.) of an English sentence based on the grammar given in Chapter #12 (and summarized in the handout for homework #1). If the sentence is ambiguous, show its multiple parses. If the sentence violates some grammatical constraint, describe the constraint. (lecture #2, homework #1).
  2. Context-free grammar: Extend the context-free grammar to cover an additional construct, or to capture a grammatical constraint. (homework #1).
  3. Parsing: Given a very small context-free grammar, step through the operation of, or count the number of operations performed by, a top-down backtracking parser, a bottom-up parser, or a chart parser (homework #2). What is the [time] complexity of these parsers? Convert the constituent structure into a dependency structure. (lecture #3)
  4. POS tagging: Tag a sentence using the Penn POS tags (homework #2).
  5. HMMs and the Viterbi decoder: Describe how POS tagging can be performed using a probabilistic model (J&M sec. 5.5 and chap 6; lecture #4). Create an HMM from some POS-tagged training data. Trace the operation of a Viterbi decoder. Compute the likelihood of a given tag sequence and the likelihood of generating a given sentence from an HMM (homework #3). What is the [time] complexity of the decoder? (A toy Viterbi sketch appears after this list.)
  6. Chunkers and name taggers: Explain how BIO tags can be used to reduce chunking or name identification to a token-tagging task (see the BIO-tagged example after this list). Explain how chunking can be evaluated. (lecture #6).
  7. Maximum entropy: Explain how a maximum-entropy model can be used for tagging or chunking (lecture #6 and homework #6). Suggest some suitable features for each task.
  8. Jet: be able to extend, or trace the operation of, one of the Jet pattern sets we have distributed and discussed (for noun and verb groups, and for appointment events). Analyze and correct a shortcoming in the appointment patterns (homework #8).
  9. Lexical semantics and word sense disambiguation: given two words, state their semantic relationship; given a word with two senses and a small training set of contexts for each of the two senses, apply the naive Bayes procedure to resolve the sense of the word in a test case (J&M 20.2.2); given two words and a few sentences containing them, compute their cosine similarity (lecture #8). (A cosine-similarity sketch appears after this list.)
  10. Reference resolution: analyze a reference resolution problem -- identify the type of anaphora and the constraints and preferences which would lead a system to select the correct antecedent (lecture #10).
  11. Probabilistic CFG: Train a probabilistic CFG from some parses; apply this PCFG to disambiguate a sentence. Explain how this PCFG can be extended to capture lexical information. Compute lexically-conditioned probabilities. (homework #8; a PCFG estimation sketch appears after this list)
  12. Machine translation: Give the basic formula for noisy channel translation. Explain how an n-gram language model can be estimated from a corpus. What assumption is made by IBM Model 1? (A noisy-channel and bigram sketch appears after this list.)
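
Illustrative sketches

The sketches below are study aids only; the probabilities, counts, sentences, and labels are invented for illustration and are not course data or distributed code.

For item 5, a minimal Python sketch of Viterbi decoding over a toy two-tag HMM (the tag set, transition and emission probabilities, and sentence are hypothetical):

    def viterbi(words, tags, start_p, trans_p, emit_p):
        # best[i][t] = probability of the best tag path for words[0..i] ending in tag t
        best = [{t: start_p[t] * emit_p[t].get(words[0], 0.0) for t in tags}]
        back = [{}]
        for i in range(1, len(words)):
            best.append({})
            back.append({})
            for t in tags:
                prob, prev = max(
                    (best[i - 1][s] * trans_p[s][t] * emit_p[t].get(words[i], 0.0), s)
                    for s in tags
                )
                best[i][t] = prob
                back[i][t] = prev
        # Follow back-pointers from the most probable final tag.
        prob, last = max((best[-1][t], t) for t in tags)
        path = [last]
        for i in range(len(words) - 1, 0, -1):
            path.insert(0, back[i][path[0]])
        return path, prob

    # Hypothetical model: two tags and made-up probabilities.
    tags = ["N", "V"]
    start_p = {"N": 0.7, "V": 0.3}
    trans_p = {"N": {"N": 0.4, "V": 0.6}, "V": {"N": 0.8, "V": 0.2}}
    emit_p = {"N": {"flies": 0.4, "fruit": 0.6}, "V": {"flies": 0.9, "fruit": 0.1}}
    print(viterbi(["fruit", "flies"], tags, start_p, trans_p, emit_p))  # (['N', 'V'], ~0.227)

The decoder fills an n x |tags| table and examines every tag pair at each position, which is the source of its O(n * |tags|^2) time complexity.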
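
For item 6, a hand-labeled example of reducing NP chunking to token tagging with BIO tags (B = begins a chunk, I = inside a chunk, O = outside any chunk); the sentence and labels are my own illustration:

    He/B-NP  reckons/O  the/B-NP  current/I-NP  account/I-NP  deficit/I-NP  will/O  narrow/O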
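
For item 9, a sketch of cosine similarity between two words represented as (invented) context-word count vectors:

    import math

    def cosine(u, v):
        # dot(u, v) / (|u| * |v|) over the union of context dimensions
        dims = set(u) | set(v)
        dot = sum(u.get(d, 0) * v.get(d, 0) for d in dims)
        norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    # Hypothetical co-occurrence counts for two words.
    car = {"drive": 3, "wheel": 2, "road": 1}
    truck = {"drive": 2, "wheel": 1, "cargo": 4}
    print(round(cosine(car, truck), 3))  # 0.467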
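
For item 11, a sketch of estimating PCFG rule probabilities by relative frequency, P(A -> beta) = count(A -> beta) / count(A), from rule occurrences read off two hypothetical parse trees:

    from collections import Counter

    rules = Counter()
    lhs_counts = Counter()

    # Hypothetical rule occurrences (one entry per internal node of the toy parses).
    observed = [
        ("S", ("NP", "VP")), ("NP", ("DT", "NN")), ("NP", ("NNS",)),
        ("VP", ("VBD", "NP")), ("S", ("NP", "VP")), ("VP", ("VBD",)),
    ]
    for lhs, rhs in observed:
        rules[(lhs, rhs)] += 1
        lhs_counts[lhs] += 1

    for (lhs, rhs), c in sorted(rules.items()):
        print(lhs, "->", " ".join(rhs), ":", c / lhs_counts[lhs])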
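
For item 12, the noisy channel model chooses the translation e that maximizes P(e | f), which by Bayes' rule is proportional to P(f | e) * P(e); the language model P(e) is estimated from monolingual text. Below, a sketch of a bigram maximum-likelihood estimate from a tiny made-up corpus (no smoothing):

    from collections import Counter

    corpus = ["the cat sat", "the cat ran", "the dog sat"]
    bigrams = Counter()
    unigrams = Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split()
        for prev, word in zip(tokens, tokens[1:]):
            bigrams[(prev, word)] += 1
            unigrams[prev] += 1

    # P(cat | the) = count(the cat) / count(the)
    print(bigrams[("the", "cat")] / unigrams["the"])  # 2/3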