G22.2590 - Natural Language Processing - Spring 2006 Prof. Grishman

Lecture 13 Outline

April 25, 2006

Discuss final exam outline.

Discourse Analysis: Planning

One approach to the analysis of narrative is through the use of plans:
  • We presume that the narrative describes a rational agent attempting to achieve a goal, and executing a sequence of actions to achieve that goal
  • However, the narrative only describes a select subset of the actions and goals.
  • Our objective in discourse analysis is to reconstruct the implicit actions and goals (and in so doing to resolve any ambiguities of syntax, logical form, or reference)
  • The problem is one of plan inference: to infer a plan from some of its steps and goals. This is somewhat different from the typical planning problem in AI, where we start with an explicit goal and seek a plan (typically, the cheapest plan) which satisfies the goal, although in both cases we are searching a space of possible plans.

    In any planning problem, we have a set of predicates which describe the state of the system, and a set of actions which affect that state (making some predicates true or false). Some actions may be compound actions: they represent sequences of simpler actions. Actions also have preconditions (predicates which must be true in order for the action to apply) and effects (predicates which become true or false when the action is performed).
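
    As a minimal sketch of this representation (STRIPS-style; the names and the ticket/train example are illustrative, not from the lecture):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Action:
            # Applicable when all preconditions hold in the current state;
            # performing it adds its effects and removes its deletions.
            name: str
            preconditions: frozenset
            effects: frozenset
            deletes: frozenset = frozenset()
            substeps: tuple = ()          # non-empty for compound actions

        def applicable(action, state):
            return action.preconditions <= state

        def perform(action, state):
            return (state | action.effects) - action.deletes

        # Illustrative example: boarding a train requires a ticket
        board = Action("board_train",
                       preconditions=frozenset({"have_ticket", "at_station"}),
                       effects=frozenset({"on_train"}),
                       deletes=frozenset({"at_station"}))
        print(perform(board, frozenset({"have_ticket", "at_station"})))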

    We can represent a plan by a tree, in which a goal dominates an action which achieves that goal, and an action dominates its precondition goals and (if it is a compound action) its constituent actions.
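
    One hypothetical encoding of such a plan tree (the node types and the train fragment are illustrative):

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class ActionNode:
            # An action dominates its precondition goals and, if it is
            # a compound action, its constituent sub-actions.
            name: str
            precondition_goals: List['GoalNode'] = field(default_factory=list)
            substeps: List['ActionNode'] = field(default_factory=list)

        @dataclass
        class GoalNode:
            # A goal dominates the action chosen to achieve it.
            predicate: str
            achieved_by: Optional[ActionNode] = None

        # Illustrative fragment: "at_destination" <- ride_train <- "on_train"
        root = GoalNode("at_destination",
                        achieved_by=ActionNode("ride_train",
                                               [GoalNode("on_train")]))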

    Given a discourse, we ideally seek to create a plan (tree) in which the root (initial goal) is either an explicitly stated goal or is known to be a "plausible goal", and in which each sentence of the discourse can be tied to some action in the goal tree. (In reality we will normally not be able to connect all assertions in the discourse to the plan, but will prefer analyses which have the maximal connection to a plan.)
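
    A toy version of this inference, which chains an observed action's effects through the preconditions of other actions until a plausible goal is reached (the Trains-style domain below is an illustrative assumption, and a real system would score competing plan trees rather than enumerate goals):

        ACTIONS = {
            "buy_ticket":  (frozenset({"at_station"}),  frozenset({"have_ticket"})),
            "board_train": (frozenset({"have_ticket"}), frozenset({"on_train"})),
            "ride_train":  (frozenset({"on_train"}),    frozenset({"at_destination"})),
        }
        PLAUSIBLE_GOALS = {"at_destination"}

        def goals_explained_by(name, depth=3):
            _, effects = ACTIONS[name]
            found = effects & PLAUSIBLE_GOALS
            if depth > 0:
                for other, (pre, _) in ACTIONS.items():
                    if effects & pre:        # this action enables 'other'
                        found |= goals_explained_by(other, depth - 1)
            return found

        # "He bought a ticket" is explained by the goal of reaching a destination:
        print(goals_explained_by("buy_ticket"))   # {'at_destination'}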

    Ex: Michelin Guide (from Wilensky): "Willa was hungry. She grabbed the Michelin Guide and got in her car."

    Trains domain (Allen, p. 484):
        "Jack needed to be in Rochester by noon.  He bought a ticket at the station."
        "Sue bought a ticket to Rochester.  She boarded the train at 4PM."

    Equipment failure reports.

    Managing Dialog (J&M 19.5)

    Pure "user initiative" systems maintain no model of dialog structure; prior discourse is maintained only to resolve anaphora and analyze fragments.

    (The other simple organization is pure "system initiative", where the system asks the questions and only accepts direct answers … a fancy menu system.)

    For mixed-initiative systems, we need to maintain some explicit representation of dialog goals in a goal stack. There will be both system (task) goals and user goals.

    For example, in an information gathering task, the system will have goals corresponding to the information it needs to gather (e.g., slots to fill in a form). If the top goal is such a system goal, the system will ask a question (to fill one slot in the form). The input will be analyzed with respect to this top goal: a direct answer satisfies the goal, which is then popped from the stack, while other input may push a new (user) goal to be addressed first.
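
    A minimal sketch of such a goal stack (the slot names and the toy extract() function are illustrative assumptions, and handling of the pushed user goals is omitted):

        form = {"origin": None, "destination": None, "time": None}
        goal_stack = [("FILL", slot) for slot in form]    # system (task) goals

        def extract(slot, utterance):
            # Toy extractor: looks for "slot: value" in the utterance.
            for part in utterance.split(","):
                part = part.strip()
                if part.lower().startswith(slot + ":"):
                    return part[len(slot) + 1:].strip()
            return None

        def handle_user_turn(utterance):
            _, slot = goal_stack[-1]
            value = extract(slot, utterance)
            if value is not None:
                form[slot] = value
                goal_stack.pop()          # top system goal satisfied
            else:
                # Not an answer to the top goal: treat it as a user goal
                # (question, correction) to be handled before resuming.
                goal_stack.append(("USER", utterance))

        handle_user_turn("time: 4PM")     # answers the current top goal
        print(form)   # {'origin': None, 'destination': None, 'time': '4PM'}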

    Dialog Analysis and Speech Acts (J&M 19.3)

    Simple dialog systems process questions at "face value". This can lead to "stonewalling" behavior such as (from Waltz):

    User: Are there summaries for January?
    System: Yes.
    User: Could I have the January summaries?
    System: Yes.
    User: I would like the January summaries.
    System: I understand.
    User: Where are the January summaries?
    System: On my disk.
    User: Can you give me the January summaries?
    System: Yes, I already told you that.

    To understand why this is peculiar (and inappropriate), we have to go beyond the literal meaning of an utterance and see it as a communicative act. Several of the turns above are indirect speech acts: the actual communicative action being performed (e.g., a request) is different from the surface form. How can we interpret such utterances?

    Plan-inferential interpretation of speech acts

    Like other actions, communicative acts have preconditions and effects. For example, in a communicative act of the Inform class, a speaker asserts a proposition P with the effect that the hearer then believes P (J&M, p. 736). In an act of the Request class, a speaker requests that the hearer perform some action A, with the effect that the hearer then intends to perform A. Given such an interpretation of communicative acts, the task for the hearer is to identify a plausible goal of the speaker from the communicative act he/she performs, and then to be responsive to that goal.
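
    A sketch of this view of speech acts as actions with effects on the hearer's mental state (the class and attribute names, and the effect formulas, are simplified placeholders):

        from dataclasses import dataclass

        @dataclass
        class Inform:
            # Effect: the hearer comes to believe the proposition.
            speaker: str
            hearer: str
            proposition: str
            def effect(self):
                return f"believes({self.hearer}, {self.proposition})"

        @dataclass
        class Request:
            # Effect: the hearer comes to intend to perform the action.
            speaker: str
            hearer: str
            action: str
            def effect(self):
                return f"intends({self.hearer}, {self.action})"

        print(Request("user", "system", "send(january_summaries)").effect())
        # intends(system, send(january_summaries))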

    For example, in the stonewalling above, the literal interpretation does not correspond to a plausible goal, so a cooperative system would seek a more likely goal (actually getting the summaries) and respond to that.

    If you ask a train agent "When does the train to Jamaica leave?" and he answers "3:15, track 27", it’s because he inferred that you wanted to get on that train and therefore needed to know where as well as when it left.

    Such interpretation, however, requires very deep modeling of the domain, which is feasible only for very narrow domains. More practical are 'cue-based' systems, which explicitly encode rules or features for identifying the intended communicative act (for example, that "Can you X" is a request for the system to do X).
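
    A toy cue-based classifier along these lines (the patterns and act labels are illustrative; a real system would use many more cues, or learned features):

        import re

        # Map surface forms to intended communicative acts without
        # any deep domain plan inference.
        CUE_RULES = [
            (re.compile(r"^(can|could) you (.+?)\??$", re.I), "REQUEST"),
            (re.compile(r"^i would like (.+?)\.?$", re.I),    "REQUEST"),
            (re.compile(r"^where (is|are) (.+?)\??$", re.I),  "WH_QUESTION"),
        ]

        def classify(utterance):
            for pattern, act in CUE_RULES:
                if pattern.match(utterance.strip()):
                    return act
            return "ASSERT"   # default: take the utterance at face value

        print(classify("Can you give me the January summaries?"))  # REQUEST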