CSCI-GA.2590 - Natural Language Processing - Spring 2013 Prof. Grishman

Lecture 10 Outline

April 9, 2013

Term projects:  importance of evaluation.  Separating development and test data.  Collecting data for interactive tasks ("Wizard of Oz" methods).
Error analysis.
For sentiment: alternative indicators. Fine-grained (aspect-based) sentiment.

Sentiment Analysis

Survey article: Opinion Mining and Sentiment Analysis, Bo Pang and Lillian Lee (2008)

Our focus until now has been on extracting objective information -- "the facts". But there is strong interest in extracting subjective information as well -- people's opinions about things and about other people. Reviews of products and services are the most prominent example.

The output of such an analysis may be a binary classification (positive vs. negative), a document ranking, or a classification with respect to particular features / issues of the product.

Supervised methods

Particularly for product and service reviews, there is now a vast amount of coarse-grained labeled data (typically 1-to-5-star ratings). This permits training a simple Naive Bayes classifier using individual words as features:
$s^* = \arg\max_s P(s \mid w_1 \dots w_n) = \arg\max_s P(w_1 \dots w_n \mid s)\,P(s) = \arg\max_s P(s) \prod_i P(w_i \mid s)$
(other classifiers such as Support Vector Machines can also be used). Training such a classifier in effect identifies particular words as strong indicators of positive or negative opinion.
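
As a concrete illustration, here is a minimal sketch of such a unigram Naive Bayes classifier (with add-one smoothing; the toy training documents are invented for the example):

    import math
    from collections import Counter, defaultdict

    def train(docs):
        # docs: list of (word_list, label) pairs
        label_counts = Counter()
        word_counts = defaultdict(Counter)   # label -> word frequencies
        vocab = set()
        for words, label in docs:
            label_counts[label] += 1
            word_counts[label].update(words)
            vocab.update(words)
        return label_counts, word_counts, vocab

    def classify(words, label_counts, word_counts, vocab):
        # argmax_s  P(s) * prod_i P(w_i | s), computed in log space
        total = sum(label_counts.values())
        def score(label):
            s = math.log(label_counts[label] / total)               # log P(s)
            denom = sum(word_counts[label].values()) + len(vocab)   # add-one smoothing
            for w in words:
                s += math.log((word_counts[label][w] + 1) / denom)  # log P(w | s)
            return s
        return max(label_counts, key=score)

    docs = [("a great fun movie".split(), "pos"),
            ("dull and boring film".split(), "neg")]
    model = train(docs)
    print(classify("great fun film".split(), *model))   # -> pos

Computing in log space avoids floating-point underflow when many small word probabilities are multiplied together.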

Such corpus-trained methods often work better than trying to select indicator words by hand. Good indicators are often far from obvious. Furthermore, the significance and polarity of many words will be domain-specific and sometimes product-specific.

Part-of-speech information may help -- distinguishing nouns and adjectives with the same spelling serves as a simple form of word sense disambiguation. More refined indicators (bigrams, dependency bigrams, ...) have a mixed track record.

Semi-supervised methods

In some cases training data is not available -- for low-resource languages, for new domains, for specific aspects of products. Semi-supervised methods can then be used to build lists of indicators.

These methods start with a seed set of known indicators. However, just expanding the seeds based on distributional similarity may lead to errors (why?), so other evidence of similar indicators is required. Hatzivassiloglou and McKeown (ACL 1997) used conjunction patterns: "X and Y" (for adjectives X and Y) suggests that X and Y have the same polarity, while "X but Y" suggests they have opposite polarity. WordNet and machine-readable dictionaries have also been used to find related indicators.
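
A minimal sketch of the conjunction-pattern heuristic, skipping the part-of-speech filtering that would restrict X and Y to adjectives (the seed sets and sentences are invented for the example):

    from collections import Counter

    positive = {"good", "reliable"}
    negative = {"bad", "flimsy"}

    def expand(sentences, positive, negative):
        votes = {}   # candidate word -> Counter({"pos": n, "neg": n})
        for tokens in sentences:
            for i in range(len(tokens) - 2):
                x, conj, y = tokens[i], tokens[i + 1], tokens[i + 2]
                if conj not in ("and", "but"):
                    continue
                same = (conj == "and")   # "and" -> same polarity, "but" -> opposite
                for known, other in ((x, y), (y, x)):
                    if known in positive:
                        label = "pos" if same else "neg"
                    elif known in negative:
                        label = "neg" if same else "pos"
                    else:
                        continue
                    votes.setdefault(other, Counter())[label] += 1
        # assign each new word its majority label
        new_pos = {w for w, v in votes.items() if v["pos"] > v["neg"]} - positive - negative
        new_neg = {w for w, v in votes.items() if v["neg"] > v["pos"]} - positive - negative
        return new_pos, new_neg

    sents = [["sturdy", "and", "reliable"], ["good", "but", "overpriced"]]
    print(expand(sents, positive, negative))   # -> ({'sturdy'}, {'overpriced'})

A real system would run this over a large corpus, iterate so that newly labeled words serve as additional seeds, and resolve conflicting votes more carefully.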

Discourse.  Until now we have considered the structure and meaning of sentences in isolation.  We now turn to issues primarily connected with multi-sentence text -- discourse.

Reference Resolution (J&M 21.3-8)

Terminology

Types of referring expressions

Complications

Resolving pronoun reference

One of the first statistical procedures for resolving pronouns (Ge, Hale, and Charniak, WVLC 1998) maximized the product of four factors; together these factors gave about 83% accurate resolution.
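
Schematically, such a resolver scores each candidate antecedent by a product of independently estimated factor probabilities and picks the argmax. This is only an illustration of the multiplicative form, not Ge, Hale, and Charniak's actual model; the two factor functions below are invented placeholders:

    import math

    def resolve(pronoun, candidates, factors):
        # pick the candidate maximizing prod_j f_j(pronoun, candidate),
        # computed in log space; each factor returns a probability in (0, 1]
        def score(cand):
            return sum(math.log(f(pronoun, cand)) for f in factors)
        return max(candidates, key=score)

    def distance_factor(pron, cand):
        # nearer candidates are more likely antecedents
        return 1.0 / (1 + pron["position"] - cand["position"])

    def gender_factor(pron, cand):
        # strongly prefer gender agreement
        return 0.95 if pron["gender"] == cand["gender"] else 0.05

    pron = {"position": 10, "gender": "f"}
    cands = [{"name": "Mary", "position": 8, "gender": "f"},
             {"name": "John", "position": 9, "gender": "m"}]
    print(resolve(pron, cands, [distance_factor, gender_factor])["name"])   # -> Mary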

Resolving other referring expressions

Anaphora resolution in Jet

Using anaphora resolution for extraction:  an example

In many cases, we want to be able to retrieve an argument from context when it is not part of the immediate syntactic structure.  A simple way of doing this is to generate a zero anaphor (an ngroup constituent not spanning any text) and then let reference resolution map it to an entity.  We have created a version of the AppointPatterns which uses this method to collect organization names and, in some cases, person names.
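
A schematic sketch of the idea, independent of Jet's actual API (all names below are invented for illustration): when a pattern matches but an argument is missing from the local structure, create a zero-width mention at the match point and let the resolver link it to a prior entity:

    from dataclasses import dataclass

    @dataclass
    class Mention:
        start: int            # character offset in the document
        end: int              # for a zero anaphor, end == start (spans no text)
        entity: dict = None

    def resolve(mention, entities):
        # toy resolver: link the mention to the most recent prior entity;
        # a real resolver would also check semantic-type compatibility
        prior = [e for e in entities if e["last_offset"] < mention.start]
        mention.entity = max(prior, key=lambda e: e["last_offset"], default=None)
        return mention

    # The pattern matched "... was appointed president" at offset 120, but the
    # organization argument is absent locally, so we create a zero anaphor
    # there and let reference resolution fill it in.
    entities = [{"name": "IBM", "type": "ORG", "last_offset": 40}]
    org_arg = resolve(Mention(start=120, end=120), entities)
    print(org_arg.entity["name"])   # -> IBM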