Lecture 5: Evaluation

Precision and Recall

D = set of all documents
Q = set of documents retrieved
R = set of relevant documents

QR -- True positives. (Relevant documents retrieved)
Q(D-R) -- False positives. (Irrelevant documents retrieved)
(D-Q)R -- False negatives. (Relevant documents omitted)
(D-Q)(D-R) -- True negatives. (Irrelevant documents omitted)

Percentage correct = (|QR| + |(D-Q)(D-R)|) / |D|.
Not a good measure; counts false positives and false negatives equally.
E.g. suppose |R| = 3.
Q1 returns two relevant documents and three irrelevant documents.
Q2 returns one irrelevant document.
Then both make the same number of errors (4): Q1 has 3 false positives and 1 false negative, while Q2 has 1 false positive and 3 false negatives. But clearly Q1 is better than Q2.

Standard measures in IR (in fact, in all applications where the objective is to find a set of solutions):
Precision = |QR| / |Q| -- fraction of retrieved documents that are relevant = 1 - (fraction of retrieved documents that are false positives).
Recall = |QR| / |R| -- fraction of relevant documents that are retrieved = 1 - (fraction of relevant documents that are false negatives).

In the above example Q1 has precision 2/5 and recall 2/3. Q2 has precision and recall = 0.
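
A minimal sketch of these definitions in Python, applied to the example above (the document names d1, d2, x1, etc. are invented for illustration):

# Precision and recall as operations on sets of documents.
def precision(retrieved, relevant):
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

R  = {"d1", "d2", "d3"}              # the 3 relevant documents
Q1 = {"d1", "d2", "x1", "x2", "x3"}  # 2 relevant + 3 irrelevant retrieved
Q2 = {"x4"}                          # 1 irrelevant document retrieved

print(precision(Q1, R), recall(Q1, R))   # 0.4, 0.666...
print(precision(Q2, R), recall(Q2, R))   # 0.0, 0.0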

If Q1 is a subset of Q2, then Recall(Q2) >= Recall(Q1). Prec(Q2) can be either greater or less than Prec(Q1). If you consider the precision over the first K documents returned, for K = 1, 2, ..., then the precision goes up every time the Kth document dK is relevant and down every time it is irrelevant, so the graph is sawtoothed. But on the whole precision tends to go down, so there is a trade-off between recall and precision as you retrieve more documents.
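
As a sketch of this sawtooth behavior, in Python (the relevance judgments in rels are invented):

# rels[i] = 1 if the (i+1)st document returned is relevant, 0 otherwise.
rels = [1, 0, 1, 1, 0, 0, 1, 0]
hits, prec_at_k = 0, []
for k, r in enumerate(rels, start=1):
    hits += r
    prec_at_k.append(hits / k)     # precision over the first K documents
print(prec_at_k)   # rises at each relevant document, falls at each irrelevant one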

Smoothed precision: Plot precision only at the points where relevant documents are found; interpolate in between. Set precision(0) = 1. Then precision can be monotonically decreasing, and will tend to be so except possibly at the beginning.

Probabilistic model. Suppose that the matcher returns a measure of the "quality" of the document for the query. Suppose that this measured quality has some value in the following sense:
If q1 > q2, then Prob(d in R | qual(d)=q1) > Prob(d in R | qual(d) = q2)
Let QT = { d | qual(d) >= T }.
Then, as the threshold T is lowered (so that more documents are retrieved), the expected value of precision(QT) decreases, while the expected value of recall(QT) increases, but with diminishing returns (concave downward as a function of the number of documents retrieved).
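
A small simulation of this model, as a sketch (the quality scores and the relevance probabilities are invented; only the monotonicity assumption above is taken from the model):

import random
random.seed(0)

# Each document gets a quality score in [0,1]; as the model assumes, the
# probability that a document is relevant increases with its quality.
qualities = [random.random() for _ in range(100000)]
docs = [(q, random.random() < q) for q in qualities]   # (quality, relevant?)
n_relevant = sum(1 for _, rel in docs if rel)

for T in (0.2, 0.5, 0.8):
    QT = [rel for q, rel in docs if q >= T]
    tp = sum(QT)                                       # true positives in QT
    # Precision rises and recall falls as the threshold T is raised.
    print(T, tp / len(QT), tp / n_relevant)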

Other tradeoffs

Choices other than the threshold also tend to trade off precision vs. recall. (Of course, if they don't trade off, then just go with the better of the two: win-win.) E.g. stemming and the inclusion of synonyms tend to increase recall at the cost of precision.

Problems with Precision and Recall

Alternative measures

F-measure: Harmonic mean of precision and recall:
1/F = average(1/p,1/r)
F = 2pr/(p+r).
If either p or r is small then F is small. If p and r are close then F is about the average of p and r.
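
E.g. for Q1 above, p = 2/5 and r = 2/3, so F = 2*(2/5)*(2/3) / (2/5 + 2/3) = (8/15) / (16/15) = 1/2, which lies between p and r but closer to the smaller of the two.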

Generalized precision: Value of information obtained for user / cost of examining results.

Generalized recall: Value of information obtained / Value of optimal results (or: value of entire Web for user's current need)

Average precision: Average of precision at 20% recall, 50%, and 80%. Or average of precision at recall = 0%, 10%, 20%, ..., 90%, 100%. (Since recall does not attain these values exactly, and since recall remains constant until the next relevant document is found, so that the same value of recall can have several values of precision, take the max precision or the average precision, and interpolate. Similarly, precision at recall = 0% is extrapolated.)
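
A sketch of the eleven-point version in Python, taking the max precision at any recall level >= the target (the relevance list and |R| are invented):

# rels[i] = 1 if the (i+1)st returned document is relevant (invented data);
# n_relevant = |R|, the total number of relevant documents.
rels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
n_relevant = 5

points, hits = [], 0                 # (recall, precision) after each document
for k, r in enumerate(rels, start=1):
    hits += r
    points.append((hits / n_relevant, hits / k))

def interp(t):                       # max precision at any recall >= t
    ps = [p for rec, p in points if rec >= t]
    return max(ps) if ps else 0.0

levels = [i / 10 for i in range(11)]                  # recall = 0%, 10%, ..., 100%
print(sum(interp(t) for t in levels) / len(levels))   # interpolated average precision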

Precision over first K documents: (or average relevance over first K documents). User model: User will only read first K documents.

Rank of Kth relevant document (or rank such that sum of relevance = K). User model: User will read until he has gotten K relevant documents (or documents whose total relevance is K).

Weighted precision: sumK in Q rel(dK) / |Q|.

Weighted recall: sumK in Q rel(dK) / sumK in R rel(dK).

Order diminishing sum: Value of search is sumK rel(dK) * p^K. User model: User starts reading at the beginning and at each step continues with probability p.
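
A sketch of the weighted precision, weighted recall, and order-diminishing-sum measures in Python (the relevance scores, the total relevance of R, and p are all invented):

rel = [1.0, 0.0, 0.5, 1.0, 0.0]   # rel[k] = relevance of the (k+1)st retrieved document
total_rel_R = 4.0                 # sum of rel(d) over all relevant documents R
p = 0.8                           # probability that the user keeps reading at each step

weighted_precision = sum(rel) / len(rel)
weighted_recall = sum(rel) / total_rel_R
order_diminishing_sum = sum(r * p**k for k, r in enumerate(rel, start=1))
print(weighted_precision, weighted_recall, order_diminishing_sum)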

Total content precision: Total information relevant to the query in the pages retrieved, divided by |Q| (or divided by total reading time). Of course, this is hard to quantify. You can, for example, prepare a list of questions on the subject matter, and measure "total information" as the fraction of questions that can be answered from the retrieved texts.

Total content recall: Total information relevant to query in pages retrieved divided by total information relevant to query in Web.

Estimating R

A document is not found for one of two reasons.

First case is hopeless. (We will talk later about how to estimate the size of this set, but no way to estimate # of relevant documents.)

Second case:

Relevance

Measured:

Experimental design

Ecologically valid: users are observed as they use the Web for their own purposes. The more interference, the less ecologically valid. (Just informing users that they are being observed alters their behavior; however, there can be privacy issues if they are observed without being informed.)

Controlled experiment: Users carry out task specified by experimenter in controlled setting. Much more information per task, much more demanding of user, possible to design narrowly focussed experiment, less clearly representative of "normal" use.

Significance of experiment

Failures and errors occur for the following causes: Different experiments detect different combinations of these. For example:

Tester specifies a query; test subject reads the first 30 pages and labels each page "relevant", "irrelevant", or with a category of failure (e.g. "bad link", "too long to download", etc.). This tests separately 3.A, possibly combined with 3.E; 2.C; 7.A; 7.B.

Tester is aware (not through the search engine) of a valuable page; runs a variety of queries; tabulates the fraction of queries for which the page is in the top 100. Combines 2.A, 3.B, 3.C, 3.D.

Tester specifies list of questions in some subject area to be answered in fixed time period; test subjects use search engine as best they can. This combines pretty much all possible errors.

Evaluating Clusters

Formal measures: Normalize all vectors to length 1. Assume fixed number of clusters.

Variable number of clusters: Any of the above + suitable penalty for more clusters.

Formal measures test adequacy of clustering algorithm, but not relevance of measures to actual significance.

Ask subjects to cluster the documents; compare the resulting clusterings with the system's.

Ask subjects to evaluate the similarity of all pairs of documents. Correlate these similarities with the clustering (e.g. average similarity within clusters / average similarity between clusters).
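
A sketch of this within/between comparison in Python (the similarity judgments and the clustering are invented):

from itertools import combinations

# sim[(a, b)] = a subject's judged similarity of documents a and b.
sim = {("d1", "d2"): 0.9, ("d1", "d3"): 0.2, ("d1", "d4"): 0.1,
       ("d2", "d3"): 0.3, ("d2", "d4"): 0.2, ("d3", "d4"): 0.8}
clusters = [{"d1", "d2"}, {"d3", "d4"}]   # the system's clustering

def similarity(a, b):
    return sim.get((a, b), sim.get((b, a)))

within, between = [], []
for a, b in combinations(sorted(set().union(*clusters)), 2):
    same_cluster = any(a in c and b in c for c in clusters)
    (within if same_cluster else between).append(similarity(a, b))

# A ratio well above 1 means the clustering agrees with the subjects' judgments.
print((sum(within) / len(within)) / (sum(between) / len(between)))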

Ask subjects whether system of clustering seems natural or useful.

For clustering of responses to query:

Query Languages

Query Lang. Features from Standard Search Engines

(The data in this table are taken from Search Engines for the World Wide Web, 3rd edition, by Alfred and Emily Glossbrenner, Peachpit Press, 2001, and may be out of date. In any case, it is not a complete list of all features. I'm obviously not concerned with specific syntax.)

Advanced Query Languages

Squeal: A Structured Query Language for the Web, by Ellen Spertus and Lynn Andrea Stein.

Presents Web as relational database. SQL syntax for queries.

Tables (This is a simplified account.)

Page table: url, contents, bytes, when.
Tag table: url, tag_id, name (e.g. H1), startOffset, endOffset
Att(ribute) table: tag_id, name, value
Link table: source_url, anchor, destination_url, hstruct, lstruct (header/list structure in which the link occurs).
Parse table: url_value, component (either "host", "port", "path", or "ref"), value, depth.

Examples

// Example 1: What pages contain the word "hypertext" and contain a picture?
SELECT p.url
FROM page p, tag t
WHERE p.contents LIKE "%hypertext%" 
AND t.url = p.url 
AND t.name = "IMG"

// Example 2: What tags appear on the page "http://www9.org"?

SELECT name
FROM tag
WHERE url = "http://www9.org"

// Example 3: What are the values of the SRC attributes associated with IMG tags
// on "http://www9.org"?

SELECT a.value
FROM att a, tag t
WHERE t.url = "http://www9.org"
AND t.name = "IMG"
AND a.tag_id = t.tag_id
AND a.name="SRC"

// Example 4: What pages are pointed to by "http://www9.org"?
SELECT destination_url
FROM link
WHERE source_url = "http://www9.org"

// Example 5: What pages are pointed to via hyperlinks with anchor text
// "Web conference"?
SELECT destination_url
FROM link
WHERE anchor = "Web conference"

Implementation

Download specified pages or call standard search engine as appropriate. Follow links if necessary and download further pages. Parse all downloaded pages into temporary local database. Answer query from database.

E.g. Example 2: What tags appear on the page "http://www9.org"?

SELECT name
FROM tag
WHERE url = "http://www9.org"
Download page, parse, answer question.

// Example 1: What pages contain the word "hypertext" and contain a picture?

SELECT p.url
FROM page p, tag t
WHERE p.contents LIKE "%hypertext%" 
AND t.url = p.url 
AND t.name = "IMG"
Call search engine with query "hypertext"; download files (all files? seems hard to believe); create local database; answer queries from database.

Note that a complete database of this kind could be easily constructed in the course of creating a Web search index. That is, if Google wanted to support this query engine, it could do so at little additional cost.

Issues: