|
SIGMOD 2008
2008 ACM SIGMOD International Conference on Management of Data
June 9-12, 2008
Vancouver
Canada
|
Review: 1 |
Reviewer:
| Narain Gehani |
Email:
| gehani@adm.njit.edu |
Organization:
| New Jersey Institute of Technology |
Review:
| Question | Response |
1 | Overall Rating |
Weak Accept
|
2 | Reject due to technical incorrectness |
No
|
3 | Novelty |
Low
|
4 | Technical Depth |
Medium
|
5 | Is the relevant and reasonably well known state of the art fairly treated (note this may come from outside the database literature and even outside computer science)? |
to a limited extent
|
6 | Experimental results should meet the following expectations: deal with materially relevant cases (e.g., updates as well as queries, different scales) allowing for space limitations; statistically meaningful results; and use standard datasets or benchmarks (as opposed to a cherry-picked one) when possible. How did the experiments rate? |
adequate
|
7 | If experiments were presented, would you want them checked for repeatability (as might be required in a future sigmod)? |
yes
|
8 | Presentation |
Adequate
|
9 | Reviewer Confidence |
Low
|
10 | Name of External Reviewer (if applicable) |
|
11 | Summary of the paper's main contributions and impact (up to one paragraph) |
A database search technique that returns a ranked list of the terms and phrases that are suggestive ("reminiscent") of the search keywords. Presumably this information can be used to narrow or expand a query when retrieving data. The authors then give algorithms to produce this list efficiently, without performing any joins.
|
12 | Three strong points of the paper (please number them S1,S2,S3) |
|
13 | Three weak points of the paper (please number them W1,W2,W3) |
W1. Presentation could be better: there are typos, the wording can be improved, etc.; see the detailed comments.
|
14 | Detailed comments (please number each point) |
In the abstract, the authors say that they are proposing a new type of search problem over relational databases. Probably, "problem" should be "technique". In any case, the "new type of search ..." is not explained.
The writing could be improved; there are typos, e.g.:
- Why are the words "Search Engine" (line 2, Intro) capitalized?
- "knowledge on" --> "knowledge of" (line 4, Intro)
- Example 1: "operation system" should be "operating system"
|
15 | Comments for the Program Committee |
Did not read the math.
|
16 | Is this paper a candidate for the Best Paper Award? |
No
|
17 | Would author feedback be useful for this Review? (if "Yes", please answer Q. 18) |
No
|
18 | List specific clarifications you seek from the Authors (if you have answered "Yes" to Q. 17) |
|
|
Review: 2 |
Reviewer:
| Tomasz Imielinski |
Email:
| timielinski@ask.com |
Organization:
| Ask.com and Rutgers University |
Review:
| Question | Response |
1 | Overall Rating |
|
2 | Reject due to technical incorrectness |
|
3 | Novelty |
|
4 | Technical Depth |
|
5 | Is the relevant and reasonably well known state of the art fairly treated (note this may come from outside the database literature and even outside computer science)? |
|
6 | Experimental results should meet the following expectations: deal with materially relevant cases (e.g., updates as well as queries, different scales) allowing for space limitations; statistically meaningful results; and use standard datasets or benchmarks (as opposed to a cherry-picked one) when possible. How did the experiments rate? |
|
7 | If experiments were presented, would you want them checked for repeatability (as might be required in a future sigmod)? |
|
8 | Presentation |
|
9 | Reviewer Confidence |
|
10 | Name of External Reviewer (if applicable) |
|
11 | Summary of the paper's main contributions and impact (up to one paragraph) |
|
12 | Three strong points of the paper (please number them S1,S2,S3) |
|
13 | Three weak points of the paper (please number them W1,W2,W3) |
|
14 | Detailed comments (please number each point) |
|
15 | Comments for the Program Committee |
|
16 | Is this paper a candidate for the Best Paper Award? |
|
17 | Would author feedback be useful for this Review? (if "Yes", please answer Q. 18) |
|
18 | List specific clarifications you seek from the Authors (if you have answered "Yes" to Q. 17) |
|
|
Review: 3 |
Reviewer:
| Susan Dumais |
Email:
| sdumais@microsoft.com |
Organization:
| Microsoft Research |
Review:
| Question | Response |
1 | Overall Rating |
Weak Reject
|
2 | Reject due to technical incorrectness |
No
|
3 | Novelty |
Medium
|
4 | Technical Depth |
Low
|
5 | Is the relevant and reasonably well known state of the art fairly treated (note this may come from outside the database literature and even outside computer science)? |
yes, allowing for space limitations
|
6 | Experimental results should meet the following expectations: deal with materially relevant cases (e.g., updates as well as queries, different scales) allowing for space limitations; statistically meaningful results; and use standard datasets or benchmarks (as opposed to a cherry-picked one) when possible. How did the experiments rate? |
not good
|
7 | If experiments were presented, would you want them checked for repeatability (as might be required in a future sigmod)? |
no or not applicable
|
8 | Presentation |
Adequate
|
9 | Reviewer Confidence |
Medium
|
10 | Name of External Reviewer (if applicable) |
|
11 | Summary of the paper's main contributions and impact (up to one paragraph) |
The paper describes a new idea, called "Keyword Reminiscence Search". The basic idea is that in some applications it might be desirable to retrieve related terms, instead of joined tuple trees, in response to a keyword query. Three simple methods for identifying related keywords are described and evaluated. The applications are not particularly well motivated for me. And the evaluation consists of assessing the overlap of the proposed methods with conventional keyword searches over databases. If this is the gold standard, why not use the standard methods directly?
|
12 | Three strong points of the paper (please number them S1,S2,S3) |
S1: Describes a new problem, namely returning words associated with tuple trees that match keyword queries, which the authors call Keyword Reminiscence Search.
S2: Three simple techniques for ranking associated terms are proposed and evaluated.
|
13 | Three weak points of the paper (please number them W1,W2,W3) |
W1: I'm not sure how important a problem this is. Are searchers looking for associated keywords or for joined tuple trees?
W2: The datasets considered have simple schemas, involving at most three relations. How realistic is this?
W3: The "gold standard" for the evaluation is the top-20 terms returned by a conventional keyword search over the database. If this is what you want to retrieve, why not do so directly?
|
14 | Detailed comments (please number each point) |
The paper describes a new functionality, Keyword Reminiscence Search, which amounts to returning related keywords rather than tuples in response to a keyword database search. My main concern is that there is little motivation for the problem (who wants this, and for what kinds of tasks?). This kind of background should shape the approaches used. I also have concerns, detailed above, about the evaluation, which involves datasets with simple schemas and a gold standard (the top-20 terms returned by a conventional keyword search over the database) that is not well motivated.
|
15 | Comments for the Program Committee |
|
16 | Is this paper a candidate for the Best Paper Award? |
No
|
17 | Would author feedback be useful for this Review? (if "Yes", please answer Q. 18) |
Yes
|
18 | List specific clarifications you seek from the Authors (if you have answered "Yes" to Q. 17) |
|
|
|