Workshop Information

Goals

  • Learn how to customize information seeking systems (IR and QA) for optimal performance on arbitrary queries.
  • Initially focus on automatic expansion techniques for relevance feedback and pseudo-relevance feedback.
  • Extend this to other tools and apply the results to QA in different ways, such as including support for recall-oriented QA.

Areas of Focus:

  1. Individual Topic Failure Analysis - Intensive manual examination of retrieval runs (1-2 topics per day) to understand why systems fail and what information systems need in order to succeed.
  2. Retrieval/QA Multi-system Experiments - Run multiple systems in a controlled fashion to isolate system factors for later manual and statistical analysis.
  3. Analyze Multi-system Experiments - Discover how system factors affect the reliability of system performance.
  4. Statistical Analysis of Multi-System Experiments - Statistically isolate system and topic performance issues. Discover if we can statistically categorize topics in a fashion that matches the failure causes of the individual topic failure analysis.
  5. Develop publicly available tools in support of all four areas above.

Results in each of the first four areas will drive further developments in the other areas.
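The automatic query expansion named in the goals and timetable can be illustrated with a minimal pseudo-relevance feedback sketch: assume the top-ranked documents are relevant, select their most frequent new terms, and append those terms to the query. This is an assumed, simplified illustration (simple whitespace tokenization and raw term counts); real systems typically use stopword removal and weighted models such as Rocchio.

```python
from collections import Counter

def expand_query(query, ranked_docs, top_docs=3, num_terms=5):
    """Pseudo-relevance feedback: treat the top-ranked documents as
    relevant, count their terms (excluding terms already in the query),
    and append the most frequent ones to the original query."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        for term in doc.lower().split():
            if term not in query_terms:
                counts[term] += 1
    expansion = [term for term, _ in counts.most_common(num_terms)]
    return query + " " + " ".join(expansion)

# Hypothetical ranked list returned by a first retrieval pass.
docs = [
    "query expansion improves recall in information retrieval",
    "relevance feedback uses judged documents to refine the query",
    "baseball scores from last night",
]
print(expand_query("query expansion", docs, top_docs=2, num_terms=3))
```

The expanded query would then be re-run against the collection; the choice of `top_docs` and `num_terms` is exactly the kind of system factor the multi-system experiments above are designed to isolate.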

Timetable

  • Weeks 1-2: Concentrate on query expansion using pseudo-relevance feedback in IR environment.
  • Weeks 3-4: Query expansion using relevance feedback and other tools.
  • Weeks 5-6: Apply results to QA.




This page written by Jeff Terrace and Rob Warren
IR Workshop, NRRC, ARDA, 6/16/03-8/23/03
Modified by Ben Strauss
NIST, 6/05-7/05
Last updated 05 Apr 2011 16:44 | Created June 2003
Questions? Contact: ria@nist.gov