Workshop Information
Goals
- Learn how to customize information-seeking systems (IR and QA) for optimal performance on arbitrary queries.
- Initially focus on automatic expansion techniques for relevance feedback and pseudo-relevance feedback.
- Extend this work to other tools and apply the results to QA in different ways, such as
adding support for recall-oriented QA.
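The pseudo-relevance feedback expansion named above can be sketched minimally: retrieve top-ranked documents for a query, assume they are relevant, and add their most characteristic terms to the query. The corpus, tf-idf scoring, and parameter choices below are illustrative assumptions, not the workshop's actual systems.

```python
# Minimal pseudo-relevance feedback (PRF) query expansion sketch.
# All data and parameters are illustrative.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def tf_idf_score(term, doc_tokens, corpus):
    """Simple tf-idf of a term within one document."""
    tf = doc_tokens.count(term)
    df = sum(1 for d in corpus if term in tokenize(d))
    idf = math.log((1 + len(corpus)) / (1 + df))
    return tf * idf

def retrieve(query, corpus, k=2):
    """Rank documents by summed tf-idf of the query terms; return top k."""
    q_terms = tokenize(query)
    scored = []
    for doc in corpus:
        tokens = tokenize(doc)
        score = sum(tf_idf_score(t, tokens, corpus) for t in q_terms)
        scored.append((score, doc))
    scored.sort(key=lambda x: -x[0])
    return [doc for _, doc in scored[:k]]

def expand_query(query, corpus, k=2, n_terms=3):
    """PRF: treat the top-k retrieved docs as relevant and append
    their most frequent non-query terms to the query."""
    feedback_docs = retrieve(query, corpus, k)
    counts = Counter()
    for doc in feedback_docs:
        counts.update(tokenize(doc))
    q_terms = set(tokenize(query))
    expansion = [t for t, _ in counts.most_common() if t not in q_terms]
    return query + " " + " ".join(expansion[:n_terms])
```

The same loop supports true relevance feedback by replacing the top-k assumption with human relevance judgments on the retrieved documents.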
Areas of Focus:
- Individual Topic Failure Analysis - Intensive manual examination of
retrieval runs (1-2 topics per day) to understand why systems fail
and what information the systems need to succeed.
- Retrieval/QA Multi-system Experiments - Run multiple systems in a
controlled fashion to isolate system factors for later manual and
statistical analysis.
- Analyze Multi-system Experiments - Discover how system factors
affect the reliability of system performance.
- Statistical Analysis of Multi-System Experiments - Statistically
isolate system and topic performance issues. Determine whether topics
can be statistically categorized in a way that matches the failure
causes found in the individual topic failure analysis.
- Develop publicly available tools in support of all four areas above.
Results in each of the first four areas will drive further
developments in the other areas.
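Isolating system and topic effects, as described above, can be sketched with a simple two-way additive decomposition of a per-topic score matrix: each score is modeled as a grand mean plus a system effect plus a topic effect. The scores and names below are made-up placeholders, not workshop results.

```python
# Sketch: separate system quality from topic difficulty in a matrix
# of per-topic evaluation scores (e.g., average precision).
# All scores here are fabricated for illustration.
scores = {
    ("sysA", "topic1"): 0.50, ("sysA", "topic2"): 0.20,
    ("sysB", "topic1"): 0.60, ("sysB", "topic2"): 0.30,
}
systems = sorted({s for s, _ in scores})
topics = sorted({t for _, t in scores})

grand = sum(scores.values()) / len(scores)

# System effect: a system's mean score across topics, relative to the
# grand mean. Averaging over topics controls for topic difficulty.
sys_effect = {
    s: sum(scores[(s, t)] for t in topics) / len(topics) - grand
    for s in systems
}

# Topic effect: a topic's mean score across systems, relative to the
# grand mean. Averaging over systems controls for system quality.
topic_effect = {
    t: sum(scores[(s, t)] for s in systems) / len(systems) - grand
    for t in topics
}
```

Residuals (score minus grand mean minus both effects) then flag system-topic interactions: cases where a particular system fails on a particular topic in a way neither effect explains, which is exactly what the individual topic failure analysis examines by hand.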
Timetable
- Weeks 1-2: Concentrate on query expansion using pseudo-relevance
feedback in IR environment.
- Weeks 3-4: Query expansion using relevance feedback and other tools.
- Weeks 5-6: Apply results to QA.
This page written by Jeff Terrace and Rob Warren
IR Workshop, NRRC, ARDA, 6/16/03-8/23/03
Modified by Ben Strauss
NIST, 6/05-7/05