= 2011talks Submissions =

 
== Practical Relevancy Testing ==
* Naomi Dushay, Stanford University Libraries, ndushay at stanford dot edu
 
Evaluating search result relevancy is difficult for any sizable amount of data, since human-vetted ideal search results are essentially non-existent. This is true even for library collections, despite our dedicated librarians and their familiarity with those collections.
 
So how can we evaluate whether search engine configuration changes (e.g. boosting, field analysis, search analysis settings) are an improvement? How can we ensure the results for query A don't degrade while we try to improve the results for query B?
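
For concreteness, the kind of change in question might look like the hedged sketch below: bumping a title-field boost in a Solr dismax <code>qf</code> parameter. The field names and boost weights are illustrative only, not Stanford's actual settings.

<source lang="python">
# Hypothetical example of a relevancy-affecting configuration change:
# raising the title-field boost in Solr dismax query parameters.
# Field names and boost weights are illustrative, not real settings.
before = {"defType": "dismax", "qf": "title^5 author^2 text"}
after = {"defType": "dismax", "qf": "title^20 author^2 text"}
# The heavier title boost may fix a known-item title search (query B)
# while silently reordering results for a subject search (query A).
</source>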
 
Why yes, Virginia, automatable tests are the answer.
 
This talk will show you how easily you can write these tests from your hidden goldmine of human-vetted relevancy rankings.
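
To make the idea concrete, here is a minimal sketch of such a test suite, assuming a Solr index whose documents carry an <code>id</code> field; the endpoint URL, queries, and record IDs are hypothetical placeholders, not Stanford's actual data.

<source lang="python">
# A minimal sketch of automated relevancy regression tests against a
# Solr select handler. Endpoint, queries, and ids are hypothetical.
import json
import unittest
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_SELECT = "http://localhost:8983/solr/select"  # hypothetical endpoint

def top_ids(query, rows=10):
    """Return the ids of the top `rows` documents Solr ranks for `query`."""
    params = urlencode({"q": query, "rows": rows, "fl": "id", "wt": "json"})
    with urlopen(f"{SOLR_SELECT}?{params}") as response:
        docs = json.load(response)["response"]["docs"]
    return [doc["id"] for doc in docs]

class RelevancyRegressionTests(unittest.TestCase):
    # Each pair records a real user query and a record a human has
    # already vetted as a correct answer (ids here are made up).
    VETTED = [
        ("cooking", "record-111"),              # subject search
        ("jane austen", "record-222"),          # author search
        ("pride and prejudice", "record-333"),  # known-item title search
    ]

    def test_vetted_records_stay_in_top_ten(self):
        for query, expected_id in self.VETTED:
            with self.subTest(query=query):
                self.assertIn(
                    expected_id, top_ids(query),
                    f"{expected_id!r} fell out of the top 10 for {query!r}")

if __name__ == "__main__":
    unittest.main()
</source>

Once a suite like this is in place, a configuration change that helps one query but knocks a vetted record out of the top results for another fails immediately, instead of going unnoticed until a user complains.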