With some delay, due to the holidays and traveling, the short paper that Aleksandr Chuklin and I published at the ACM SIGIR Workshop on Gathering Efficient Assessments of Relevance (GEAR) is now online. It is called “The anatomy of relevance: topical, snippet and perceived relevance in search result evaluation”, and you can find it here.

Currently, the quality of a search engine is often determined using so-called topical relevance, i.e., the match between the user intent (expressed as a query) and the “content” of the document. In this work we want to draw attention to two aspects of retrieval system performance that are affected by the “presentation” of results: result attractiveness (“perceived relevance”) and the immediate usefulness of the snippets (“snippet relevance”). Perceived relevance may influence the discoverability of topically relevant documents: a seemingly better ranking may in fact be less useful to the user if good-looking snippets lead to irrelevant documents, or vice versa. Moreover, result items on a search engine result page (SERP) with high snippet relevance may contribute to the total utility gained by the user even without those items being clicked.
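To make this last point concrete, here is a minimal sketch of a SERP utility that credits snippet relevance for results the user examined but did not click, and topical relevance for clicked results. All function names, fields, and the discount weight are illustrative assumptions for this post, not the measure proposed in the paper.

```python
# Hypothetical sketch: a position-discounted SERP utility (DCG-style discount)
# where an unclicked result still earns credit through its snippet relevance.
# The structure and numbers below are illustrative, not from the paper.

def serp_utility(results, discount=0.85):
    """results: list of dicts with 'topical' and 'snippet' grades in [0, 1]
    and a 'clicked' flag; returns the total discounted utility."""
    total = 0.0
    for rank, r in enumerate(results):
        # Clicked items contribute the relevance of the landing document;
        # unclicked items contribute whatever the snippet itself answered.
        gain = r["topical"] if r["clicked"] else r["snippet"]
        total += gain * discount ** rank
    return total

ranking = [
    {"topical": 1.0, "snippet": 0.5, "clicked": True},   # relevant doc, clicked
    {"topical": 0.0, "snippet": 1.0, "clicked": False},  # answer visible in the snippet
]
# First item contributes 1.0, second contributes 1.0 * 0.85 = 0.85
print(serp_utility(ranking))  # 1.85
```

Under a click-only metric, the second result would contribute nothing; here its informative snippet is rewarded, which is the intuition behind collecting snippet relevance separately.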

We start by motivating the need to collect different aspects of relevance (topical, perceived, and snippet relevance) and by showing how these aspects can improve evaluation measures. We then discuss possible ways to collect these relevance aspects using crowdsourcing, and the challenges that arise from doing so.