“Relative Confidence Sampling for Efficient On-Line Ranker Evaluation” by Masrour Zoghi, Shimon Whiteson, Maarten de Rijke and Rémi Munos is now available online.
A key challenge in information retrieval is that of on-line ranker evaluation: determining which of a finite set of rankers performs best in expectation on the basis of user clicks on presented document lists. When the presented lists are constructed using interleaved comparison methods, which interleave lists proposed by two different candidate rankers, the problem of minimizing the total regret accumulated while evaluating the rankers can be formalized as a K-armed dueling bandits problem. In this paper, we propose a new method called relative confidence sampling (RCS) that aims to reduce cumulative regret by being less conservative than existing methods in eliminating rankers from contention. In addition, we present an empirical comparison between RCS and two state-of-the-art methods, Relative Upper Confidence Bound (RUCB) and SAVAGE. The results demonstrate that RCS can substantially outperform these alternatives on several large learning-to-rank datasets.
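To make the dueling-bandits framing concrete, here is a minimal sketch of a relative-confidence-sampling-*style* selection step. This is not the paper's exact RCS algorithm; it only illustrates the general idea the abstract describes: maintain Beta posteriors over pairwise win probabilities, sample a plausible preference matrix, pick a champion that the sample says beats everyone, and duel it against its strongest sampled challenger. All names (`wins`, `choose_duel`, `true_pref`) and the simulated-click setup are hypothetical, invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                      # number of candidate rankers (arms)
wins = np.zeros((K, K))    # wins[i, j]: times ranker i beat ranker j in a duel

def sample_preference_matrix():
    """Draw one plausible pairwise-preference matrix from Beta posteriors."""
    theta = np.full((K, K), 0.5)
    for i in range(K):
        for j in range(i + 1, K):
            # Beta(1 + wins, 1 + losses) posterior over P(i beats j)
            p = rng.beta(1 + wins[i, j], 1 + wins[j, i])
            theta[i, j], theta[j, i] = p, 1.0 - p
    return theta

def choose_duel():
    """Pick the pair of rankers to interleave next."""
    theta = sample_preference_matrix()
    # Champion: a ranker that beats every other ranker in the sampled matrix
    # (a sampled Condorcet winner); fall back to best sampled average if none.
    beats_all = np.all(theta >= 0.5, axis=1)
    champ = int(np.argmax(beats_all)) if beats_all.any() \
        else int(np.argmax(theta.mean(axis=1)))
    # Challenger: the ranker most likely to beat the champion in this sample
    # (self-duels excluded in this simplified sketch).
    probs = theta[:, champ].copy()
    probs[champ] = -np.inf
    challenger = int(np.argmax(probs))
    return champ, challenger

# Stand-in for interleaved comparisons: true (unknown to the algorithm)
# probability that ranker i beats ranker j; ranker 0 is the Condorcet winner.
true_pref = np.array([[0.5, 0.6, 0.7, 0.8],
                      [0.4, 0.5, 0.6, 0.7],
                      [0.3, 0.4, 0.5, 0.6],
                      [0.2, 0.3, 0.4, 0.5]])

for t in range(2000):
    i, j = choose_duel()
    winner, loser = (i, j) if rng.random() < true_pref[i, j] else (j, i)
    wins[winner, loser] += 1

print("observed win counts:\n", wins)
```

The sketch mirrors the abstract's point about being less conservative: rather than permanently eliminating rankers once a confidence interval separates them, posterior sampling keeps every ranker in contention but duels weak ones with rapidly vanishing probability, so cumulative regret stays low while no ranker is ruled out prematurely.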