The important dates for WSDM 2017 (paper submission deadline, notification date) are fixed. Here’s a new flyer with the details.
Artem Grotov and I will be teaching a half-day tutorial on online learning to rank for information retrieval at SIGIR 2016.
During the past 10–15 years offline learning to rank has had a tremendous influence on information retrieval, both scientifically and in practice. Recently, as the limitations of offline learning to rank for information retrieval have become apparent, the community has paid increased attention to online learning to rank methods for information retrieval. Such methods learn from user interactions rather than from a set of labeled data that is fully available for training up front.
Today’s search engines have developed into complex systems that combine hundreds of ranking criteria with the aim of producing the optimal result list in response to users’ queries. To automatically tune combinations of large numbers of ranking criteria, learning to rank (LTR) has proved invaluable. For a given query, each document is represented by a feature vector. The features may be query dependent, document dependent, or may capture the relationship between the query and the document. The task of the learner is to find a model that combines these features such that, when this model is used to produce a ranking for an unseen query, user satisfaction is maximized.
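To make this concrete, here is a minimal sketch of the scoring step: a learned weight vector combines a document’s features into a single relevance score, and documents are ranked by that score. The feature names and weights are illustrative, not from any particular system.

```python
def score(weights, features):
    """Combine feature values into a scalar relevance score (linear model)."""
    return sum(w * f for w, f in zip(weights, features))

def rank(weights, docs):
    """Return documents sorted by descending score for one query."""
    return sorted(docs, key=lambda d: score(weights, d["features"]), reverse=True)

# Toy example: features could be, e.g., [bm25, pagerank, query_term_overlap].
docs = [
    {"id": "d1", "features": [0.2, 0.9, 0.1]},
    {"id": "d2", "features": [0.8, 0.3, 0.7]},
    {"id": "d3", "features": [0.5, 0.5, 0.5]},
]
weights = [1.0, 0.5, 1.5]  # in offline LTR, learned from labeled data up front
ranking = [d["id"] for d in rank(weights, docs)]
print(ranking)  # → ['d2', 'd3', 'd1']
```

The learner’s job is precisely to find the `weights` (or a more expressive model) that orders documents the way users would want.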
Traditionally, learning to rank algorithms are trained in batch mode, on a complete dataset of query-document pairs with their associated manually created relevance labels. This setting has a number of disadvantages and is impractical in many cases. First, creating such datasets is expensive and therefore infeasible for smaller search engines, such as small web-store search engines. Second, it may be impossible for experts to annotate documents, as in the case of personalized search. Third, the relevance of documents to queries can change over time, as in a news search engine.
Online learning to rank addresses all of these issues by incrementally learning from user feedback in real time. Online learning is closely related to active learning, incremental learning, and counterfactual learning. However, online learning is more difficult because the agent has to balance exploration and exploitation: actions with unknown performance have to be explored to learn better solutions, while actions known to perform well have to be exploited to keep users satisfied in the meantime.
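One classic instance of this exploration/exploitation loop is Dueling Bandit Gradient Descent: the ranker perturbs its current weights, compares the perturbed ranker against the current one using (interleaved) user feedback, and moves toward the perturbation when users prefer it. The sketch below simulates the user comparison with a hidden "ideal" weight vector; all names and parameter values are illustrative assumptions, not a faithful production implementation.

```python
import random

random.seed(42)
DIM = 3

def unit_vector(dim):
    """Random direction on the unit sphere, used as an exploratory perturbation."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def utility(w, hidden):
    # Stand-in for observed user satisfaction: alignment with a hidden
    # ideal ranker. In practice this comparison comes from interleaved
    # result lists and click feedback, not from a known vector.
    return sum(a * b for a, b in zip(w, hidden))

hidden = [0.8, -0.2, 0.5]  # unknown "true" preferences (simulation only)
w = [0.0, 0.0, 0.0]        # current ranker weights
delta, alpha = 1.0, 0.1    # exploration radius and learning rate

for _ in range(500):
    u = unit_vector(DIM)
    candidate = [wi + delta * ui for wi, ui in zip(w, u)]  # explore
    if utility(candidate, hidden) > utility(w, hidden):    # users prefer it?
        w = [wi + alpha * ui for wi, ui in zip(w, u)]      # exploit the winner

print(w)  # weights drift toward the direction of the hidden ideal ranker
```

The balance is visible in the two constants: `delta` controls how aggressively unknown rankers are explored, `alpha` how cautiously the deployed ranker follows the feedback.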
There is a growing body of established methods for online learning to rank for information retrieval. The time is right to organize and present this material to a broad audience of interested information retrieval researchers, whether junior or senior, whether academic or industrial. The online learning to rank methods available today have been proposed by different communities, in machine learning and information retrieval. A key aim of the tutorial is to bring these together and offer a unified perspective. To achieve this we illustrate the core and state-of-the-art methods in online learning to rank, their theoretical foundations and real-world applications, as well as existing online learning algorithms that have not been used by the information retrieval community so far.
SIGIR 2016 will feature a workshop on Neural Information Retrieval. In recent years, deep neural networks have yielded significant performance improvements in application areas such as speech recognition and computer vision. They have also had an impact in natural language applications such as machine translation, image caption generation and conversational agents. Our focus with the Neu-IR workshop is on the applicability of deep neural networks to information retrieval. There are two complementary dimensions to this: one involves demonstrating performance improvements on public or private information retrieval datasets, the other concerns thinking about deep neural network architectures and what they tell us about information retrieval problems. Neu-IR (pronounced “new IR”) will be a highly interactive full day workshop that focuses on advances and challenges along both dimensions.
See http://research.microsoft.com/neuir2016, where further information will be shared.