Maarten de Rijke

Information retrieval


Get to work!

Tomorrow, 8 October, VNO-NCW will present a 'Nationale AI Coalitie' (National AI Coalition). Over the next seven years, Dutch industry will invest more than €1 billion in artificial intelligence (AI). A substantial contribution is also being asked of the government. Het Financieele Dagblad covers the announcements expected this week.


Enough talking. Enough polder-style consensus-building. Time to get to work.

UAI 2019 paper on safe online learning to re-rank via implicit click feedback online

BubbleRank: Safe Online Learning to Re-Rank via Implicit Click Feedback by Chang Li, Branislav Kveton, Tor Lattimore, Ilya Markov, Maarten de Rijke, Csaba Szepesvari, and Masrour Zoghi is online now at this location.

In the paper we study the problem of safe online learning to re-rank, where user feedback is used to improve the quality of displayed lists. Learning to rank has traditionally been studied in two settings. In the offline setting, rankers are typically learned from relevance labels created by judges. This approach has generally become standard in industrial applications of ranking, such as search. However, this approach lacks exploration and thus is limited by the information content of the offline training data. In the online setting, an algorithm can experiment with lists and learn from feedback on them in a sequential fashion. Bandit algorithms are well-suited for this setting but they tend to learn user preferences from scratch, which results in a high initial cost of exploration. This poses an additional challenge of safe exploration in ranked lists. We propose BubbleRank, a bandit algorithm for safe re-ranking that combines the strengths of both the offline and online settings. The algorithm starts with an initial base list and improves it online by gradually exchanging higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive experiments on a large-scale real-world click dataset.
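The gradual-exchange idea can be illustrated with a small sketch. Everything below is illustrative (the click simulator, the evidence threshold, and the promotion rule are simplifications); the actual BubbleRank algorithm, its randomization, and its guarantees are specified in the paper:

```python
import random
from collections import defaultdict

def bubblerank_round(ranking, get_clicks, wins, trials, threshold=30):
    """One round of a BubbleRank-style safe re-ranking step (simplified).

    ranking     : current base list of item ids, best first
    get_clicks  : callable(displayed_list) -> set of clicked item ids
    wins[a][b]  : times a was clicked while neighbour b was not
    trials[a][b]: times exactly one of neighbours a, b was clicked
    """
    shown = ranking[:]
    offset = random.choice([0, 1])
    # Explore by swapping disjoint adjacent pairs: no item ever moves more
    # than one position away from the trusted base list (the "safe" part).
    pairs = list(range(offset, len(shown) - 1, 2))
    for i in pairs:
        shown[i], shown[i + 1] = shown[i + 1], shown[i]
    clicked = get_clicks(shown)
    for i in pairs:
        a, b = shown[i], shown[i + 1]
        if (a in clicked) != (b in clicked):  # exactly one of the two clicked
            winner, loser = (a, b) if a in clicked else (b, a)
            wins[winner][loser] += 1
            trials[a][b] += 1
            trials[b][a] += 1
    # Permanently promote a lower-ranked item once click evidence is strong.
    for i in range(len(ranking) - 1):
        hi, lo = ranking[i], ranking[i + 1]
        if trials[hi][lo] >= threshold and wins[lo][hi] > wins[hi][lo]:
            ranking[i], ranking[i + 1] = lo, hi
    return ranking
```

Because exploration only swaps adjacent items, a displayed list is never much worse than the base list, which is what makes the regret degrade gracefully with the quality of that initial list.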

The paper will be presented at UAI 2019: Conference on Uncertainty in Artificial Intelligence, July 2019.

Using process mining for understanding the structure of interaction processes

Svitlana Vakulenko explains how we have recently used process mining techniques to understand the structure of interaction processes, which will in turn help us to improve information-seeking dialogue systems. We extract a new model of information-seeking dialogues, QRFA, for Query, Request, Feedback, Answer. The QRFA model reflects the conversation flows observed in real information-seeking conversations better than previously proposed models. Via conformance analysis, QRFA also allows us to identify malfunctions in dialogue system transcripts as deviations from the expected conversation flow described by the model.
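As a rough illustration of conformance checking against such a model, one can encode the allowed transitions and flag any turn that violates them. The transition sets below are hypothetical examples; the actual QRFA model is mined from annotated conversation data in the paper:

```python
# Hypothetical transition sets, for illustration only; the real QRFA
# model is mined from annotated information-seeking conversations.
QRFA_TRANSITIONS = {
    "start": {"Q"},          # a conversation opens with a user Query
    "Q": {"R", "A"},         # system Requests clarification or Answers
    "R": {"F", "A"},         # user gives Feedback, or system Answers
    "A": {"F", "Q", "end"},  # Feedback, a follow-up Query, or the end
    "F": {"Q", "A", "end"},
}

def find_deviations(turns):
    """A minimal conformance check: return (position, from_label, to_label)
    triples for every transition the model does not allow."""
    deviations = []
    prev = "start"
    for i, label in enumerate(turns + ["end"]):
        if label not in QRFA_TRANSITIONS.get(prev, set()):
            deviations.append((i, prev, label))
        prev = label
    return deviations
```

For example, `find_deviations(["Q", "R", "A"])` conforms, while a dialogue that jumps from a query straight to feedback would be flagged as a deviation.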

Read the full post.

Invest in the AI knowledge base or become a spectator

In an opinion piece for NRC Handelsblad and NRC Next, I argue that artificial intelligence will change our lives one way or another, and that the Netherlands faces a choice: participate fully in the development of AI and help shape the game, or stay on the sidelines. Those who do not actively take part have no influence, not on the game and certainly not on its rules. Invest in the AI knowledge base, invest in talent. Get off the bench!

The full piece can be found here.

Learning to answer questions by taking broader contexts into account

Mostafa Dehghani has posted an explanation of our recent work on TraCRNet (“tracker net”), which learns to answer questions from multiple, possibly long documents. TraCRNet uses the universal transformer and is able to go beyond understanding a set of input documents separately, combining their information in multiple steps. TraCRNet is highly parallelizable and far more robust against noisy input than previous proposals for the question answering task.
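The weight sharing at the heart of the universal transformer can be sketched as repeated application of a single attention block. This is a toy numpy sketch of that one idea only; the real model also has feed-forward layers, layer normalization, timestep and position signals, and an encoder-decoder structure:

```python
import numpy as np

def universal_transformer_steps(X, Wq, Wk, Wv, steps=3):
    """Apply the SAME attention parameters for several steps, letting token
    representations from different documents mix repeatedly.

    X          : (n_tokens, d) token representations, documents concatenated
    Wq, Wk, Wv : (d, d) shared query/key/value projection matrices
    """
    d = X.shape[1]
    for _ in range(steps):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = Q @ K.T / np.sqrt(d)
        A = np.exp(A - A.max(axis=1, keepdims=True))  # row-wise softmax
        A = A / A.sum(axis=1, keepdims=True)
        X = X + A @ V                                 # residual update
    return X
```

Because the parameters are shared across steps, information can propagate between documents over multiple rounds without the parameter count growing with depth.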

See this page for the full post.

FACTS-IR Workshop @ SIGIR 2019

SIGIR 2019 will host a workshop to explore challenges in responsible information retrieval system development and deployment. The focus will be on determining actionable research agendas on five key dimensions of responsible information retrieval: fairness, accountability, confidentiality, transparency, and safety. Rather than just a mini-conference, this workshop will be an event during which participants will also be expected to work. The workshop aims to bring together a diverse set of researchers and practitioners interested in helping to develop the technical research agenda for responsible information retrieval.

The web site for the workshop is live now.

How to optimize ranking systems by directly interacting with users

Harrie Oosterhuis has written an accessible summary of our recent work on pairwise differentiable gradient descent (PDGD), an online learning to rank method that he published at CIKM 2018, with a follow-up paper to come at ECIR 2019 in April. With the introduction of the PDGD algorithm, ranking systems can be optimized from user interactions far more effectively than previously possible. Additionally, PDGD can optimize neural models to greater effect, something previous methods could not do. We expect that the development of ranking systems will benefit from this contribution in the long term, not only because of improved performance, but also because the ability to optimize more complex models opens the door to many new applications.
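In rough outline, PDGD samples a ranking from a distribution over the model's scores, infers pairwise preferences from the observed clicks, and updates the model toward those preferences. The sketch below is a much-simplified version for a linear ranker; it omits the reweighting that makes the paper's gradient estimate unbiased, and the click handling is illustrative:

```python
import numpy as np

def pdgd_update(weights, doc_feats, clicks, lr=0.1, rng=None):
    """Simplified PDGD-style update for a linear ranker (illustrative only).

    weights   : (d,) linear model
    doc_feats : (n, d) feature matrix, one row per candidate document
    clicks    : clicks[i] is True iff document i was clicked
    """
    rng = rng or np.random.default_rng()
    scores = doc_feats @ weights
    # 1) Sample a ranking from a Plackett-Luce model over the scores
    #    (softmax sampling without replacement).
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        logits = scores[remaining]
        p = np.exp(logits - logits.max())
        p = p / p.sum()
        ranking.append(remaining.pop(rng.choice(len(remaining), p=p)))
    # 2) Infer pairwise preferences: a clicked document is preferred over
    #    unclicked documents shown above it and over the first unclicked
    #    document directly below it.
    grad = np.zeros_like(weights)
    for pos, d in enumerate(ranking):
        if not clicks[d]:
            continue
        rivals = [r for r in ranking[:pos] if not clicks[r]]
        if pos + 1 < len(ranking) and not clicks[ranking[pos + 1]]:
            rivals.append(ranking[pos + 1])
        for r in rivals:
            # Push score(d) above score(r), scaled by the pair's uncertainty.
            m = max(scores[d], scores[r])
            p_r = np.exp(scores[r] - m) / (np.exp(scores[d] - m) + np.exp(scores[r] - m))
            grad = grad + p_r * (doc_feats[d] - doc_feats[r])
    return weights + lr * grad
```

The gradient is differentiable in the model parameters, which is what lets the same procedure optimize neural rankers rather than only linear ones.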

See this page for the full post.

WWW 2019 paper on evaluation metrics for web image search online

Grid-based Evaluation Metrics for Web Image Search by Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Maarten de Rijke, Yunqiu Shao, Zixin Ye, Min Zhang, and Shaoping Ma is online now at this location.

Compared to general web search engines, web image search engines display results in a different way. In web image search, results are typically placed in a grid rather than in a sequential result list. In this scenario, users can view results not only in a vertical direction but also in a horizontal direction. Moreover, pagination is usually not (explicitly) supported on image search engine result pages (SERPs): users can view results by scrolling down, without having to click a “next page” button. These differences lead to different interaction mechanisms and user behavior patterns, which, in turn, create challenges for evaluation metrics that were originally developed for general web search. While considerable effort has been invested in developing evaluation metrics for general web search, relatively little effort has gone into constructing grid-based evaluation metrics.

To inform the development of grid-based evaluation metrics for web image search, we conduct a comprehensive analysis of user behavior so as to uncover how users allocate their attention in a grid-based web image search result interface. We obtain three findings: (1) “Middle bias”: Confirming previous studies, we find that image results in the horizontal middle positions may receive more attention from users than those in the leftmost or rightmost positions. (2) “Slower decay”: Unlike web search, users’ attention does not decrease monotonically or dramatically with the rank position in image search, especially within a row. (3) “Row skipping”: Users may ignore particular rows and directly jump to results at some distance. Motivated by these observations, we propose corresponding user behavior assumptions to capture users’ search interaction processes and evaluate their search performance. We show how to derive new metrics from these assumptions and demonstrate that they can be adopted to revise traditional list-based metrics like Discounted Cumulative Gain (DCG) and Rank-Biased Precision (RBP). To show the effectiveness of the proposed grid-based metrics, we compare them against a number of list-based metrics in terms of their correlation with user satisfaction. Our experimental results show that the proposed grid-based evaluation metrics better reflect user satisfaction in web image search.
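As a toy illustration of how such behavior assumptions might reshape a metric, the following hypothetical DCG variant discounts by row rather than by absolute rank (slower decay) and mildly boosts horizontally central columns (middle bias). The functional form and parameters are invented for illustration; the paper's actual metric definitions differ:

```python
def grid_dcg(rel_grid, row_decay=0.9, middle_boost=0.1):
    """Hypothetical grid-based DCG variant, for illustration only.

    rel_grid : list of rows, each a list of graded relevance values,
               laid out as on the grid-based SERP.
    """
    n_cols = max(len(row) for row in rel_grid)
    total = 0.0
    for r, row in enumerate(rel_grid):
        row_weight = row_decay ** r  # row-level discount: slower decay
        for c, rel in enumerate(row):
            # Attention peaks near the horizontally central columns.
            centrality = 1.0 - abs(c - (n_cols - 1) / 2) / n_cols
            total += row_weight * (1 + middle_boost * centrality) * (2 ** rel - 1)
    return total
```

Under these assumptions, a relevant image in the middle of a row contributes more than the same image in a corner, and dropping one row lower costs less than dropping one rank lower would in a list-based DCG.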

The paper will be presented at The Web Conference 2019.

WWW 2019 paper on outfit recommendation online

Improving Outfit Recommendation with Co-supervision of Fashion Generation by Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke is now available at this location.

The task of fashion recommendation includes two main challenges: visual understanding and visual matching. Visual understanding aims to extract effective visual features. Visual matching aims to model a human notion of compatibility to compute a match between fashion items. Most previous studies rely on recommendation loss alone to guide visual understanding and matching. Although the features captured by these methods describe basic characteristics (e.g., color, texture, shape) of the input items, they are not directly related to the visual signals of the output items (to be recommended). This is problematic because the aesthetic characteristics (e.g., style, design), based on which we can directly infer the output items, are lacking. Features are learned under the recommendation loss alone, where the supervision signal is simply whether the given two items are matched or not.

To address this problem, we propose a neural co-supervision learning framework, called the FAshion Recommendation Machine (FARM). FARM improves visual understanding by incorporating the supervision of generation loss, which we hypothesize to be able to better encode aesthetic information. FARM enhances visual matching by introducing a novel layer-to-layer matching mechanism to fuse aesthetic information more effectively, and meanwhile avoiding paying too much attention to the generation quality and ignoring the recommendation performance.

Extensive experiments on two publicly available datasets show that FARM outperforms state-of-the-art models on outfit recommendation, in terms of AUC and MRR. Detailed analyses of generated and recommended items demonstrate that FARM can encode better features and generate high quality images as references to improve recommendation performance.

The paper will be presented at The Web Conference 2019.

WWW 2019 paper on diversity of dialogue response generation online

Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss by Shaojie Jiang, Pengjie Ren, Christof Monz, and Maarten de Rijke is online now at this location.

Sequence-to-Sequence (Seq2Seq) models have achieved encouraging performance on the dialogue response generation task. However, existing Seq2Seq-based response generation methods suffer from a low-diversity problem: they frequently generate generic responses, which make the conversation less interesting. In this paper, we address the low-diversity problem by investigating its connection with model over-confidence reflected in predicted distributions. Specifically, we first analyze the influence of the commonly used Cross-Entropy (CE) loss function, and find that the CE loss function prefers high-frequency tokens, which results in low-diversity responses. We then propose a Frequency-Aware Cross-Entropy (FACE) loss function that improves over the CE loss function by incorporating a weighting mechanism conditioned on token frequency. Extensive experiments on benchmark datasets show that the FACE loss function is able to substantially improve the diversity of existing state-of-the-art Seq2Seq response generation methods, in terms of both automatic and human evaluations.
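The weighting idea can be sketched as follows. The inverse-frequency scheme below is one illustrative choice, not the paper's exact formulation; the paper examines several weighting functions and keeps frequencies up to date during training:

```python
import math
from collections import Counter

def face_weights(corpus_tokens):
    """Inverse-frequency token weights in the spirit of FACE: rarer tokens
    get larger weights, so generic high-frequency tokens are down-weighted
    in the loss. Illustrative scheme only."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    inv = {t: total / c for t, c in counts.items()}
    max_inv = max(inv.values())
    return {t: v / max_inv for t, v in inv.items()}  # normalised to (0, 1]

def weighted_cross_entropy(pred_probs, target_tokens, weights):
    """Per-token weighted CE: -sum_i w(y_i) * log p_i(y_i).

    pred_probs    : list of dicts, predicted distribution per position
    target_tokens : list of gold tokens
    """
    return -sum(weights[t] * math.log(pred_probs[i][t])
                for i, t in enumerate(target_tokens))
```

Because frequent (generic) tokens incur a smaller penalty when predicted correctly and a smaller reward in the gradient, the model is nudged away from always producing the safest high-frequency response.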

The paper will be presented at The Web Conference 2019.


© 2019 Maarten de Rijke
