Maarten de Rijke

Information retrieval


Using process mining for understanding the structure of interaction processes

Svitlana Vakulenko explains how we have recently used process mining techniques to understand the structure of interaction processes, which will in turn help us to improve information-seeking dialogue systems. We extract a new model of information-seeking dialogues, QRFA, for Query, Request, Feedback, Answer. The QRFA model better reflects the conversation flows observed in real information-seeking conversations than previously proposed models. Via conformance analysis, QRFA also allows us to identify malfunctions in dialogue system transcripts as deviations from the expected conversation flow described by the model.
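To illustrate the conformance idea in miniature: given a set of allowed QRFA transitions, deviations are simply turn pairs outside that set. The transition set below is a hypothetical example, not the model learned in the paper.

```python
# Hypothetical allowed transitions between QRFA labels
# (Query, Request, Feedback, Answer) -- illustrative only.
ALLOWED = {
    ("Q", "R"), ("Q", "A"),   # user query -> agent request or answer
    ("A", "F"), ("A", "Q"),   # answer -> user feedback or a new query
    ("R", "F"), ("F", "A"), ("F", "Q"),
}

def conformance_deviations(turns):
    """Return (position, transition) pairs that violate the model."""
    return [
        (i, (a, b))
        for i, (a, b) in enumerate(zip(turns, turns[1:]))
        if (a, b) not in ALLOWED
    ]

# e.g. an agent request directly after an answer is flagged here
print(conformance_deviations(["Q", "A", "F", "A", "R"]))
```

Real conformance checking would run against the discovered process model rather than a hand-written transition set.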

Read the full post.

Invest in the AI knowledge base or become a spectator

In an opinion piece for NRC Handelsblad and NRC Next, I argue that artificial intelligence will change our lives one way or another, and that the Netherlands faces a choice: participate fully in the development of AI and help shape the game, or stay on the bench. Those who do not actively take part have no influence, not on the game and certainly not on its rules. Invest in the AI knowledge base, invest in talent. Get off the bench!

The full piece can be found here.

Learning to answer questions by taking broader contexts into account

Mostafa Dehghani has posted an explanation of our recent work on TraCRNet (“tracker net”), which learns to answer questions from multiple, possibly long documents. TraCRNet uses the universal transformer and is able to go beyond understanding a set of input documents separately, combining their information in multiple steps. TraCRNet is highly parallelizable and far more robust against noisy input than previous proposals for the question answering task.
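The multi-step combination idea can be sketched as iteratively attending over document representations so that evidence from several documents accumulates in one state. This toy sketch is not the TraCRNet architecture itself (which builds on the universal transformer); it only illustrates repeated evidence folding.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_step_combine(doc_vecs, query_vec, steps=3):
    """Toy analogue of combining evidence across documents in steps.

    doc_vecs: (n_docs, dim) document representations;
    query_vec: (dim,) initial query state. Each step attends over the
    documents and folds the attended mixture back into the state.
    """
    state = query_vec.astype(float)
    for _ in range(steps):
        attn = softmax(doc_vecs @ state)   # relevance of each document
        state = state + attn @ doc_vecs    # fold attended evidence in
    return state
```

With orthogonal documents and a query aligned to the first one, the state stays dominated by the matching document across steps.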

See this page for the full post.

FACTS-IR Workshop @ SIGIR 2019

SIGIR 2019 will host a workshop to explore challenges in responsible information retrieval system development and deployment. The focus will be on determining actionable research agendas along five key dimensions of responsible information retrieval: fairness, accountability, confidentiality, transparency, and safety. Rather than a mini-conference, the workshop will be an event during which participants are expected to work. It aims to bring together a diverse set of researchers and practitioners interested in helping to develop the technical research agenda for responsible information retrieval.

The web site for the workshop is live now.

How to optimize ranking systems by directly interacting with users

Harrie Oosterhuis has written an accessible summary of our recent work on pairwise differentiable gradient descent (PDGD), an online learning to rank method that he published at CIKM 2018, with a follow-up paper to appear at ECIR 2019 in April. With the introduction of the PDGD algorithm, ranking systems can now be optimized from user interactions far more effectively than previously possible. Additionally, PDGD can optimize neural models to greater effect, something previous methods could not do. We expect the development of ranking systems to benefit from this contribution in the long term, not only because of improved performance, but also because the ability to optimize more complex models opens the door to many new possibilities.
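A heavily simplified sketch of a PDGD-style update for a linear ranker: pairwise preferences are inferred from clicks (a clicked document is taken as preferred over unclicked documents displayed above it), and each pair contributes a differentiable pairwise gradient. The position-bias reweighting of the full method is omitted here, so this is an illustration of the pairwise-gradient idea, not the published algorithm.

```python
import numpy as np

def pdgd_style_update(w, X, clicked, lr=0.01):
    """One simplified pairwise-gradient update for a linear ranker.

    X: (n_docs, n_features), feature vectors in displayed rank order;
    clicked: boolean mask over the displayed documents.
    """
    s = X @ w
    grad = np.zeros_like(w)
    for i, was_clicked in enumerate(clicked):
        if not was_clicked:
            continue
        for j in range(i):                  # unclicked docs ranked above
            if clicked[j]:
                continue
            # probability the model already prefers doc i over doc j
            p_pref = np.exp(s[i]) / (np.exp(s[i]) + np.exp(s[j]))
            # gradient of log P(doc i beats doc j) for a linear scorer
            grad += (1.0 - p_pref) * (X[i] - X[j])
    return w + lr * grad
```

After an update, the weights move toward features of clicked documents and away from those of skipped documents ranked above them.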

See this page for the full post.

WWW 2019 paper on evaluation metrics for web image search online

Grid-based Evaluation Metrics for Web Image Search by Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Maarten de Rijke, Yunqiu Shao, Zixin Ye, Min Zhang, and Shaoping Ma is online now at this location.

Compared to general web search engines, web image search engines display results differently. In web image search, results are typically placed in a grid rather than a sequential result list. In this scenario, users can view results not only in a vertical direction but also in a horizontal direction. Moreover, pagination is usually not (explicitly) supported on image search engine result pages (SERPs), and users can view results by scrolling down without having to click a “next page” button. These differences lead to different interaction mechanisms and user behavior patterns, which, in turn, create challenges for evaluation metrics that were originally developed for general web search. While considerable effort has been invested in developing evaluation metrics for general web search, relatively little effort has gone into constructing grid-based evaluation metrics.

To inform the development of grid-based evaluation metrics for web image search, we conduct a comprehensive analysis of user behavior so as to uncover how users allocate their attention in a grid-based web image search result interface. We obtain three findings: (1) “Middle bias”: Confirming previous studies, we find that image results in the horizontal middle positions may receive more attention from users than those in the leftmost or rightmost positions. (2) “Slower decay”: Unlike web search, users’ attention does not decrease monotonically or dramatically with the rank position in image search, especially within a row. (3) “Row skipping”: Users may ignore particular rows and directly jump to results at some distance. Motivated by these observations, we propose corresponding user behavior assumptions to capture users’ search interaction processes and evaluate their search performance. We show how to derive new metrics from these assumptions and demonstrate that they can be adopted to revise traditional list-based metrics like Discounted Cumulative Gain (DCG) and Rank-Biased Precision (RBP). To show the effectiveness of the proposed grid-based metrics, we compare them against a number of list-based metrics in terms of their correlation with user satisfaction. Our experimental results show that the proposed grid-based evaluation metrics better reflect user satisfaction in web image search.
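A minimal sketch of how a DCG-style metric can be revised for a grid: discount per row rather than per item ("slower decay") and boost positions near the horizontal centre ("middle bias"). The decay and boost values here are hypothetical placeholders; the paper derives its discounts from the observed user behavior, and "row skipping" is not modeled in this sketch.

```python
def grid_dcg(rels, n_cols, row_decay=0.9, middle_boost=1.2):
    """Illustrative grid-based DCG variant.

    rels: relevance grades in row-major grid order;
    n_cols: number of columns in the result grid.
    """
    score = 0.0
    for k, rel in enumerate(rels):
        row, col = divmod(k, n_cols)
        weight = row_decay ** row                      # "slower decay": per row
        if abs(col - (n_cols - 1) / 2) <= n_cols / 4:  # "middle bias"
            weight *= middle_boost
        score += (2 ** rel - 1) * weight
    return score
```

Under these weights, a relevant result in the middle of the first row contributes more than one at the edge, and later rows are discounted as a whole.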

The paper will be presented at The Web Conference 2019.

WWW 2019 paper on outfit recommendation online

Improving Outfit Recommendation with Co-supervision of Fashion Generation by Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke is now available at this location.

The task of fashion recommendation includes two main challenges: visual understanding and visual matching. Visual understanding aims to extract effective visual features. Visual matching aims to model a human notion of compatibility to compute a match between fashion items. Most previous studies rely on a recommendation loss alone to guide visual understanding and matching. Although the features captured by these methods describe basic characteristics (e.g., color, texture, shape) of the input items, they are not directly related to the visual signals of the output items (to be recommended). This is problematic because the aesthetic characteristics (e.g., style, design) from which we can directly infer the output items are lacking: under the recommendation loss alone, the supervision signal is simply whether two given items match or not.

To address this problem, we propose a neural co-supervision learning framework, called the FAshion Recommendation Machine (FARM). FARM improves visual understanding by incorporating the supervision of a generation loss, which we hypothesize better encodes aesthetic information. FARM enhances visual matching by introducing a novel layer-to-layer matching mechanism that fuses aesthetic information more effectively, while avoiding a focus on generation quality at the expense of recommendation performance.
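The co-supervision objective can be sketched as a recommendation loss plus a weighted generation loss. The concrete losses below (a BPR-style pairwise loss and a pixel-level reconstruction loss) are assumptions for illustration; FARM's actual losses and its layer-to-layer matching mechanism are not modeled here.

```python
import numpy as np

def bpr_loss(score_pos, score_neg):
    # pairwise recommendation loss: a matching outfit should outscore a mismatch
    return float(np.log(1.0 + np.exp(-(score_pos - score_neg))))

def generation_loss(generated, target):
    # pixel-level reconstruction loss on the generated fashion item
    return float(np.mean((generated - target) ** 2))

def co_supervised_loss(score_pos, score_neg, generated, target, lam=0.5):
    """Recommendation loss co-supervised by a weighted generation loss."""
    return bpr_loss(score_pos, score_neg) + lam * generation_loss(generated, target)
```

The weight `lam` controls how strongly the generation signal shapes the shared visual features; set too high, it would over-emphasize generation quality, which is what the matching mechanism in the paper guards against.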

Extensive experiments on two publicly available datasets show that FARM outperforms state-of-the-art models on outfit recommendation, in terms of AUC and MRR. Detailed analyses of generated and recommended items demonstrate that FARM can encode better features and generate high-quality images as references to improve recommendation performance.

The paper will be presented at The Web Conference 2019.

WWW 2019 paper on diversity of dialogue response generation online

Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss by Shaojie Jiang, Pengjie Ren, Christof Monz, and Maarten de Rijke is online now at this location.

Sequence-to-Sequence (Seq2Seq) models have achieved encouraging performance on the dialogue response generation task. However, existing Seq2Seq-based response generation methods suffer from a low-diversity problem: they frequently generate generic responses, which make the conversation less interesting. In this paper, we address the low-diversity problem by investigating its connection with model over-confidence reflected in predicted distributions. Specifically, we first analyze the influence of the commonly used Cross-Entropy (CE) loss function, and find that the CE loss function prefers high-frequency tokens, which results in low-diversity responses. We then propose a Frequency-Aware Cross-Entropy (FACE) loss function that improves over the CE loss function by incorporating a weighting mechanism conditioned on token frequency. Extensive experiments on benchmark datasets show that the FACE loss function is able to substantially improve the diversity of existing state-of-the-art Seq2Seq response generation methods, in terms of both automatic and human evaluations.
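The core idea of weighting cross-entropy by token frequency can be sketched as follows. The specific weighting used here (one minus relative frequency, renormalized to keep the loss scale stable) is an illustrative assumption; the paper proposes its own weighting variants.

```python
import numpy as np

def frequency_aware_ce(logits, targets, token_counts):
    """Cross-entropy with frequency-dependent token weights (sketch).

    logits: (n_steps, vocab_size) unnormalized scores;
    targets: (n_steps,) gold token ids (numpy int array);
    token_counts: (vocab_size,) corpus frequency of each token.
    Frequent target tokens get smaller weights, nudging the model
    away from generic, high-frequency responses.
    """
    freqs = token_counts / token_counts.sum()
    weights = 1.0 - freqs[targets]            # rarer token -> larger weight
    weights = weights / weights.mean()        # keep the overall loss scale
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return float(np.mean(weights * nll))
```

With uniform token frequencies the weights are all one and the loss reduces to ordinary cross-entropy, which makes the frequency-aware variant a drop-in replacement.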

The paper will be presented at The Web Conference 2019.

WWW 2019 paper on visual learning to rank online

ViTOR: Learning to Rank Webpages Based on Visual Features by Bram van den Akker, Ilya Markov, and Maarten de Rijke is available online now at this location.

The visual appearance of a webpage carries valuable information about the page’s quality and can be used to improve the performance of learning to rank (LTR). We introduce the Visual learning TO Rank (ViTOR) model, which integrates state-of-the-art visual feature extraction methods: (i) transfer learning from a pre-trained image classification model, and (ii) synthetic saliency heat maps generated from webpage snapshots. Since there is currently no public dataset for the task of LTR with visual features, we also introduce and release the ViTOR dataset, containing visually rich and diverse webpages. The ViTOR dataset consists of visual snapshots, non-visual features, and relevance judgments for ClueWeb12 webpages and TREC Web Track queries. We experiment with the proposed ViTOR model on the ViTOR dataset and show that it significantly improves the performance of LTR with visual features.
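The transfer-learning route can be sketched as a small pipeline: extract a feature vector from the page snapshot and concatenate it with the non-visual LTR features. The stand-in extractor below (global average pooling) is a hypothetical placeholder for the penultimate layer of a pre-trained image classifier.

```python
import numpy as np

def visual_ltr_features(snapshot, nonvisual, extract_fn):
    """Concatenate visual and non-visual features for an LTR model.

    snapshot: image array of the webpage snapshot;
    nonvisual: vector of conventional LTR features;
    extract_fn: any callable mapping an image array to a feature
    vector, standing in for a pre-trained CNN's penultimate layer.
    """
    return np.concatenate([extract_fn(snapshot), nonvisual])

def average_pool(image):
    # stand-in extractor: mean over the spatial dimensions per channel
    return image.mean(axis=(0, 1))
```

The resulting joint feature vector can then be fed to any standard LTR method.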

The paper will be presented at The Web Conference 2019.

ECIR 2019 paper on information-seeking dialogues online

QRFA: A data-driven model of information-seeking dialogues by Svitlana Vakulenko, Kate Revoredo, Claudio Di Ciccio, and Maarten de Rijke is available online now at this location.

Understanding the structure of interaction processes helps us to improve information-seeking dialogue systems. Analyzing an interaction process boils down to discovering patterns in sequences of alternating utterances exchanged between a user and an agent. Process mining techniques have been successfully applied to analyze structured event logs, discovering the underlying process models or evaluating whether the observed behavior is in conformance with the known process. In this paper, we apply process mining techniques to discover patterns in conversational transcripts and extract a new model of information-seeking dialogues, QRFA, for Query, Request, Feedback, Answer. Our results are grounded in an empirical evaluation across multiple conversational datasets from different domains, which has not been attempted before. We show that the QRFA model better reflects the conversation flows observed in real information-seeking conversations than previously proposed models. Moreover, via conformance analysis, QRFA allows us to identify malfunctions in dialogue system transcripts as deviations from the expected conversation flow described by the model.
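The discovery step can be pictured as mining a frequency-annotated transition graph from labelled dialogue transcripts. This is a toy stand-in: real process mining uses dedicated discovery algorithms (available, for instance, in the pm4py library) rather than simple bigram counting.

```python
from collections import Counter

def discover_transitions(dialogues):
    """Estimate transition frequencies between utterance labels.

    dialogues: iterable of label sequences, e.g. ["Q", "A", "F"].
    Returns a dict mapping (from_label, to_label) to its relative
    frequency across all observed transitions.
    """
    counts = Counter()
    for turns in dialogues:
        for a, b in zip(turns, turns[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}
```

Frequent transitions form the backbone of the discovered model; rare ones are candidates for pruning or for inspection as anomalies.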

The paper will be presented at ECIR 2019.


© 2019 Maarten de Rijke
