Maarten de Rijke

Information retrieval

Category: Publications (page 1 of 8)

Paper on incremental sparse Bayesian ordinal regression published in Neural Networks

“Incremental sparse Bayesian ordinal regression” by Chang Li and Maarten de Rijke has been published in the October 2018 issue of Neural Networks. See the journal’s site.

Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most basis function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm to sequentially learn the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via the type-II maximum likelihood method. By exploiting fast marginal likelihood optimization, ISBOR can avoid large matrix inversions, which are the main bottleneck in applying basis function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR can make accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real-world datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis function-based OR approaches.
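As a toy illustration of the threshold formulation that basis function-based OR models build on (a sketch, not ISBOR itself; the centers, weights, and cut-points below are made up), a latent score is computed as a weighted sum of basis activations and then binned by a set of ordered thresholds:

```python
import numpy as np

def predict_ordinal(score, thresholds):
    """Map a latent score to the ordinal category r with
    thresholds[r-1] < score <= thresholds[r] (implicit -inf/+inf ends)."""
    return int(np.searchsorted(thresholds, score))

def latent_score(x, centers, weights, width=1.0):
    """Toy latent score: linear combination of RBF basis activations."""
    phi = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return float(phi @ weights)

centers = np.array([0.0, 1.0, 2.0])      # hypothetical basis centers
weights = np.array([0.5, 1.0, 0.5])      # hypothetical learned weights
thresholds = np.array([0.4, 0.9])        # two cut-points -> three ordered categories

for x in [-2.0, 1.0, 5.0]:
    print(x, predict_ordinal(latent_score(x, centers, weights), thresholds))
```

The incremental part of ISBOR then concerns adding and pruning such basis functions one at a time, guided by the marginal likelihood, rather than solving for all of them jointly.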

CIKM 2018 paper on Web-based Startup Success Prediction online

Web-based Startup Success Prediction by Boris Sharchilev, Michael Roizner, Andrey Rumyantsev, Denis Ozornin, Pavel Serdyukov, and Maarten de Rijke is online now at this page.

In the paper we consider the problem of predicting the success of startup companies at their early development stages. We formulate the task as predicting whether a company that has already secured initial (seed or angel) funding will attract a further round of investment in a given period of time. Previous work on this task has mostly been restricted to mining structured data sources, such as databases of the startup ecosystem consisting of investors, incubators and startups. Instead, we investigate the potential of using web-based open sources for the startup success prediction task and model the task using a very rich set of signals from such sources. In particular, we enrich structured data about the startup ecosystem with information from a business- and employment-oriented social networking service and from the web in general. Using these signals, we train a robust machine learning pipeline encompassing multiple base models using gradient boosting. We show that utilizing companies’ mentions on the Web yields a substantial performance boost in comparison to only using structured data about the startup ecosystem. We also provide a thorough analysis of the obtained model that allows one to obtain insights into both the types of useful signals discoverable on the Web and market mechanisms underlying the funding process.

The paper will be presented at CIKM 2018 in October 2018.

SCAI 2018 paper on Understanding the Low-Diversity Problem of Chatbots online

Why are Sequence-to-Sequence Models So Dull? Understanding the Low-Diversity Problem of Chatbots by Shaojie Jiang and Maarten de Rijke is available online now at this location.

Diversity is a long-studied topic in information retrieval that usually refers to the requirement that retrieved results should be non-repetitive and cover different aspects. In a conversational setting, an additional dimension of diversity matters: an engaging response generation system should be able to output responses that are diverse and interesting. Sequence-to-sequence (Seq2Seq) models have been shown to be very effective for response generation. However, dialogue responses generated by Seq2Seq models tend to have low diversity. In this paper, we review known sources and existing approaches to this low-diversity problem. We also identify a source of low diversity that has been little studied so far, namely model over-confidence. We sketch several directions for tackling model over-confidence and, hence, the low-diversity problem, including confidence penalties and label smoothing.
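One of the remedies sketched in the paper, label smoothing, replaces the one-hot training target with a softened distribution so the model is penalized for putting all its probability mass on a single token. A minimal illustration (toy logits, not the paper's model):

```python
import numpy as np

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target: the true token gets
    probability 1 - eps, the remaining tokens share eps uniformly."""
    logits = logits - logits.max()                      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())   # log-softmax
    v = len(logits)
    smooth = np.full(v, eps / (v - 1))
    smooth[target] = 1.0 - eps
    return float(-(smooth * log_probs).sum())

logits = np.array([4.0, 1.0, 0.5, 0.2])   # an over-confident next-token distribution
print(smoothed_cross_entropy(logits, target=0, eps=0.0))   # plain cross-entropy
print(smoothed_cross_entropy(logits, target=0, eps=0.1))   # smoothing adds a penalty
```

With eps = 0, this reduces to ordinary cross-entropy; for eps > 0, sharply peaked predictions incur a higher loss, which discourages the over-confidence the paper identifies as a source of dull responses.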

The paper will be presented at the SCAI 2018 workshop at EMNLP 2018 in October 2018.

CIKM 2018 paper on Differentiable Unbiased Online Learning to Rank online

Differentiable Unbiased Online Learning to Rank by Harrie Oosterhuis and Maarten de Rijke is available online now at this location.

Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models. Their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks and can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed as well as final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.
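The preference-inference step can be illustrated with a common click heuristic (a simplified sketch, not PDGD's exact inference rule): a clicked document is taken to be preferred over unclicked documents the user must have examined, i.e., those displayed above it and the first unclicked document below it.

```python
def infer_preferences(ranking, clicks):
    """Infer document-pair preferences (preferred, dispreferred) from
    clicks on a displayed ranking, using a simple examination heuristic."""
    prefs = []
    for i, doc in enumerate(ranking):
        if doc not in clicks:
            continue
        # preferred over every unclicked document ranked above it
        prefs += [(doc, other) for other in ranking[:i] if other not in clicks]
        # preferred over the first unclicked document ranked directly below it
        for other in ranking[i + 1:]:
            if other not in clicks:
                prefs.append((doc, other))
                break
    return prefs

print(infer_preferences(["a", "b", "c", "d"], clicks={"b"}))
```

In PDGD, such inferred pairs are turned into a weighted differentiable pairwise loss, so any differentiable ranking model (linear or neural) can be updated after each interaction.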

The paper will be presented at CIKM 2018 in October 2018.

CIKM 2018 paper on Calibration: A Simple Way to Improve Click Models online

Calibration: A Simple Way to Improve Click Models by Alexey Borisov, Julia Kiseleva, Ilya Markov, and Maarten de Rijke is available online now at this location.

In the paper we show that click models trained with suboptimal hyperparameters suffer from the issue of bad calibration. This means that their predicted click probabilities do not agree with the observed proportions of clicks in the held-out data. To repair this discrepancy, we adapt a non-parametric calibration method called isotonic regression. Our experimental results show that isotonic regression significantly improves click models trained with suboptimal hyperparameters in terms of perplexity, and that it makes click models less sensitive to the choice of hyperparameters. Interestingly, the relative ranking of existing click models in terms of their predictive performance changes depending on whether or not their predictions are calibrated. Therefore, we advocate making calibration a mandatory part of the click model evaluation protocol.
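Isotonic regression fits the best non-decreasing approximation to the observed outcomes, typically via the pool-adjacent-violators (PAV) algorithm. A minimal sketch (the toy data is made up, not from the paper):

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares fit of a non-decreasing
    sequence to y. Returns the fitted (calibrated) values."""
    blocks = [[v, 1] for v in y]          # each block stores [sum, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            # monotonicity violated: merge the two blocks and back up
            s, c = blocks.pop(i + 1)
            blocks[i][0] += s
            blocks[i][1] += c
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for s, c in blocks:
        out += [s / c] * c
    return out

# Calibration use: sort held-out examples by predicted click probability,
# then fit a monotone mapping to the observed click outcomes.
observed = [0, 0, 1, 0, 1, 1]   # clicks, ordered by predicted probability
print(isotonic_fit(observed))
```

The fitted values serve as the calibrated probabilities: they preserve the model's ranking of examples while matching the observed click proportions.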

The paper will be presented at CIKM 2018 in October 2018.

CIKM 2018 paper on Attentive Encoder-based Extractive Text Summarization online

Attentive Encoder-based Extractive Text Summarization by Chong Feng, Fei Cai, Honghui Chen, and Maarten de Rijke is available online now at this location.

In previous work on text summarization, encoder-decoder architectures and attention mechanisms have both been widely used. Attention-based encoder-decoder approaches typically focus on taking the sentences preceding a given sentence in a document into account for document representation, failing to capture the relationships between a sentence and sentences that follow it in a document in the encoder. We propose an attentive encoder-based summarization (AES) model to generate article summaries. AES can generate a rich document representation by considering both the global information of a document and the relationships of sentences in the document. A unidirectional recurrent neural network (RNN) and a bidirectional RNN are considered to construct the encoders, giving rise to unidirectional attentive encoder-based summarization (Uni-AES) and bidirectional attentive encoder-based summarization (Bi-AES), respectively. Our experimental results show that Bi-AES outperforms Uni-AES. We obtain substantial improvements over a relevant state-of-the-art baseline.

The paper will be presented at CIKM 2018 in October 2018.

CIKM 2018 paper on Integrating Text Matching and Product Substitutability within Product Search online

Mix ‘n Match: Integrating Text Matching and Product Substitutability within Product Search by Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas is available online now at this location.

Two products are substitutes if both can satisfy the same consumer need. Intrinsic incorporation of product substitutability—where substitutability is integrated within latent vector space models—is in contrast to the extrinsic re-ranking of result lists. The fusion of text matching and product substitutability objectives allows latent vector space models to mix and match regularities contained within text descriptions and substitution relations. We introduce a method for intrinsically incorporating product substitutability within latent vector space models for product search that are estimated using gradient descent; it integrates seamlessly with state-of-the-art vector space models. We compare our method to existing methods for incorporating structural entity relations, where product substitutability is incorporated extrinsically by re-ranking. Our method outperforms the best extrinsic method on four benchmarks. We investigate the effect of different levels of text matching and product similarity objectives, and provide an analysis of the effect of incorporating product substitutability on product search ranking diversity. Incorporating product substitutability information improves search relevance at the cost of diversity.
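The fused objective can be sketched as a weighted sum of a text-matching term and a substitutability term over shared product vectors (a toy illustration with made-up embeddings and a hypothetical trade-off weight alpha, not the paper's actual model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def text_matching_loss(word_vec, product_vec):
    """Encourage a product vector to score its own description terms highly."""
    return float(-np.log(sigmoid(word_vec @ product_vec)))

def substitutability_loss(vec_a, vec_b):
    """Encourage substitutable products to lie close together in the space."""
    return float(np.sum((vec_a - vec_b) ** 2))

rng = np.random.default_rng(0)
dim = 8
product_vecs = rng.normal(size=(4, dim))   # toy latent product representations
word_vec = rng.normal(size=dim)            # toy representation of a query term

alpha = 0.3   # hypothetical trade-off between the two objectives
joint = (1 - alpha) * text_matching_loss(word_vec, product_vecs[0]) \
        + alpha * substitutability_loss(product_vecs[0], product_vecs[1])
print(joint)
```

Because both terms share the same product vectors, gradient descent on the joint loss shapes the space with both text regularities and substitution relations, which is what makes the incorporation intrinsic rather than a post-hoc re-ranking step.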

The paper will be presented at CIKM 2018 in October 2018.

RecSys 2018 paper on preference elicitation as an optimization problem online

The following RecSys 2018 paper on preference elicitation as an optimization problem is online now:

  • Anna Sepliarskaia, Julia Kiseleva, Filip Radlinski, and Maarten de Rijke. Preference elicitation as an optimization problem. In RecSys 2018: The ACM Conference on Recommender Systems, pages 172–180. ACM, October 2018. Bibtex, PDF
    @inproceedings{sepliarskaia-preference-2018,
    Author = {Sepliarskaia, Anna and Kiseleva, Julia and Radlinski, Filip and de Rijke, Maarten},
    Booktitle = {RecSys 2018: The ACM Conference on Recommender Systems},
    Date-Added = {2018-07-10 09:40:05 +0000},
    Date-Modified = {2018-10-27 09:29:19 +0200},
    Month = {October},
    Pages = {172--180},
    Publisher = {ACM},
    Title = {Preference elicitation as an optimization problem},
    Year = {2018}}

The new user coldstart problem arises when a recommender system does not yet have any information about a user. A common solution to it is to generate a profile by asking the user to rate a number of items. Which items are selected determines the quality of the recommendations made, and thus has been studied extensively. We propose a new elicitation method to generate a static preference questionnaire (SPQ) that poses relative preference questions to the user. Using a latent factor model, we show that SPQ improves personalized recommendations by choosing a minimal and diverse set of questions. We are the first to rigorously prove which optimization task should be solved to select each question in static questionnaires. Our theoretical results are confirmed by extensive experimentation. We test the performance of SPQ on two real-world datasets, under two experimental conditions: simulated, when users behave according to a latent factor model (LFM), and real, in which only real user judgments are revealed as the system asks questions. We show that SPQ reduces the necessary length of a questionnaire by up to a factor of three compared to state-of-the-art preference elicitation methods. Moreover, solving the right optimization task, SPQ also performs better than baselines with dynamically generated questions.
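Under a latent factor model, the answer the system predicts for a relative question "do you prefer item i over item j?" is determined by the sign of the score difference between the two items (a minimal sketch with made-up vectors; the selection of which pairs to ask is the optimization problem the paper solves):

```python
import numpy as np

def predicted_preference(user_vec, item_i, item_j):
    """Predicted answer to the relative question 'prefer i over j?':
    True iff the latent factor model scores i above j for this user."""
    return bool(user_vec @ (item_i - item_j) > 0)

# Toy latent factors (hypothetical, two dimensions for readability)
user = np.array([1.0, -0.5])
item_i = np.array([0.8, 0.1])
item_j = np.array([0.2, 0.9])
print(predicted_preference(user, item_i, item_j))
```

The questionnaire-construction task is then to pick a small, diverse set of (i, j) pairs whose answers pin down the user's latent vector as tightly as possible.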

ISWC 2018 paper on measuring semantic coherence of a conversation online

The following ISWC 2018 paper on measuring semantic coherence of a conversation is online now:

  • Svitlana Vakulenko, Maarten de Rijke, Michael Cochez, Vadim Savenkov, and Axel Polleres. Measuring semantic coherence of a conversation. In ISWC 2018: 17th International Semantic Web Conference, pages 634–651. Springer, October 2018. Bibtex, PDF
    @inproceedings{vakulenko-measuring-2018,
    Author = {Vakulenko, Svitlana and de Rijke, Maarten and Cochez, Michael and Savenkov, Vadim and Polleres, Axel},
    Booktitle = {ISWC 2018: 17th International Semantic Web Conference},
    Date-Added = {2018-05-26 04:41:16 +0000},
    Date-Modified = {2018-10-27 09:32:14 +0200},
    Month = {October},
    Pages = {634--651},
    Publisher = {Springer},
    Title = {Measuring semantic coherence of a conversation},
    Year = {2018}}

Conversational systems have become increasingly popular as a way for people to interact with computers. To be able to provide intelligent responses, conversational systems must correctly model the structure and semantics of a conversation. We introduce the task of measuring semantic (in)coherence in a conversation with respect to background knowledge, which relies on the identification of semantic relations between concepts introduced during a conversation. We propose and evaluate graph-based and machine learning-based approaches for measuring semantic coherence using knowledge graphs, their vector space embeddings and word embedding models, as sources of background knowledge. We demonstrate how these approaches are able to uncover different coherence patterns in conversations on the Ubuntu Dialogue Corpus.
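A simple embedding-based coherence measure in this spirit (a toy sketch with made-up two-dimensional vectors, not the paper's exact measure) scores a conversation by the average pairwise similarity of the concepts it mentions:

```python
import numpy as np

def coherence(concept_vecs):
    """Average pairwise cosine similarity between the embeddings of the
    concepts mentioned in a conversation: higher means more coherent."""
    sims = []
    for i in range(len(concept_vecs)):
        for j in range(i + 1, len(concept_vecs)):
            a, b = concept_vecs[i], concept_vecs[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# Two related concepts and one off-topic concept (toy embeddings)
ubuntu = np.array([1.0, 0.1])
linux = np.array([0.9, 0.2])
banana = np.array([-0.2, 1.0])
print(coherence([ubuntu, linux]))            # high: related concepts
print(coherence([ubuntu, linux, banana]))    # lower: off-topic concept added
```

With knowledge-graph embeddings as the source of background knowledge, a drop in such a score as a conversation unfolds can flag a shift away from semantically related concepts.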

Now on arXiv: Explainable Fashion Recommendation with Joint Outfit Matching and Comment Generation

Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and I published “Explainable Fashion Recommendation with Joint Outfit Matching and Comment Generation” on arXiv. Most previous work on fashion recommendation focuses on designing visual features to enhance recommendations. Existing work neglects user comments on fashion items, which have proven effective in generating explanations along with better recommendation results. We propose a novel neural network framework, neural fashion recommendation (NFR), that simultaneously provides fashion recommendations and generates abstractive comments. NFR consists of two parts: outfit matching and comment generation. For outfit matching, we propose a convolutional neural network with a mutual attention mechanism to extract visual features of outfits. The visual features are then decoded into a rating score for the matching prediction. For abstractive comment generation, we propose a gated recurrent neural network with a cross-modality attention mechanism to transform visual features into a concise sentence. The two parts are jointly trained based on a multi-task learning framework in an end-to-end back-propagation paradigm. Extensive experiments conducted on an existing dataset and a collected real-world dataset show that NFR achieves significant improvements over state-of-the-art baselines for fashion recommendation. Meanwhile, our generated comments achieve impressive ROUGE and BLEU scores in comparison to human-written comments. The generated comments can be regarded as explanations for the recommendation results. We release the dataset and code to facilitate future research. You can find the paper here.


© 2018 Maarten de Rijke
