Artificial intelligence & Information retrieval

December 2018

AAAI 2019 paper on repeat-aware recommendation online

RepeatNet: A Repeat Aware Neural Recommendation Machine for Session-based Recommendation by Pengjie Ren, Zhumin Chen, Jing Li, Zhaochun Ren, Jun Ma, and Maarten de Rijke is online now at this location.

Recurrent neural networks for session-based recommendation have attracted a lot of attention recently because of their promising performance. Repeat consumption is a common phenomenon in many recommendation scenarios (e.g., e-commerce, music, and TV program recommendations), where the same item is re-consumed repeatedly over time. However, no previous studies have emphasized repeat consumption with neural networks. An effective neural approach is needed to decide when to perform repeat recommendation. In this paper, we incorporate a repeat-explore mechanism into neural networks and propose a new model, called RepeatNet, with an encoder-decoder structure. RepeatNet integrates a regular neural recommendation approach in the decoder with a new repeat recommendation mechanism that can choose items from a user’s history and recommend them at the right time. We report on extensive experiments on three benchmark datasets. RepeatNet outperforms state-of-the-art baselines on all three datasets in terms of MRR and Recall. Furthermore, as the dataset size and the repeat ratio increase, the improvements of RepeatNet over the baselines also increase, which demonstrates its advantage in handling repeat recommendation scenarios.
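The repeat-explore mechanism described above can be pictured as a mixture of two distributions: a repeat distribution supported only on items already in the session, and an explore distribution over the full catalogue, mixed by a learned mode probability. The sketch below illustrates that mixture in plain Python; the names (`p_repeat`, `repeat_scores`, `explore_scores`) and the softmax-over-dicts setup are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a dict of item -> score."""
    m = max(scores.values())
    exp = {i: math.exp(s - m) for i, s in scores.items()}
    z = sum(exp.values())
    return {i: v / z for i, v in exp.items()}

def recommend(session_items, repeat_scores, explore_scores, p_repeat):
    """Combine the two decoders as a mixture:
    P(item) = P(repeat) * P(item | repeat) + P(explore) * P(item | explore).
    The repeat distribution is supported only on items already in the session,
    so repeated consumption can be recommended explicitly.
    """
    repeat_dist = softmax({i: repeat_scores[i] for i in session_items})
    explore_dist = softmax(explore_scores)
    items = set(repeat_dist) | set(explore_dist)
    return {
        i: p_repeat * repeat_dist.get(i, 0.0)
           + (1 - p_repeat) * explore_dist.get(i, 0.0)
        for i in items
    }
```

In the model itself, the mode probability and both sets of scores would come from the encoder's representation of the session; here they are passed in as plain numbers to keep the sketch self-contained.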

The paper will be presented at AAAI 2019.

AAAI 2019 paper on dialogue generation online

Dialogue generation: From imitation learning to inverse reinforcement learning by Ziming Li, Julia Kiseleva, and Maarten de Rijke is online now at this location.

The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsense replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate more high-quality responses and achieve higher overall performance than the state-of-the-art.
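To make the reward-signal point concrete: in adversarial inverse reinforcement learning, the discriminator is typically structured as D = exp(f) / (exp(f) + π), where f is a learned reward term and π is the generator's policy probability, and the reward passed to the generator is log D − log(1 − D), which simplifies to f − log π and is denser than a plain GAN discriminator's binary signal. The snippet below sketches this generic AIRL-style reward; it is a standard construction, not necessarily the exact reward model of this paper.

```python
import math

def airl_reward(f_value, log_pi):
    """Generic AIRL-style discriminator and reward (a sketch, not this
    paper's exact model).

    Discriminator: D = exp(f) / (exp(f) + pi)
    Generator reward: log D - log(1 - D), which equals f - log(pi).
    """
    d = math.exp(f_value) / (math.exp(f_value) + math.exp(log_pi))
    reward = math.log(d) - math.log(1.0 - d)
    return d, reward
```

The identity log D − log(1 − D) = f − log π means the generator is rewarded for responses the reward model scores highly relative to how likely the generator already finds them, which is one way such a reward can be less sparse than a binary real/fake signal.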

The paper will be presented at AAAI 2019.

© 2022 Maarten de Rijke
