{"id":2189,"date":"2019-01-03T09:53:52","date_gmt":"2019-01-03T09:53:52","guid":{"rendered":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/?p=2189"},"modified":"2019-02-23T09:56:51","modified_gmt":"2019-02-23T09:56:51","slug":"wsdm-2019-paper-on-off-policy-evaluation-online","status":"publish","type":"post","link":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wsdm-2019-paper-on-off-policy-evaluation-online\/","title":{"rendered":"WSDM 2019 paper on off-policy evaluation online"},"content":{"rendered":"\n<p><em>When people change their mind: Off-policy evaluation in non-stationary recommendation environments<\/em> by Rolf Jagerman, Ilya Markov, and\u00a0Maarten de Rijke is online now <a href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/jagerman-when-2019.pdf\">at this location<\/a>.<\/p>\n\n\n\n<p>We consider the novel problem of evaluating a recommendation policy offline in environments where the reward signal is non- stationary. Non-stationarity appears in many Information Retrieval (IR) applications such as recommendation and advertising, but its effect on off-policy evaluation has not been studied at all. We are the first to address this issue. First, we analyze standard off-policy estimators in non-stationary environments and show both theoretically and experimentally that their bias grows with time. Then, we propose new off-policy estimators with moving averages and show that their bias is independent of time and can be bounded. Furthermore, we provide a method to trade-off bias and variance in a principled way to get an off-policy estimator that works well in both non-stationary and stationary environments. 
We experiment on publicly available recommendation datasets and show that our newly proposed moving average estimators accurately capture changes in non-stationary environments, while standard off-policy estimators fail to do so.<\/p>\n\n\n\n<p>The paper will be presented at WSDM 2019.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When people change their mind: Off-policy evaluation in non-stationary recommendation environments by Rolf Jagerman, Ilya Markov, and\u00a0Maarten de Rijke is online now at this location. We consider the novel problem of evaluating a recommendation policy offline in environments where the reward signal is non-stationary. Non-stationarity appears in many Information Retrieval (IR) applications such as&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/2189"}],"collection":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/comments?post=2189"}],"version-history":[{"count":1,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/2189\/revisions"}],"predecessor-version":[{"id":2190,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/2189\/revisions\/2190"}],"wp:attachment":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/media?parent=2189"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/categories?post=2189"},{"taxonomy":"post_tag","embeddable":true,"hre
f":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/tags?post=2189"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}