{"id":1950,"date":"2018-08-28T17:02:57","date_gmt":"2018-08-28T17:02:57","guid":{"rendered":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/?p=1950"},"modified":"2018-09-01T15:03:32","modified_gmt":"2018-09-01T15:03:32","slug":"cikm-2018-paper-on-attentive-encoder-based-extractive-text-summarization-online","status":"publish","type":"post","link":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/cikm-2018-paper-on-attentive-encoder-based-extractive-text-summarization-online\/","title":{"rendered":"CIKM 2018 paper on Attentive Encoder-based Extractive Text Summarization online"},"content":{"rendered":"<p><em>Attentive Encoder-based Extractive Text Summarization<\/em> by Chong Feng, Fei Cai, Honghui Chen, and Maarten de Rijke is available online now at <a href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/feng-attentive-2018.pdf\">this location<\/a>.<\/p>\n<p>In previous work on text summarization, encoder-decoder architectures and attention mechanisms have both been widely used. Attention-based encoder-decoder approaches typically focus on taking the sentences preceding a given sentence in a document into account for document representation, failing to capture the relationships between a sentence and sentences that follow it in a document in the encoder. We propose an attentive encoder-based summarization (AES) model to generate article summaries. AES can generate a rich document representation by considering both the global information of a document and the relationships of sentences in the document. A unidirectional recurrent neural network (RNN) and a bidirectional RNN are considered to construct the encoders, giving rise to unidirectional attentive encoder-based summarization (Uni-AES) and bidirectional attentive encoder-based summarization (Bi-AES), respectively. Our experimental results show that Bi-AES outperforms Uni-AES. 
We obtain substantial improvements over a relevant state-of-the-art baseline.<\/p>\n<p>The paper will be presented at CIKM 2018 in October 2018.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Attentive Encoder-based Extractive Text Summarization by Chong Feng, Fei Cai, Honghui Chen, and Maarten de Rijke is available online now at this location. In previous work on text summarization, encoder-decoder architectures and attention mechanisms have both been widely used. Attention-based encoder-decoder approaches typically focus on taking the sentences preceding a given sentence in a document&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"_links":{"self":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/1950"}],"collection":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/comments?post=1950"}],"version-history":[{"count":1,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/1950\/revisions"}],"predecessor-version":[{"id":1951,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/posts\/1950\/revisions\/1951"}],"wp:attachment":[{"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/media?parent=1950"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/categories?post=1950"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-json\/wp\/v2\/tags?post=1950"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}