Neural attention model with keyword memory for abstractive document summarization

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Abstractive summarization is the task of creating summaries by generating a set of novel sentences based on information extracted from the original document, whereas most summarization research is based on extractive or compressive approaches. These approaches extract phrases from the original document and concatenate them through post-processing, so they cannot truly encapsulate the contents of a summary, because they only reuse phrases from the given document. Moreover, current Natural Language Processing (NLP) techniques impose limits on paraphrasing and reorganizing the original contents. For these reasons, we propose a novel abstractive summarization method. The main goal of our paper is to generate a long sequence of words forming coherent sentences that reflect the key concepts of the original document and the contents of its summaries. To achieve this goal, we propose an attention mechanism that uses a Document Content Memory for learning the language model effectively. To evaluate its effectiveness, the proposed methods are compared with other language models and an extractive summarization method. We demonstrate that our proposed methods improve summarization results in ROUGE score on the ACL dataset. The experimental results show that our proposed methods using keyword memory are effective at generating long-sequence summaries.
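The abstract describes an attention mechanism in which the decoder attends over a memory of document keywords. The paper's exact architecture is not given here, so the following is only a minimal sketch of generic dot-product attention over a keyword memory; the function name, dimensions, and scoring function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def keyword_memory_attention(hidden, memory):
    """Dot-product attention of a decoder state over a keyword memory.

    hidden: (d,) decoder hidden state at the current step
    memory: (k, d) embeddings of k document keywords (a stand-in for
            the paper's Document Content Memory)
    Returns a context vector (d,) and the attention weights (k,).
    """
    scores = memory @ hidden           # one score per keyword, shape (k,)
    weights = softmax(scores)          # normalized attention distribution
    context = weights @ memory         # weighted sum of keyword embeddings
    return context, weights

# Toy usage: 4 keyword embeddings of dimension 8
rng = np.random.default_rng(0)
mem = rng.normal(size=(4, 8))
h = rng.normal(size=8)
ctx, w = keyword_memory_attention(h, mem)
```

In a summarizer along these lines, the context vector would be fed into the decoder at each generation step so that the emitted words stay grounded in the document's key concepts.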

Original language: English
Article number: e5433
Journal: Concurrency and Computation: Practice and Experience
Volume: 32
Issue number: 18
DOIs
State: Published - 25 Sep 2020

Keywords

  • abstractive summarization
  • attention mechanism
  • document content memory
  • generative approach
  • language model
