Speech-act classification using a convolutional neural network based on POS tag and dependency-relation bigram embedding

Donghyun Yoo, Youngjoong Ko, Jungyun Seo

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

In this paper, we propose a deep learning based model for classifying speech-acts using a convolutional neural network (CNN). The model uses bigram features, namely parts-of-speech (POS) tag bigrams and dependency-relation bigrams, which represent syntactic structural information in utterances. Previous CNN-based classification approaches have commonly exploited word embeddings of morpheme unigrams. In contrast, the proposed model first extracts these two bigram features, which reflect the syntactic structure of an utterance, and then maps them to vector representations using a word embedding technique. As a result, the proposed model using bigram embeddings achieves an accuracy of 89.05%, a relative improvement of 2.8% over competitive models from previous studies.
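
The abstract describes the architecture only at a high level; the sketch below is one plausible reading, assuming PyTorch, in which POS-tag bigrams and dependency-relation bigrams are each mapped to embedding vectors, concatenated per position, and fed through a standard multi-width convolution and max-pooling classifier. It is not the authors' implementation: all names, dimensions, vocabulary sizes, and the number of speech-act classes are illustrative assumptions.

```python
# Illustrative sketch only (hypothetical hyperparameters), assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BigramCNNClassifier(nn.Module):
    """CNN speech-act classifier over POS-tag and dependency-relation bigram embeddings."""

    def __init__(self, pos_vocab, dep_vocab, emb_dim=100, n_filters=100,
                 kernel_sizes=(2, 3, 4), n_classes=10):
        super().__init__()
        # One embedding table per bigram feature type (index 0 reserved for padding).
        self.pos_emb = nn.Embedding(pos_vocab, emb_dim, padding_idx=0)
        self.dep_emb = nn.Embedding(dep_vocab, emb_dim, padding_idx=0)
        # Convolutions of several widths over the concatenated bigram embeddings.
        self.convs = nn.ModuleList(
            nn.Conv1d(2 * emb_dim, n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, pos_bigrams, dep_bigrams):
        # pos_bigrams, dep_bigrams: (batch, seq_len) indices of bigram types.
        x = torch.cat([self.pos_emb(pos_bigrams), self.dep_emb(dep_bigrams)], dim=-1)
        x = x.transpose(1, 2)  # (batch, 2 * emb_dim, seq_len) for Conv1d
        pooled = []
        for conv in self.convs:
            h = F.relu(conv(x))                              # (batch, n_filters, L')
            pooled.append(F.max_pool1d(h, h.size(2)).squeeze(2))
        return self.fc(torch.cat(pooled, dim=1))             # (batch, n_classes) logits


# Example usage with random bigram-index batches and made-up vocabulary sizes.
model = BigramCNNClassifier(pos_vocab=500, dep_vocab=800, n_classes=14)
pos = torch.randint(1, 500, (8, 40))
dep = torch.randint(1, 800, (8, 40))
logits = model(pos, dep)  # shape: (8, 14)
```

The sketch mirrors the standard unigram-embedding CNN classifier; the only change implied by the abstract is that the embedding lookup operates on bigram features rather than morpheme unigrams.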

Original language: English
Pages (from-to): 3081-3084
Number of pages: 4
Journal: IEICE Transactions on Information and Systems
Volume: E100D
Issue number: 12
DOIs
State: Published - Dec 2017
Externally published: Yes

Keywords

  • Bigram embedding
  • Convolutional neural network
  • Dependency-relation
  • Speech-act classification
  • Word embedding
