Skip Act Vectors: integrating dialogue context into sentence embeddings

Abstract: This paper compares several approaches for computing dialogue turn embeddings and evaluates their representation capacities on two dialogue-act-related tasks within a hierarchical Recurrent Neural Network architecture. These turn embeddings can be produced explicitly, or implicitly by extracting the hidden layer of a model trained for a given task. We introduce skip-act, a new dialogue turn embedding approach, in which embeddings are extracted as the common representation layer of a multi-task model that predicts both the previous and the next dialogue act. The models used to learn turn embeddings are trained on a large dialogue corpus with light supervision, while the models that predict dialogue acts from turn embeddings are trained on a sub-corpus with gold dialogue act annotations. We compare their performance at predicting the current dialogue act as well as their ability to predict the next dialogue act, a more challenging task with several potential applications. With a better context representation, the skip-act turn embeddings are shown to outperform previous approaches both in overall F-measure and in macro-F1, showing consistent improvements across the various dialogue acts.
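The skip-act idea described above can be sketched as a shared encoder feeding two task-specific prediction heads, one for the previous and one for the next dialogue act. The sketch below is a minimal illustration, not the paper's implementation: the dimensions, the single linear shared layer, and the random weights are all hypothetical placeholders standing in for the trained hierarchical RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): a 16-dim input turn
# representation, an 8-dim shared layer, and 5 dialogue act classes.
TURN_DIM, SHARED_DIM, N_ACTS = 16, 8, 5

# Shared representation layer: after multi-task training, this layer's
# activations would be extracted as the "skip-act" turn embedding.
W_shared = rng.normal(size=(TURN_DIM, SHARED_DIM))

# Two task-specific heads on top of the same shared representation:
# one predicts the previous dialogue act, the other predicts the next one.
W_prev = rng.normal(size=(SHARED_DIM, N_ACTS))
W_next = rng.normal(size=(SHARED_DIM, N_ACTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def skip_act_forward(turn_vec):
    """Encode a turn; return its embedding and both act distributions."""
    shared = np.tanh(turn_vec @ W_shared)          # skip-act embedding
    return shared, softmax(shared @ W_prev), softmax(shared @ W_next)

turn = rng.normal(size=TURN_DIM)                   # dummy turn representation
emb, p_prev, p_next = skip_act_forward(turn)
print(emb.shape, p_prev.shape, p_next.shape)
```

Training would minimize the sum of the two heads' classification losses, so that the shared layer is pushed to encode the turn's dialogue context in both directions, analogous to how skip-thought vectors learn from adjacent sentences.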
Contributor: Jeremy Auguste
Submitted on: Friday, May 10, 2019, 11:55:22 AM
Last modification on: Wednesday, May 22, 2019, 1:33:54 AM




HAL Id: hal-02125259, version 1



Jeremy Auguste, Frédéric Béchet, Geraldine Damnati, Delphine Charlet. Skip Act Vectors: integrating dialogue context into sentence embeddings. Tenth International Workshop on Spoken Dialogue Systems Technology, Apr 2019, Syracuse, Italy. ⟨hal-02125259⟩


