Monotonic Multihead Attention

Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, Jiatao Gu

Keywords: attention, machine translation, transformer

Wed Session 4 (17:00-19:00 GMT)
Wed Session 5 (20:00-22:00 GMT)

Abstract: Simultaneous machine translation models start generating a target sequence before they have encoded or read the full source sequence. Recent approaches to this task either apply a fixed policy to a Transformer, or use learnable monotonic attention on top of a weaker recurrent neural network based architecture. In this paper, we propose a new attention mechanism, Monotonic Multihead Attention (MMA), which extends the monotonic attention mechanism to multihead attention. We also introduce two novel and interpretable approaches for latency control that are specifically designed for multiple attention heads. We apply MMA to the simultaneous machine translation task and demonstrate better latency-quality tradeoffs compared to MILk, the previous state-of-the-art approach.
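To make the mechanism in the abstract concrete, here is a minimal PyTorch sketch of greedy, inference-time hard monotonic attention applied independently in each head, in the spirit of MMA. The function name, tensor shapes, scaled dot-product energy, and the 0.5 stopping threshold are illustrative assumptions, not the authors' released implementation.

```python
import torch


def monotonic_multihead_step(q, keys, values, head_pos, threshold=0.5):
    """One greedy decoding step for a single target position.

    q:        (num_heads, d_head)           per-head query for the current target token
    keys:     (num_heads, src_len, d_head)  per-head projected encoder keys
    values:   (num_heads, src_len, d_head)  per-head projected encoder values
    head_pos: (num_heads,) long tensor      current monotonic read position of each head
    Returns per-head context vectors and the updated positions.
    """
    num_heads, src_len, d_head = keys.shape
    contexts = torch.zeros(num_heads, d_head)
    new_pos = head_pos.clone()
    for h in range(num_heads):
        j = int(head_pos[h])
        # Each head scans the source monotonically (never moving backwards)
        # and stops as soon as its selection probability exceeds the threshold.
        while j < src_len - 1:
            energy = torch.dot(q[h], keys[h, j]) / d_head ** 0.5
            if torch.sigmoid(energy) > threshold:
                break
            j += 1
        # Hard attention: the head attends only to the source state it stopped at.
        contexts[h] = values[h, j]
        new_pos[h] = j
    return contexts, new_pos


# Toy usage: 4 heads, 6 source states, 16-dimensional heads.
num_heads, src_len, d_head = 4, 6, 16
q = torch.randn(num_heads, d_head)
keys = torch.randn(num_heads, src_len, d_head)
values = torch.randn(num_heads, src_len, d_head)
head_pos = torch.zeros(num_heads, dtype=torch.long)
contexts, head_pos = monotonic_multihead_step(q, keys, values, head_pos)
# The per-head contexts would then be concatenated and passed through the
# usual output projection of multihead attention.
```

In the full model, training replaces this greedy rule with expected (soft) alignments so the stopping behaviour is differentiable, and the latency-control terms mentioned in the abstract act on the behaviour of all heads jointly rather than on a single head in isolation.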

Similar Papers

Logic and the 2-Simplicial Transformer
James Clift, Dmitry Doryn, Daniel Murfet, James Wallbridge
On Identifiability in Transformers
Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer
Adaptive Structural Fingerprints for Graph Attention Networks
Kai Zhang, Yaokang Zhu, Jun Wang, Jie Zhang