On Identifiability in Transformers

Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer

Keywords: attention, generation, interpretability, nlp, self attention, transformer

Tuesday Session 2 (08:00-10:00 GMT)
Tuesday Session 3 (12:00-14:00 GMT)

Abstract: In this paper we delve deep into the Transformer architecture by investigating two of its core components: self-attention and contextual embeddings. In particular, we study the identifiability of attention weights and token embeddings, and the aggregation of context into hidden tokens. We show that, for sequences longer than the attention head dimension, attention weights are not identifiable. We propose effective attention as a complementary tool for improving explanatory interpretations based on attention. Furthermore, we show that input tokens largely retain their identity across the model. We also find evidence suggesting that identity information is mainly encoded in the angle of the embeddings and gradually decreases with depth. Finally, we demonstrate strong mixing of input information in the generation of contextual embeddings by means of a novel quantification method based on gradient attribution. Overall, we show that self-attention distributions are not directly interpretable and present tools to better understand and further investigate Transformer models.
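The identifiability claim in the abstract can be made concrete with a short numerical check. The sketch below is a minimal NumPy illustration, not the authors' code, and all names and dimensions in it are our own assumptions: when the sequence length exceeds the attention head dimension, the value matrix has a non-trivial left null space, so distinct attention matrices produce the same head output; projecting the attention rows onto the column space of the value matrix yields one possible "effective attention" in this sense.

```python
# Minimal sketch (assumed single-head attention, toy dimensions) showing
# non-identifiability of attention weights and a projection-based
# effective attention. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
d_s, d_v = 10, 4                             # sequence length > head dimension

# Random row-stochastic attention weights A and value matrix V.
A = rng.random((d_s, d_s))
A /= A.sum(axis=1, keepdims=True)
V = rng.standard_normal((d_s, d_v))

# Since d_s > d_v, V has a non-trivial left null space: adding any matrix
# whose rows lie in that null space to A leaves the head output A @ V unchanged.
U, _, _ = np.linalg.svd(V)                   # full SVD; trailing columns of U span the left null space
null_basis = U[:, d_v:]                      # assumes V has full column rank d_v
perturb = rng.standard_normal((d_s, d_s - d_v)) @ null_basis.T
assert np.allclose((A + perturb) @ V, A @ V)  # same output, different "attention"

# Effective attention: project each row of A onto the column space of V,
# discarding the null-space component that cannot influence the output.
P = V @ np.linalg.pinv(V)                    # orthogonal projector onto col(V)
A_eff = A @ P
assert np.allclose(A_eff @ V, A @ V)         # identical head output
```

Note that the projected matrix need not be row-stochastic or non-negative, which is consistent with treating effective attention as an analysis tool rather than a drop-in replacement for the softmax weights.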

Similar Papers

Logic and the 2-Simplicial Transformer
James Clift, Dmitry Doryn, Daniel Murfet, James Wallbridge
Are Transformers universal approximators of sequence-to-sequence functions?
Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, Sanjiv Kumar
Lite Transformer with Long-Short Range Attention
Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, Song Han
Monotonic Multihead Attention
Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, Jiatao Gu