Multilingual Alignment of Contextual Word Representations

Steven Cao, Nikita Kitaev, Dan Klein

Keywords: nlp, transfer learning, transformer, word embeddings

Thurs Session 4 (17:00-19:00 GMT)
Thurs Session 5 (20:00-22:00 GMT)

Abstract: We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
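The contextual word retrieval task described in the abstract can be pictured as nearest-neighbor search over per-token embeddings from parallel sentences: each source-language token in context queries the target side, and retrieval succeeds when the closest target token is the gold-aligned one. Below is a minimal sketch in Python/NumPy under that reading, assuming contextual embeddings and gold word alignments have already been extracted; the function name and array shapes are illustrative and not the authors' implementation.

```python
import numpy as np

def retrieval_accuracy(src_vecs, tgt_vecs, gold_pairs):
    """Nearest-neighbor contextual word retrieval (illustrative sketch).

    src_vecs:   (n_src, d) contextual embeddings of source-language tokens.
    tgt_vecs:   (n_tgt, d) contextual embeddings of target-language tokens.
    gold_pairs: list of (src_idx, tgt_idx) gold-aligned token pairs.

    Each source token queries the whole target side; retrieval counts as
    correct when its cosine nearest neighbor is the gold-aligned token.
    """
    # Normalize rows so that dot products equal cosine similarities.
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)

    sim = src @ tgt.T              # (n_src, n_tgt) similarity matrix
    nearest = sim.argmax(axis=1)   # best target index for each source token

    correct = sum(nearest[i] == j for i, j in gold_pairs)
    return correct / len(gold_pairs)
```

Under this measure, a better-aligned model should rank the gold target token first more often; the abstract reports that such retrieval accuracy correlates with downstream zero-shot transfer on XNLI.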

Similar Papers

Incorporating BERT into Neural Machine Translation
Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, Tieyan Liu
A Baseline for Few-Shot Image Classification
Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, Luo Si
Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework
Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, Jaime G. Carbonell