Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

Taeuk Kim, Jihun Choi, Daniel Edmiston, Sang-goo Lee

Keywords: nlp

Wed Session 1 (05:00-07:00 GMT)
Wed Session 3 (12:00-14:00 GMT)

Abstract: With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.
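The abstract does not spell out the extraction procedure, so the following is only a minimal sketch of the general idea of training-free, distance-based constituency induction from a frozen LM. The use of cosine distance between adjacent word vectors as the scoring function, and the greedy top-down split, are illustrative assumptions rather than the authors' exact method; the toy vectors stand in for per-word hidden states taken from a pre-trained LM.

import numpy as np

def syntactic_distances(word_vecs):
    """Score each adjacent word pair: a higher value suggests a weaker attachment,
    i.e. a more likely constituent boundary (cosine distance used as a stand-in)."""
    dists = []
    for left, right in zip(word_vecs, word_vecs[1:]):
        cos = np.dot(left, right) / (np.linalg.norm(left) * np.linalg.norm(right))
        dists.append(1.0 - cos)
    return dists

def build_tree(words, dists):
    """Greedy top-down parsing: split the span at the largest adjacent distance,
    then recurse on each side, yielding a binary constituency tree."""
    if len(words) == 1:
        return words[0]
    k = int(np.argmax(dists))  # boundary between words[k] and words[k + 1]
    left = build_tree(words[:k + 1], dists[:k])
    right = build_tree(words[k + 1:], dists[k + 1:])
    return (left, right)

# Toy usage with random vectors in place of real LM representations.
words = ["the", "cat", "sat", "quietly"]
vecs = [np.random.default_rng(i).normal(size=8) for i in range(len(words))]
print(build_tree(words, syntactic_distances(vecs)))

In practice, the per-word vectors would come from a frozen pre-trained LM (e.g. intermediate hidden states), so the whole pipeline involves no parameter updates, matching the paper's claim of extraction "without training".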
