Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue

Byeongchang Kim, Jaewoo Ahn, Gunhee Kim

Keywords: generation, nlp, transformer

Thursday Session 1 (05:00-07:00 GMT)
Thursday Session 2 (08:00-10:00 GMT)
Thursday: Natural Language

Abstract: Knowledge-grounded dialogue is the task of generating an informative response based on both the discourse context and external knowledge. Focusing on better modeling of knowledge selection in multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this problem. The model, named sequential knowledge transformer (SKT), keeps track of the prior and posterior distributions over knowledge; as a result, it not only reduces the ambiguity caused by the diversity of knowledge selection in conversation but also better leverages the response information for the proper choice of knowledge. Our experimental results show that the proposed model improves knowledge selection accuracy and, subsequently, the quality of utterance generation. We achieve new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019), one of the largest and most challenging benchmarks for this task. We further validate the effectiveness of our model over existing conversation methods on Holl-E (Moghe et al., 2018), another knowledge-grounded dialogue dataset.
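
To make the abstract's core idea concrete, below is a minimal, illustrative PyTorch sketch of sequential latent knowledge selection with a context-only prior and a response-conditioned posterior. All module names, dimensions, and the GRU-based tracker are our assumptions for illustration, not the paper's exact architecture; the paper's actual model is transformer-based.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, kl_divergence

class SequentialKnowledgeSelector(nn.Module):
    """Illustrative sketch (not the paper's exact model).

    At each turn, a prior network scores candidate knowledge sentences
    from the dialogue context and the history of past selections, while
    a posterior network additionally conditions on the gold response.
    Training pushes the prior toward the posterior via a KL term, so at
    inference time the prior alone can select the knowledge.
    """

    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        # Recurrent state summarizing previously selected knowledge.
        self.tracker = nn.GRUCell(hidden_dim, hidden_dim)
        self.prior_proj = nn.Linear(2 * hidden_dim, hidden_dim)
        self.post_proj = nn.Linear(3 * hidden_dim, hidden_dim)

    def forward(self, ctx, knowledge, resp, state):
        # ctx:       (B, H)    encoding of the current dialogue context
        # knowledge: (B, K, H) encodings of K candidate knowledge sentences
        # resp:      (B, H)    encoding of the gold response (training only)
        # state:     (B, H)    recurrent summary of past knowledge choices
        prior_query = self.prior_proj(torch.cat([ctx, state], dim=-1))
        post_query = self.post_proj(torch.cat([ctx, resp, state], dim=-1))
        # Dot-product scores over candidates define the two distributions.
        prior = Categorical(logits=torch.einsum("bh,bkh->bk", prior_query, knowledge))
        post = Categorical(logits=torch.einsum("bh,bkh->bk", post_query, knowledge))
        # Hard sample for brevity; a continuous relaxation such as
        # Gumbel-Softmax would keep this step differentiable.
        idx = post.sample()
        chosen = knowledge[torch.arange(knowledge.size(0)), idx]
        new_state = self.tracker(chosen, state)
        # KL(posterior || prior) regularizes the prior toward the posterior.
        kl = kl_divergence(post, prior).mean()
        return idx, kl, new_state
```

The selected knowledge sentence would then be fed, together with the context, into a response decoder; summing the KL term with the generation loss trains both distributions jointly.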

Similar Papers

Low-Resource Knowledge-Grounded Dialogue Generation
Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan
Differentiable Reasoning over a Virtual Knowledge Base
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen
Non-Autoregressive Dialog State Tracking
Hung Le, Richard Socher, Steven C.H. Hoi
The Early Phase of Neural Network Training
Jonathan Frankle, David J. Schwab, Ari S. Morcos