Comprehensive Conditioning of Neural Conversational Models

Richárd Csáky
Dec. 14, 2017, 8:15
MTA SZTAKI (Lágymányosi u. 11, Budapest) Room 306

Neural network-based approaches to conversational modeling have been prevalent over the last three years. While a multitude of techniques have been proposed to improve the performance of dialog agents, open-domain chatbots still tend to produce generic and safe responses with little diversity [Li et al., 2015; Vinyals and Le, 2015]. This is caused by the learning target for training conversational models not being well defined. In my work I plan to explore ideas that address this issue, namely building dialog models conditioned on additional prior information such as persona, mood, world knowledge, and outside factors. In theory, this should prevent the models from simply averaging out ambiguities in the dataset, so that a more natural and diverse chatbot can be created.
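To make the conditioning idea concrete, below is a minimal sketch (in PyTorch) of a decoder whose initial state depends on an extra persona embedding alongside the encoder's final state. The class and parameter names (e.g. ConditionedDecoder, persona_ids) are illustrative assumptions, not the speaker's actual implementation; the same pattern could carry mood or other outside factors.

```python
import torch
import torch.nn as nn

class ConditionedDecoder(nn.Module):
    """Toy GRU decoder conditioned on an extra persona embedding
    in addition to the encoder's final state (illustrative only)."""

    def __init__(self, vocab_size, hidden_size, persona_size, num_personas):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.persona_embed = nn.Embedding(num_personas, persona_size)
        # Project [encoder state ; persona embedding] to the decoder's hidden size.
        self.bridge = nn.Linear(hidden_size + persona_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, target_tokens, encoder_state, persona_ids):
        # encoder_state: (batch, hidden_size); persona_ids: (batch,)
        persona = self.persona_embed(persona_ids)
        h0 = torch.tanh(self.bridge(torch.cat([encoder_state, persona], dim=-1)))
        # Condition generation on both the dialog context and the persona.
        outputs, _ = self.gru(self.embed(target_tokens), h0.unsqueeze(0))
        return self.out(outputs)  # (batch, seq_len, vocab_size) logits
```

The intent of such conditioning is that two different personas (or moods) given the same input would produce different response distributions, rather than one averaged, generic reply.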