Language Models are Open Knowledge Graphs
Knowledge Graphs (aka ontologies) represent knowledge in a human-readable form. They are usually hand-crafted resources that focus on domain knowledge and add great value in real-world NLP applications, but their construction and maintenance are very expensive. On the other hand, neural language models (e.g., BERT, GPT-2/3) learn language representations without human supervision. They have recently revolutionised NLP, achieving state-of-the-art accuracy in many applications without any feature engineering, and many pre-trained language models are freely available. In this talk, we shall summarise the findings of four recent papers that aim to automatically extract Knowledge Graphs from pre-trained language models.
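To make the goal concrete, the output these papers target is a set of (head entity, relation, tail entity) triples. The toy sketch below only illustrates the shape of such a triple by taking the text between two known entity mentions as the relation; the papers themselves derive triples from a language model's internals (e.g., attention weights), not from raw string matching, and the function name and example sentence here are illustrative assumptions.

```python
# Toy illustration of the (head, relation, tail) triple format that
# knowledge-graph extraction produces. This is NOT the papers' method:
# it naively takes the text between two known entity mentions as the
# relation, purely to show what a single extracted fact looks like.

def extract_triple(sentence, head, tail):
    """Return (head, relation, tail) if both entities occur in order, else None."""
    start = sentence.find(head)
    if start == -1:
        return None
    end = sentence.find(tail, start + len(head))
    if end == -1:
        return None
    relation = sentence[start + len(head):end].strip()
    return (head, relation, tail)

print(extract_triple("Bob Dylan is a songwriter", "Bob Dylan", "songwriter"))
# → ('Bob Dylan', 'is a', 'songwriter')
```

A real extraction pipeline would additionally link "Bob Dylan" and "songwriter" to entries in an existing knowledge base and map the relation phrase to a canonical relation type.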