Investigation of the relationship between sparse word representations and their interpretability
The link is available on Slack and the mailing list.
The next speakers of the HLT seminar are Gábor Berend and György Turán: Investigation of the relationship between sparse word representations and their interpretability.
Models that represent the meaning of words with continuous vectors have become the main tool of natural language processing. Continuous word vectors have enabled outstanding results in many applications, but the interpretability of the word representations these models use is rather limited. Our goal is to create vector meaning representations in which the individual coordinates can be interpreted directly, by associating them with everyday concepts and properties using information-theoretic methods. We also generalize our method, originally developed for static word embeddings, to contextual embeddings.
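As a rough illustration of the kind of technique the abstract alludes to (not the speakers' actual method), one common route to interpretable coordinates is to re-encode dense word vectors as sparse codes over an overcomplete dictionary, so that each word activates only a few dimensions that can then be inspected and labeled. The sketch below uses l1-regularized sparse coding via ISTA on toy random "embeddings"; the words, dimensions, and dictionary are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense "embeddings": 8 words in 6 dimensions (hypothetical stand-ins
# for real vectors such as GloVe or word2vec).
words = ["cat", "dog", "car", "bus", "apple", "pear", "red", "blue"]
X = rng.normal(size=(len(words), 6))

# Overcomplete dictionary of 10 atoms; in practice the dictionary would be
# learned from data rather than drawn at random.
D = rng.normal(size=(10, 6))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def ista(x, D, lam=0.3, n_iter=200):
    """l1-regularized sparse coding of one vector via ISTA."""
    L = np.linalg.norm(D @ D.T, 2)  # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[0])
    for _ in range(n_iter):
        grad = D @ (D.T @ a - x)                               # gradient step
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

codes = np.stack([ista(x, D) for x in X])  # shape (8, 10), mostly zeros
sparsity = np.mean(codes == 0)
```

Because each word loads on only a few dictionary atoms, a dimension can be characterized by the words (or, with external resources, the concepts and properties) most strongly associated with it, which is where information-theoretic association measures come in.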