
Indirect Probing Experiments

Endre Hamerlik
Feb. 28, 2024, 15:00
Budapest

In this HLT Seminar talk, we will delve into ongoing work, inspired by Ács et al. (2023), on perturbed probing experiments, examining how masked language models (MLMs) behave when their inputs are perturbed. Our exploration will shed light on phenomena that are not yet fully explained, such as the role of randomly weighted MLMs in language understanding and generation, and the intriguing pattern of left-context dependence observed in pretrained MLMs. We will discuss the implications of these findings and address crucial open questions in the field, chief among them why MLMs depend asymmetrically on context, and the overarching question of whether probing is a valid method for interpreting these complex models. The seminar aims not only to highlight these recent findings but also to foster discussion that could pave the way for future breakthroughs in our understanding of MLMs.
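As a rough illustration of the kind of context-perturbation experiment discussed above, the sketch below masks one position in a sentence and compares a pretrained MLM's predictions when the full context, only the left context, or only the right context is available. It is a minimal sketch, not the code used in the talk: the checkpoint name, example sentences, and helper function are assumptions chosen for illustration.

```python
# Minimal sketch of a context-perturbation probe for a masked language model.
# Assumptions: the multilingual BERT checkpoint and the example sentences below
# are illustrative choices, not the experimental setup from the talk.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-multilingual-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def top_predictions(text, k=5):
    """Return the k most likely fillers for the [MASK] token in `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    # Position of the first [MASK] token in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return list(zip(tokenizer.convert_ids_to_tokens(top.indices.tolist()),
                    [round(p, 3) for p in top.values.tolist()]))

mask = tokenizer.mask_token
full = f"The children {mask} playing in the garden all afternoon."
left_only = f"The children {mask}."                         # right context removed
right_only = f"{mask} playing in the garden all afternoon."  # left context removed

# Comparing the three conditions gives a crude view of how much the model
# relies on left vs. right context at the masked position.
for name, sentence in [("full", full), ("left-only", left_only), ("right-only", right_only)]:
    print(name, top_predictions(sentence))
```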

Video

Slides

Ács, J., Hamerlik, E., Schwartz, R., Smith, N. A., & Kornai, A. (2023). Morphosyntactic probing of multilingual BERT models. Natural Language Engineering. Published online 2023, 1–40. doi:10.1017/S1351324923000190