
Karim Lasri
Graduate student @ École Normale Supérieure
syntactic structure
number agreement
interpretability of language models
neural language models
probing
word order
SHORT BIO
Transformer-based neural architectures hold great promise, as they appear to address a wide range of linguistic tasks after learning a language model. However, the level of abstraction they reach through training remains opaque. My main research focus is to better understand how neural language models generalize. Which linguistic properties do these architectures acquire during learning? How is linguistic information encoded in their intermediate representation spaces?
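As a minimal illustration of the kind of behavioral probe used in this line of work (a sketch, not code from these studies), the snippet below uses the Hugging Face transformers library to compare the scores BERT assigns to a singular versus a plural verb form at a masked position in a nonce sentence. The model name, the sentence, and the verb pair are illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative choice of model; agreement studies often use bert-base.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# A grammatical but meaningless sentence frame; the verb is masked.
sentence = "The colorless ideas near the tree [MASK] furiously."
inputs = tokenizer(sentence, return_tensors="pt")

# Locate the masked position in the tokenized input.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_idx]

# Compare the logits for the plural and singular verb forms: a model
# sensitive to number agreement should prefer the plural here.
for form in ["sleep", "sleeps"]:
    tok_id = tokenizer.convert_tokens_to_ids(form)
    print(f"{form}: {logits[tok_id].item():.2f}")

Running such probes over many sentence frames, with and without lexical meaning, is one way to ask whether a model's agreement behavior reflects abstract syntactic structure rather than semantic cues.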
Presentations

Word Order Matters When You Increase Masking
Karim Lasri and 2 other authors

Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT
Karim Lasri