
Karim Lasri

Graduate student @ École Normale Supérieure

Research interests: syntactic structure; number agreement; interpretability of language models; neural language models; probing; word order

2 presentations | 13 views

SHORT BIO

Transformer-based neural architectures hold great promise, as they seem to address a wide range of linguistic tasks after learning a language model. However, the level of abstraction they reach through training remains opaque. My main research focus is better understanding how neural language models generalize. Which linguistic properties do these architectures acquire during learning? How is linguistic information encoded in their intermediate representation spaces?
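The second question is typically studied with probing classifiers: a simple model is trained to predict a linguistic feature from a network's hidden states, and its accuracy is read as evidence that the feature is (linearly) decodable. The sketch below illustrates the idea with synthetic vectors standing in for real model activations; the feature, labels, and cluster geometry are all illustrative assumptions, not data from any actual model.

```python
# Minimal linear-probe sketch: train a classifier to predict a
# linguistic feature (here, grammatical number) from hidden vectors.
# The vectors are synthetic stand-ins for real model activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64

# Two synthetic clusters standing in for "singular" and "plural" states.
singular = rng.normal(0.0, 1.0, size=(100, dim)) + 1.0
plural = rng.normal(0.0, 1.0, size=(100, dim)) - 1.0
X = np.vstack([singular, plural])
y = np.array([0] * 100 + [1] * 100)  # 0 = singular, 1 = plural

# A linear probe: high accuracy suggests the feature is linearly decodable.
probe = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = probe.score(X, y)
print(f"probe accuracy: {accuracy:.2f}")
```

In practice the vectors would come from a specific layer of a trained model, and held-out accuracy (with control tasks or random baselines) is what licenses any claim about encoded information.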

Presentations

Word Order Matters When You Increase Masking

Karim Lasri and 2 other authors

Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT

Karim Lasri

