
Dara Bahri
Research Scientist @ Google Research
Topics: language models, transformers, text generation, optimization, pretraining, re-ranking, masked language modeling, encoder-decoder models, unsupervised dependency parsing, unsupervised constituency parsing, convolutions, sharpness-aware minimization
4 presentations · 3 views
SHORT BIO
Research Scientist at Google Research working on language models and reliable deep learning.
Presentations

ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference
Kai Hui and 10 other authors

Sharpness-Aware Minimization Improves Language Model Generalization
Dara Bahri and 2 other authors

StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling
Yikang Shen and 5 other authors

Are Pretrained Convolutions Better than Pretrained Transformers?
Yi Tay and 6 other authors