Dara Bahri

Research Scientist @ Google Research

Topics: language models, transformers, text generation, optimization, pretraining, re-ranking, masked language models, encoder-decoder models, unsupervised dependency parsing, unsupervised constituency parsing, convolutions, sharpness-aware minimization

4 presentations, 3 views

SHORT BIO

Research Scientist at Google Research working on language models and topics in reliable deep learning.

Presentations

ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference

Kai Hui and 10 other authors

Sharpness-Aware Minimization Improves Language Model Generalization

Dara Bahri and 2 other authors

StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling

Yikang Shen and 5 other authors

Are Pretrained Convolutions Better than Pretrained Transformers?

Yi Tay and 6 other authors
