Daniel Cer

Industry Research Scientist @ Google

TOPICS

prompt tuning, sentence embeddings, semantic similarity, soft prompt transfer, parameter-efficient tuning, large encoders, language-agnostic, BERT, zero-shot cross-lingual generation

5 presentations · 13 views · 1 citation

SHORT BIO

Daniel Cer is a senior research scientist at Google Research. His work focuses on representation learning with deep learning methods for natural language processing (NLP) tasks such as semantic similarity, question answering (QA), semantic retrieval (SR), bitext mining, and text classification.
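Semantic similarity with sentence embeddings is typically scored by encoding each sentence into a fixed-size vector and comparing the vectors with cosine similarity. The sketch below illustrates only that scoring step with toy hand-written vectors; the vectors are not the output of any model mentioned on this page (real encoders such as LaBSE or Sentence-T5 produce vectors with hundreds of dimensions).

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "sentence embeddings" for illustration only.
emb_a = np.array([0.1, 0.3, 0.5, 0.7])   # sentence A
emb_b = np.array([0.1, 0.3, 0.5, 0.7])   # a paraphrase-identical sentence
emb_c = np.array([0.9, -0.2, 0.1, 0.0])  # an unrelated sentence

print(cosine_similarity(emb_a, emb_b))  # identical vectors score 1.0
print(cosine_similarity(emb_a, emb_c))  # dissimilar vectors score lower
```

A higher score indicates the encoder places the two sentences closer together in embedding space, which is the basis for tasks like semantic retrieval and bitext mining.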

Presentations

Language-agnostic BERT Sentence Embedding

Daniel Cer and 4 other authors

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

Daniel Cer and 6 other authors

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

Daniel Cer and 4 other authors

A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations

Ziyi Yang and 3 other authors

Universal Sentence Representation Learning with Conditional Masked Language Model

Ziyi Yang and 4 other authors
