Patrick Lewis

Topics: multilingual, efficiency, multilinguality, safety, adaptation, large language model, pre-trained language models, llm, sequence labeling, instruction tuning, neural retrieval, toxicity, efficient nlp, information retrieval, cross-lingual

6 presentations · 4 views

SHORT BIO

Patrick Lewis is a Research Scientist at Meta AI in London. He recently completed his PhD at UCL and FAIR under the supervision of Sebastian Riedel and Pontus Stenetorp. Patrick has worked extensively on Question Answering and, more broadly, on knowledge-intensive NLP: tasks that require substantial world knowledge to do well on, and for which an average human would need access to a search engine, library, or other external knowledge source. He is interested in how to build and evaluate models that precisely access, apply, and attribute knowledge, with an emphasis on accuracy, updateability, controllability, and interpretability.

Presentations

From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models

Beyza Ermis and 3 other authors

On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

Luiza Amador Pozzobon and 3 other authors

Mini-Model Adaptation: Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training

Kelly Marchisio and 3 other authors

Task-aware Retrieval with Instructions

Akari Asai and 7 other authors

Boosted Dense Retriever

Patrick Lewis and 5 other authors
