
Shrimai Prabhumoye

Research Scientist @ NVIDIA

Topics: language models, code-switching, ethics, grammatical error correction, interpretability, knowledge, large language models, memorization, prompting, intervention, counterfactual, toxicity, multilingualism, dialogue model, multi-stage

4 presentations

SHORT BIO

I am a Research Scientist in the Applied Deep Learning Research Group at NVIDIA, where I work on building large language models (LLMs). I also work on applications of LLMs such as dialogue and question-answering systems, as well as on reducing their bias and toxicity. Before that, I earned a PhD from the Language Technologies Institute, School of Computer Science, Carnegie Mellon University. My thesis focused on controllable text generation, covering style, content, and structure, as well as its ethical considerations. I co-designed the Computational Ethics for NLP course, first offered at CMU in Spring 2018.

Presentations

Data, Data Everywhere: A Guide for Pretraining Dataset Construction

Jupinder Parmar and 8 other authors

LLM-Evolve: Evaluation for LLM’s Evolving Capability on Benchmarks

Jiaxuan You and 5 other authors

Adding Instructions during Pretraining: Effective way of Controlling Toxicity in Language Models

Shrimai Prabhumoye and 3 other authors

Multi-Stage Prompting for Knowledgeable Dialogue Generation

Zihan Liu and 6 other authors
