
Eve Fleisig

PhD Student @ UC Berkeley

Topics: harm measurement, annotator disagreement, AI fairness, benchmarks, Mechanical Turk, position papers, hate speech, large language models, machine learning, data collection, toxicity detection, in-context learning, AI ethics


SHORT BIO

Eve Fleisig is a third-year PhD student at UC Berkeley, advised by Rediet Abebe and Dan Klein. Her research lies at the intersection of natural language processing and AI ethics, with a focus on preventing societal harms of text generation models and improving large language model evaluation. Previously, she received a B.S. in computer science from Princeton University. She is a Berkeley Chancellor’s Fellow and recipient of the NSF Graduate Research Fellowship.

Presentations

Accurate and Data-Efficient Toxicity Prediction when Annotators Disagree

Harbani Jaggi and 3 other authors

Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination

Eve Fleisig and 5 other authors

The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels

Eve Fleisig and 3 other authors

Ghostbuster: Detecting Text Ghostwritten by Large Language Models

Vivek Verma and 3 other authors

Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection

Vyoma Raman and 2 other authors

Incorporating Worker Perspectives into MTurk Annotation Practices for NLP

Olivia Huang and 2 other authors

When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks

Eve Fleisig and 2 other authors

FairPrism: Evaluating Fairness-Related Harms in Text Generation

Eve Fleisig and 8 other authors
