
Isabelle Augenstein

Professor @ University of Copenhagen

Topics: fact checking, explainability, NLP, survey, historical documents, retrieval-augmented generation, checklist, AI ethics, language modeling, probing, misinformation, bias, interpretability, scholarly document processing, harms

43 presentations · 49 views · 2 citations

SHORT BIO

Isabelle Augenstein is a full professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. She is also a co-lead of the Pioneer Centre for Artificial Intelligence. Her main research interests are fair and accountable NLP, including challenges such as explainability, factuality, and bias detection.

Presentations

Investigating Human Values in Online Communities

Nadav Borenstein and 3 other authors

Measuring and Benchmarking Large Language Models’ Capabilities to Generate Persuasive Language

Amalie Brogaard Pauli and 2 other authors

Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations

Yong Cao and 5 other authors

Evaluating Input Feature Explanations through a Unified Diagnostic Evaluation Framework

Jingyi Sun and 2 other authors

Revealing Fine-Grained Values and Opinions in Large Language Models

Dustin Wright and 5 other authors

Grammatical Gender’s Influence on Distributional Semantics: A Causal Perspective

Karolina Stanczak and 4 other authors

From Internal Conflict to Contextual Adaptation of Language Models

Sara Vera Marjanovic and 5 other authors

Social Bias Probing: Fairness Benchmarking for Language Models

Marta Marchiori Manerba and 3 other authors

Factcheck-Bench: Fine-Grained Evaluation Benchmark for Automatic Fact-checkers

Yuxia Wang and 12 other authors

Can Transformer Language Models Learn n-gram Language Models?

Anej Svete and 4 other authors

Understanding Fine-grained Distortions in Reports of Scientific Findings

Amelie Wuehrl and 3 other authors

Investigating the Impact of Model Instability on Explanations and Uncertainty

Sara Vera Marjanovic and 2 other authors

Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods

Haeun Yu and 2 other authors

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

Erik Arakelyan and 2 other authors

Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions

Lucie-Aimée Kaffee and 2 other authors

PHD: Pixel-Based Language Modeling of Historical Documents

Nadav Borenstein and 3 other authors
