Dawei Zhu

Topics: weak supervision, noisy labels, machine translation, low resource, natural language inference, self-training, re-ranking, politics, uncertainty quantification, conformal prediction, calibration, label noise, alignment, training dynamics, spurious correlations

10 presentations

SHORT BIO

Ph.D. student at Saarland University. Main research focus: low-resource machine learning in NLP, weak supervision, and machine translation.

Presentations

LawBench: Benchmarking Legal Knowledge of Large Language Models

Zhiwei Fei and 11 other authors

From Coarse to Fine: Impacts of Feature-Preserving and Feature-Compressing Connectors on Perception in Multimodal Models

Junyan Lin and 3 other authors

Robust Pronoun Fidelity with English LLMs: Are they Reasoning, Repeating, or Just Biased?

Vagrant Gautam and 4 other authors

Fine-Tuning Large Language Models to Translate: Will a Touch of Noisy Data in Misaligned Languages Suffice?

Dawei Zhu and 5 other authors

Assessing “Implicit” Retrieval Robustness of Large Language Models

Xiaoyu Shen and 4 other authors

A Preference-driven Paradigm for Enhanced Translation with Large Language Models

Dawei Zhu and 5 other authors

Weaker Than You Think: A Critical Look at Weakly Supervised Learning

Dawei Zhu and 4 other authors

Meta Self-Refinement for Robust Learning with Weak Supervision

Dawei Zhu and 3 other authors

Analysing the Noise Model Error for Realistic Noisy Label Data

Michael A. Hedderich and 2 other authors

Exploring Reward Model Strength's Impact on Language Models

Yanjun Chen and 5 other authors
