Wenyue Hua

Research topics: large language model, benchmark, reasoning, privacy, counterfactual reasoning, language model, model interpretability, ranking, safety, efficiency, artifacts, question answering, computational complexity, evaluation benchmark, adversarial attacks

SHORT BIO

Wenyue Hua is a Ph.D. candidate in Computer Science at Rutgers University, specializing in natural language processing and large-language-model-based agents. Her research focuses on multi-agent systems, model editing methods, and safety in large foundation models. Her work has been published at ICLR, NeurIPS, EMNLP, TACL, SIGIR, and other venues.

Presentations

TrustAgent: Towards Safe and Trustworthy LLM-based Agents through Agent Constitution

Wenyue Hua and 6 other authors

BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis

Shuhang Lin and 9 other authors

MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate

Alfonso Amayuelas and 5 other authors

NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes

Lizhou Fan and 4 other authors

Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks

Wenyue Hua and 5 other authors

The Impact of Reasoning Step Length on Large Language Models

Mingyu Jin and 7 other authors

Discover, Explain, Improve: An Automatic Slice Detection Benchmark for Natural Language Processing

Wenyue Hua and 1 other author

System 1 + System 2 = Better World: Neural-Symbolic Chain of Logic Reasoning

Wenyue Hua and 1 other author
