
Moxin Li

PhD student @ National University of Singapore

RESEARCH TOPICS

  • spurious correlation
  • counterfactual training
  • hypothetical examples
  • machine reading comprehension
  • e-commerce
  • large language models
  • mathematical reasoning
  • retrieval augmented generation
  • hypothetical question answering
  • counterfactual thinking
  • neural discrete reasoning
  • prompt optimization
  • robustness
  • mathematical NLP

7 presentations · 2 views

SHORT BIO

I am a third-year PhD student at the National University of Singapore (NUS). I am interested in question answering, causal NLP, and large language models.

Presentations

Dual-Phase Accelerated Prompt Optimization

Muchen Yang and 7 other authors

Gotcha! Don't trick me with unanswerable questions! Self-aligning Large Language Models for Proactively Responding to Unknown Questions

Yang Deng and 4 other authors

Evaluating Mathematical Reasoning of Large Language Models: A Focus on Error Identification and Correction

Xiaoyuan Li and 5 other authors

Robust Prompt Optimization for Large Language Models Against Distribution Shifts

Moxin Li and 5 other authors

Hypothetical Training for Robust Machine Reading Comprehension of Tabular Context

Moxin Li

Learning to Imagine: Integrating Counterfactual Thinking in Neural Discrete Reasoning

Moxin Li and 5 other authors

Underline Science, Inc.
1216 Broadway, 2nd Floor, New York, NY 10001, USA

© 2025 Underline - All rights reserved