
Thanh Nguyen-Tang

Topics: reinforcement learning, machine learning, offline reinforcement learning, function approximation, instance-dependent bounds


SHORT BIO

Thanh Nguyen-Tang is a postdoctoral research fellow in the Department of Computer Science at Johns Hopkins University. His research focuses on the algorithmic and theoretical foundations of modern machine learning, with the aim of building data-efficient, deployment-efficient, and robust AI systems. His current research topics include reinforcement learning, learning under distributional shift, adversarially robust learning, probabilistic deep learning, and representation learning. He has published his work in top-tier machine learning venues including NeurIPS, ICLR, AISTATS, AAAI, and TMLR. Thanh completed his Ph.D. in Computer Science at the Applied AI Institute, Deakin University, Australia.

Presentations

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

Thanh Nguyen-Tang and 4 other authors

Distributional Reinforcement Learning via Moment Matching

Thanh Nguyen-Tang and 2 other authors
