
Thanh Nguyen-Tang
Short Bio
Thanh Nguyen-Tang is a postdoctoral research fellow in the Department of Computer Science at Johns Hopkins University. His research focuses on the algorithmic and theoretical foundations of modern machine learning, with the aim of building data-efficient, deployment-efficient, and robust AI systems. His current research topics include reinforcement learning, learning under distributional shifts, adversarially robust learning, probabilistic deep learning, and representation learning. He has published in top-tier machine learning venues including NeurIPS, ICLR, AISTATS, AAAI, and TMLR. He completed his Ph.D. in Computer Science at the Applied AI Institute at Deakin University, Australia.
Presentations

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
Thanh Nguyen-Tang and 4 other authors

Distributional Reinforcement Learning via Moment Matching
Thanh Nguyen-Tang and 2 other authors