
Te-Lin Wu
constrained decoding, pre-training, multimodal, AR/VR, paraphrasing, instructions, prompt engineering, multimodal dialogue, LLM prompting, pre-conditions, action-knowledge, instructional manuals, video-language modeling
SHORT BIO
Te-Lin (Albert) Wu is a computer science PhD candidate at the University of California, Los Angeles (UCLA), where he focuses on multimodal NLP research, specifically multimodal representation learning, grounded task-instruction understanding, LLMs with multimodal abilities, and commonsense reasoning, under the supervision of Professor Nanyun (Violet) Peng. Prior to UCLA, he received his master's degree in Electrical Engineering from Stanford University, where he worked with Professor Silvio Savarese on computer vision research.
Presentations

VDebugger: Harnessing Execution Feedback for Debugging Visual Programs
Xueqing Wu and 6 other authors

SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams
Te-Lin Wu

Learning Action Conditions from Instructional Manuals for Instruction Understanding
Te-Lin Wu

Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals
Te-Lin Wu and 5 other authors

MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification
Te-Lin Wu and 4 other authors