
Mohit Bansal
summarization
llms
compositional generalization
data augmentation
interpretability
code generation
alignment
language generation
benchmark
continual learning
conversation
factuality
large language models
commonsense reasoning
dataset
86 presentations · 103 views · 2 citations
SHORT BIO
Dr. Mohit Bansal is the John R. & Louise S. Parker Professor and the Director of the MURGe-Lab (UNC-NLP Group) in the Computer Science department at UNC Chapel Hill. He received his PhD from UC Berkeley in 2013 and his BTech from IIT Kanpur in 2008. His research expertise is in natural language processing and multimodal machine learning, with a particular focus on multimodal generative models, grounded and embodied semantics, language generation and Q&A/dialogue, and interpretable and generalizable deep learning. He is a recipient of the IIT Kanpur Young Alumnus Award, the DARPA Director's Fellowship, the NSF CAREER Award, a Google Focused Research Award, a Microsoft Investigator Fellowship, the Army Young Investigator Award (YIP), the DARPA Young Faculty Award (YFA), and outstanding paper awards at ACL, CVPR, EACL, COLING, and CoNLL. He has been a keynote speaker at the AACL 2023 and INLG 2022 conferences. His service includes the ACL Executive Committee, the ACM Doctoral Dissertation Award Committee, CoNLL Program Co-Chair, ACL Americas Sponsorship Co-Chair, and Associate/Action Editor for the TACL, CL, IEEE/ACM TASLP, and CSL journals. Webpage: https://www.cs.unc.edu/~mbansal/
Presentations

A Simple LLM Framework for Long-Range Video Question-Answering
Ce Zhang and 6 other authors

Knowledge-Aware Reasoning over Multimodal Semi-structured Tables
Suyash Vardhan Mathur and 7 other authors

Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM
Haw-Shiuan Chang and 4 other authors

LLM Self-Correction with DeCRIM: Decompose, Critique, and Refine for Enhanced Following of Instructions with Multiple Constraints
Thomas Palmeira Ferraz and 9 other authors

Opening Session
Yaser Al-Onaizan and 3 other authors

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks
Peter Hase and 3 other authors

Soft Self-Consistency Improves Language Model Agents
Han Wang and 3 other authors

Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Yichen Jiang and 2 other authors

The Power of Summary-Source Alignments
Ori Ernst and 7 other authors

ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
Justin Chen and 2 other authors

Evaluating Very Long-Term Conversational Memory of LLM Agents
Adyasha Maharana and 5 other authors

REFINESUMM: Self-Refining MLLM for Generating a Multimodal Summarization Dataset
Vaidehi Ramesh Patil and 4 other authors

Prompting Vision-Language Models For Aspect-Controlled Generation of Referring Expressions
Danfeng Guo and 7 other authors

ADaPT: As-Needed Decomposition and Planning with Language Models
Archiki Prasad and 6 other authors

Branch-Solve-Merge Improves Large Language Model Evaluation and Generation
Swarnadeep Saha and 5 other authors

VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation
Jialu Li and 3 other authors