Qifan Wang

Research topics: attribute value extraction, large language model, generation, question answering, end-to-end, reasoning, knowledge distillation, few-shot learning, weak supervision, natural language processing, text classification, task-oriented dialog, question generation, self-supervised learning, data augmentation

20 presentations · 33 views

SHORT BIO

I am a Research Scientist at Meta AI, leading a team building innovative Deep Learning and Natural Language Processing models for recommendation systems. Before joining Meta, I worked as a Research Engineer at Google Research, focusing on deep domain representations and large-scale object understanding. I also worked at Intel Labs for two years. I received my PhD in computer science from Purdue University in 2015. Prior to that, I obtained both my MS and BS degrees in computer science from Tsinghua University. My research interests include deep learning, natural language processing, information retrieval, data mining, and computer vision. I have co-authored over 80 publications in top-tier conferences and journals, including NeurIPS, SIGKDD, WWW, SIGIR, AAAI, IJCAI, ACL, EMNLP, CVPR, WSDM, CIKM, ECCV, TPAMI, TKDE, and TOIS. I also serve as an area chair, program committee member, editorial board member, and reviewer for academic conferences and journals.

Presentations

MPT: Multimodal Prompt Tuning for Zero-shot Instruction Learning

Taowen Wang and 13 other authors

Direct Multi-Turn Preference Optimization for Language Agents

Wentao Shi and 4 other authors

InternalInspector $I^2$: Robust Confidence Estimation in LLMs through Internal States

Mohammad Beigi and 9 other authors

Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning

Zhiyang Xu and 8 other authors

Multimodal Instruction Tuning with Conditional Mixture of LoRA

Ying Shen and 5 other authors

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

Suyu Ge and 7 other authors

LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models

Chi Han and 6 other authors

RESPROMPT: Residual Connection Prompting Advances Multi-Step Reasoning in Large Language Models

Song Jiang and 10 other authors

LLM-Rec: Personalized Recommendation via Prompting Large Language Models

Hanjia Lyu and 9 other authors

Ameli: Enhancing Multimodal Entity Linking with Fine-Grained Attributes

Barry Menglong Yao and 7 other authors

AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression

Siyue Wu and 4 other authors

Disentangled Phonetic Representation for Chinese Spelling Correction

Zihong Liang and 2 other authors

MixPAVE: Mix-Prompt Tuning for Few-shot Product Attribute Value Extraction

Qifan Wang

RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank

Jiduan Liu and 8 other authors

MUSTIE: Multimodal Structural Transformer for Web Information Extraction

Qifan Wang

Orders Are Unwanted: Dynamic Deep Graph Convolutional Network for Personality Detection

Tao Yang and 3 other authors
