Junyang Lin

TOPICS

robustness, text generation, large language models, text-to-SQL, multitask, cross-modal retrieval, text recognition, multimodal pre-training, image-text retrieval, relation alignment, prompt tuning, preference learning, multimodal pretrained models, supervised fine-tuning

SHORT BIO

Junyang Lin is a staff engineer at DAMO Academy, Alibaba Group. He graduated from Peking University. His research interests are in natural language processing and multimodal representation learning, with a focus on large-scale pretraining. He has published papers at NeurIPS, ICML, ACL, etc. Previously, he developed the extremely large-scale pretrained model M6, the unified multimodal multitask model OFA, and the cross-modal representation model Chinese CLIP. Recently, he has been leading the development of the large language model Qianwen, working on pretraining, alignment, multimodal integration, and AI agents.

PRESENTATIONS

Fine-Tuning Language Models with Collaborative and Semantic Experts

Binyuan Hui and 6 other authors

Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?

Zhe Yang and 6 other authors

Synthesizing Text-to-SQL Data from Weak and Strong LLMs

Jiaxi Yang and 5 other authors

Prompt Tuning for Unified Multimodal Pretrained Models

Junyang Lin and 1 other author

Transferring General Multimodal Pretrained Models to Text Recognition

Junyang Lin

Learning Relation Alignment for Calibrated Cross-modal Retrieval

Shuhuai Ren and 7 other authors
