Daniel Campos

Graduate student @ The University of Illinois at Urbana-Champaign

Topics: pruning, summarization, dense retrieval, efficient inference, language models, representation alignment

3 presentations

SHORT BIO

Hailing from Mexico, Daniel started his NLP journey with a BS in CS from RPI. He then worked at Microsoft on ranking at Bing with LLMs (back when they had two commas) and helped build popular datasets such as MS MARCO and TREC Deep Learning. While at Microsoft, he earned his MS in Computational Linguistics from the University of Washington, with a focus on curriculum learning for language models. Most recently, he has been pursuing his Ph.D. at the University of Illinois Urbana-Champaign, focusing on efficient inference for LLMs and robust dense retrieval. During his Ph.D., he has worked for companies including Neural Magic, Walmart, Qualtrics, and Mendel.AI, and he now works on bringing LLMs to search at Neeva.

Presentations

oBERTa: Improving Sparse Transfer Learning via Improved Initialization, Distillation, and Pruning Regimes
Daniel Campos, Alexandre Marques, Mark Kurtz, and ChengXiang Zhai (dcampos3@illinois.edu)
Status: Accept


Quick Dense Retrievers Consume KALE: Post-Training Kullback–Leibler Alignment of Embeddings for Asymmetrical Dual Encoders
Daniel Campos, Alessandro Magnani, and ChengXiang Zhai (dcampos3@illinois.edu)
Status: Accept


To Asymmetry and Beyond: Structured Pruning of Sequence-to-Sequence Models for Improved Inference Efficiency
Daniel Campos and ChengXiang Zhai (dcampos3@illinois.edu)
Status: Accept
