Ivan Kobyzev

Keywords: knowledge distillation, efficient training, large language models, regularization, efficient methods, optimal transport, low-rank, dynamic, adaptation, hallucination, alignment, position embedding, efficient NLP, linear transformer, long document classification

5 presentations · 3 views

SHORT BIO

Ivan Kobyzev received his Ph.D. in Pure Mathematics and has since applied his mathematical background to Deep Learning theory and practice. During his postdoc at the University of Waterloo and in subsequent industry positions, he researched domains such as Generative Models, Cognitive Computing, and Graph Neural Networks. On Huawei's NLP team, Ivan works on the Optimization and Efficient Training of Language Models.

Presentations

Resonance RoPE: Improving Context Length Generalization of Large Language Models

Suyuchen Wang and 4 other authors

OTTAWA: Optimal TransporT Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection

Chenyang Huang and 5 other authors

Efficient Classification of Long Documents via State-Space Models

Peng Lu and 4 other authors

DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation

Mojtaba Valipour and 3 other authors

Do we need Label Regularization to Fine-tune Pre-trained Language Models?

Ivan Kobyzev and 7 other authors
