
SHORT BIO
I am a fourth-year student in the MS/Ph.D. integrated program at the Artificial Intelligence Hardware & Algorithm Lab at Hanyang University, where I am fortunate to be advised by Professor Jungwook Choi. My research interests lie in Transformer-based model compression (quantization, knowledge distillation) and large language models (generative language models, parameter-efficient fine-tuning, and vision-language models).
Presentations

RA-LoRA: Rank-Adaptive Parameter-Efficient Fine-Tuning for Accurate 2-bit Quantized Large Language Models
Minsoo Kim and 3 other authors

Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers
Minsoo Kim and 4 other authors

Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders
Minsoo Kim