My research focuses on making LLM training and serving widely accessible to academia and smaller organizations by reducing dependence on proprietary data and heavy compute. In this talk, I will present a coherent framework that unifies label-free data curation, zero-data self-evolution, reward calibration for reliable confidence estimation, and computation-efficient language model inference.
