DOI: https://doi.org/10.48448/qxwn-z474
Technical paper
Compression of Generative Pre-trained Language Models via Quantization
