We introduce Spire, a speech-augmented language model (LM) capable of both translating and transcribing speech input from English into 10 other languages, as well as translating text input in both language directions. Spire integrates the speech modality into an existing multilingual LM (MLM) via speech discretization and continued pre-training, using only 42.5K hours of speech. In particular, we adopt the pre-training framework of MLMs and treat discretized speech input as an additional translation language. This approach not only equips the MLM with speech capabilities but also preserves its strong text-only performance. We achieve this with significantly less data than existing speech LMs, demonstrating that integrating discretized speech as an additional language is feasible during LM adaptation. We will make our code and models available to the community.
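To make the core idea concrete, here is a minimal sketch of what "treating discretized speech as an additional translation language" can look like in practice. This is an illustrative toy, not the paper's actual pipeline: the feature dimensions, the fixed random codebook (standing in for a learned k-means quantizer over self-supervised speech features), the unit-token format `<su_N>`, and the prompt template are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frame-level acoustic features (in a real system these would
# come from a self-supervised speech encoder): 50 frames x 16 dims.
features = rng.normal(size=(50, 16))

# Hypothetical "codebook" of 8 centroids, standing in for a trained
# k-means model used to discretize speech.
codebook = rng.normal(size=(8, 16))

# Assign each frame to its nearest centroid -> a sequence of discrete units.
dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
units = dists.argmin(axis=1)

# Collapse consecutive duplicate units, a common deduplication step.
deduped = [int(units[0])] + [int(u) for p, u in zip(units, units[1:]) if u != p]

# Render the unit sequence as pseudo-text, so a text LM can treat speech
# as just another source language inside a translation-style prompt.
speech_text = " ".join(f"<su_{u}>" for u in deduped)
prompt = f"English speech: {speech_text}\nGerman text:"
print(prompt)
```

Once speech is rendered this way, continued pre-training can reuse the MLM's existing translation-style data format unchanged, which is what lets the approach piggyback on the model's multilingual machinery rather than bolting on a separate speech encoder head.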