Mostafa Dehghani

Google DeepMind

Topics: multi-task learning, transformers, pretraining, convolutions, adapters, hypernetworks, parameter-efficient fine-tuning

2 presentations · 8 views

Presentations

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

Rabeeh Karimi Mahabadi and 3 other authors

Are Pretrained Convolutions Better than Pretrained Transformers?

Yi Tay and 6 other authors

