We present NeuroAda, a featherlight adaptation framework that activates each neuron's potential through reconfigurable, magnitude-based selection. By identifying the top weights of each neuron before training and introducing sparse bypass connections only for the selected entries, NeuroAda enables fine-grained, task-agnostic updates without architectural changes or full gradient computation and storage. This design combines structural simplicity with high memory efficiency. Empirical results on more than 23 datasets show that NeuroAda achieves state-of-the-art accuracy with only a small fraction of trainable parameters, while reducing CUDA memory usage by up to 60%.
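The core mechanism described above can be illustrated with a minimal sketch: for each neuron (row of a weight matrix), select the k largest-magnitude weights before training, and let only those positions receive updates through a sparse bypass term added to the frozen weights. This is a hedged NumPy illustration under our own assumptions, not the paper's implementation; the names `top_k_mask`, `delta`, and the choice of k are ours.

```python
# Illustrative sketch of magnitude-based per-neuron selection (assumption:
# "top weights per neuron" means the k largest-magnitude entries per row).
import numpy as np

def top_k_mask(W: np.ndarray, k: int) -> np.ndarray:
    """Per-neuron (per-row) mask marking the k largest-magnitude weights."""
    idx = np.argsort(-np.abs(W), axis=1)[:, :k]   # top-k column indices per row
    mask = np.zeros_like(W)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # frozen pretrained weights: 4 neurons, 8 inputs
mask = top_k_mask(W, k=2)         # fixed before training
delta = np.zeros_like(W)          # trainable sparse bypass; only mask==1 entries update

x = rng.standard_normal(8)
y = (W + mask * delta) @ x        # forward pass: frozen weights + sparse bypass
print(mask.sum(axis=1))           # each neuron exposes exactly k trainable slots
```

Because the mask is fixed up front and `delta` is zero elsewhere, only the selected entries ever need gradients, which is consistent with the memory savings the abstract claims.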