Recent advances have focused on training high-performance spiking neural networks (SNNs) by leveraging surrogate gradient (SG) learning to estimate the derivatives of the non-differentiable spiking activity. However, the distribution of neuronal membrane potentials varies across timesteps and progressively deviates toward both sides of the firing threshold during training. When the threshold and SG remain fixed, this can lead to imbalanced spike firing rates and diminished gradient signals, which prevent SNNs from performing well. To address these issues, we propose a novel dual-stage synergistic learning algorithm that combines forward adaptive thresholding with backward dynamic SG. In forward propagation, we adaptively adjust thresholds based on the distribution of membrane potential dynamics (MPD) at each timestep, which enriches neuronal diversity and effectively balances firing rates across layers. In backward propagation, drawing on the association between MPD and SG, we dynamically optimize the SG, driven by the thresholds, to enhance gradient estimation through spatio-temporal alignment, effectively mitigating gradient information loss. Experimental results demonstrate that our method achieves significant performance improvements. Moreover, it allows neurons to fire a stable proportion of spikes at each timestep and increases the proportion of neurons that receive gradients in deeper layers. Code is available in the Supplementary Materials.
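The two-stage idea in the abstract can be illustrated with a minimal sketch: a leaky integrate-and-fire (LIF) layer whose threshold at each timestep is set from the current membrane potential distribution, with a surrogate derivative whose center and width follow that same threshold. This is an illustrative stand-in, not the authors' actual algorithm; the `mean + k * std` thresholding rule, the triangular surrogate shape, and all parameter names (`tau`, `k`, `width`) are assumptions for exposition.

```python
import numpy as np

def surrogate_grad(v, theta, width):
    # Triangular surrogate derivative of the Heaviside spike function,
    # centered at the (adaptive) threshold theta. A dynamic SG: its
    # support moves with theta, so gradient mass tracks where membrane
    # potentials actually cluster (assumed shape, not the paper's exact SG).
    return np.maximum(0.0, 1.0 - np.abs(v - theta) / width) / width

def lif_forward(inputs, tau=2.0, k=0.5):
    """LIF layer with a per-timestep adaptive threshold (illustrative).

    inputs: array of shape (T, N) -- input current per timestep and neuron.
    At each timestep the threshold is recomputed from the membrane
    potential distribution (here: mean + k * std, a hypothetical stand-in
    for MPD-driven thresholding); the same threshold then parameterizes
    the surrogate gradient used in the backward pass.
    """
    T, N = inputs.shape
    v = np.zeros(N)                 # membrane potentials
    spikes = np.zeros((T, N))
    thresholds = np.zeros(T)
    sg = np.zeros((T, N))           # surrogate derivatives, kept for backprop
    for t in range(T):
        v = v / tau + inputs[t]                 # leaky integration
        theta = v.mean() + k * v.std()          # adaptive threshold from MPD
        thresholds[t] = theta
        fired = v >= theta
        spikes[t] = fired.astype(float)
        # dynamic SG: width scaled with the threshold magnitude
        sg[t] = surrogate_grad(v, theta, width=max(abs(theta), 1e-3))
        v = np.where(fired, 0.0, v)             # hard reset after a spike
    return spikes, thresholds, sg
```

Because the threshold follows the per-timestep distribution, roughly the same fraction of neurons crosses it at every step, which is one way to realize the balanced firing rates the abstract describes; centering the surrogate on the same threshold keeps nonzero gradients aligned with where potentials concentrate.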
