technical paper

AAAI 2024

Vancouver, Canada

Enhancing the Robustness of Spiking Neural Networks with Stochastic Gating Mechanisms

Keywords: stochasticity, spiking neural networks, adversarial robustness

Spiking neural networks (SNNs) exploit neural spikes to provide solutions for low-power intelligent applications on neuromorphic hardware. Although SNNs achieve high computational efficiency through spike-based communication, they still lack resistance to adversarial attacks and noise perturbations. In the brain, neuronal responses generally possess stochasticity induced by ion channels and synapses, yet the role of this stochasticity in computation is poorly understood. Inspired by this, we elaborate a stochastic gating spiking neural model for layer-by-layer spike communication, introducing stochasticity into SNNs. Through theoretical analysis, we show that our gating model can be viewed as a regularizer that prevents error amplification under attacks. Our work also explains the robustness of Poisson coding. Experimental results show that our method can be used alone or together with existing robustness-enhancement algorithms to improve SNN robustness and reduce SNN energy consumption. We hope our work will shed new light on the role of stochasticity in the computation of SNNs. Our code is available at https://github.com/DingJianhao/StoG-meets-SNN/.
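The abstract describes layer-by-layer spike communication with a stochastic gate inserted between layers. The sketch below is an illustrative toy, not the paper's implementation: it assumes a standard leaky integrate-and-fire (LIF) neuron and models the gate as an i.i.d. Bernoulli mask on outgoing spikes (the gate's actual distribution and parameters are not specified in the abstract; the names `lif_step`, `stochastic_gate`, and the values `v_th`, `tau`, `p` are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_step(v, x, v_th=1.0, tau=2.0):
    """One leaky integrate-and-fire step: leak toward input, fire, hard-reset."""
    v = v + (x - v) / tau
    s = (v >= v_th).astype(np.float32)  # binary spike train
    v = v * (1.0 - s)                   # reset membrane potential on spike
    return v, s

def stochastic_gate(spikes, p=0.75):
    """Bernoulli-gate each spike: pass with probability p, drop otherwise.
    Illustrative stand-in for the paper's stochastic gating mechanism."""
    mask = (rng.random(spikes.shape) < p).astype(np.float32)
    return spikes * mask

# Toy forward pass: a constant current drives 5 LIF neurons for T steps;
# only the gated spikes would propagate to the next layer.
T, n = 8, 5
v = np.zeros(n, dtype=np.float32)
x = np.full(n, 1.5, dtype=np.float32)
total = 0.0
for _ in range(T):
    v, s = lif_step(v, x)
    s = stochastic_gate(s)
    total += float(s.sum())
print("gated spikes over window:", total)
```

Because dropped spikes are never transmitted, such a gate trades a small amount of signal for fewer synaptic events, which is consistent with the abstract's claim of reduced energy consumption alongside improved robustness.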


