VIDEO DOI: https://doi.org/10.48448/tdq0-xw43

technical paper

MMM 2022

November 07, 2022

Minneapolis, United States

Fully Parallel Convolution with Chains of Spin-Diodes

Convolutional neural networks (CNNs) are state-of-the-art algorithms for image processing. Despite their small number of synaptic weights, CNNs remain computationally costly to train in software because of the huge data flows that must be exchanged between memory and processing units. Unconventional hardware is well suited to address this limitation.
In neuromorphic spintronics, it has previously been demonstrated that the spin-diode effect can be used to apply a synaptic weight to a radiofrequency signal [1]. Here we go a step further and show an experimental implementation of a convolutional layer employing chains of spin-diodes. We design a compact architecture of 3 chains of 3 spin-diodes connected in series that performs a padded convolution with a 3-pixel filter on 5 input pixels, frequency-multiplexed in a single RF signal (see Figure 1). The synaptic weights corresponding to the filter are encoded by a small frequency detuning between the inputs and the spin-diode resonances, and are controlled in parallel by currents in 3 strip lines.
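The operation realized by the spin-diode chains can be sketched in software as a "same"-padded 1-D convolution of 5 input pixels with a 3-pixel filter. The function name, pixel values, and weights below are illustrative assumptions, not taken from the paper; in the hardware, each pixel is an RF tone and each weight is set by the current in one strip line.

```python
def padded_conv1d(inputs, weights):
    """Convolve `inputs` with a 3-tap filter, zero-padding one pixel
    on each border so the output has the same length as the input."""
    assert len(weights) == 3
    padded = [0.0] + list(inputs) + [0.0]  # zero padding at both borders
    return [sum(w * padded[i + k] for k, w in enumerate(weights))
            for i in range(len(inputs))]

pixels = [1.0, 2.0, 3.0, 4.0, 5.0]   # 5 input pixels (one RF tone each)
filt = [0.5, 1.0, -0.5]              # 3 synaptic weights (one strip line each)
print(padded_conv1d(pixels, filt))   # → [0.0, 1.0, 2.0, 3.0, 7.0]
```

Because the same 3 weights are reused at every output position, changing a single weight (a single strip-line current in the hardware) updates the whole filter at once.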
This architecture exploits the intrinsic weight redundancy of convolutions (all filters share the same weights) to enhance the compactness of the hardware and greatly simplify the process of updating a weight, which only requires updating the current in one strip line.
We will also present processed experimental results highlighting the scalability of the proposed architecture. A convolution error as low as 0.28% was achieved (see Figure 2).
The proposed architecture enables us to reduce the size of the hardware implementation while performing the convolution in a single time step, in contrast to previous time-multiplexed implementations. According to a previous study [3], we can decrease energy consumption by one order of magnitude compared to current GPUs, and operating latency by two orders of magnitude. This proof of concept of a spintronic CNN opens the path to spintronic neural networks that exploit the power of convolutional layers in a fully parallel, compact, and energy-efficient way.

