Blog: Gradient descent in spiking neural networks for low-power inference

Putting the Neural back into Networks

Part 1: Why spikes?

Jan 10 · 4 min read

Photo by Silas Baisch on Unsplash

Not long ago, one of the gods of modern machine learning made a slightly controversial statement. In the final slide of his ISSCC 2019 keynote [1], Yann LeCun [2, 3, 4] (that’s “Mr CNN” to you) remarked, almost as a throwaway, that he was skeptical about the usefulness of spiking neural networks.

He also threw some shade at a hardware field called “Neuromorphic Engineering”, asking why engineers would bother to build chips for algorithms that don’t work.

“Fair enough,” you might think. Except that we do have sophisticated working examples of spiking neural networks, performing incredibly complicated tasks from sensory processing to motor control, along with high-level planning and general intelligence. In fact, this approach is so adaptable that it can be scaled right down for simple sense-react tasks; can be applied in small mobile agents that need to interact with their environment; can autonomously learn complex tasks including games such as Starcraft and Go; and can even make fairly convincing presentations of high-level cognition and consciousness.

All this with incredible energy efficiency of around 4×10¹¹ synaptic operations per second per Watt (SynOps/s/W).

Dad-jokes aside, of course I’m referring to biological nervous systems.

The most efficient computational structure in the known universe

One engineering lesson we can learn from biological neural systems is that communication is expensive. Neurons in the brain communicate with essentially binary “spikes” of electricity passing from one neuron to several partners via synapses. These spikes travel over long distances compared with the scale of single cells, and cost energy to generate and propagate. Reflecting this, neurons in the brain fire very sparsely, and connect to each other sparsely.

In contrast, the current breed of artificial neural networks connect densely between layers, and operate on a “frame-like” basis — all neurons compute and send their output on each time step, even when nothing much is changing in their input. Nevertheless, recent successes in training deep ANNs on very challenging problems show that the mathematical tools for building ANNs are very useful.
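To make the cost difference concrete, here is a rough back-of-envelope sketch in Python. The layer sizes, time window and ~1% firing rate are arbitrary illustrative assumptions, not measurements: a frame-based layer uses every weight on every time step, while an event-driven layer only does work when a spike arrives.

```python
# Rough comparison of synaptic operation counts (illustrative numbers only).
n_in, n_out = 1024, 1024   # one fully connected layer
n_steps = 100              # length of the input sequence

# Frame-based ANN layer: every weight is used on every time step.
dense_ops = n_in * n_out * n_steps

# Event-driven spiking layer: work happens only when an input spike arrives.
# Assume ~1% of inputs spike on any given time step (a hypothetical sparsity level).
firing_prob = 0.01
sparse_ops = n_in * firing_prob * n_steps * n_out

print(f"frame-based ops:  {dense_ops:.2e}")
print(f"event-driven ops: {sparse_ops:.2e}")
print(f"reduction:        {dense_ops / sparse_ops:.0f}x")
```

Under these toy assumptions the event-driven layer does roughly 100× less synaptic work, which is the intuition behind the SynOps/s/W figure quoted above.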

Ns, ANs and SNs

Standard artificial neurons (ANs) are small blobs of linear algebra that instantaneously transform some real-valued inputs into a real-valued output. A very common formulation is given by

y = H(W · x + b)

with x the inputs, y the output, W the weights, b a bias input, and where H(x) is a common transfer function such as tanh, a sigmoid or a rectified-linear function. The crucial thing to note is that ANs have no concept of time: all inputs are processed instantaneously.
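For concreteness, here is a minimal NumPy sketch of that formula. The sizes, random weights and choice of tanh are illustrative assumptions only:

```python
import numpy as np

def artificial_neuron(x, W, b, H=np.tanh):
    # y = H(W · x + b): an instantaneous map from inputs to output, no time involved
    return H(W @ x + b)

# Toy usage: three real-valued inputs feeding one output unit.
rng = np.random.default_rng(0)
x = rng.normal(size=3)        # inputs
W = rng.normal(size=(1, 3))   # weights
b = np.zeros(1)               # bias
print(artificial_neuron(x, W, b))   # one real-valued output
```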

Left: A simple artificial neuron. Inputs and outputs are computed instantaneously. Right: A leaky integrate-and-fire (LIF) spiking neuron. Inputs and outputs, as well as internal state, are explicitly defined as temporal signals.

Spiking neurons, on the other hand, are more like little clockwork devices that care implicitly about time, and mimic biological neurons in the simplest possible way. Shown above is a leaky integrate-and-fire (LIF) neuron. This neuron receives inputs as a series of spike trains Sᵢ(t) , via synapses which integrate the spikes and decay over time. Each neuron also has an internal state V_m(t) , which integrates synaptic inputs. When the internal state crosses a threshold V_th , a neuron emits a spike event Sₒ(t) and the neuron state V_m is decreased.

Formally, we have:

Synaptic inputs: τ_syn · dI_syn/dt + I_syn = Sᵢ(t)

Neuron state: τ_m · dV_m/dt + V_m(t) = W · I_syn + b

Spike production: if V_m(t) > V_th → Sₒ(t) = 1, V_m(t) ← V_m(t) − 1
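To see those three equations in action, here is a minimal forward-Euler simulation sketch in NumPy. The time step, time constants, weights and input rate are arbitrary illustrative assumptions, not values from any particular model:

```python
import numpy as np

def simulate_lif(S_in, W, b=0.0, dt=1e-3, tau_syn=10e-3, tau_m=20e-3, V_th=1.0):
    """Forward-Euler sketch of the LIF equations above, for a single neuron.

    S_in: (n_steps, n_in) binary input spike trains S_i(t)
    W:    (n_in,) synaptic weights
    """
    n_steps, n_in = S_in.shape
    I_syn = np.zeros(n_in)      # filtered synaptic input per synapse
    V_m = 0.0                   # membrane potential
    S_out = np.zeros(n_steps)   # output spike train S_o(t)

    for t in range(n_steps):
        # Synaptic inputs: tau_syn * dI_syn/dt + I_syn = S_i(t)
        I_syn += dt / tau_syn * (S_in[t] - I_syn)
        # Neuron state: tau_m * dV_m/dt + V_m = W · I_syn + b
        V_m += dt / tau_m * (W @ I_syn + b - V_m)
        # Spike production: crossing V_th emits a spike and lowers V_m by 1
        if V_m > V_th:
            S_out[t] = 1.0
            V_m -= 1.0

    return S_out

# Toy usage: 200 ms of random input spikes (~100 Hz per synapse) into 10 synapses.
rng = np.random.default_rng(0)
S_in = (rng.random((200, 10)) < 0.1).astype(float)
spikes = simulate_lif(S_in, W=np.full(10, 4.0))
print(f"output spikes emitted: {int(spikes.sum())}")
```

Note that, unlike the artificial neuron above, the output here only makes sense as a signal unrolled over time — which is exactly what complicates training.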

So we’ve got a spiking neuron, what can we do with it? In the next post, we’ll look at how to deal with the more complex dynamics of a spiking neuron during training.

→ Part 2: More spikes more problems

TL;DR: Brains use spikes for communication. Spiking neurons are energy efficient because ultra sparse temporal processing. Spiking neurons are cool because they know about time. Spikes good.

If you enjoyed this incredibly brief overview of SNNs, and happen to be passing through Lausanne in January 2020, then come by our workshop for spike-based signal processing at Applied Machine Learning Days!

We’ll run a hands-on workshop for building audio and vision processing systems with spiking neural networks. Come build your own interactive models! Or just come by and talk with us about SNNs and sub-mW ML inference.

About aiCTX

aiCTX is a fabless semiconductor neuromorphic engineering firm, building ultra-low-power (sub-mW) hardware for real-time machine learning inference. We design, consult on, and deploy applications in vision processing, audio processing, bio-signal processing and much more.

