A New Brain-Inspired Learning Method for AI Saves Memory and Energy


Despite the frequent analogies, today's AI operates on very different principles from those of the human brain. Now researchers have proposed a new learning method more closely tied to biology, which they think could help us approach the brain's unrivaled efficiency.

Modern deep learning is at the very least biologically inspired, encoding information in the strength of connections within large networks of individual computing units known as neurons. Probably the biggest difference, though, is the way these neurons communicate with each other.

Artificial neural networks are organized into layers, with each neuron typically connected to every neuron in the next layer. Information passes between layers in a highly synchronized fashion as numbers, scaled by weights that encode the strength of the connection between each pair of neurons.
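
To make that concrete, here is a minimal sketch of one fully connected layer in NumPy; the sizes and activation function are arbitrary illustrations, not taken from any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)              # activations arriving from the previous layer
W = rng.normal(size=(3, 4)) * 0.5   # one weight per connected pair of neurons
b = np.zeros(3)

# Every input neuron feeds every one of the 3 output neurons, and all
# outputs are computed together in one synchronized step.
h = np.tanh(W @ x + b)
print(h.shape)   # (3,)
```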

Biological neurons, on the other hand, communicate by firing off electrical impulses known as spikes, and each neuron does so on its own schedule. Connections are not neatly divided into layers and feature many feedback loops, which means the output of a neuron often ends up influencing its input somewhere down the line.
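
A standard textbook model of this behavior is the leaky integrate-and-fire neuron. The sketch below is that generic model, not the specific neuron model used in the research described here:

```python
def lif_neuron(inputs, decay=0.9, threshold=1.0):
    """Leaky integrate-and-fire: the membrane voltage leaks away over
    time, and the neuron emits a spike only when enough input has
    accumulated to cross the threshold -- on its own schedule."""
    v, spikes = 0.0, []
    for current in inputs:
        v = decay * v + current    # integrate new input, leak old charge
        if v >= threshold:
            spikes.append(1)       # fire a spike...
            v = 0.0                # ...and reset the membrane
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.4, 0.5, 0.3, 0.0, 0.9, 0.6]))  # -> [0, 0, 1, 0, 0, 1]
```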

This spike-based approach is vastly more energy efficient: training the most powerful AI models requires kilowatts of electricity, while the brain runs on just 20 watts. That's led to growing interest in the development of artificial spiking neural networks, as well as so-called neuromorphic hardware (computer chips that mimic the physical organization and principles of the brain) that could run them more efficiently.

But our understanding of these spike-based approaches is still underdeveloped, and they struggle to match the performance of more conventional artificial neural nets. Now, though, researchers from the Graz University of Technology in Austria think they may have found a way to approach the power of deep learning using a biologically plausible learning method that works with spiking neural networks.

In deep learning, the network is trained by getting it to make predictions on the data and then assessing how far off it is. This error is then fed backwards through the network to guide adjustments to the strength of the connections between neurons. This process is called backpropagation, and over many iterations it tunes the network until it makes accurate predictions.
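
For a single layer, where the backward pass collapses to one step, the whole loop fits in a few lines. This is a generic illustration of that predict-assess-adjust cycle, with made-up data, not the networks from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(1, 3)) * 0.1     # initial connection strengths
x, y_true = np.array([1.0, 2.0, -1.0]), np.array([0.5])

for _ in range(100):                  # many iterations tune the network
    y_pred = W @ x                    # forward pass: make a prediction
    error = y_pred - y_true           # assess how far off it is
    W -= 0.1 * np.outer(error, x)     # feed the error back into the weights
print(float((error ** 2).sum()))      # loss has shrunk toward zero
```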

A similar approach can be applied to spiking neural networks, but it requires huge amounts of memory. It's also clear that this is not how the brain solves the learning problem, because it would require error signals to be sent backwards in both time and space across the synapses between neurons, which biological synapses cannot do.

That prompted the researchers, who are part of the Human Brain Project, to look at two features that have become clear in experimental neuroscience data: each neuron retains a memory of previous activity in the form of molecular markers that slowly fade with time; and the brain provides top-down learning signals, using things like the neurotransmitter dopamine, that modulate the behavior of groups of neurons.
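
Their e-prop rule pairs those two ingredients: every synapse keeps an eligibility trace (the fading molecular marker), and a broadcast learning signal (the dopamine analogue) decides when and how strongly that trace is converted into a weight change. The sketch below is a heavy simplification of that structure; the function name, variable names, and constants are all illustrative rather than the paper's exact update rule:

```python
import numpy as np

def eprop_style_step(W, trace, pre, post, learn_sig, decay=0.95, lr=0.01):
    """One online, e-prop-style weight update (simplified caricature).
    trace:     per-synapse eligibility trace, a fading memory of recent
               local activity (the 'molecular marker')
    learn_sig: per-neuron top-down learning signal (the dopamine-like
               broadcast), computed elsewhere from the network's error"""
    trace = decay * trace + np.outer(post, pre)   # fade, then refresh locally
    W = W + lr * learn_sig[:, None] * trace       # gate the trace into a weight change
    return W, trace

W, trace = np.zeros((2, 3)), np.zeros((2, 3))
W, trace = eprop_style_step(W, trace,
                            pre=np.array([1.0, 0.0, 1.0]),    # presynaptic spikes
                            post=np.array([0.5, 0.2]),        # postsynaptic activity
                            learn_sig=np.array([0.1, -0.3]))  # top-down error signal
```

Because the trace depends only on activity the synapse has already seen, and the learning signal arrives in the present, nothing has to be sent backwards through time.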

In a paper in Nature Communications, the Austrian team describes how they created artificial analogues of these two features to build a new learning paradigm they call e-prop. While the approach learns more slowly than backpropagation-based methods, it achieves comparable performance.

More importantly, it allows online learning. Rather than processing big batches of data at once, which requires constant transfers to and from memory that contribute significantly to machine learning's energy bills, the approach simply learns from data as it becomes available. That dramatically cuts the amount of memory and energy it requires, making it far more practical for on-chip learning in smaller mobile devices.
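
In code terms, online learning means each sample triggers an immediate weight update and is then discarded, so memory use stays constant however long the data keeps arriving. Again a generic sketch; the linear model and names are invented for illustration:

```python
import numpy as np

def online_train(W, stream, lr=0.05):
    # Each (x, y) pair updates the weights immediately and is then
    # thrown away -- no batches held in memory, no replay buffer.
    for x, y in stream:
        error = W @ x - y
        W = W - lr * np.outer(error, x)
    return W

rng = np.random.default_rng(2)
true_W = np.array([[0.2, -0.4, 0.7]])
stream = ((x, true_W @ x) for x in rng.normal(size=(500, 3)))
print(np.round(online_train(np.zeros((1, 3)), stream), 2))  # converges toward true_W
```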

The team is now working with researchers from Intel to integrate the approach with the next version of the company's neuromorphic chip Loihi, which is optimized for spiking networks. They're also teaming up with fellow Human Brain Project researchers at the University of Manchester to apply e-prop to the neuromorphic supercomputer SpiNNaker.

There’s still a long way to go before the technique can match the power of today’s leading AI. But if it helps us start to approach the efficiencies we see in biological brains, it might not be long before AI is everywhere.

Image Credit: Gerd Altmann from Pixabay

