Beating Atari Pong on a Raspberry Pi Without Backpropagation


Hello,

In our previous post, we showed that we can now play Atari games from pixels on low-power hardware such as the Raspberry Pi. We can do so in an online, continually-learning fashion.

However, the version of OgmaNeo2 used in that post still relied on backpropagation for one part of the algorithm: reinforcement learning. It used a “routing” method to backpropagate through the heavy sparsity in order to approximate a value function. This works reasonably well, but has some drawbacks:

  • Sacrifices biological plausibility
  • Can have exploding/vanishing gradients
  • Runs slower (backwards pass is slow)
  • Limits the hierarchy to reinforcement learning only (inelegant integration with time series prediction/world model building)

We have now completely removed backpropagation from our algorithm, and the resulting algorithm performs better than before (and runs faster)!

The new algorithm relies entirely on the bidirectional temporal nature of the hierarchy to perform credit assignment. Reinforcement learning occurs only at the “bottom” (input/output) layer of the hierarchy. All layers above learn to predict the representation of the layer directly below one timestep ahead of time. The reinforcement learning layer simply selects actions based on the state of the first layer and the feedback from the layers above. For more information on our technology, see our whitepaper (draft).
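To make that division of labor concrete, here is a toy sketch in plain NumPy — not the OgmaNeo2 implementation, and every table size, learning rate, and environment detail is invented for illustration. The upper layer only learns to predict the lower layer's next state, while the bottom layer alone performs a reinforcement-learning update and uses the upper layer's prediction as feedback when selecting actions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (NOT OgmaNeo2 code): an upper layer that learns to predict the
# lower layer's next state, and a bottom layer that alone does reinforcement
# learning, consulting the upper layer's prediction as feedback.

n_states, n_actions = 4, 2
counts = np.ones((n_states, n_actions, n_states))  # upper layer: transition counts
q = np.zeros((n_states, n_actions))                # bottom layer: action values

def feedback(state):
    # Upper layer's one-step-ahead prediction: P(next state | state, action).
    c = counts[state]
    return c / c.sum(axis=1, keepdims=True)

def act(state, gamma=0.9, eps=0.3):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    # Combine local action values with the predicted value of the next state.
    scores = q[state] + gamma * feedback(state) @ q.max(axis=1)
    return int(np.argmax(scores))

def learn(state, action, reward, next_state, lr=0.1, gamma=0.9):
    counts[state, action, next_state] += 1.0       # prediction learning (upper)
    target = reward + gamma * q[next_state].max()  # RL update (bottom only)
    q[state, action] += lr * (target - q[state, action])

# Invented chain environment: action 1 moves right, reward on reaching state 3.
state = 0
for _ in range(5000):
    a = act(state)
    next_state = min(state + 1, 3) if a == 1 else max(state - 1, 0)
    r = 1.0 if next_state == 3 else 0.0
    learn(state, a, r, next_state)
    state = 0 if next_state == 3 else next_state
```

After training, the bottom layer prefers moving toward the reward (e.g. `q[2, 1] > q[2, 0]`), even though no gradient was ever propagated between the two layers.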

Here we have a video of the agent playing Atari Pong on a Raspberry Pi 4. It found an exploitable position, although sometimes it will randomly miss and have to play “normally” as well. Training is actually ongoing in this video, since training and inference are about the same speed in OgmaNeo2. It is not shown in this video, but the agent has managed to get a perfect game several times.

Pong on a Pi

Our agent consists of just 2 layers in our “exponential memory” structure, plus an additional third layer for the image encoder. Our CSDRs are all of size 4x4x32 (width x height x column size), including the image encoder's. The rough architecture of the Pong agent is shown below.

[Figure: rough architecture of the Pong agent]

We have gone ahead and released the version of OgmaNeo2 used in the video (master branch). As mentioned previously, a handy feature of this newest, backprop-free version is that one can perform both time series prediction and reinforcement learning with the same hierarchy.

Finally, here is a peek at what will hopefully become our next demo.

[Image: preview of our next demo]

Until next time!

