How DeepMind’s UNREAL Agent Performed 9 Times Better Than Experts on Atari

Auxiliary Control Tasks

We can think of auxiliary tasks as “side quests.” Although they don’t directly help achieve the overall goal, they help the agent learn about environment dynamics and extract relevant information. In turn, that helps the agent learn how to achieve the desired overall end state. We can also view them as additional pseudo-reward functions for the agent to interact with.

Overall, the goal is to maximize the sum of two terms:

  1. The expected cumulative extrinsic reward
  2. The expected cumulative sum of auxiliary rewards
$$\max_{\theta}\; \mathbb{E}_{\pi}\!\left[R_{1:\infty}\right] \;+\; \lambda_c \sum_{c \in \mathcal{C}} \mathbb{E}_{\pi_c}\!\left[R^{(c)}_{1:\infty}\right]$$

Overall Maximization Goal

where the superscript c denotes an auxiliary control task reward. Here are the two control tasks used by UNREAL:

  • Pixel Changes (Pixel Control): The agent tries to maximize changes in pixel values since these changes often correspond to important events.
  • Network Features (Feature Control): The agent tries to maximize the activation of all units in a given layer. This can force the policy and value networks to extract more task-relevant, high-level information.

For more details on how these tasks are defined and learned, feel free to skim this paper [1]. For now, just know that the agent tries to learn accurate Q-value functions for these auxiliary tasks, using auxiliary rewards defined by the user.
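To make the pixel-control reward concrete, here is a minimal NumPy sketch of one reasonable way to compute it: split the observation into a grid of non-overlapping cells and use the average absolute pixel change inside each cell as that cell's auxiliary reward. The function name, cell size, and preprocessing (the paper works on a central crop of the frame) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def pixel_control_rewards(frame, next_frame, cell_size=4):
    """Auxiliary reward for the pixel-control task: mean absolute pixel
    change inside each non-overlapping cell of the observation.

    frame, next_frame: uint8 arrays of shape (H, W, C).
    Returns an array of shape (H // cell_size, W // cell_size).
    """
    # Per-pixel absolute change, averaged over colour channels.
    diff = np.abs(next_frame.astype(np.float32) -
                  frame.astype(np.float32)).mean(axis=-1)
    h, w = diff.shape
    # Trim to a multiple of the cell size, then average within each cell.
    diff = diff[: h - h % cell_size, : w - w % cell_size]
    cells = diff.reshape(h // cell_size, cell_size, w // cell_size, cell_size)
    return cells.mean(axis=(1, 3))
```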

Okay, perfect! Now we just add the extrinsic and auxiliary rewards and then run A3C using the sum as a newly defined reward! Right?

How UNREAL is Clever

In fact, UNREAL does something different. Instead of training a single policy to optimize this combined reward, it trains a separate policy for each auxiliary task on top of the base A3C policy. While all auxiliary-task policies share some network components with the base A3C agent, each also adds its own components to define a separate policy.

For example, the “Pixel Control” task adds a deconvolutional network after the shared convolutional network and LSTM; its output defines the Q-values for the pixel-control policy. (Skim [1] for details on the implementation.)
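As a rough illustration of this weight sharing, here is a hedged PyTorch sketch of a UNREAL-style network: a shared convolutional trunk and LSTM feed both the base A3C policy/value heads and a deconvolutional pixel-control head. The class name, layer sizes, and the 84x84 RGB input assumption are illustrative; the paper's exact architecture (including its dueling pixel-control head) differs in detail.

```python
import torch
import torch.nn as nn

class UnrealStyleNet(nn.Module):
    """Sketch of the weight sharing: one conv+LSTM trunk feeds both the
    base A3C heads and a deconvolutional pixel-control head."""

    def __init__(self, num_actions, lstm_size=256):
        super().__init__()
        # Shared trunk (sizes assume 84x84 RGB input; illustrative only).
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTMCell(32 * 9 * 9, lstm_size)

        # Base A3C heads.
        self.policy = nn.Linear(lstm_size, num_actions)
        self.value = nn.Linear(lstm_size, 1)

        # Pixel-control head: deconvolution up to a grid of cells,
        # producing one Q-value map per action.
        self.pc_fc = nn.Linear(lstm_size, 32 * 7 * 7)
        self.pc_deconv = nn.ConvTranspose2d(32, num_actions, kernel_size=4, stride=2)

    def forward(self, obs, lstm_state):
        x = self.conv(obs).flatten(start_dim=1)
        h, c = self.lstm(x, lstm_state)
        logits, value = self.policy(h), self.value(h)
        pc = torch.relu(self.pc_fc(h)).view(-1, 32, 7, 7)
        pc_q = self.pc_deconv(pc)            # (batch, num_actions, 16, 16)
        return logits, value, pc_q, (h, c)
```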

Each of the policies optimizes an n-step Q-learning loss:

$$\mathcal{L}^{(c)}_{Q} = \mathbb{E}\!\left[\left(R_{t:t+n} + \gamma^{n}\max_{a'}Q^{(c)}(s',a',\theta^{-}) - Q^{(c)}(s,a,\theta)\right)^{2}\right]$$

Auxiliary Control Loss Using N-Step Q
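Below is a minimal PyTorch sketch of this loss for a single auxiliary task: it builds the n-step bootstrapped return (the discounted sum of the next auxiliary rewards plus a bootstrapped Q-value) for every step of a short rollout and regresses the Q-value of the taken action toward it. The function name and the flat per-step action-value interface are illustrative simplifications; in pixel control the Q-function actually outputs one value per spatial cell and action.

```python
import torch

def nstep_q_loss(q_values, actions, rewards, bootstrap_q, gamma=0.99):
    """n-step Q-learning loss for one auxiliary control task.

    q_values:     (T, num_actions) auxiliary Q estimates along a rollout.
    actions:      (T,) integer actions actually taken.
    rewards:      (T,) auxiliary rewards (e.g. pixel change per step).
    bootstrap_q:  scalar max_a' Q(s_T, a') used to bootstrap the return.
    """
    # Work backwards so each step's return is bootstrapped from the tail.
    returns, g = [], bootstrap_q
    for r in rewards.flip(0):
        g = r + gamma * g
        returns.append(g)
    returns = torch.stack(returns[::-1]).detach()

    # Q-value of the action that was actually taken at each step.
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    return ((returns - q_taken) ** 2).mean()
```

In the paper, these auxiliary losses are computed on short sequences resampled from an experience replay buffer rather than only on the latest on-policy rollout.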

Even more amazingly, we never explicitly use these auxiliary control-task policies. Even though we discover which actions optimize each auxiliary task, we only ever take the base A3C agent's actions in the environment. You may think, then, that all this auxiliary training was for nothing!

Not quite. The key is that parts of the architecture are shared between the A3C agent and the auxiliary control tasks! As we optimize the auxiliary-task policies, we are changing parameters that the base agent also uses. This has what I like to call a “nudging effect.”

Updating shared components not only helps learn auxiliary tasks but also better equips the agent to solve the overall problem by extracting relevant information from the environment.

In other words, we get more information from the environment than if we did not use auxiliary tasks.
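To make this nudging concrete, here is a hedged sketch of a combined update, reusing the UnrealStyleNet sketch from above. A single optimizer covers all parameters, so gradients from the auxiliary loss flow into the shared conv/LSTM trunk even though the auxiliary policies themselves are never executed. The weight lambda_pc and the optimizer settings are illustrative, not the paper's exact values.

```python
import torch

lambda_pc = 0.05                               # auxiliary loss weight (illustrative)
net = UnrealStyleNet(num_actions=6)            # shared trunk + heads, sketched above
optimizer = torch.optim.RMSprop(net.parameters(), lr=7e-4)

def train_step(a3c_loss, pixel_control_loss):
    """Both losses are computed from rollouts through the same network."""
    total_loss = a3c_loss + lambda_pc * pixel_control_loss
    optimizer.zero_grad()
    total_loss.backward()    # auxiliary gradients also update the shared trunk
    optimizer.step()
```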

