How DeepMind’s UNREAL Agent Performed 9 Times Better Than Experts on Atari


Auxiliary Control Tasks

We can think of auxiliary tasks as “side quests.” Although they don’t directly help achieve the overall goal, they help the agent learn about environment dynamics and extract relevant information. In turn, that helps the agent learn how to achieve the desired overall end state. We can also view them as additional pseudo-reward functions for the agent to interact with.

Overall, the goal is to maximize the sum of two terms:

  1. The expected cumulative extrinsic reward
  2. The expected cumulative sum of auxiliary rewards
$$\max_\theta \; \mathbb{E}_{\pi}\big[R_{1:\infty}\big] \;+\; \lambda_c \sum_{c} \mathbb{E}_{\pi_c}\big[R^{(c)}_{1:\infty}\big]$$

Overall Maximization Goal

where the superscript c denotes an auxiliary control task reward. Here are the two control tasks used by UNREAL:

  • Pixel Changes (Pixel Control): The agent tries to maximize changes in pixel values since these changes often correspond to important events.
  • Network Features (Feature Control): The agent tries to maximize the activation of all units in a given layer. This can force the policy and value networks to extract more task-relevant, high-level information.

For more details on how these tasks are defined and learned, feel free to skim the paper [1]. For now, just know that the agent learns accurate Q-value functions to best achieve these auxiliary tasks, using auxiliary rewards defined by the user.
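As a rough illustration of the pixel-change pseudo-reward (a simplified sketch, not the exact recipe from the paper, which computes changes over cells of a cropped, preprocessed observation), we can take the average absolute intensity change within each cell of the frame, so that each cell defines its own auxiliary Q-learning problem:

```python
import numpy as np

def pixel_change_reward(prev_frame, frame, cell=4):
    """Pseudo-reward for pixel control: mean absolute change in pixel
    intensity within each cell x cell region of the frame. Returns one
    reward per cell."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)  # average over color channels first
    h, w = diff.shape
    # Crop to a multiple of the cell size, then average within each cell.
    diff = diff[: h - h % cell, : w - w % cell]
    return diff.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
curr[:4, :4] = 255  # an "event" confined to the top-left cell
print(pixel_change_reward(prev, curr, cell=4))
# [[255.   0.]
#  [  0.   0.]]
```

A large entry in this grid means something visibly happened in that region, which is exactly the kind of event the pixel-control task rewards the agent for causing.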

Okay, perfect! Now we just add the extrinsic and auxiliary rewards then run A3C using the sum as a newly defined reward! Right?

How UNREAL is Clever

In actuality, UNREAL does something different. Instead of training a single policy to optimize this combined reward, it trains a policy for each of the tasks on top of the base A3C policy. While all auxiliary task policies share some network components with the base A3C agent, each also adds its own components that define a separate policy.

For example, the “Pixel Control” task has a deconvolutional network after the shared convolutional network and LSTM. The output defines the Q-values for the pixel control policy. (Skim [1] for details on the implementation)
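A minimal sketch of this "shared trunk, separate heads" structure (the names, sizes, and linear layers here are illustrative stand-ins, not the paper's architecture) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk parameters (stand-in for the conv net + LSTM).
W_shared = rng.normal(size=(16, 32))

# Separate heads: the base A3C policy head, and one auxiliary Q head
# (stand-in for the deconvolutional pixel-control head).
W_policy = rng.normal(size=(32, 4))  # logits over 4 actions
W_aux_q = rng.normal(size=(32, 4))   # Q-values for the auxiliary task

def trunk(obs):
    return np.tanh(obs @ W_shared)   # shared features, used by every head

def policy_logits(obs):
    return trunk(obs) @ W_policy     # used to act in the environment

def aux_q_values(obs):
    return trunk(obs) @ W_aux_q      # trained, but never used to act

obs = rng.normal(size=(1, 16))
print(policy_logits(obs).shape, aux_q_values(obs).shape)  # (1, 4) (1, 4)
```

The important point is that both heads read from `trunk`, so gradients from the auxiliary head's loss update `W_shared`, which the base policy also depends on.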

Each of the policies optimizes an n-step Q-learning loss:

$$\mathcal{L}^{(c)}_Q = \mathbb{E}\!\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q^{(c)}(s', a', \theta^-) - Q^{(c)}(s, a, \theta)\right)^2\right]$$

Auxiliary Control Loss Using N-Step Q
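Concretely, the n-step target sums the next n discounted rewards and bootstraps from the maximum Q-value at the state n steps ahead; the loss is the squared TD error against that target. A small sketch (γ and n are free hyperparameters here):

```python
def n_step_q_target(rewards, q_next_max, gamma=0.99):
    """n-step Q-learning target:
    sum_{i=0}^{n-1} gamma^i * r_{t+i}  +  gamma^n * max_a' Q(s_{t+n}, a')."""
    n = len(rewards)
    discounted = sum(gamma**i * r for i, r in enumerate(rewards))
    return discounted + gamma**n * q_next_max

def n_step_q_loss(q_sa, rewards, q_next_max, gamma=0.99):
    """Squared TD error between the current Q estimate and the target."""
    return (n_step_q_target(rewards, q_next_max, gamma) - q_sa) ** 2

# 3-step target with gamma = 0.5: 1 + 0 + 0.25*1 + 0.125*2 = 1.5
print(n_step_q_target([1.0, 0.0, 1.0], q_next_max=2.0, gamma=0.5))  # 1.5
```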

Even more surprisingly, we never explicitly use these auxiliary control task policies. Even though we discover which actions optimize each auxiliary task, only the base A3C agent's actions are used in the environment. You may think, then, that all this auxiliary training was for nothing!

Not quite. The key is that the A3C agent and the auxiliary control tasks share parts of the architecture! As we optimize the auxiliary task policies, we change parameters that the base agent also uses. This has what I like to call a "nudging effect."

Updating shared components not only helps learn auxiliary tasks but also better equips the agent to solve the overall problem by extracting relevant information from the environment.
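A toy illustration of this nudging effect (a made-up one-parameter model, not UNREAL's actual update): gradients from both the main loss and a weighted auxiliary loss flow into the same shared parameter, so the parameter settles at a compromise that both objectives shape.

```python
# One shared parameter, pulled toward 1 by the main loss (theta - 1)^2
# and toward 2 by the auxiliary loss (theta - 2)^2.
theta = 0.0
lr = 0.1
lambda_aux = 0.5  # auxiliary loss weight, analogous to lambda_c

def main_grad(theta):  # d/dtheta of (theta - 1)^2
    return 2 * (theta - 1)

def aux_grad(theta):   # d/dtheta of (theta - 2)^2
    return 2 * (theta - 2)

for _ in range(200):
    theta -= lr * (main_grad(theta) + lambda_aux * aux_grad(theta))

# Fixed point: 2(theta - 1) + (theta - 2) = 0  =>  theta = 4/3
print(round(theta, 3))  # 1.333
```

The auxiliary gradient does not act on the environment, yet it still moves the parameter the base objective depends on; that shared movement is the "nudge."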

In other words, we get more information from the environment than if we did not use auxiliary tasks.

