In recent years, reinforcement learning (RL) programs have successfully trained agents to defeat human professionals in complex games, offered insights for solving drug design challenges, and much more. These exciting advances, however, often come with a dramatic growth in model scale and complexity, which has made it difficult for researchers to reproduce existing RL algorithms or rapidly prototype new ideas.
In the new paper Acme: A Research Framework for Distributed Reinforcement Learning, a team of DeepMind researchers introduces a framework that aims to solve the problem by enabling simple RL agent implementations to be run at different scales of execution.
RL enables autonomous agents to learn how to interact with an unknown environment by relying on assigned rewards and penalties. Through its exploration of the environment, an agent gathers useful experiences from which it can learn to subsequently adjust and improve its performance. In online RL, gathering environmental information and learning are handled simultaneously, and an enormous amount of interaction between the agent and the environment is required. In simulated environments and games, researchers obtain this massive experience in a distributed manner.
Offline RL, meanwhile, removes the interaction step: policies (still typically represented as deep neural networks) are learned from a fixed dataset of previously collected experiences. In both settings, however, the widespread use of increasingly large-scale distributed systems in RL agent training is noteworthy.
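To make the distinction concrete, the sketch below contrasts the two regimes in plain Python. The `env`, `agent`, and `dataset` objects and their methods are hypothetical placeholders for illustration only, not Acme's actual API.

```python
# Illustrative only: online vs. offline RL with hypothetical objects.

def run_online(env, agent, num_episodes):
    """Online RL: acting and learning are interleaved."""
    for _ in range(num_episodes):
        observation = env.reset()
        done = False
        while not done:
            action = agent.select_action(observation)              # act
            next_observation, reward, done, _ = env.step(action)   # interact
            agent.learn(observation, action, reward,
                        next_observation, done)                    # learn from fresh experience
            observation = next_observation

def run_offline(dataset, agent, num_steps):
    """Offline RL: no environment interaction; learn from logged transitions."""
    for _ in range(num_steps):
        batch = dataset.sample()
        agent.learn_from_batch(batch)
```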
The researchers note that scaling from a simple, single-process prototype of an algorithm to a full large-scale distributed system typically requires re-implementing the agent, which hurts reproducibility. The team explains it designed Acme to let the same agent run in both single-process and highly distributed regimes by providing tools and components for constructing agents at various levels of abstraction: from the lowest level (e.g. networks, losses, policies), through workers (actors, learners, replay buffers), up to entire agents complete with the experimental apparatus necessary for robust measurement and evaluation, such as training loops, logging, and checkpointing.
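To give a sense of how those layers compose, here is a minimal single-process sketch in the style of the public dm-acme quickstart; the specific agent (DQN), wrappers, and constructor arguments reflect the early open-source release and may differ in later versions.

```python
import acme
from acme import specs, wrappers
from acme.agents.tf import dqn
import gym
import sonnet as snt

# Wrap a Gym environment so it matches Acme's dm_env-style interface.
environment = wrappers.GymWrapper(gym.make('CartPole-v1'))
environment = wrappers.SinglePrecisionWrapper(environment)
environment_spec = specs.make_environment_spec(environment)

# Lowest level of abstraction: the value network.
network = snt.Sequential([
    snt.Flatten(),
    snt.nets.MLP([64, 64, environment_spec.actions.num_values]),
])

# Mid level: a single-process agent bundling actor, learner and replay.
agent = dqn.DQN(environment_spec=environment_spec, network=network)

# Top level: the environment loop handles the run, logging and counting.
loop = acme.EnvironmentLoop(environment, agent)
loop.run(num_episodes=100)
```

The same agent definition is meant to be reusable when the acting and learning workers are instead launched as a distributed program, which is the scaling story the paper emphasizes.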
The team describes Acme as a classical RL interface that connects actors with their environments. An actor makes observations of the environment and selects actions, which are fed back into the environment; the resulting data are then used to update the actor's internal state. This internal division between acting and learning from data also allows researchers to re-use the acting portion across many different agents.
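The acting side of that interface is small enough to sketch directly. The toy actor below follows the `acme.core.Actor` interface from the early open-source release (`select_action`, `observe_first`, `observe`, `update`); it simply picks random actions and is meant only to illustrate the shape of the interface, not a real agent.

```python
import dm_env
import numpy as np
from acme import core


class RandomActor(core.Actor):
    """Toy actor: implements Acme's acting interface with random actions."""

    def __init__(self, num_actions: int):
        self._num_actions = num_actions

    def select_action(self, observation: np.ndarray) -> int:
        # Choose an action given the latest observation.
        return np.random.randint(self._num_actions)

    def observe_first(self, timestep: dm_env.TimeStep):
        # Record the first observation of an episode (no-op here).
        pass

    def observe(self, action, next_timestep: dm_env.TimeStep):
        # Record the transition produced by the environment (no-op here).
        pass

    def update(self):
        # Pull fresh parameters from a learner; a no-op for a random policy.
        pass
```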
Acme can improve the reproducibility of methods and results, simplify the design of new algorithms, and enhance the readability of RL agents. DeepMind says it released Acme to support scalable and fast iteration on research ideas in RL, and it hopes the research community will use the tool to explore RL agents at various levels of complexity and leverage it as a reference implementation of existing RL algorithms and robust baselines.
The paper Acme: A Research Framework for Distributed Reinforcement Learning is on arXiv, and Acme itself can be found on the project GitHub.
Journalist: Fangyu Cai | Editor: Michael Sarazen