Introducing TayPO, a Unifying Framework for Reinforcement Learning



A team of researchers from Columbia University and DeepMind has proposed a Taylor Expansion Policy Optimization (TayPO) framework that combines two leading algorithmic improvement methods.


Policy optimization is a major framework in model-free reinforcement learning (RL), providing insights that can drive significant algorithmic performance gains. Two of the most prominent such algorithmic improvements, trust-region policy search and off-policy corrections, have usually been developed and evaluated separately. In the paper Taylor Expansion Policy Optimization, the researchers partially unify these algorithmic ideas into a single framework by showing that Taylor expansions, a method based on the Taylor series concept for describing and approximating mathematical functions, share high-level similarities with both trust-region policy search and off-policy corrections. The paper was presented this week at ICML 2020.
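For readers who want the underlying tool made concrete, a Taylor expansion approximates a function near a reference point by a polynomial built from its derivatives. As a standard calculus reminder (ordinary notation, not taken from the paper):

    f(x) \approx f(x_0) + f'(x_0)\,(x - x_0) + \tfrac{1}{2} f''(x_0)\,(x - x_0)^2 + \cdots

TayPO applies the same idea one level up: it expands the performance of a target policy around the behaviour policy that collected the data, so truncating at different orders yields optimization objectives of increasing fidelity (the zeroth-, first- and second-order variants discussed in the experiments below).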

In most previous research on trust-region policy search, the main idea is to constrain the size of policy updates, which limits the deviation between consecutive policies and lower-bounds the performance of the new policy. Off-policy corrections, meanwhile, require accounting for discrepancies between the target policy being evaluated and the behaviour policy that generated the data. The researchers propose that the inherent notion of a trust-region constraint is a common feature shared by Taylor expansions and trust-region policy search, and that Taylor expansions also satisfy the requirements of off-policy evaluation. Both ingredients can be made concrete, as in the sketch below.
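The following is a minimal, self-contained Python sketch of those two ingredients side by side (toy discrete policies and rewards invented purely for illustration; this is not the paper's algorithm): an importance-sampling correction that re-weights data collected under a behaviour policy, and a KL-based trust-region test of the kind used in trust-region policy search.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy discrete policies over 4 actions: mu is the behaviour policy
    # that generated the data, pi is the target policy we care about.
    mu = np.array([0.25, 0.25, 0.25, 0.25])
    pi = np.array([0.40, 0.30, 0.20, 0.10])

    # Off-policy correction: importance-sampling (IS) weights pi/mu
    # re-weight samples drawn under mu so the estimate targets pi.
    actions = rng.choice(4, size=1000, p=mu)
    rewards = rng.normal(loc=actions.astype(float))  # toy reward signal
    is_weights = pi[actions] / mu[actions]
    off_policy_estimate = float(np.mean(is_weights * rewards))

    # Trust-region constraint: only accept the update if the new policy
    # stays within a KL ball around the old one, as in TRPO-style search.
    kl = float(np.sum(mu * np.log(mu / pi)))
    update_accepted = kl <= 0.05  # 0.05 is an arbitrary trust-region radius

    print(f"IS estimate under pi: {off_policy_estimate:.3f}")
    print(f"KL(mu || pi) = {kl:.4f}, within trust region: {update_accepted}")

When pi drifts far from mu, the IS weights blow up and the estimate's variance grows, which is precisely the failure mode a trust-region constraint guards against; this is the intuitive sense in which the two idea streams are connected.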

The paper illustrates how Taylor expansions construct approximations to the full importance sampling (IS) corrections that sit at the core of most established off-policy evaluation techniques. Prior work has focused on applying off-policy corrections directly to policy gradient estimators rather than to the surrogate objectives that generate those gradients. The researchers note that although standard policy optimization objectives involve IS weights, their link with IS is not made explicit; Taylor expansions make this implicit link explicit.
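To see the shape of that connection, here is a sketch in our own notation (not the paper's exact derivation). Writing J(\mu) for the behaviour policy's performance and A^{\mu} for its advantage function, the zeroth-order truncation is just J(\mu) itself, while the first-order truncation adds a single-ratio IS term and recovers the familiar surrogate objective used in trust-region methods:

    L_1(\pi) = J(\mu) + \mathbb{E}_{(s,a) \sim \mu}\left[ \frac{\pi(a \mid s)}{\mu(a \mid s)} \, A^{\mu}(s, a) \right]

Higher-order truncations attach additional probability ratios at additional time steps, so the expansion approaches the full product-of-ratios IS correction as the order grows.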

The researchers evaluated the benefits of applying Taylor expansions across a diverse set of scenarios. The experimental results indicate that the second-order correction performs marginally better than the first-order correction and Retrace, and significantly better than the zeroth-order variant. In general, unbiased (or slightly biased) off-policy corrections do not yet perform as well as heavily biased off-policy variants. Overall, the new formulation can bring significant gains to state-of-the-art deep RL agents.

The paper Taylor Expansion Policy Optimization is available on arXiv.

Author: Grace Duan | Editors: Michael Sarazen & Fangyu Cai




