Infinite Steps CartPole Problem With Variable Reward


Modify Step Method of CartPole OpenAI Gym Environment Using Inheritance

In the last blog post, we wrote our first reinforcement learning application: the CartPole problem. We used a Deep Q-Network to train the agent. As we saw in that post, a fixed reward of +1 was used for every stable state, and a reward of 0 was given when the CartPole lost its balance. We also saw at the end that, as the CartPole approached 200 steps, it tended to lose balance. We ended the post with a remark: the maximum number of steps (which we set to 200) and the fixed reward may have led to this behavior. Today, let's remove the step limit, modify the reward, and see how the CartPole behaves.

CartPole Problem Definition

The CartPole problem is considered solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials, assuming the fixed reward of 1.0 per step. Given that definition, it makes sense to keep a fixed reward of 1.0 for every balanced state and to limit the maximum number of steps to 200. It is nice to know that the problem was solved in the previous blog post.

The CartPole problem has the following conditions for episode termination:

  1. The pole angle is more than 12 degrees from vertical.
  2. The cart position is more than 2.4 units from the center (the cart reaches the edge of the display).

Variable Reward

Our goal here is to remove the number of steps limitation and give a variable reward to each state.

If x and θ represent the cart position and the pole angle (in degrees) respectively, we define the reward as:

reward = (1 - (x ** 2) / 11.52 - (θ ** 2) / 288)

Here, the cart position and pole angle terms are scaled so that each contributes at most 0.5, giving them equal weight and keeping the reward within the [0, 1] interval. The screenshot below shows a 2D view of the 3D reward surface.

We see in the graph that when the CartPole is perfectly balanced (i.e. x = 0 and θ = 0), the maximum reward of 1 is achieved. As the absolute values of x and θ increase, the reward decreases, reaching 0 when |x| = 2.4 and |θ| = 12.
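As a quick sanity check of these values, here is a small standalone snippet (the helper name variable_reward is just for illustration):

```python
def variable_reward(x, theta_deg):
    """Variable reward: 1 at perfect balance, 0 when both limits are reached."""
    return 1.0 - (x ** 2) / 11.52 - (theta_deg ** 2) / 288.0

print(variable_reward(0.0, 0.0))    # 1.0  -> perfectly balanced
print(variable_reward(2.4, 12.0))   # ~0.0 -> cart position and pole angle both at their limits
print(variable_reward(2.4, 0.0))    # ~0.5 -> only the cart position at its limit
```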

Let's inherit from the Gym CartPole environment class (CartPoleEnv) in our custom class, CustomCartPoleEnv, and override the step method so that it returns the variable reward instead of the fixed one. A minimal sketch of such a subclass is shown below.
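The sketch assumes the classic Gym step API, where step returns (observation, reward, done, info), and converts the pole angle from radians (as stored in the observation) to degrees so that it matches the 12-degree limit used in the reward. It illustrates the idea rather than reproducing the original post's exact code.

```python
import math

from gym.envs.classic_control.cartpole import CartPoleEnv


class CustomCartPoleEnv(CartPoleEnv):
    """CartPole with the variable reward defined above."""

    def step(self, action):
        # Let the parent class handle the physics and the termination check.
        observation, _, done, info = super().step(action)
        x, _, theta, _ = observation

        # The observation stores the pole angle in radians; the reward
        # normalization above expects degrees (|theta| <= 12).
        theta_deg = math.degrees(theta)

        # Variable reward: 1 when perfectly balanced, 0 when both the
        # cart position and the pole angle reach their extreme values.
        reward = 1.0 - (x ** 2) / 11.52 - (theta_deg ** 2) / 288.0
        if done:
            reward = 0.0
        return observation, reward, done, info
```

Because CustomCartPoleEnv is instantiated directly rather than through gym.make, no TimeLimit wrapper is applied, so episodes are no longer capped at 200 steps.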

Using this custom environment, the TF-Agents components are built and the Deep Q-Network is trained as in the previous post; the resulting CartPole is even more balanced and stable over a large number of steps.
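A minimal sketch of that wiring, following the standard TF-Agents DQN setup (the layer size and learning rate below are placeholder values, not necessarily those used in the original post):

```python
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import gym_wrapper, tf_py_environment
from tf_agents.networks import q_network
from tf_agents.utils import common

# Wrap the custom Gym environment so TF-Agents can consume it.
train_env = tf_py_environment.TFPyEnvironment(
    gym_wrapper.GymWrapper(CustomCartPoleEnv()))

# Q-network and DQN agent, as in the standard TF-Agents CartPole tutorial.
q_net = q_network.QNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=(100,))

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3),
    td_errors_loss_fn=common.element_wise_squared_loss)
agent.initialize()
```

The replay buffer, driver, and training loop are the same as in the standard TF-Agents CartPole tutorial and are omitted here.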

Demonstration

Let’s see the video of how our CartPole behaves after using the variable reward.

One episode lasts 35.4 seconds on average. Impressive, isn't it?

Possible Improvements

Here, the reward becomes zero only when both quantities (the pole angle and the cart position) reach their extreme values at the same time. We could instead employ a reward function that returns zero as soon as either extreme condition is reached; one possible form is sketched below. I expect such a reward function to do even better, so readers are encouraged to try it and comment on how the CartPole behaves.
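For example, a multiplicative reward drops to zero as soon as either the cart position or the pole angle reaches its limit (the function name here is just for illustration):

```python
def multiplicative_reward(x, theta_deg):
    """Zero as soon as |x| reaches 2.4 or |theta| reaches 12 degrees."""
    return (1.0 - (x / 2.4) ** 2) * (1.0 - (theta_deg / 12.0) ** 2)

print(multiplicative_reward(0.0, 0.0))   # 1.0 -> perfectly balanced
print(multiplicative_reward(2.4, 0.0))   # 0.0 -> one extreme alone zeroes the reward
```

Happy RLing!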

