Leading towards reinforcement learning
Value Iteration
Learn the values for all states, then act greedily with respect to those values. Value iteration learns the value of the states from the Bellman Update directly, and the Bellman Update is guaranteed to converge to optimal values under some non-restrictive conditions.
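As a reference point, one common way to write the Bellman Update for the values is the following (the notation is a standard choice rather than anything specific to this article: T for transition probabilities, R for rewards, γ for the discount factor):

V_{k+1}(s) = \max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V_k(s') \right]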
Learning a policy may be more direct than learning exact values. Value estimates can take an effectively unbounded number of iterations to converge to the numerical precision of a 64-bit float: think of a running average that starts from an estimate of 0 and folds in a constant at every iteration; it keeps adding smaller and smaller nonzero corrections forever.
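To make the loop concrete, here is a minimal sketch of tabular value iteration, assuming a small finite MDP handed over as numpy arrays T and R (both shaped states × actions × next-states) and using a stopping tolerance as a stand-in for "close enough to numerical precision"; the array names and the helper are mine, not from the article.

import numpy as np

def value_iteration(T, R, gamma=0.9, tol=1e-8, max_iters=10_000):
    """Tabular value iteration on a finite MDP.

    T: transition probabilities, shape (S, A, S'); R: rewards, same shape.
    Returns the value estimate V, shape (S,).
    """
    V = np.zeros(T.shape[0])
    for _ in range(max_iters):
        # Bellman Update: expected return of every (state, action), then max over actions.
        Q = np.einsum("sap,sap->sa", T, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # updates have become vanishingly small
            return V_new
        V = V_new
    return V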
Policy Iteration
Learn a policy in tandem with the values. Policy learning incrementally looks at the current values and extracts a policy. Because the action space is finite, the hope is that it can converge faster than Value Iteration: conceptually, the last change to the actions will happen well before the small rolling-average value updates finish. There are two steps to Policy Iteration.
The first is called Policy Extraction, which is how you go from values to a policy: at each state, take the action that maximizes the expected value.
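In symbols, with the same notation as above and V as the current value estimate, policy extraction is a one-step lookahead:

\pi_V(s) = \arg\max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V(s') \right]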
The second step is Policy Evaluation. Policy evaluation takes a policy and runs value iteration conditioned on that policy. The resulting estimates are tied to that policy, but the iterative updates only need to run for far fewer steps to extract the relevant action information.
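The corresponding update looks just like the Bellman Update, except the max over actions is replaced by the single action the fixed policy π chooses:

V^{\pi}_{k+1}(s) = \sum_{s'} T(s, \pi(s), s') \left[ R(s, \pi(s), s') + \gamma V^{\pi}_k(s') \right]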
Like value iteration, policy iteration is guaranteed to converge for most reasonable MDPs because of the underlying Bellman Update.
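Putting evaluation and extraction together, here is a minimal sketch of policy iteration under the same assumptions as the value iteration snippet above (numpy arrays T and R, discount gamma; the names are mine). The outer loop stops as soon as extraction leaves the policy unchanged, which is the "last change to the actions" happening early.

import numpy as np

def policy_iteration(T, R, gamma=0.9, eval_iters=50):
    """Tabular policy iteration on a finite MDP.

    T: transition probabilities, shape (S, A, S'); R: rewards, same shape.
    Returns a greedy policy (shape (S,)) and its value estimate.
    """
    num_states = T.shape[0]
    policy = np.zeros(num_states, dtype=int)  # start from an arbitrary policy
    V = np.zeros(num_states)
    while True:
        # Policy evaluation: value iteration conditioned on the current policy.
        for _ in range(eval_iters):
            T_pi = T[np.arange(num_states), policy]  # (S, S') under the chosen actions
            R_pi = R[np.arange(num_states), policy]
            V = np.sum(T_pi * (R_pi + gamma * V[None, :]), axis=1)
        # Policy extraction: act greedily against the evaluated values.
        Q = np.einsum("sap,sap->sa", T, R + gamma * V[None, None, :])
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):  # the actions stopped changing
            return policy, V
        policy = new_policy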
Q-value Iteration
The problem with knowing optimal values is that it can be hard to distill a policy from them. The argmax operator is distinctly nonlinear and difficult to optimize over, so Q-value Iteration takes a step towards direct policy extraction: the optimal policy at each state simply takes the action with the maximum q-value in that state.
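Written out with the same notation as before, the q-value version of the Bellman Update and the policy it implies are:

Q_{k+1}(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma \max_{a'} Q_k(s', a') \right], \qquad \pi^*(s) = \arg\max_{a} Q^*(s, a)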
The reason most instruction starts with Value Iteration is that it slots into the Bellman updates a little more naturally. Q-value Iteration requires substituting two of the key MDP value relations into one another. After doing so, it is one step removed from Q-learning, which we will get to know.
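For completeness, here is a minimal sketch of q-value iteration under the same assumptions as the earlier snippets (numpy arrays T and R; the names are mine). The inner line is exactly the substituted update above, and swapping the expectation over T for sampled transitions is the step that turns it into Q-learning.

import numpy as np

def q_value_iteration(T, R, gamma=0.9, tol=1e-8, max_iters=10_000):
    """Tabular q-value iteration on a finite MDP.

    T: transition probabilities, shape (S, A, S'); R: rewards, same shape.
    Returns Q, shape (S, A); the greedy policy is Q.argmax(axis=1).
    """
    num_states, num_actions, _ = T.shape
    Q = np.zeros((num_states, num_actions))
    for _ in range(max_iters):
        V_next = Q.max(axis=1)  # value of each next state under the greedy action
        Q_new = np.einsum("sap,sap->sa", T, R + gamma * V_next[None, None, :])
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new
    return Q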