Continuing from my last post on implementing forward-mode automatic differentiation (AD) using C# operator overloading, this is just a quick follow-up showing how easy reverse mode is to achieve, and why it's important.
Why Reverse Mode Automatic Differentiation?
As explained in the last post, the vector representation of forward-mode AD can compute the derivatives of all parameters simultaneously, but it does so at considerable space cost: each operation creates a vector holding the derivative with respect to each parameter. So N parameters with M operations would allocate O(N*M) space. It turns out this is unnecessary!
Reverse mode AD allocates only O(N+M) space to compute the derivatives of N parameters across M operations. In general, forward mode AD is best suited to differentiating functions of type:
<strong>R</strong> → <strong>R</strong><sup>N</sup>
That is, functions of 1 parameter that compute multiple outputs. Reverse mode AD is suited to the dual scenario:
<strong>R</strong><sup>N</sup> → <strong>R</strong>
That is, functions of many parameters that return a single real number. A lot of problems are better suited to reverse mode AD, and some modern machine learning frameworks now employ reverse mode AD internally (thousands of parameters, single output that's compared to a goal).
How does Reverse Mode Work?
The identities I described in the other article still apply since they're simply the chain rule, but reverse mode computes derivatives backwards. Forward-mode AD is easy to implement using dual numbers, where the evaluation order matches C#'s normal evaluation order: just compute a second number corresponding to the derivative alongside the normal computation. Since reverse mode runs backwards, we have to do the computational dual: build a (restricted) continuation!
You can see a rough sketch of both forward mode and reverse mode here. Forward mode AD using dual numbers will look something like this:
public readonly struct Fwd
{
    public readonly double Magnitude;
    public readonly double Derivative;

    public Fwd(double mag, double deriv)
    {
        this.Magnitude = mag;
        this.Derivative = deriv;
    }

    public Fwd Pow(int k) =>
        new Fwd(Math.Pow(Magnitude, k), k * Math.Pow(Magnitude, k - 1) * Derivative);

    public static Fwd operator +(Fwd lhs, Fwd rhs) =>
        new Fwd(lhs.Magnitude + rhs.Magnitude, lhs.Derivative + rhs.Derivative);

    // Product rule: the magnitudes are multiplied, not added.
    public static Fwd operator *(Fwd lhs, Fwd rhs) =>
        new Fwd(lhs.Magnitude * rhs.Magnitude,
                lhs.Derivative * rhs.Magnitude + rhs.Derivative * lhs.Magnitude);

    public static Func<double, Fwd> Differentiate(Func<Fwd, Fwd> f) =>
        x => f(new Fwd(x, 1));

    public static Func<double, double, Fwd> DifferentiateX0(Func<Fwd, Fwd, Fwd> f) =>
        (x0, x1) => f(new Fwd(x0, 1), new Fwd(x1, 0));

    public static Func<double, double, Fwd> DifferentiateX1(Func<Fwd, Fwd, Fwd> f) =>
        (x0, x1) => f(new Fwd(x0, 0), new Fwd(x1, 1));
}
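To see the dual-number mechanics in isolation, here's a minimal standalone sketch using value tuples in place of a struct. The helper names `Add`, `Mul`, and `F` are my own for illustration, not part of the post's API; the technique is the same: propagate a derivative component through each arithmetic step.

```csharp
using System;

// Forward-mode dual numbers as value tuples: (value, derivative).
(double V, double D) Mul((double V, double D) a, (double V, double D) b)
    => (a.V * b.V, a.D * b.V + b.D * a.V);   // product rule

(double V, double D) Add((double V, double D) a, (double V, double D) b)
    => (a.V + b.V, a.D + b.D);               // sum rule

// f(x) = x^3 + 2x; constants carry derivative 0, the input carries seed 1.
(double V, double D) F((double V, double D) x)
    => Add(Mul(Mul(x, x), x), Mul((2, 0), x));

var r = F((2, 1));
Console.WriteLine($"f(2) = {r.V}, f'(2) = {r.D}");  // f(2) = 12, f'(2) = 14
```

Seeding the input with derivative 1 is exactly what `Fwd.Differentiate` does above.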
Translating this into reverse mode entails replacing Fwd.Derivative with a continuation, like so:
public readonly struct Rev
{
    public readonly double Magnitude;
    readonly Action<double> Derivative;

    public Rev(double y, Action<double> dy)
    {
        this.Magnitude = y;
        this.Derivative = dy;
    }

    public Rev Pow(int e)
    {
        var x = Magnitude;
        var k = Derivative;
        return new Rev(Math.Pow(x, e), dx => k(e * Math.Pow(x, e - 1) * dx));
    }

    public static Rev operator +(Rev lhs, Rev rhs) =>
        new Rev(lhs.Magnitude + rhs.Magnitude,
                dx => { lhs.Derivative(dx); rhs.Derivative(dx); });

    public static Rev operator *(Rev lhs, Rev rhs) =>
        new Rev(lhs.Magnitude * rhs.Magnitude,
                dx => { lhs.Derivative(dx * rhs.Magnitude); rhs.Derivative(dx * lhs.Magnitude); });

    public static Func<double, (double, double)> Differentiate(Func<Rev, Rev> f) =>
        x =>
        {
            // Accumulate with += so a parameter used more than once
            // sums the contributions from each use.
            double dx = 0;
            var y = f(new Rev(x, dy => dx += dy));
            y.Derivative(1);
            return (y.Magnitude, dx);
        };

    public static Func<double, double, (double, double, double)> Differentiate(Func<Rev, Rev, Rev> f) =>
        (x0, x1) =>
        {
            double dx0 = 0, dx1 = 0;
            var y = f(new Rev(x0, dy => dx0 += dy), new Rev(x1, dy => dx1 += dy));
            y.Derivative(1);
            return (y.Magnitude, dx0, dx1);
        };

    public static Func<double, double, double, (double, double, double, double)> Differentiate(Func<Rev, Rev, Rev, Rev> f) =>
        (x0, x1, x2) =>
        {
            double dx0 = 0, dx1 = 0, dx2 = 0;
            var y = f(new Rev(x0, dy => dx0 += dy), new Rev(x1, dy => dx1 += dy), new Rev(x2, dy => dx2 += dy));
            y.Derivative(1);
            return (y.Magnitude, dx0, dx1, dx2);
        };
}
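The continuation plumbing is easier to see in a stripped-down standalone sketch. As before, the tuple shape and the `Add`/`Mul` helpers are my own for illustration: each value carries a continuation that propagates an incoming adjoint back to the inputs, and each input accumulates with `+=` so a parameter used more than once sums the contributions from each use.

```csharp
using System;

// Reverse-mode values as (value, continuation) pairs: the continuation
// takes the adjoint of this value and pushes it back to the inputs.
(double V, Action<double> K) Mul((double V, Action<double> K) a, (double V, Action<double> K) b)
    => (a.V * b.V, dy => { a.K(dy * b.V); b.K(dy * a.V); });

(double V, Action<double> K) Add((double V, Action<double> K) a, (double V, Action<double> K) b)
    => (a.V + b.V, dy => { a.K(dy); b.K(dy); });

// f(x0, x1) = x0 * x1 + x0; the gradient is (x1 + 1, x0).
double dx0 = 0, dx1 = 0;
var x0 = (3.0, (Action<double>)(dy => dx0 += dy));
var x1 = (4.0, (Action<double>)(dy => dx1 += dy));
var y = Add(Mul(x0, x1), x0);
y.K(1);  // seed the output adjoint with 1 and run the backward pass
Console.WriteLine($"f = {y.V}, df/dx0 = {dx0}, df/dx1 = {dx1}");  // 15, 5, 3
```

Note that the forward pass here only builds closures; no derivative arithmetic happens until `y.K(1)` runs the chain backwards.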
As I mentioned in my last post, my goal here isn't the most efficient implementation for reverse mode AD, but to distill its essence to make it direct and understandable. This representation builds a whole new continuation on every invocation of the function being differentiated. More efficient representations would only compute this continuation once for any number of invocations, and there are plenty of other optimizations that can be applied to both forward and reverse mode representations.
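One common more efficient representation (not the post's implementation, just a sketch of the idea) is a tape, also known as a Wengert list: the forward pass records each operation's inputs and local partial derivatives into flat lists, and the backward pass walks the tape once in reverse, so no nested closures are allocated. The names below are my own for illustration.

```csharp
using System;
using System.Collections.Generic;

// A minimal tape: values[n] holds the nth intermediate result, and
// parents[n] records which entries produced it plus the local partials.
var values = new List<double>();
var parents = new List<(int i, double di, int j, double dj)>();

int Var(double v) { values.Add(v); parents.Add((-1, 0, -1, 0)); return values.Count - 1; }
int Mul(int a, int b) { values.Add(values[a] * values[b]); parents.Add((a, values[b], b, values[a])); return values.Count - 1; }
int Add(int a, int b) { values.Add(values[a] + values[b]); parents.Add((a, 1, b, 1)); return values.Count - 1; }

// Walk the tape in reverse, accumulating adjoints into each parent.
double[] Backward(int y)
{
    var adj = new double[values.Count];
    adj[y] = 1;
    for (int n = y; n >= 0; n--)
    {
        var (i, di, j, dj) = parents[n];
        if (i >= 0) adj[i] += adj[n] * di;
        if (j >= 0) adj[j] += adj[n] * dj;
    }
    return adj;
}

// f(x0, x1) = x0 * x1 + x0 at (3, 4); gradient is (x1 + 1, x0) = (5, 3).
int x0 = Var(3), x1 = Var(4);
int y = Add(Mul(x0, x1), x0);
var grad = Backward(y);
Console.WriteLine($"f = {values[y]}, grad = ({grad[x0]}, {grad[x1]})");
```

The tape trades the elegance of continuations for a single reusable backward loop, which is roughly the shape most production reverse-mode AD systems take.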