
Easy Reverse Mode Automatic Differentiation in C#

source link: https://higherlogics.blogspot.com/2020/05/easy-reverse-mode-automatic.html

Continuing from my last post on implementing forward-mode automatic differentiation (AD) using C# operator overloading, this is just a quick follow-up showing how easy reverse mode is to achieve, and why it's important.

Why Reverse Mode Automatic Differentiation?

As explained in the last post, the vector representation of forward-mode AD can compute the derivatives of all parameters simultaneously, but it does so at considerable space cost: each operation creates a vector carrying the derivative with respect to each parameter. So N parameters across M operations would allocate O(N*M) space; for example, 1,000 parameters across 1,000,000 operations means on the order of 10^9 entries. It turns out this is unnecessary!

Reverse mode AD allocates only O(N+M) space to compute the derivatives of N parameters across M operations. In general, forward mode AD is best suited to differentiating functions of type:

R → R^N

That is, functions of 1 parameter that compute multiple outputs. Reverse mode AD is suited to the dual scenario:

R^N → R

That is, functions of many parameters that return a single real number. A lot of problems are better suited to reverse mode AD, and some modern machine learning frameworks now employ reverse mode AD internally (thousands of parameters, single output that's compared to a goal).

How does Reverse Mode Work?

The identities I described in the other article still apply since they're simply the chain rule, but reverse mode computes derivatives backwards. Forward-mode AD is easy to implement using dual numbers because the evaluation order matches C#'s normal evaluation order: just compute a second number corresponding to the derivative alongside the normal computation. Since reverse mode runs backwards, we have to do the computational dual: build a (restricted) continuation!

You can see a rough sketch of both forward mode and reverse mode here. Forward mode AD using dual numbers will look something like this:

public readonly struct Fwd
{
    public readonly double Magnitude;
    public readonly double Derivative;

    public Fwd(double mag, double deriv)
    {
        this.Magnitude = mag;
        this.Derivative = deriv;
    }

    public Fwd Pow(int k) =>
        new Fwd(Math.Pow(Magnitude, k), k * Math.Pow(Magnitude, k - 1) * Derivative);

    public static Fwd operator +(Fwd lhs, Fwd rhs) =>
        new Fwd(lhs.Magnitude + rhs.Magnitude, lhs.Derivative + rhs.Derivative);

    public static Fwd operator *(Fwd lhs, Fwd rhs) =>
        new Fwd(lhs.Magnitude * rhs.Magnitude, lhs.Derivative * rhs.Magnitude + rhs.Derivative * lhs.Magnitude);

    public static Func<double, Fwd> Differentiate(Func<Fwd, Fwd> f) =>
        x => f(new Fwd(x, 1));

    public static Func<double, double, Fwd> DifferentiateX0(Func<Fwd, Fwd, Fwd> f) =>
        (x0, x1) => f(new Fwd(x0, 1), new Fwd(x1, 0));

    public static Func<double, double, Fwd> DifferentiateX1(Func<Fwd, Fwd, Fwd> f) =>
        (x0, x1) => f(new Fwd(x0, 0), new Fwd(x1, 1));
}
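As a quick sanity check (my own example, not from the original post), here is the Fwd type differentiating f(x) = x² + x, whose derivative is f'(x) = 2x + 1:

```csharp
// Example usage, assuming the Fwd struct above is in scope.
// f(x) = x^2 + x, so f'(x) = 2x + 1.
var df = Fwd.Differentiate(x => x.Pow(2) + x);
var result = df(3);
// result.Magnitude == 12 (f(3)  = 9 + 3)
// result.Derivative == 7 (f'(3) = 2*3 + 1)
```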

Translating this into reverse mode entails replacing Fwd.Derivative with a continuation like so:

public readonly struct Rev
{
    public readonly double Magnitude;
    readonly Action<double> Derivative;

    public Rev(double y, Action<double> dy)
    {
        this.Magnitude = y;
        this.Derivative = dy;
    }

    public Rev Pow(int e)
    {
        var x = Magnitude;
        var k = Derivative;
        return new Rev(Math.Pow(Magnitude, e), dx => k(e * Math.Pow(x, e - 1) * dx));
    }

    public static Rev operator +(Rev lhs, Rev rhs) =>
        new Rev(lhs.Magnitude + rhs.Magnitude, dx =>
        {
            lhs.Derivative(dx);
            rhs.Derivative(dx);
        });

    public static Rev operator *(Rev lhs, Rev rhs) =>
        new Rev(lhs.Magnitude * rhs.Magnitude,
                dx =>
                {
                    lhs.Derivative(dx * rhs.Magnitude);
                    rhs.Derivative(dx * lhs.Magnitude);
                });

    public static Func<double, (double, double)> Differentiate(Func<Rev, Rev> f) =>
        x =>
        {
            // Accumulate with +=, starting from 0, so a parameter used
            // more than once sums all of its derivative contributions.
            double dx = 0;
            var y = f(new Rev(x, dy => dx += dy));
            y.Derivative(1);
            return (y.Magnitude, dx);
        };

    public static Func<double, double, (double, double, double)> Differentiate(Func<Rev, Rev, Rev> f) =>
        (x0, x1) =>
        {
            double dx0 = 0, dx1 = 0;
            var y = f(new Rev(x0, dy => dx0 += dy), new Rev(x1, dy => dx1 += dy));
            y.Derivative(1);
            return (y.Magnitude, dx0, dx1);
        };

    public static Func<double, double, double, (double, double, double, double)> Differentiate(Func<Rev, Rev, Rev, Rev> f) =>
        (x0, x1, x2) =>
        {
            double dx0 = 0, dx1 = 0, dx2 = 0;
            var y = f(new Rev(x0, dy => dx0 += dy),
                      new Rev(x1, dy => dx1 += dy),
                      new Rev(x2, dy => dx2 += dy));
            y.Derivative(1);
            return (y.Magnitude, dx0, dx1, dx2);
        };
}
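A short usage sketch (my own example): for f(x0, x1) = x0² · x1, the partial derivatives are ∂f/∂x0 = 2·x0·x1 and ∂f/∂x1 = x0². Running the backward pass recovers both in one call:

```csharp
// Example usage, assuming the Rev struct above is in scope.
// f(x0, x1) = x0^2 * x1, so df/dx0 = 2*x0*x1 and df/dx1 = x0^2.
var grad = Rev.Differentiate((x0, x1) => x0.Pow(2) * x1);
var (y, dx0, dx1) = grad(3, 4);
// y   == 36 (3^2 * 4)
// dx0 == 24 (2 * 3 * 4)
// dx1 == 9  (3^2)
```

Note that a single invocation of the differentiated function yields the value and every partial derivative at once, which is exactly the R^N → R advantage described above.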

As I mentioned in my last post, my goal here isn't the most efficient implementation of reverse mode AD, but to distill its essence into something direct and understandable. This representation builds a whole new continuation on every invocation of the function being differentiated. More efficient representations would compute this continuation only once for any number of invocations, and there are plenty of other optimizations that can be applied to both forward and reverse mode representations.
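One common alternative to rebuilding closures on every call is a tape (a Wengert list): each operation appends an entry recording its input indices and local partial derivatives, and a single backward sweep accumulates adjoints. The following is my own minimal sketch of that idea, not the post's code; the class and method names are hypothetical:

```csharp
using System.Collections.Generic;

// Minimal tape-based reverse mode sketch (assumed design).
public sealed class Tape
{
    // Each entry: up to two input indices with their local partials;
    // -1 marks an unused slot (e.g. for input nodes).
    readonly List<(int i0, double d0, int i1, double d1)> entries =
        new List<(int, double, int, double)>();
    public readonly List<double> Values = new List<double>();

    int Push(double value, (int, double, int, double) entry)
    {
        Values.Add(value);
        entries.Add(entry);
        return Values.Count - 1;
    }

    public int Input(double x) => Push(x, (-1, 0, -1, 0));

    public int Add(int a, int b) =>
        Push(Values[a] + Values[b], (a, 1, b, 1));

    public int Mul(int a, int b) =>
        Push(Values[a] * Values[b], (a, Values[b], b, Values[a]));

    // Backward sweep: seed the output adjoint with 1, then push each
    // node's adjoint down to its inputs, scaled by the local partials.
    public double[] Gradient(int output)
    {
        var adj = new double[Values.Count];
        adj[output] = 1;
        for (int k = Values.Count - 1; k >= 0; k--)
        {
            var (i0, d0, i1, d1) = entries[k];
            if (i0 >= 0) adj[i0] += d0 * adj[k];
            if (i1 >= 0) adj[i1] += d1 * adj[k];
        }
        return adj;
    }
}
```

Once recorded, the same tape can be swept backwards repeatedly without reconstructing anything. For f(x0, x1) = x0·x1 + x0:

```csharp
var t = new Tape();
int x0 = t.Input(3), x1 = t.Input(4);
int y = t.Add(t.Mul(x0, x1), x0);
var g = t.Gradient(y);
// g[x0] == 5 (x1 + 1), g[x1] == 3 (x0)
```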

