March 2018

ICFP 2018 (distinguished paper)

Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however, are complex and stateful, hindering deep understanding, improvement, and parallel execution. This paper develops a simple, generalized AD algorithm calculated from a simple, natural specification. The general algorithm is then specialized by varying the representation of derivatives. In particular, applying well-known constructions to a naive representation yields two RAD algorithms that are far simpler than previously known. In contrast to commonly used RAD implementations, the algorithms defined here involve no graphs, tapes, variables, partial derivatives, or mutation. They are inherently parallel-friendly, correct by construction, and usable directly from an existing programming language with no need for new data types or programming style, thanks to use of an AD-agnostic compiler plugin.
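The core idea can be sketched in a few lines. The paper works in Haskell and abstracts over a category of linear maps; in this hypothetical Python rendering, plain functions stand in for linear maps, and a differentiable function maps a point to a pair of its value and its derivative (a sketch of the idea, not the paper's actual code):

```python
# Sketch of the paper's central construction: a differentiable function
# is represented as  a -> (b, derivative-as-a-function).  Composition is
# exactly the chain rule, with the derivative maps composed as well.

def compose(g, f):
    """Compose two differentiable functions (chain rule)."""
    def h(a):
        b, df = f(a)   # value and derivative of f at a
        c, dg = g(b)   # value and derivative of g at f(a)
        return c, lambda da: dg(df(da))
    return h

# Example primitives (value paired with derivative at the given point):
def square(x):
    return x * x, lambda dx: 2 * x * dx

def double(x):
    return 2 * x, lambda dx: 2 * dx

h = compose(double, square)    # h(x) = 2*x*x, so h'(x) = 4*x
value, deriv = h(3.0)          # value == 18.0, deriv(1.0) == 12.0
```

Reverse mode then arises, as the paper shows, by changing only the representation of these derivative maps (e.g., to their duals), with no graphs, tapes, or mutation.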

- ICFP version.
- Extended version (with proofs) on arXiv.org.
- Talks (four versions with video and slides).
- Related paper: *Compiling to categories*.

The investigation of reverse-mode AD and its specialization to scalar-valued functions (as in backpropagation) were inspired by a conversation with Wang Ruikang.

ICFP version:

```
@inproceedings{Elliott-2018-ad-icfp,
  author    = {Conal Elliott},
  title     = {The simple essence of automatic differentiation},
  booktitle = {Proceedings of the ACM on Programming Languages (ICFP)},
  year      = {2018},
  url       = {http://conal.net/papers/essence-of-ad/}
}
```

Extended version (with proofs) on arXiv:

```
@article{Elliott-2018-ad-extended,
  author  = {Conal Elliott},
  title   = {The simple essence of automatic differentiation (Extended version)},
  journal = {CoRR},
  month   = mar,
  year    = {2018},
  volume  = {abs/1804.00746},
  url     = {https://arxiv.org/abs/1804.00746}
}
```

Fixed after version of October 2, 2018:

- Section 13 (“Gradients and Duality”, bottom of page 70:20 in the ICFP version): The signature of `onDot` should result in `b → a`, not `b ⊸ a`, i.e., a function rather than a linear map. (The resulting function is, however, linear.)
- Section 13: “Figures 11 and 12 show the results of reverse-mode AD via *Dual (→+)*.” The resulting dual additive functions have been applied to a basis (here 1), unlike Figures 4 and 8.

Fixed after version of September 6, 2020:

- In Figures 7 and 10, the *Cartesian* instance should have *Cocartesian k* as its parent class, and the *Cocartesian* instance should have *Cartesian k* as its parent class. (Thanks to Philippe Veber!)