March 2018
ICFP 2018 (distinguished paper)
Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however, are complex and stateful, hindering deep understanding, improvement, and parallel execution. This paper develops a simple, generalized AD algorithm calculated from a simple, natural specification. The general algorithm is then specialized by varying the representation of derivatives. In particular, applying well-known constructions to a naive representation yields two RAD algorithms that are far simpler than previously known. In contrast to commonly used RAD implementations, the algorithms defined here involve no graphs, tapes, variables, partial derivatives, or mutation. They are inherently parallel-friendly, correct by construction, and usable directly from an existing programming language with no need for new data types or programming style, thanks to use of an AD-agnostic compiler plugin.
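As a rough illustration of the abstract's claim (a sketch in the spirit of the paper, not its exact definitions), a category of differentiable functions can be written in a few lines of Haskell, with plain functions standing in for linear maps as the derivative representation; the chain rule then appears as categorical composition, with no graphs, tapes, or mutation:

-- A minimal sketch: a "differentiable function" returns a value together
-- with its derivative at the given input. Plain functions stand in for
-- linear maps here; the paper generalizes this representation.
import Prelude hiding (id, (.))
import Control.Category

newtype D a b = D (a -> (b, a -> b))

instance Category D where
  id = D (\a -> (a, id))
  D g . D f = D (\a ->
    let (b, f') = f a        -- value and derivative of f at a
        (c, g') = g b        -- value and derivative of g at f a
    in  (c, g' . f'))        -- chain rule: derivatives compose

-- Example (hypothetical, for illustration): squaring, whose derivative
-- at a is the linear map da |-> 2*a*da.
sqr :: Num a => D a a
sqr = D (\a -> (a * a, \da -> 2 * a * da))

Composing sqr . sqr then yields the fourth-power function together with its derivative, purely by composition.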
The investigation of reverse-mode AD and its specialization to scalar-valued functions (as in backpropagation) were inspired by a conversation with Wang Ruikang.
ICFP version:
@inproceedings{Elliott-2018-ad-icfp,
  author    = {Conal Elliott},
  title     = {The simple essence of automatic differentiation},
  booktitle = {Proceedings of the ACM on Programming Languages (ICFP)},
  year      = {2018},
  url       = {http://conal.net/papers/essence-of-ad/},
}
Extended version (with proofs) on arXiv:
@article{Elliott-2018-ad-extended,
  author  = {Conal Elliott},
  title   = {The simple essence of automatic differentiation (Extended version)},
  journal = {CoRR},
  month   = mar,
  year    = {2018},
  volume  = {abs/1804.00746},
  url     = {https://arxiv.org/abs/1804.00746},
}
Fixed after version of October 2, 2018: onDot should result in b → a, not b ⊸ a, i.e., a function rather than a linear map. (The resulting function is, however, linear.)

Fixed after version of September 6, 2020: