API
Modules
AutoDiffOperators
Types and constants
AutoDiffOperators.ADSelector
AutoDiffOperators.FwdRevADSelector
AutoDiffOperators.WrappedADSelector
Functions and macros
AutoDiffOperators.forward_adtype
AutoDiffOperators.gradient_func
AutoDiffOperators.jvp_func
AutoDiffOperators.mulfunc_operator
AutoDiffOperators.only_gradient
AutoDiffOperators.only_gradient!
AutoDiffOperators.only_gradient!!
AutoDiffOperators.reverse_adtype
AutoDiffOperators.similar_onehot
AutoDiffOperators.valgrad_func
AutoDiffOperators.valid_forward_adtype
AutoDiffOperators.valid_reverse_adtype
AutoDiffOperators.with_floatlike_contents
AutoDiffOperators.with_gradient
AutoDiffOperators.with_gradient!
AutoDiffOperators.with_gradient!!
AutoDiffOperators.with_jacobian
AutoDiffOperators.with_jvp
AutoDiffOperators.with_vjp_func
Documentation
AutoDiffOperators.AutoDiffOperators — Module

AutoDiffOperators

Provides Julia operators that act via automatic differentiation.
AutoDiffOperators.FwdRevADSelector — Type

AutoDiffOperators.FwdRevADSelector{Fwd<:ADSelector,Rev<:ADSelector} <: ADSelector

Represents an automatic differentiation backend that forwards forward-mode and reverse-mode AD to two separate selectors fwd::ADSelector and rev::ADSelector.
User code should not instantiate AutoDiffOperators.FwdRevADSelector directly, but use ADSelector(fwd, rev) or ADSelector(fwd = fwd, rev = rev) instead.
AutoDiffOperators.WrappedADSelector — Type

abstract type AutoDiffOperators.WrappedADSelector

Supertype for AD selectors that wrap other AD selectors.
AutoDiffOperators.ADSelector — Type

const ADSelector = Union{
    AbstractADType,
    WrappedADSelector
}

Instances specify an automatic differentiation backend.
Either a subtype of ADTypes.AbstractADType, or an AD-selector wrapper like AutoDiffOperators.FwdRevADSelector.
In addition to using instances of AbstractADType directly (e.g. ADTypes.AutoForwardDiff()), ADSelector (specifically AbstractADType) instances for AD backends can be constructed directly from the backend modules (using default backend parameters):
import ForwardDiff
ADTypes.AutoForwardDiff()
ADSelector(ForwardDiff)
convert(ADSelector, ForwardDiff)

all construct an identical AutoForwardDiff object.
Separate AD backends for forward- and reverse-mode AD can be specified via ADSelector(fwd_adtype, rev_adtype), e.g.
import ForwardDiff, Mooncake
ADSelector(ADTypes.AutoForwardDiff(), ADTypes.AutoMooncake())
ADSelector(ADSelector(ForwardDiff), ADSelector(Mooncake))
ADSelector(ForwardDiff, Mooncake)

Implementation
ADSelector instances can also be constructed from module names, though this should be avoided in end-user code:
ADSelector(:ForwardDiff)
ADSelector(Val(:ForwardDiff))
convert(ADSelector, :ForwardDiff)
convert(ADSelector, Val(:ForwardDiff))

End-users should use module objects instead of module names, to ensure that the respective AD backend package is part of their environment/dependencies.
AutoDiffOperators.forward_adtype — Function

forward_adtype(ad::ADSelector)::ADTypes.AbstractADType

Returns the forward-mode AD backend selector for ad.
Returns ad itself by default if ad supports forward-mode automatic differentiation, or an instance of ADTypes.NoAutoDiff if it does not.
May be specialized for some AD selector types, see FwdRevADSelector, for example.
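As an illustrative sketch (not from the package docs), assuming ForwardDiff and Mooncake are available, a combined selector forwards to its two parts:

import ForwardDiff, Mooncake
using AutoDiffOperators

ad = ADSelector(ForwardDiff, Mooncake)  # forward: ForwardDiff, reverse: Mooncake
forward_adtype(ad)  # the AutoForwardDiff backend of ad
reverse_adtype(ad)  # the AutoMooncake backend of ad (see reverse_adtype below)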
AutoDiffOperators.gradient_func — Function

gradient_func(f, ad::ADSelector)

Returns a function ∇f that calculates the gradient of f at a given point x, so that ∇f(x) is equivalent to only_gradient(f, x, ad).
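A minimal usage sketch, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

f = x -> sum(abs2, x)
∇f = gradient_func(f, ADSelector(ForwardDiff))
∇f([1.0, 2.0])  # ≈ [2.0, 4.0], same result as only_gradient with this selector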
AutoDiffOperators.jvp_func — Function

jvp_func(f, x::AbstractVector{<:Number}, ad::ADSelector)

Returns a function jvp with jvp(z) == J * z, where J is the Jacobian of f at x.
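A minimal usage sketch, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

f = x -> [x[1]^2, x[1] * x[2]]
x = [1.0, 2.0]
jvp = jvp_func(f, x, ADSelector(ForwardDiff))
jvp([1.0, 0.0])  # ≈ [2.0, 2.0], the first column of the Jacobian of f at x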
AutoDiffOperators.mulfunc_operator — Function

AutoDiffOperators.mulfunc_operator(
    ::Type{OP},
    ::Type{T}, sz::Dims{2}, ovp, vop,
    ::Val{sym}, ::Val{herm}, ::Val{posdef}
) where {OP, T<:Real, sym, herm, posdef}

Generates a linear operator object of type OP that supports multiplication with (adjoint) vectors, based on a multiplication function ovp and an adjoint multiplication function vop.
An operator op = mulfunc_operator(OP, T, sz, ovp, vop, Val(sym), Val(herm), Val(posdef)) must show the following behavior:
op isa OP
eltype(op) == T
size(op) == sz
op * x_r == ovp(x_r)
x_l' * op == vop(x_l)
issymmetric(op) == sym
ishermitian(op) == herm
isposdef(op) == posdef

where x_l and x_r are vectors of size sz[1] and sz[2], respectively.
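As an illustrative sketch of this contract (assuming LinearMaps is loaded so that OP = LinearMaps.LinearMap is supported), one can wrap an explicit matrix in multiplication functions:

using LinearAlgebra, LinearMaps
import AutoDiffOperators

A = [2.0 1.0; 1.0 3.0]  # a symmetric positive-definite test matrix
ovp = x_r -> A * x_r    # multiplication function
vop = x_l -> A' * x_l   # adjoint multiplication function
op = AutoDiffOperators.mulfunc_operator(
    LinearMaps.LinearMap, Float64, (2, 2), ovp, vop,
    Val(true), Val(true), Val(true)
)
op * [1.0, 0.0]  # == ovp([1.0, 0.0]) == [2.0, 1.0]
issymmetric(op)  # == true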
AutoDiffOperators.only_gradient — Function

only_gradient(f, x::AbstractVector{<:Number}, ad::ADSelector)

Returns the gradient ∇f(x) of f at x.
See also with_gradient(f, x, ad).
AutoDiffOperators.only_gradient! — Function

only_gradient!(f, δx, x::AbstractVector{<:Number}, ad::ADSelector)

Fills δx with the gradient ∇f(x) of f at x and returns it.
AutoDiffOperators.only_gradient!! — Function

only_gradient!!(f, δx, x::AbstractVector{<:Number}, ad::ADSelector)

Returns the gradient ∇f(x) of f at x.
δx may or may not be reused/overwritten and returned as ∇f(x).
AutoDiffOperators.reverse_adtype — Function

reverse_adtype(ad::ADSelector)::ADTypes.AbstractADType

Returns the reverse-mode AD backend selector for ad.
Returns ad itself by default if ad supports reverse-mode automatic differentiation, or an instance of ADTypes.NoAutoDiff if it does not.
May be specialized for some AD selector types, see FwdRevADSelector, for example.
AutoDiffOperators.similar_onehot — Function

AutoDiffOperators.similar_onehot(A::AbstractArray, ::Type{T}, n::Integer, i::Integer)

Returns an array similar to A, but with n elements of type T, all set to zero except the i-th element, which is set to one.
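For example (a sketch of the documented behavior):

import AutoDiffOperators
AutoDiffOperators.similar_onehot([1.0, 2.0], Float64, 4, 2)  # == [0.0, 1.0, 0.0, 0.0]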
AutoDiffOperators.valgrad_func — Function

valgrad_func(f, ad::ADSelector)

Returns a function f_∇f that calculates the value and gradient of f at a given point x, so that f_∇f(x) is equivalent to with_gradient(f, x, ad).
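A minimal usage sketch, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

f_∇f = valgrad_func(x -> sum(abs2, x), ADSelector(ForwardDiff))
y, δx = f_∇f([1.0, 2.0])  # y == 5.0, δx ≈ [2.0, 4.0]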
AutoDiffOperators.valid_forward_adtype — Function

valid_forward_adtype(ad::ADSelector)::ADTypes.AbstractADType

Similar to forward_adtype, but throws an exception if ad doesn't support forward-mode automatic differentiation, instead of returning a NoAutoDiff.
AutoDiffOperators.valid_reverse_adtype — Function

valid_reverse_adtype(ad::ADSelector)::ADTypes.AbstractADType

Similar to reverse_adtype, but throws an exception if ad doesn't support reverse-mode automatic differentiation, instead of returning a NoAutoDiff.
AutoDiffOperators.with_floatlike_contents — Function

AutoDiffOperators.with_floatlike_contents(A::AbstractArray)

If the elements of A are integer-like, convert them using float, otherwise return A unchanged.
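For example (a sketch of the documented behavior):

import AutoDiffOperators
AutoDiffOperators.with_floatlike_contents([1, 2, 3])   # == [1.0, 2.0, 3.0]
AutoDiffOperators.with_floatlike_contents([1.5, 2.5])  # already float-like, returned unchanged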
AutoDiffOperators.with_gradient — Function

with_gradient(f, x::AbstractVector{<:Number}, ad::ADSelector)

Returns a tuple (f(x), ∇f(x)) with the gradient ∇f(x) of f at x.
See also with_gradient!!(f, δx, x, ad) for the "maybe-in-place" variant of this function.
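A minimal usage sketch, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

y, δx = with_gradient(x -> sum(abs2, x), [1.0, 2.0], ADSelector(ForwardDiff))
# y == 5.0, δx ≈ [2.0, 4.0]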
AutoDiffOperators.with_gradient! — Function

with_gradient!(f, δx, x::AbstractVector{<:Number}, ad::ADSelector)

Fills δx with the gradient ∇f(x) of f at x and returns the tuple (f(x), δx).
AutoDiffOperators.with_gradient!! — Function

with_gradient!!(f, δx, x::AbstractVector{<:Number}, ad::ADSelector)

Returns a tuple (f(x), ∇f(x)) with the gradient ∇f(x) of f at x.
δx may or may not be reused/overwritten and returned as ∇f(x).
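A sketch of the "maybe-in-place" usage pattern, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

f = x -> sum(abs2, x)
x = [1.0, 2.0]
δx = similar(x)
y, ∇fx = with_gradient!!(f, δx, x, ADSelector(ForwardDiff))
# Always use the returned ∇fx; depending on the backend, δx may or may not
# have been overwritten with the gradient.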
AutoDiffOperators.with_jacobian — Function

with_jacobian(f, x::AbstractVector{<:Number}, OP, ad::ADSelector)

Returns a tuple (f(x), J) with a multiplicative Jacobian operator J of type OP.
Example:
using AutoDiffOperators, LinearMaps
y, J = with_jacobian(f, x, LinearMap, ad)
y == f(x)
_, J_explicit = with_jacobian(f, x, DenseMatrix, ad)
J * z_r ≈ J_explicit * z_r
z_l' * J ≈ z_l' * J_explicit

OP may be LinearMaps.LinearMap (resp. LinearMaps.FunctionMap) or Matrix. Other operator types can be supported by specializing mulfunc_operator for the operator type.
The default implementation of with_jacobian uses jvp_func and with_vjp_func to implement (adjoint) multiplication of J with (adjoint) vectors.
AutoDiffOperators.with_jvp — Function

with_jvp(f, x::AbstractVector{<:Number}, z::AbstractVector{<:Number}, ad::ADSelector)

Returns a tuple (f(x), J * z).
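A minimal usage sketch, assuming ForwardDiff is available:

import ForwardDiff
using AutoDiffOperators

f = x -> [x[1]^2, x[1] * x[2]]
x = [1.0, 2.0]
y, Jz = with_jvp(f, x, [1.0, 0.0], ADSelector(ForwardDiff))
# y == f(x); Jz ≈ [2.0, 2.0], the Jacobian of f at x applied to z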
AutoDiffOperators.with_vjp_func — Function

with_vjp_func(f, x::AbstractVector{<:Number}, ad::ADSelector)

Returns a tuple (f(x), vjp) with the function vjp(z) ≈ J' * z.
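A minimal usage sketch, assuming a reverse-mode backend such as Mooncake is available:

import Mooncake
using AutoDiffOperators

f = x -> [x[1]^2, x[1] * x[2]]
x = [1.0, 2.0]
y, vjp = with_vjp_func(f, x, ADSelector(Mooncake))
vjp([1.0, 0.0])  # ≈ [2.0, 0.0], the first row of the Jacobian of f at x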