Building blocks for invertible neural networks in the Julia programming language.
- Memory-efficient building blocks for invertible neural networks
- Hand-derived gradients, Jacobians $J$, and $\log |J|$
- Flux integration
- Support for Zygote and ChainRules
- GPU support
- Includes various examples of invertible neural networks, normalizing flows, variational inference, and uncertainty quantification
InvertibleNetworks is registered and can be added like any standard Julia package with the command:

```julia
] add InvertibleNetworks
```
Due to its memory scaling, InvertibleNetworks.jl has been particularly successful at Bayesian posterior sampling with simulation-based inference. To get started with this application, refer to a simple example (conditional sampling for MNIST inpainting); a minimal sketch of the workflow is also shown below. Feel free to modify this script for your application, and please reach out to us for help.
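The snippet below sketches the core pattern of that example: train a conditional normalizing flow by maximum likelihood, then sample from the learned posterior by running the network in reverse. It is a minimal, hedged sketch only; the `NetworkConditionalGlow` constructor arguments and the exact `forward`/`backward`/`inverse` signatures are assumed here, random arrays stand in for your paired training data, and the optimizer setup just mirrors the package's examples, so treat the linked MNIST script as the authoritative reference.

```julia
using InvertibleNetworks, Flux

# Stand-in paired data: unknowns X and observations Y (nx x ny x channels x batchsize)
nx, ny, n_in, n_cond, batchsize = 16, 16, 1, 1, 8
X = randn(Float32, nx, ny, n_in, batchsize)
Y = randn(Float32, nx, ny, n_cond, batchsize)

# Conditional Glow network (assumed arguments: input channels, condition channels,
# hidden channels, number of multiscale levels L, flow steps per level K)
G = NetworkConditionalGlow(n_in, n_cond, 32, 2, 4)
opt = Flux.ADAM(1f-3)

# One maximum-likelihood training step
ZX, ZC, logdet = G.forward(X, Y)     # latents for X and the condition, plus log-determinant
G.backward(ZX / batchsize, ZX, ZC)   # gradient of 0.5||ZX||^2/batchsize, backpropagated in place
for p in get_params(G)
    Flux.update!(opt, p.data, p.grad)
end
clear_grad!(G)

# Posterior sampling for the same condition: draw latents and run the network in reverse
X_post = G.inverse(randn(Float32, size(ZX)), ZC)
```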
The package provides the following building blocks and utilities:

- 1x1 Convolutions using Householder transformations (example)
- Residual block (example)
- Invertible coupling layer from Dinh et al. (2017) (example)
- Invertible hyperbolic layer from Lensink et al. (2019) (example)
- Invertible coupling layer from Putzky and Welling (2019) (example)
- Invertible recursive coupling layer HINT from Kruse et al. (2020) (example)
- Activation normalization (Kingma and Dhariwal, 2018) (example)
- Various activation functions (Sigmoid, ReLU, leaky ReLU, GaLU)
- Objective and misfit functions (mean squared error, log-likelihood)
- Dimensionality manipulation: squeeze/unsqueeze (column, patch, checkerboard), split/cat
- Squeeze/unsqueeze using the wavelet transform
- Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)
- Generative models with maximum likelihood via the change of variable formula (example)
- Glow: Generative flow with invertible 1x1 convolutions (Kingma and Dhariwal, 2018) (generic example, source); see the training sketch after this list
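As a concrete illustration of the last two items, the snippet below sketches maximum-likelihood training of a small Glow network via the change-of-variable formula, following the pattern used in the package's examples. The toy data generator `sample_data`, the network size, and the learning rate are illustrative stand-ins, not part of the library.

```julia
using InvertibleNetworks, Flux

# Toy data generator (stand-in): 2D samples reshaped to nx x ny x n_in x batchsize
sample_data(batchsize) = randn(Float32, 1, 1, 2, batchsize)

n_in, n_hidden = 2, 16
G = NetworkGlow(n_in, n_hidden, 2, 3)   # 2 multiscale levels, 3 flow steps per level
opt = Flux.ADAM(1f-3)

batchsize = 100
for iter in 1:500
    X = sample_data(batchsize)
    Z, logdet = G.forward(X)
    # Negative log-likelihood per batch: 0.5*norm(Z)^2/batchsize - logdet.
    # Its gradient with respect to Z is Z/batchsize; the backward pass propagates it
    # through the network while recomputing intermediate states instead of storing them.
    G.backward(Z / batchsize, Z)
    for p in get_params(G)
        Flux.update!(opt, p.data, p.grad)
    end
    clear_grad!(G)
end
```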
GPU support is provided via Flux/CuArrays. To use the GPU, move the input and the network layer to the GPU via `|> gpu`:
```julia
using InvertibleNetworks, Flux

# Input dimensions
nx = 64
ny = 64
k = 10
batchsize = 4

# Input image: nx x ny x k x batchsize
X = randn(Float32, nx, ny, k, batchsize) |> gpu

# Activation normalization
AN = ActNorm(k; logdet=true) |> gpu

# Test invertibility
Y_, logdet = AN.forward(X)
X_ = AN.inverse(Y_)
```
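The same layer also exposes the memory-efficient backward pass that underpins the package: it takes the gradient with respect to the output together with the output itself and recomputes the input instead of storing it. The lines below are a hedged continuation of the snippet above, assuming the standard layer interface `backward(ΔY, Y) -> (ΔX, X)`; `ΔY` is just a random placeholder gradient.

```julia
# Placeholder gradient with the same shape as the output Y_
ΔY = randn(Float32, nx, ny, k, batchsize) |> gpu

# Memory-efficient backpropagation: input-side gradients are computed while the
# input is re-derived from Y_, so activations need not be stored in the forward pass
ΔX, X_ = AN.backward(ΔY, Y_)

# Parameter gradients accumulate inside the layer; each parameter carries .data and .grad
θ = get_params(AN)
```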
If you use InvertibleNetworks.jl in your research, we would be grateful if you cite us with the following BibTeX entry:
```bibtex
@article{Orozco2024,
  doi       = {10.21105/joss.06554},
  url       = {https://doi.org/10.21105/joss.06554},
  year      = {2024},
  publisher = {The Open Journal},
  volume    = {9},
  number    = {99},
  pages     = {6554},
  author    = {Rafael Orozco and Philipp Witte and Mathias Louboutin and Ali Siahkoohi and Gabrio Rizzuti and Bas Peters and Felix J. Herrmann},
  title     = {InvertibleNetworks.jl: A Julia package for scalable normalizing flows},
  journal   = {Journal of Open Source Software}
}
```
The following publications use InvertibleNetworks.jl:
- "Reliable amortized variational inference with physics-based latent distribution correction"
  - paper: https://arxiv.org/abs/2207.11640
  - presentation
  - code: ReliableAVI.jl
- "Learning by example: fast reliability-aware seismic imaging with normalizing flows"
  - paper
  - code: WavefieldRecoveryUQ.jl
- "Preconditioned training of normalizing flows for variational inference in inverse problems"
- "Generalized Minkowski sets for the regularization of inverse problems"
We welcome contributions and bug reports! Please see CONTRIBUTING.md for guidance.
InvertibleNetworks.jl development subscribes to the Julia Community Standards.
The package is developed and maintained by:

- Rafael Orozco, Georgia Institute of Technology [rorozco@gatech.edu]
- Philipp Witte, Georgia Institute of Technology (now Microsoft)
- Gabrio Rizzuti, Utrecht University
- Mathias Louboutin, Georgia Institute of Technology
- Ali Siahkoohi, Georgia Institute of Technology
This package uses functions from NNlib.jl, Flux.jl, and Wavelets.jl.