PDSampler.jl is a package designed to provide an efficient, flexible, and expandable framework for samplers based on Piecewise Deterministic Markov Processes and their applications. This includes the Bouncy Particle Sampler and the Zig-Zag Sampler.
Please refer to the documentation for information on how to use/expand this package.
The project is hosted by the Alan Turing Institute (ATI). If you encounter problems, please open an issue on GitHub.
If you have comments or wish to collaborate, please send an email to tlienart > cpg σ gmail > com.
If you find this toolbox useful, please star the repo. If you use it in your work, please cite this code and send us an email so that we can cite your work here.
If you have suggestions or would like new features, please don't hesitate to open an issue or send an email.
- Thibaut Lienart (main dev)
- Sebastian Vollmer
- Andrew Duncan
- Martin O'Reilly (ATI)
Installation and requirements
(This is explained in more detail in the documentation.)
- Julia ∈ [0.7.*, 1.0.*]; if you're on 0.6, check out the last legacy release.
In the Julia REPL:
] add PDSampler
using PDSampler
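Equivalently, if you prefer installing from a script rather than the REPL's package mode, the standard Pkg API can be used (a minimal sketch; it assumes the package is available in your configured registries):

```julia
# Install PDSampler via the Pkg standard library, then load it.
using Pkg
Pkg.add("PDSampler")  # equivalent to `] add PDSampler` in the REPL
using PDSampler
```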
Note that loading the package may take several seconds, as some of the dependencies (in particular ApproxFun.jl) are quite slow to load.
- Alexandre Bouchard-Côté, Sebastian J. Vollmer and Arnaud Doucet, The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method, arXiv preprint, 2015.
- Joris Bierkens, Alexandre Bouchard-Côté, Arnaud Doucet, Andrew B. Duncan, Paul Fearnhead, Gareth Roberts and Sebastian J. Vollmer, Piecewise Deterministic Markov Processes for Scalable Monte Carlo on Restricted Domains, arXiv preprint, 2017.
- Joris Bierkens, Paul Fearnhead and Gareth Roberts, The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data, arXiv preprint, 2016.
- Changye Wu, Christian Robert, Generalized Bouncy Particle Sampler, arXiv preprint, 2017.
Note: if your paper is not listed here and you feel it should be, please open an issue (likewise if there is a mistake, or if a preprint has since been published).