MadNLP is a nonlinear programming (NLP) solver implemented purely in Julia. MadNLP implements a filter line-search algorithm, as used in Ipopt. MadNLP seeks to streamline the development of modeling and algorithmic paradigms in order to exploit structure and to make efficient use of high-performance computers.
```julia
pkg> add MadNLP
```
The build process requires C and Fortran compilers. If they are not installed, do
```shell
shell> sudo apt install gcc gfortran # Linux
shell> brew cask install gcc gfortran # MacOS
```
MadNLP is interfaced with non-Julia sparse/dense linear solvers:
- HSL solvers (optional)
- Pardiso (optional)
- cuSOLVER (optional)
All the dependencies except for HSL solvers, Pardiso, and CUDA are automatically installed. To build MadNLP with the HSL linear solvers (Ma27, Ma57, Ma77, Ma86, Ma97), the source code needs to be obtained by the user from http://www.hsl.rl.ac.uk/ipopt/ under Coin-HSL Full (Stable). Then, the tarball `coinhsl-2015.06.23.tar.gz` should be placed at `deps/download`. To use Pardiso, the user needs to obtain the Pardiso shared libraries from https://www.pardiso-project.org/, place the shared library file in `deps/download`, and place the license file in the home directory. The absolute path of `deps/download` can be obtained by:
```julia
julia> import MadNLP; joinpath(dirname(pathof(MadNLP)),"..","deps","download")
```
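As a sketch using only Julia's standard file utilities, the tarball can also be placed from within Julia; the assumption here is that `coinhsl-2015.06.23.tar.gz` sits in the current working directory:

```julia
import MadNLP

# Resolve the deps/download directory of the installed MadNLP package.
download_dir = joinpath(dirname(pathof(MadNLP)), "..", "deps", "download")
mkpath(download_dir)

# Assumes the tarball was saved to the current working directory.
cp("coinhsl-2015.06.23.tar.gz",
   joinpath(download_dir, "coinhsl-2015.06.23.tar.gz"); force=true)
```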
To use cuSOLVER, a functional NVIDIA driver and the corresponding CUDA toolkit need to be installed by the user. After obtaining the files, run
```julia
pkg> build MadNLP
```
The build can be customized by setting the following environment variables.
```julia
julia> ENV["MADNLP_CC"] = "/usr/local/bin/gcc-9"    # default is "gcc"
julia> ENV["MADNLP_FC"] = "/usr/local/bin/gfortran" # default is "gfortran"
julia> ENV["MADNLP_BLAS"] = "openblas"              # default is "mkl" if available, "openblas" otherwise
julia> ENV["MADNLP_ENALBE_OPENMP"] = false          # default is "true"
julia> ENV["MADNLP_OPTIMIZATION_FLAG"] = "-O2"      # default is "-O3"
```
Alternatively, if the user has already installed the HSL/Pardiso libraries, one can simply specify the library paths as follows:
```julia
julia> ENV["MADNLP_HSL_LIBRARY_PATH"] = "/opt/lib/libcoinhsl.so"
julia> ENV["MADNLP_PARDISO_LIBRARY_PATH"] = "/opt/lib/libpardiso.so"
```
In this case, the source code is not compiled and the provided shared libraries are used directly.
MadNLP is interfaced with the following modeling packages:
```julia
using MadNLP, JuMP

model = Model(() -> MadNLP.Optimizer(linear_solver=MadNLP.Ma57, print_level=MadNLP.INFO, max_iter=100))
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
optimize!(model)
```
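Once `optimize!` returns, the result can be inspected with JuMP's standard query functions (a minimal sketch; it assumes the solve above terminated successfully):

```julia
using JuMP

# Termination status and the optimal point of the Rosenbrock problem above.
println(termination_status(model))
println("x = ", value(x), ", y = ", value(y))
```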
```julia
using MadNLP, Plasmo

graph = OptiGraph()
@optinode(graph, n1)
@optinode(graph, n2)
@variable(n1, 0 <= x <= 2)
@variable(n1, 0 <= y <= 3)
@constraint(n1, x + y <= 4)
@objective(n1, Min, x)
@variable(n2, x)
@NLnodeconstraint(n2, exp(x) >= 2)
@linkconstraint(graph, n1[:x] == n2[:x])

MadNLP.optimize!(graph; linear_solver=MadNLP.Ma97, print_level=MadNLP.DEBUG, max_iter=100)
```
```julia
using MadNLP, CUTEst

model = CUTEstModel("PRIMALC1")
madnlp(model, linear_solver=MadNLP.PardisoMKL, print_level=MadNLP.WARN, max_wall_time=3600)
```
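The `madnlp` call returns a statistics object; the sketch below assumes it exposes `status` and `objective` fields (as is common for NLPModels-style solvers), and uses CUTEst.jl's `finalize` to release the decoded problem:

```julia
# Capture the solver statistics (field names assumed as described above).
result = madnlp(model, linear_solver=MadNLP.PardisoMKL, print_level=MadNLP.WARN)
println(result.status)
println(result.objective)

# CUTEst models should be finalized when no longer needed.
finalize(model)
```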
Using special linear solvers
In order to use GPU solvers, `CUDA` should be imported into the running Julia session:
```julia
using MadNLP, CUDA

model = Model(() -> MadNLP.Optimizer(linear_solver=MadNLP.LapackGPU))
# ...
```
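Before solving on the GPU, one can check that the driver and toolkit are actually usable via CUDA.jl's standard functionality check:

```julia
using CUDA

# CUDA.functional() returns true when a working NVIDIA driver and CUDA toolkit are found.
if !CUDA.functional()
    @warn "CUDA is not functional; MadNLP.LapackGPU will not be usable."
end
```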
In order to use multi-threaded solvers (such as `MadNLP.Schwarz`), the Julia session should be started with multiple threads:
```shell
julia -t 16 # to use 16 threads
```
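The number of threads available to the session can be confirmed with Julia's standard threading API:

```julia
julia> Threads.nthreads() # reports 16 for the invocation above
```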
To see the list of MadNLP solver options, check the OPTIONS.md file.
Bug reports and support
Please report issues and feature requests via the GitHub issue tracker.