Fast in-place nonlinear optimization with fine-grained user control

Author: dzhang314 · 1 star · Started: October 2019 · Last updated: 2 years ago

# DZOptimization.jl

DZOptimization.jl is a Julia package for smooth nonlinear optimization that emphasizes performance, flexibility, and memory efficiency. In basic usage examples (see below), DZOptimization.jl has 6x less overhead and uses 10x less memory than Optim.jl.

Unlike traditional optimization libraries (e.g., Optim.jl and NLopt.jl) that only provide a black-box `optimize` function, DZOptimization.jl gives you full control of the optimization loop. This allows you to:

• interactively monitor the progress of an optimizer,
• interleave nonlinear optimization with other tasks,
• save/load data in the middle of optimization,
• run multiple optimizers in parallel, and
• terminate optimization whenever you want (as opposed to relying on a predetermined list of convergence criteria).

DZOptimization.jl is designed to minimize overhead. It uses static data structures and in-place algorithms to ensure that memory is never dynamically allocated (outside of optimizer constructors). This makes DZOptimization.jl suitable both for small-scale optimization problems, where the cost of repeatedly allocating small vectors is significant, and for large-scale optimization problems, where memory usage must not spike unexpectedly.
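As a rough illustration of why in-place updates matter (pure Julia, independent of DZOptimization.jl; the helper names below are hypothetical, not part of the package), compare an update that allocates a fresh vector on every call with one that mutates its argument:

``````julia
# Hypothetical helpers for illustration only (not part of DZOptimization.jl).
step_allocating(x, g, a) = x - a * g    # allocates a new result vector each call

function step_inplace!(x, g, a)         # reuses the storage of x; no allocation
    @inbounds for i in eachindex(x, g)
        x[i] -= a * g[i]
    end
    return x
end

x = [1.0, 2.0]
g = [0.5, -0.5]
step_inplace!(x, g, 0.1)                # x is updated in place
``````

In a tight optimization loop the allocating version produces garbage on every iteration, while the mutating version performs no heap allocation once compiled; this is the pattern DZOptimization.jl applies throughout its algorithms.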

## Usage Example

The following example illustrates the use of `DZOptimization.BFGSOptimizer` to minimize the Rosenbrock function, starting at a random initial point.

``````julia
using DZOptimization

rosenbrock_objective(x::Vector) =
    (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

function rosenbrock_gradient!(g::Vector, x::Vector)
    g[1] = -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2)
    g[2] = 200 * (x[2] - x[1]^2)
end

opt = BFGSOptimizer(rosenbrock_objective,
                    rosenbrock_gradient!,
                    rand(2), # starting point
                    1.0)     # initial step size

while !opt.has_converged[]
    println(opt.current_objective_value[], '\t', opt.current_point)
    step!(opt)
end
``````
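Hand-written gradients are a common source of silent bugs, so it can be worth checking the gradient against a central finite-difference approximation before handing it to an optimizer. The following self-contained sketch (pure Julia, no packages; `fd_gradient` is a hypothetical helper, not part of DZOptimization.jl) verifies the Rosenbrock gradient used above:

``````julia
rosenbrock(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

function rosenbrock_gradient!(g, x)
    g[1] = -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2)
    g[2] = 200 * (x[2] - x[1]^2)
    return g
end

# Central-difference approximation of the gradient of f at x.
function fd_gradient(f, x; h=1e-6)
    g = similar(x)
    for i in eachindex(x)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)
    end
    return g
end

x = [0.3, 0.7]
g = rosenbrock_gradient!(similar(x), x)
maximum(abs.(g .- fd_gradient(rosenbrock, x))) < 1e-4  # gradients agree
``````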

## Benchmarks

Compared to Optim.jl, the BFGS implementation in DZOptimization.jl is 6x faster and uses 10x less memory to minimize the Rosenbrock function.

``````
using BenchmarkTools                            | using Optim
@benchmark begin                                |
opt = BFGSOptimizer(rosenbrock_objective,   | @benchmark optimize(rosenbrock_objective,