This package provides a modeling framework based on NLPModels.jl for block structured nonlinear optimization problems (NLPs) in Julia. Specifically, one can use this package to model NLPs of the following form:

$$
\begin{aligned}
\min_{x_1, \dots, x_B} \quad & \sum_{i=1}^{B} f_i(x_i) \\
\text{s.t.} \quad & x_i \in X_i, \quad i = 1, \dots, B, \\
& \sum_{i=1}^{B} A_i x_i = b,
\end{aligned}
$$

where $B$ is the number of blocks, $f_i$ and $X_i$ are the objective function and feasible set of block $i$, $A_i$ is the block of the linking constraint matrix associated with block $i$, and $b$ is the right-hand-side vector of the linking constraints.

In this package, block structured NLPs of the above form are represented by an instance of `AbstractBlockNLPModel`.
This package is currently under active development and has not been registered yet. However, to access the current code, one can enter the following command in the Julia REPL:

```julia
] add "https://github.com/exanauts/BlockNLPModels.jl"
```
To confirm that the package was installed correctly, run the following in the package manager mode of the REPL:

```julia
] test BlockNLPModels
```
The test code generates and solves a small instance of a `BlockNLPModel` using the interior-point solver MadNLP.jl.
Suppose we want to model the following NLP that can be decomposed into three univariate blocks:

$$
\begin{aligned}
\min_{x_1, x_2, x_3} \quad & f_1(x_1) + f_2(x_2) + f_3(x_3) \\
\text{s.t.} \quad & x_i \in X_i, \quad i = 1, 2, 3, \\
& x_1 + x_2 + x_3 = b,
\end{aligned}
$$

where $f_i$ and $X_i$ are the objective function and feasible set of block $i$, and $x_1 + x_2 + x_3 = b$ is the single linking constraint coupling the three blocks.
We start by initializing an empty `BlockNLPModel`:

```julia
blocknlp = BlockNLPModel()
```
Next, we add the NLP blocks to `blocknlp`. For this step, the subproblems need to be made available as `AbstractNLPModel` objects. Assuming that `nlp_blocks` is a 3-element vector containing the three NLP blocks of our example above, we add these blocks to `blocknlp` using the `add_block!` function as follows:

```julia
for i in 1:3
    add_block!(blocknlp, nlp_blocks[i])
end
```
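The package does not prescribe how the blocks themselves are built; any `AbstractNLPModel` implementation works. As a minimal sketch of one way the `nlp_blocks` vector used above might be constructed, here we use ADNLPModels.jl with quadratic objectives and bounds chosen purely for illustration (the objectives, bounds, and constants are assumptions, not part of the example):

```julia
using ADNLPModels

# Hypothetical univariate blocks: f_i(x) = (x - c_i)^2 with 0 ≤ x ≤ 10.
# The objectives, bounds, and constants are illustrative assumptions.
c = [1.0, 2.0, 3.0]
nlp_blocks = [ADNLPModel(x -> (x[1] - c[i])^2, [0.0], [0.0], [10.0]) for i in 1:3]
```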
Once the blocks have been added, the next step is to add the linking constraints. For this, we make use of the `add_links!` method in this package, defined as `add_links!(block_nlp_model, n_constraints, links, rhs_constants)`, where

- `block_nlp_model` is an instance of `BlockNLPModel`
- `n_constraints` is the number of linking constraints being added to `block_nlp_model`
- `links` is a dictionary that specifies which blocks are being linked and with what coefficients
- `rhs_constants` is the right-hand-side vector for the linking constraints
For our example above, we add the linking constraint as follows:

```julia
add_links!(blocknlp, 1, Dict(1 => [1.0], 2 => [1.0], 3 => [1.0]), b)
```

Here each dictionary key is a block index, and the associated vector holds that block's coefficients in the linking constraint $x_1 + x_2 + x_3 = b$.
With this, we have finished modeling the above example as an `AbstractBlockNLPModel`.
In addition to a modeling framework, this package also provides callback functions that make it easy to implement several popular NLP decomposition algorithms for efficiently solving large instances of block structured NLPs. We highlight some of the important ones here:
- For the dual decomposition method, one can call `DualizedNLPBlockModel` on a subproblem to dualize its objective. This method is defined as `DualizedNLPBlockModel(nlp, λ, A)`, where
  - `nlp`: the subproblem to be dualized, as an instance of `AbstractNLPModel`
  - `λ`: the vector of dual variables corresponding to the linking constraints associated with the block `nlp`
  - `A`: the linking constraint matrix block associated with the block `nlp`

  This function will return the dualized block model as an instance of `AbstractNLPModel`.
  As an example, suppose that for the problem shown above we wish to dualize the second NLP block. Assuming the dual variables are stored in the variable `y`, dualization can be achieved as follows:

  ```julia
  A = get_linking_matrix_blocks(blocknlp)
  dualized_block = DualizedNLPBlockModel(blocknlp.blocks[2].problem_block, y[blocknlp.problem_size.con_counter+1:end], A[2])
  ```
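  The dualized block behaves like any other `AbstractNLPModel` and can be handed directly to an NLPModels-compatible solver. In a dual decomposition loop, each block would be re-dualized and re-solved at the current duals, followed by a dual (subgradient) update; the sketch below shows only the subproblem solve, assuming MadNLP.jl is available:

  ```julia
  using MadNLP

  # Solve the dualized subproblem for block 2 at the current duals y.
  sub_solution = madnlp(dualized_block)
  ```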
- To support the implementation of ADMM, we provide the callback function `AugmentedNLPBlockModel(nlp, y, ρ, A, b, x)`, where
  - `nlp`: the subproblem whose objective is to be augmented
  - `y`: the dual variables corresponding to the linking constraints
  - `ρ`: the penalty parameter
  - `A`: the matrix containing the linking constraint coefficients
  - `b`: the right-hand-side vector for the linking constraints
  - `x`: the current estimate of the primal variables

  The output of this function will be the augmented block model as an instance of `AbstractNLPModel`.
  Again, as an illustration, we augment the objective function of the second NLP block of the above example problem. We assume `x` and `y` are the current estimates of the primal and dual variables, respectively, and `ρ` is the penalty parameter.

  ```julia
  A = get_linking_matrix(blocknlp)
  b = get_rhs_vector(blocknlp)
  augmented_block = AugmentedNLPBlockModel(blocknlp.blocks[2], y[blocknlp.problem_size.con_counter+1:end], ρ, A, b, x)
  ```
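  Since the augmented block is again an ordinary `AbstractNLPModel`, it can be solved exactly like the dualized block above. After all blocks have been re-solved at the current iterate, a typical ADMM implementation updates the linking-constraint duals with the residual step $y \gets y + \rho\,(Ax - b)$; this update rule is textbook ADMM, not a function exported by this package (a minimal sketch, reusing the dual-variable slice from the call above):

  ```julia
  # Standard ADMM dual update on the linking-constraint duals, assuming x
  # now collects the latest block solutions in full-space variable order.
  link = blocknlp.problem_size.con_counter+1:length(y)
  y[link] .+= ρ .* (A * x .- b)
  ```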
- We also support the addition of a proximal term to an augmented objective. This can be achieved with the method `ProxAugmentedNLPBlockModel(nlp, y, ρ, A, b, x, P)`, where `P` is the sparse matrix of proximal-term weights and the remaining inputs are the same as for the `AugmentedNLPBlockModel` method.

  The implementation of this method follows the same steps as for the `AugmentedNLPBlockModel` method, the only difference being the additional sparse matrix `P`, which can be defined using the `SparseArrays.jl` package in Julia.
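  As an illustration, a proximal weight matrix for the second block might be built as a scaled sparse identity; the weight value, and the assumption that `P` is sized to the block's own variables, are illustrative guesses rather than requirements documented here:

  ```julia
  using LinearAlgebra, SparseArrays

  # Hypothetical proximal weights: 0.1 on the diagonal, sized to the
  # number of variables in the block being augmented (an assumption).
  n = blocknlp.blocks[2].problem_block.meta.nvar
  P = 0.1 * sparse(1.0I, n, n)

  prox_block = ProxAugmentedNLPBlockModel(blocknlp.blocks[2],
      y[blocknlp.problem_size.con_counter+1:end], ρ, A, b, x, P)
  ```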
- Finally, our package also provides the user an option to convert the `AbstractBlockNLPModel` into a full-space `AbstractNLPModel`, which can be solved with any general-purpose NLP solver. The method for this is `FullSpaceModel(block_nlp_model)`, where `block_nlp_model` is an instance of `AbstractBlockNLPModel`.
  For example, suppose we wish to solve the three-block problem described above with `MadNLP.jl`. Assuming that MadNLP.jl has already been imported into the environment, this can be done by executing the following commands:

  ```julia
  full_model = FullSpaceModel(blocknlp)
  solution = madnlp(full_model)
  ```
This package's development was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.