Distributed-memory version of Gridap 🚧 work in progress 🚧
Author: gridap · Started in April 2020



Parallel distributed-memory version of Gridap.jl. 🚧 work in progress 🚧


This package is currently experimental and under development. Its final purpose is to provide programming paradigm-neutral, parallel finite element data structures for distributed computing environments. This means that communication among tasks is not tailored to a particular programming model, and can therefore be leveraged with, e.g., MPI or the master-worker programming model built into Julia. When MPI is used as the underlying communication layer, GridapDistributed.jl leverages the suite of tools available in the PETSc software package for the assembly and solution of distributed discrete systems of equations.


Before using the GridapDistributed.jl package, we have to build MPI.jl and GridapDistributedPETScWrappers.jl. We refer to the README of the latter for configuration instructions.
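As a rough sketch of the build step, the standard Julia package manager can be used to (re)build both dependencies from the project environment. The exact configuration (e.g., environment variables pointing to a system MPI or PETSc installation) is not specified here and must be taken from the respective packages' instructions:

```shell
# From the root of the GridapDistributed.jl repository.
# These commands only trigger Pkg's standard build step; any required
# configuration (system MPI/PETSc paths, etc.) is assumed to be set
# beforehand as described in the dependencies' own documentation.
julia --project=. -e 'using Pkg; Pkg.build("MPI")'
julia --project=. -e 'using Pkg; Pkg.build("GridapDistributedPETScWrappers")'
```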

MPI-parallel Julia script execution instructions

In order to execute an MPI-parallel GridapDistributed.jl driver, we first have to figure out the path of the mpirun script corresponding to the MPI library with which MPI.jl was built. To do so, we can run the following command from the root directory of the GridapDistributed.jl git repository:

$ julia --project=. -e "using MPI;println(MPI.mpiexec_path)" 

Alternatively, for convenience, one can assign the path of mpirun to an environment variable, e.g.,

$ export MPIRUN=$(julia --project=. -e "using MPI;println(MPI.mpiexec_path)")

As an example, the MPI-parallel GridapDistributed.jl driver MPIPETScCommunicatorsTests.jl, located in the test directory, can be executed as:

$MPIRUN -np 2 julia --project=. -J ../Gridap.jl/compile/ test/MPIPETScTests/MPIPETScCommunicatorsTests.jl

where -J ../Gridap.jl/compile/ is optional, but highly recommended in order to reduce JIT compilation times. More details about how to generate this system image can be found here.

Two big warnings when executing MPI-parallel drivers:

  • Data race conditions associated with the generation of precompiled modules in the cache. See here.

  • Each time GridapDistributed.jl is modified, the first execution of a parallel driver fails during MPI initialization; the second and subsequent runs work correctly. The cause of the problem is still unknown, but it is related to module precompilation as well.
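Since both issues above are tied to precompilation, one possible mitigation (an assumption on our side, not an official fix from the package) is to trigger precompilation in a single serial Julia process before launching the parallel run, so that the MPI ranks do not race to write the precompilation cache:

```shell
# Hypothetical workaround: warm the precompilation cache serially,
# so that the subsequent MPI ranks all read from an up-to-date cache
# instead of racing to generate it.
julia --project=. -e 'using GridapDistributed'

# Then launch the parallel driver as usual (MPIRUN as set above).
$MPIRUN -np 2 julia --project=. test/MPIPETScTests/MPIPETScCommunicatorsTests.jl
```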