GridapPETSc is a plugin of GridapDistributed.jl that provides the full set of scalable linear and nonlinear solvers in the PETSc library. It also provides serial solvers to Gridap.jl.
Take a look at this tutorial to learn how to use GridapPETSc in distributed-memory simulations of PDEs.
It can also be used in the serial case, as shown in this test.
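For instance, a minimal sketch of serial usage might look as follows. The Poisson problem, mesh, and PETSc solver options are illustrative choices, not prescribed by GridapPETSc:

```julia
using Gridap
using GridapPETSc

# PETSc options passed as command-line-style arguments (illustrative choice)
options = "-ksp_type cg -pc_type jacobi -ksp_rtol 1.0e-10"

GridapPETSc.with(args=split(options)) do
  # A simple Poisson problem on the unit square
  model = CartesianDiscreteModel((0,1,0,1),(10,10))
  reffe = ReferenceFE(lagrangian,Float64,1)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(V,0.0)
  Ω = Triangulation(model)
  dΩ = Measure(Ω,2)
  a(u,v) = ∫( ∇(v)⋅∇(u) )*dΩ
  l(v) = ∫( v )*dΩ
  op = AffineFEOperator(a,l,U,V)
  # Solve with the PETSc Krylov solver configured via the options above
  solver = PETScLinearSolver()
  uh = solve(solver,op)
end
```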
The GridapPETSc Julia package requires the PETSc library (Portable, Extensible Toolkit for Scientific Computation) and MPI to work correctly. You have two main options to install these dependencies:
- Do nothing [recommended in most cases]. Use the default precompiled `MPI` installation provided by `MPI.jl` and the pre-compiled `PETSc` library provided by `PETSc_jll`. This will happen under the hood when you install `GridapPETSc`. You can also force the installation of these default dependencies by setting the environment variables `JULIA_MPI_BINARY` and `JULIA_PETSC_LIBRARY` to empty values.
- Choose a specific installation of `MPI` and `PETSc` available in the system [recommended in HPC clusters]; a configuration sketch is given after this list.
  - First, choose an `MPI` installation. See the documentation of `MPI.jl` for further details. An easy way to achieve this is to create the environment variable `JULIA_MPI_BINARY` containing the path to the `MPI` binary.
  - Second, choose a `PETSc` installation. To this end, create an environment variable `JULIA_PETSC_LIBRARY` containing the path to the dynamic library object of the `PETSc` installation (i.e., the `.so` file on Linux systems). Very important: the chosen `PETSc` library needs to be configured with the `MPI` installation considered in the previous step.
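A minimal sketch of this configuration from a Julia session; the two paths are hypothetical placeholders for the actual installations on your system:

```julia
using Pkg

# Hypothetical paths: point these at your system's MPI and PETSc installations
ENV["JULIA_MPI_BINARY"] = "/opt/openmpi/bin"               # path to the MPI binary
ENV["JULIA_PETSC_LIBRARY"] = "/opt/petsc/lib/libpetsc.so"  # PETSc dynamic library object

# Rebuild so both packages pick up the new configuration
Pkg.build("MPI")
Pkg.build("GridapPETSc")
```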
GridapPETScdefault sparse matrix format is 0-based compressed sparse row. This type of sparse matrix storage format can be described by theSparseMatrixCSR{0,PetscReal,PetscInt}andSymSparseMatrixCSR{0,PetscReal,PetscInt}Julia types as implemented in the SparseMatricesCSR Julia package.- When running in MPI parallel mode (i.e., with a MPI communicator different from
MPI.COMM_SELF),GridapPETScimplements a sort of limited garbage collector in order to automatically deallocate PETSc objects. This garbage collector can be manually triggered by a call to the functionGridapPETSc.gridap_petsc_gc().GridapPETScautomatically calls this function inside at different strategic points, and this will be sufficient for most applications. However, for some applications, with a very frequent allocation of PETSc objects, it might be needed to call this function from application code. This need will be signaled by PETSc via the following internal message errorPETSC ERROR: No more room in array, limit 256 recompile src/sys/objects/destroy.c with larger value for MAXREGDESOBJS
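As an illustration of the first note, a sketch of assembling a FE operator directly in this 0-based CSR format. It assumes `a`, `l`, `U`, and `V` are defined as in the earlier Poisson sketch:

```julia
using Gridap
using SparseMatricesCSR
using GridapPETSc: PetscReal, PetscInt

# Assemble the FE matrix and vector in the 0-based CSR format used by PETSc.
# a, l, U, V are assumed to be defined as in the earlier Poisson sketch.
Tm = SparseMatrixCSR{0,PetscReal,PetscInt}
Tv = Vector{PetscReal}
assem = SparseMatrixAssembler(Tm,Tv,U,V)
op = AffineFEOperator(a,l,U,V,assem)
```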
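And a sketch of the second note: triggering the garbage collector manually from an application loop that allocates many PETSc objects. The loop body and the trigger frequency are illustrative assumptions:

```julia
using GridapPETSc

GridapPETSc.with() do
  for step in 1:1000
    # ... assemble and solve PETSc-backed systems here (omitted) ...
    if step % 100 == 0
      # Deallocate PETSc objects that are no longer referenced
      GridapPETSc.gridap_petsc_gc()
    end
  end
end
```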