HealpixMPI.jl: an MPI-parallel implementation of the Healpix tessellation scheme in Julia

Welcome to HealpixMPI.jl, an MPI-parallel implementation of the main functionalities of the HEALPix spherical tessellation scheme, written entirely in Julia.

This package is a natural extension of Healpix.jl, providing an MPI integration of its main functionalities and allowing simultaneous shared-memory (multithreading) and distributed-memory (MPI) parallelization, which leads to high-performance spherical harmonic transforms (SHTs).

Read the full documentation for further details.

Installation

From the Julia REPL, run

import Pkg
Pkg.add("HealpixMPI")

Usage Example

The following example shows the steps needed to set up and perform an MPI-parallel alm2map SHT with HealpixMPI.jl.

Set up

We set up the necessary MPI communication and initialize the Healpix.jl structures, which are allocated in full only on the root task:

using MPI
using Random
using Healpix
using HealpixMPI

#MPI set-up
MPI.Init()
comm = MPI.COMM_WORLD
crank = MPI.Comm_rank(comm)
csize = MPI.Comm_size(comm)
root = 0

#initialize Healpix structures
NSIDE = 64
lmax = 3*NSIDE - 1
if crank == root
  h_map = HealpixMap{Float64, RingOrder}(NSIDE)   #empty map
  h_alm = Alm(lmax, lmax, randn(ComplexF64, numberOfAlms(lmax)))  #random alm
else
  h_map = nothing
  h_alm = nothing
end
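
As a variation on the random coefficients above, the input alm could come from an existing full-sky map. A minimal sketch, assuming a FITS map file my_map.fits (hypothetical name) is readable by the root task, using Healpix.jl's readMapFromFITS and map2alm:

#sketch: read a map and compute its alm on the root task only (hypothetical file name)
if crank == root
  h_in_map = Healpix.readMapFromFITS("my_map.fits", 1, Float64)  #column 1 of the FITS file
  h_alm = Healpix.map2alm(h_in_map; lmax = lmax, mmax = lmax)    #harmonic analysis on root
else
  h_alm = nothing   #non-root tasks still hold no full-sky data
end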

Distribution

The distributed HealpixMPI.jl data types are filled through an overload of MPI.Scatter!:

#initialize empty HealpixMPI structures 
d_map = DMap{RR}(comm)
d_alm = DAlm{RR}(comm)

#fill them
MPI.Scatter!(h_map, d_map)
MPI.Scatter!(h_alm, d_alm)

SHT

We perform the SHT through an overload of Healpix.alm2map! and, if needed, we Gather! the result back into a full-sky HealpixMap on the root task:

alm2map!(d_alm, d_map; nthreads = 16)
MPI.Gather!(d_map, h_map)

The nthreads keyword allows the user to adjust the number of threads at run time; it is typically set to the number of cores of your machine.
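
For instance, rather than hard-coding the value, the thread count can be derived when the script runs. A minimal sketch, assuming one MPI task per machine so that each task may use all of the local cores:

#choose the SHT thread count at run time instead of hard-coding it
#(sketch: assumes one MPI task per machine, so each task may use all local cores)
alm2map!(d_alm, d_map; nthreads = Sys.CPU_THREADS)  #Sys.CPU_THREADS = logical cores Julia detects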

Run

In order to exploit MPI parallelization, run the code through mpirun or mpiexec, as in

$ mpiexec -n {Ntask} julia {your_script.jl}

To run the code on multiple nodes, specify a machine file machines.txt, as in

$ mpiexec -machinefile machines.txt julia {your_script.jl}
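
The exact syntax of machines.txt depends on the MPI implementation in use; with many implementations it is enough to list one hostname per line (the names below are placeholders), for example:

node01
node02
node03
node04

See the documentation of your MPI implementation (e.g. Open MPI or MPICH) for slot counts and further options.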
