Peridynamics.jl

A high-level Julia package for parallel peridynamics simulations

Main Features

  • 🎯 Dynamic and quasi-static analysis
  • 🪨 Multiple peridynamics formulations and material models
  • 🎳 Multibody contact simulations
  • 🤓 User-friendly API that catches many errors before simulations are submitted
  • 🚀 Enhanced HPC capabilities with MPI or multithreading

Installation

Peridynamics.jl is a registered Julia package, so you can install it by just typing

add Peridynamics

in the Julia package manager, which is opened by pressing ] in the REPL. Please take a look at the documentation for more details on the installation.
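
Alternatively, the same installation can be done from the REPL or a script via Julia's built-in Pkg API (standard Julia functionality, not specific to Peridynamics.jl):

using Pkg                  # Julia's built-in package manager
Pkg.add("Peridynamics")    # install the registered package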

Tutorials

  • Tensile test quasi-static
  • Tensile test dynamic
  • Tension with predefined crack
  • The old logo
  • Kalthoff-Winkler
  • Fragmenting cylinder

Usage

To run the dynamic tensile test simulation shown above, just 7 lines of code are needed:

body = Body(BBMaterial(), "TensileTestMesh.inp")          # bond-based body from an Abaqus mesh file
material!(body; horizon=0.01, rho=2700, E=70e9, Gc=100)   # horizon, density, Young's modulus, critical energy release rate
velocity_bc!(t -> -0.6, body, :bottom, 1)                 # velocity boundary condition on the :bottom point set (x-direction)
velocity_bc!(t -> 0.6, body, :top, 1)                     # velocity boundary condition on the :top point set (x-direction)
vv = VelocityVerlet(steps=500)                            # explicit time integration with 500 time steps
job = Job(body, vv; path="results/tension_dynamic")       # job that writes its results to the specified directory
submit(job)                                               # run the simulation

Take a look at the tensile test tutorial for more details on this example.
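
The boundary condition functions receive the time t, so time-dependent loading can be expressed with the same kind of velocity_bc! call used above. A minimal sketch (the ramp duration t_ramp is a hypothetical value chosen only for illustration):

t_ramp = 1.0e-4   # hypothetical ramp duration
# ramp the velocity linearly from 0 to 0.6 and hold it afterwards, applied to the :top set in x-direction
velocity_bc!(t -> t < t_ramp ? 0.6 * t / t_ramp : 0.6, body, :top, 1)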

If you want to run this example with multithreading, just start Julia with more than one thread (see the launch sketch at the end of this section). To use MPI, you can create a script containing the same code without changes and run it with:

mpiexec -n 6 julia --project path/to/script.jl

Please take a look at the how-to guide on MPI for more details.
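
For the multithreaded variant, only the launch command changes (standard Julia command line options; the thread count of 8 is just an example):

julia --threads 8 --project path/to/script.jl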

Authors

kaipartmann

Acknowledgements

The authors gratefully acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project WE2525-14/1.

The support of Carsten Bauer and Xin Wu from PC2 (Paderborn Center for Parallel Computing) in designing the internal structure of the package for parallel performance is gratefully acknowledged.

The authors gratefully acknowledge the computing time provided to them on the high-performance computer Noctua 2 at the NHR Center PC2. The center is funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for national high-performance computing at universities (www.nhr-verein.de/unsere-partner).