

Metal programming in Julia

With Metal.jl it's possible to program GPUs on macOS using the Metal programming framework.

The package is a work-in-progress. There are bugs, functionality is missing, and performance hasn't been optimized. Expect to have to make changes to this package if you want to use it. PRs are very welcome!


Requirements

  • Mac device with M-series chip
  • Julia 1.8
  • macOS 13 (Ventura)

These requirements are fairly strict, and are due to our limited development resources (manpower, hardware). Technically, they can be relaxed. If you are interested in contributing to this, see this issue for more details. In practice, Metal.jl will probably work on any macOS 10.15+, and other GPUs that are supported by Metal might also function (if only partially), but such combinations are unsupported for now.

Quick start

Metal.jl can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add Metal

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("Metal")

For an overview of the toolchain in use, you can run the following command after importing the package:

julia> using Metal

julia> Metal.versioninfo()
macOS 12.2.0, Darwin 21.3.0

- Julia: 1.8.0-beta3
- LLVM: 13.0.1

1 device:
- Apple M1 Pro (64.000 KiB allocated)

Array abstraction

The easiest way to work with Metal.jl is through its array abstraction. The MtlArray type is meant both as a convenient container for device memory and as a data-parallel abstraction for using the GPU without writing your own kernels:

julia> a = MtlArray([1])
1-element MtlArray{Int64, 1}:
 1

julia> a .+ 1
1-element MtlArray{Int64, 1}:
 2
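
Many other array operations fall out of this abstraction as well. A hedged sketch (running it requires an Apple GPU, so the results are illustrative; `Float32` is used here because it is the natively supported GPU element type):

```julia
using Metal

a = MtlArray(Float32[1, 2, 3, 4])

b = a .* 2 .+ 1     # fused broadcast; executes as a single GPU kernel
s = sum(b)          # reduction, also executed on the GPU
Array(b)            # copy the device array back to a CPU Array
```

As with Julia's other GPU array types, scalar indexing of an MtlArray from the CPU (e.g. `b[1]` outside a kernel) is disallowed by default, since it would transfer elements one at a time.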

Kernel programming

The above array abstractions are all implemented using Metal kernels written in Julia. These kernels follow a programming style similar to Julia's other GPU back-ends and, as such, deviate from how kernels are implemented in Metal C (e.g., indexing intrinsics are functions rather than arguments, arbitrary aggregate arguments are supported, etc.):

julia> function vadd(a, b, c)
           i = thread_position_in_grid_1d()
           c[i] = a[i] + b[i]
           return
       end
vadd (generic function with 1 method)

julia> a = MtlArray([1,1,1,1]); b = MtlArray([2,2,2,2]); c = similar(a);

julia> @metal threads=2 groups=2 vadd(a, b, c)

julia> Array(c)
4-element Vector{Int64}:
 3
 3
 3
 3
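
The hard-coded `threads=2 groups=2` launch above covers exactly the four elements. For arbitrary sizes, a common pattern (a sketch, not Metal.jl-specific API; `cld` is ceiling division from Base, and 256 is an illustrative threadgroup size) is to derive the configuration from the array length and bounds-check inside the kernel, since the grid may be larger than the array:

```julia
function vadd_checked(a, b, c)
    i = thread_position_in_grid_1d()
    i <= length(c) || return    # guard against out-of-bounds threads
    c[i] = a[i] + b[i]
    return
end

n = length(c)
threads = min(n, 256)
@metal threads=threads groups=cld(n, threads) vadd_checked(a, b, c)
```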


This package also supports profiling GPU execution for later visualization with Apple's Xcode tools. The easiest way to generate a GPU report is to use the Metal.@profile macro as seen below. To profile GPU code from a Julia process, you must set the METAL_CAPTURE_ENABLED environment variable before importing Metal.jl. On the first Metal command detected, you should get a message stating "Metal GPU Frame Capture Enabled" if the variable was set correctly:

julia> ENV["METAL_CAPTURE_ENABLED"] = 1 # (set before importing Metal.jl)

julia> using Metal

julia> function vadd(a, b, c)
           i = thread_position_in_grid_1d()
           c[i] = a[i] + b[i]
           return
       end

julia> a = MtlArray([1]); b = MtlArray([2]); c = similar(a);
... Metal GPU Frame Capture Enabled

julia> Metal.@profile @metal threads=length(c) vadd(a, b, c);
[ Info: GPU frame capture saved to julia_capture_1.gputrace

This will generate a .gputrace folder in the current directory. To view the profile, open the folder with Xcode.
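
For instance, assuming the capture from the session above, the trace bundle can be opened from the command line with macOS's `open` command (the file name matches the `Info` message; Xcode must be installed):

```shell
open -a Xcode julia_capture_1.gputrace
```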

Note: Xcode is a large install, and there are some peculiarities with viewing Julia-created GPU traces. It's recommended to only have one trace open at a time, and the shader profiler may fail to start.

Metal API wrapper

Finally, all of the above functionality is made possible by interfacing with the Metal libraries through ObjectiveC.jl. We provide low-level objects and functions that map onto the Metal APIs. These low-level API wrappers, along with some slightly higher-level Julia wrappers, are available in the MTL submodule exported by Metal.jl:

julia> dev = MTLDevice(1)
<AGXG13XDevice: 0x14c17f200>
    name = Apple M1 Pro

julia> dev.name
NSString("Apple M1 Pro")


Metal.jl relies on a custom LLVM with an AIR back-end, provided as a JLL. Normally, this JLL is built on Yggdrasil. If you need to make changes to the LLVM back-end, have a look at build_llvm.jl in the deps/ folder. This script builds a local version of the LLVM back-end and configures a local preference such that any environment depending on the corresponding JLLs will pick up the modified version (i.e., run julia --project in a clone of Metal.jl).
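
A sketch of that workflow, assuming a local clone (the exact script invocation is an assumption; check deps/build_llvm.jl for the authoritative steps):

```shell
git clone https://github.com/JuliaGPU/Metal.jl
cd Metal.jl
julia deps/build_llvm.jl    # build the AIR-enabled LLVM and set a local preference
julia --project             # environments depending on the JLL now pick up the local build
```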


Acknowledgements

This package builds upon the experience of several Julia contributors to CUDA.jl, AMDGPU.jl and oneAPI.jl.
