
GPU deployment using ROCm

IREE can accelerate model execution on AMD GPUs using ROCm.

Prerequisites

In order to use ROCm to drive the GPU, you need a functional ROCm environment. You can verify it by running:

rocm-smi | grep rocm

If rocm-smi is not found, you will need to install the latest ROCm Toolkit SDK for Windows or Linux.

Get the IREE compiler

Download the compiler from a release

Currently ROCm is NOT supported for the Python interface.

Build the compiler from source

Please make sure you have followed the Getting started page to build the IREE compiler, then enable the ROCm compiler target with the IREE_TARGET_BACKEND_ROCM CMake option, as sketched below.
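A minimal configure-and-build sketch, assuming the CMake + Ninja workflow from the Getting started page; the build directory name and build type are illustrative:

cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_TARGET_BACKEND_ROCM=ON
cmake --build ../iree-build/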

Tip

iree-compile will be built under the iree-build/tools/ directory. You may want to include this path in your system's PATH environment variable.
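For example, on Linux with a bash-like shell (the relative build directory path is illustrative; adjust it to match your checkout):

export PATH="$PWD/iree-build/tools:$PATH"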

Get the IREE runtime

Next you will need to get an IREE runtime that includes the ROCm HAL driver.

Build the runtime from source

Please make sure you have followed the Getting started page to build IREE from source, then enable the experimental ROCm HAL driver with the IREE_EXTERNAL_HAL_DRIVERS=rocm CMake option, for example:
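A minimal sketch, again assuming the CMake + Ninja workflow from the Getting started page with an illustrative build directory:

cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_EXTERNAL_HAL_DRIVERS=rocm
cmake --build ../iree-build/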

Compile and run a program

With the compiler and runtime ready, we can now compile programs and run them on GPUs.

Compile a program

The IREE compiler transforms a model into its final deployable format in a sequence of steps. A model authored in a Python ML framework must first be converted into a format the IREE compiler expects (i.e., MLIR) using that framework's import tool.

Using MobileNet v2 as an example, you can download the SavedModel with trained weights from TensorFlow Hub and convert it using IREE's TensorFlow importer, as sketched below.
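A minimal import sketch, assuming the iree-import-tf tool from IREE's TensorFlow integration is installed; the download path, import type, and exported function name are illustrative and depend on the model you downloaded:

iree-import-tf \
    --tf-import-type=savedmodel_v1 \
    --tf-savedmodel-exported-names=predict \
    ~/Downloads/mobilenet_v2 -o mobilenet_iree_input.mlir

Then run the following command to compile: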

iree-compile \
    --iree-hal-target-backends=rocm \
    --iree-rocm-target-chip=<...> \
    mobilenet_iree_input.mlir -o mobilenet_rocm.vmfb

Note that IREE comes with bundled bitcode files, which are used for linking certain intrinsics on AMD GPUs. These are used automatically whenever the --iree-rocm-bc-dir flag is left empty. As additional support may be needed for different chips, users can use this flag to point to an explicit directory; for example, in ROCm installations on Linux, this is often found under /opt/rocm/amdgcn/bitcode.
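For example, an invocation pointing the flag at the bitcode directory of a Linux ROCm installation might look like the following (gfx908 here stands in for your GPU's architecture; see the table below):

iree-compile \
    --iree-hal-target-backends=rocm \
    --iree-rocm-target-chip=gfx908 \
    --iree-rocm-bc-dir=/opt/rocm/amdgcn/bitcode \
    mobilenet_iree_input.mlir -o mobilenet_rocm.vmfb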

Note that a ROCm target chip (--iree-rocm-target-chip) of the form gfx<arch_number> is needed to compile for each GPU architecture. If no architecture is specified, the compiler defaults to gfx908.

Here is a table of commonly used architectures:

AMD GPU      Target Chip
AMD MI25     gfx900
AMD MI50     gfx906
AMD MI60     gfx906
AMD MI100    gfx908
AMD MI300A   gfx940
AMD MI300    gfx942
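If you are unsure which architecture your GPU reports, one quick check (assuming the rocminfo tool that ships with ROCm is on your PATH) is:

rocminfo | grep gfx

which prints the gfx<arch_number> name(s) of the installed GPU agents.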

Run a compiled program

Run the following command:

iree-run-module \
    --device=rocm \
    --module=mobilenet_rocm.vmfb \
    --function=predict \
    --input="1x224x224x3xf32=0"

The above assumes the exported function in the model is named predict and that it expects one 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see iree-run-module --help for the format to specify concrete values.
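To pass real data rather than zeros, iree-run-module can also read inputs from NumPy files. A sketch, assuming you have saved a preprocessed 1x224x224x3 float32 image as input.npy (the file name is illustrative):

iree-run-module \
    --device=rocm \
    --module=mobilenet_rocm.vmfb \
    --function=predict \
    --input=@input.npy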