How to Create a Docker Container with GPU Support

Learn how to create a Docker container with GPU support on Ubuntu with our sample instructions. Unlock the power of GPU-accelerated containers today.

Why Use GPUs in Docker Containers?

Using GPUs in Docker containers offers several advantages, especially for applications that require high computational power, such as deep learning, scientific simulations, and video processing.

Here’s why using GPUs in Docker containers is beneficial:

Ease of Deployment: Containers package the application, its dependencies, and configuration together. This makes it easy to deploy GPU-accelerated applications across different machines or cloud platforms.

Dependency Isolation: Different applications may require different versions of CUDA, cuDNN, or other GPU libraries. Containers allow you to encapsulate these dependencies separately, avoiding conflicts and ensuring compatibility (a short sketch follows this list).

Scalability: With container orchestration tools like Kubernetes, you can automate the deployment, scaling, and management of GPU-accelerated applications across a cluster of machines, optimizing workload distribution.

Reproducibility: In research and development, reproducibility is key. Containers ensure that experiments or models can be reproduced exactly, as the entire software environment is preserved.

Security: Containers run in isolated environments, which can enhance security by limiting the potential impact of a compromised application. This is important when running sensitive or critical workloads.

Ease of Collaboration: By using Docker, you can share a containerized environment with team members, ensuring that everyone is working with the same setup. This reduces "it works on my machine" issues, particularly when dealing with complex GPU dependencies.

Seamless Integration with CI/CD: Containers can be easily integrated into continuous integration/continuous deployment (CI/CD) pipelines, allowing automated testing and deployment of GPU-accelerated applications.

Pay-As-You-Go Model in Cloud: When using cloud platforms, Docker containers with GPU support allow you to run intensive tasks on GPU instances only when necessary, taking advantage of the pay-as-you-go model.
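As a concrete illustration of the dependency isolation point above, the same host can run containers pinned to different CUDA userspace versions, all backed by a single host driver. The following is a minimal sketch (the commands are explained in the sections below); the image tags are illustrative, and it assumes the CUDA_VERSION environment variable set by the official nvidia/cuda images:

# Two containers, two CUDA userspace versions, one host driver
$ sudo docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 env | grep CUDA_VERSION
$ sudo docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 env | grep CUDA_VERSION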

Prerequisites

CUDA-capable GPU: Ensure you have an NVIDIA GPU installed.

Docker: Ensure that Docker is installed on your system. You can install Docker by following the official installation guide (https://docs.docker.com/engine/install/) for your operating system.

NVIDIA Driver: Ensure that the appropriate NVIDIA driver is installed on your system. You can check the installation with the nvidia-smi command.

NVIDIA Container Toolkit: If you want GPU support in Docker containers, you need to install the NVIDIA Container Toolkit, which enables Docker to interface with NVIDIA GPUs. Installation reference: How to Install NVIDIA Container Toolkit? A quick verification sketch follows this list.
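Before moving on, you can sanity-check each prerequisite from the shell. This is a quick sketch; the nvidia-ctk binary ships with the NVIDIA Container Toolkit, and the exact version strings you see will differ:

# NVIDIA driver is installed and sees the GPU
$ nvidia-smi

# Docker is installed
$ docker --version

# NVIDIA Container Toolkit is installed
$ nvidia-ctk --version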

Run a Docker Container with GPU Support

After setting up Docker and the NVIDIA Container Toolkit, you can run Docker containers with GPU support using the following commands:

Sample 1. Run a sample CUDA container

$ sudo docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

Explanation of the Command:

sudo: Runs the command with root privileges.

docker run: Runs a new container.

--rm: Automatically removes the container when it exits.

--gpus all: Allocates all available GPUs to the container. You can also specify particular GPUs, e.g., --gpus '"device=0,1"' for the first and second GPUs; more variants are sketched after this list.

nvidia/cuda:12.4.0-base-ubuntu22.04: The Docker image to use, in this case a base image with CUDA 12.4.0 on Ubuntu 22.04.

nvidia-smi: Command to check GPU status within the container.
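For reference, here are a few common variants of the --gpus flag; a quick sketch, with device indices chosen for illustration:

# All available GPUs
$ sudo docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

# Any two GPUs
$ sudo docker run --rm --gpus 2 nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

# Specific GPUs by index (note the nested quoting)
$ sudo docker run --rm --gpus '"device=0,1"' nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi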

Sample 2. Runtime options with Memory, CPUs, and GPUs

$ sudo docker run -itd --rm --runtime=nvidia \
    --gpus '"device=0"' \
    --cpus="2" \
    --memory="4g" \
    --memory-swap="8g" \
    --shm-size="2g" \
    -v /path/on/host:/path/in/container \
    nvidia/cuda:12.4.0-base-ubuntu22.04 /bin/bash

Some parameter explanations:

-itd: A combination of three flags: -i keeps STDIN open even if not attached, which allows interactive processes to run; -t allocates a pseudo-TTY, making the container interactive; -d runs the container in detached mode, meaning it runs in the background.

--rm: Automatically removes the container when it exits. This prevents leftover containers from consuming resources.

--runtime=nvidia: Specifies the NVIDIA runtime to enable GPU support.

--cpus="2": Limit the number of CPU cores that the container can use, here set to 2 cores.

--memory="4g": Limit the memory usage of the container to 4GB.

--memory-swap="8g": Limits the total memory (physical + swap) to 8GB. The container can use up to 4GB of swap in addition to the 4GB of physical memory.

--shm-size="2g": Sets the size of the /dev/shm (shared memory) to 2GB, which is useful for applications that require more shared memory.

--gpus "device=0": Specifies to use the first GPU (device 0). If you want to limit the usage of video memory, you need to control it in the program in the container, because Docker itself does not provide a direct video memory limit option.

-v /path/on/host:/path/in/container: Binds the directory /path/on/host on the host to /path/in/container in the container. Replace these paths as needed; to bind multiple directories, repeat the -v option.

nvidia/cuda:12.4.0-base-ubuntu22.04: Specifies the Docker image to use, which is a base image with CUDA 12.4.0 on Ubuntu 22.04.

/bin/bash: The command to run inside the container. It starts a Bash shell, allowing for interactive command-line operations.
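Because the container above runs detached, you may want to confirm that it is up and that the limits took effect. A minimal sketch, where <container-id> is a placeholder for the ID printed by docker ps:

# Find the container ID
$ sudo docker ps

# Confirm the GPU is visible inside the container
$ sudo docker exec -it <container-id> nvidia-smi

# Inspect the applied CPU and memory limits
$ sudo docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' <container-id>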

Sample 3. Build Your Custom Docker Image

If you need a custom Docker image with your specific environment or software, you can create a Dockerfile. Example Dockerfile:

FROM nvidia/cuda:12.4.0-base-ubuntu22.04

# Install necessary packages
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip3 install --upgrade pip && \
    pip3 install numpy scipy

# Set up your application here
# COPY ./your-app /app

# Set the command to run when the container starts
CMD ["nvidia-smi"]

Build and Run the Docker Image:

# Build the image
$ sudo docker build -t my-cuda-app .

# Run the container with GPU support
$ sudo docker run --rm --runtime=nvidia --gpus all my-cuda-app

This Dockerfile starts with an NVIDIA CUDA base image, installs Python, and runs nvidia-smi by default. You can modify it to suit your needs.
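You can also override the default CMD at run time, for example to confirm that the Python packages baked into the image are importable. A small sketch using the image name from the build step above:

$ sudo docker run --rm --gpus all my-cuda-app \
    python3 -c "import numpy, scipy; print(numpy.__version__, scipy.__version__)"

Any arguments placed after the image name replace the CMD defined in the Dockerfile.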

Conclusion

With these steps, you should be able to create and run Docker containers with GPU support, whether using pre-built images or building your own.