Estimated reading time: 9 minutes
Last updated on November 8th, 2024 at 05:10 pm
In the modern computing world, GPUs (Graphics Processing Units) have become extremely important for handling data-intensive tasks such as machine learning, AI, and large-scale analytics and business intelligence.
Docker is the most popular platform for running container workloads at scale. It lets you run your applications and web servers at scale without worrying about capacity.
But the question is: can Docker use a GPU? How can a Docker container leverage the power of GPUs? Can we even use a Docker container for ML workloads or other data-intensive tasks?
The short answer is yes, it can.
In this guide, let’s understand how Docker can use GPUs and cover everything about the setup and best practices.
Understanding Docker and GPU Compatibility
What is Docker?
Docker allows developers to package an entire application into a lightweight, standalone, executable unit called a container, which bundles all the dependencies the application requires.
As mentioned, the container includes everything your application needs to run: code, system libraries, runtime, and dependencies.
If you want an in-depth explanation of Docker along with best practices, check out: Mastering Docker: A Comprehensive Guide
The Role of GPUs in Modern Computing
CPUs are powerful general-purpose processors that provide the processing power a computer needs to keep running. GPUs, on the other hand, excel at parallel processing; you can think of them as specialized for rendering and other graphics-related tasks.
GPUs also shine in use cases such as deep learning, scientific research, and video rendering, where computational demands are high and benefit from parallel processing.
Can Docker Use GPUs?
The answer is YES!
Docker containers can use GPUs, but doing so requires specific setup and configuration.
Docker and GPUs work together through the NVIDIA Container Toolkit. Installing it and running containers with the NVIDIA runtime lets them use the GPU for performance-hungry, data-intensive tasks.
Let’s walk through how to set up Docker and the configuration required for Docker to use the GPU.
Steps to Set Up GPU with Docker (NVIDIA Toolkit)
Prerequisites for GPU-Enabled Docker Containers
Before you can run Docker containers with GPU support, ensure your system meets the following prerequisites:
| Software | Usage |
|---|---|
| Docker | Ensure Docker is installed on your system. |
| NVIDIA GPU | A compatible NVIDIA GPU is required. |
| CUDA Drivers | Required for communication between the GPU and the system. |
| NVIDIA Container Toolkit | Enables Docker containers to communicate with NVIDIA GPUs. |
Installing NVIDIA Container Toolkit
To create GPU-enabled Docker containers, you need to install the NVIDIA Container Toolkit and configure Docker properly.
Let’s get started with installing the NVIDIA Container Toolkit. Below is a step-by-step guide for installing it on an Ubuntu system:
1. Install Docker: If Docker isn’t installed, you can install it using:
sudo apt-get update
sudo apt-get install -y docker.io
2. Add the NVIDIA package repositories and the GPG key:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
3. Install the NVIDIA Container Toolkit:
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
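Before moving on to the Docker configuration, it is worth confirming that the host itself can see the GPU. The quick sanity check below assumes the NVIDIA driver is installed (nvidia-smi ships with the driver, nvidia-ctk with the toolkit you just installed):
# Should list your GPU(s) along with the driver and CUDA versions
nvidia-smi
# Confirms the NVIDIA Container Toolkit CLI is on the PATH
nvidia-ctk --version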
Configuring Docker to Access GPUs
Once the NVIDIA driver and toolkit are installed, you can launch GPU-enabled Docker containers.
Before that, let’s configure the container runtime using the nvidia-ctk command:
sudo nvidia-ctk runtime configure --runtime=docker
The above command modifies the /etc/docker/daemon.json file and registers the NVIDIA runtime:
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
}
}
Apply the configuration changes by restarting Docker:
sudo systemctl restart docker
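With the runtime registered and Docker restarted, a quick test run confirms everything is wired up. The NVIDIA runtime mounts nvidia-smi into the container, so even a plain Ubuntu image can report the GPU:
docker run --rm --gpus all ubuntu nvidia-smi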
Configuring Rootless Docker to Access GPUs
If you follow Docker security best practices and run your containers in rootless mode, you can follow this section to configure rootless Docker to access the GPUs.
The only difference between running as root and rootless is that the Docker daemon configuration file lives in a different, per-user location. Let’s configure rootless Docker to use the GPU with the nvidia-ctk command:
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
The above command modifies the $HOME/.config/docker/daemon.json file and registers the NVIDIA runtime:
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
}
}
Now let’s restart the Rootless Docker daemon:
systemctl --user restart docker
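As a quick check that the rootless daemon picked up the new runtime, you can inspect docker info and run the same test container as before (output formatting may vary slightly between Docker versions):
docker info | grep -i nvidia
docker run --rm --gpus all ubuntu nvidia-smi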
Configuring containerd for Kubernetes to Access GPUs
NVIDIA also supports using GPUs with containerd for Kubernetes. Configure it using nvidia-ctk with the command below:
sudo nvidia-ctk runtime configure --runtime=containerd
The above command modifies /etc/containerd/config.toml so that containerd can use the NVIDIA Container Runtime.
Apply the configuration changes by restarting containerd:
sudo systemctl restart containerd
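Note that configuring the runtime is only half of the Kubernetes picture: to actually schedule GPUs onto pods you would typically also deploy the NVIDIA device plugin, after which workloads can request GPUs through the nvidia.com/gpu resource. A minimal pod spec might look like the sketch below (the image tag is only an example; pick one matching your CUDA version):
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1 # requires the NVIDIA device plugin to be running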
So far we have covered how root Docker, rootless Docker, and containerd for Kubernetes can be configured to use GPUs.
Best Practices for Using GPUs with Docker
1. Ensure Proper GPU Isolation
Use the --gpus flag in Docker to specify the number of GPUs or limit access to specific GPUs. This isolates GPU resources so that each container only sees the GPUs you explicitly assign to it.
docker run --gpus '"device=0"' ubuntu nvidia-smi
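A few more variants of the --gpus flag, for reference (the device indices and GPU count below are just examples):
# Expose all GPUs to the container
docker run --gpus all ubuntu nvidia-smi
# Expose a specific number of GPUs
docker run --gpus 2 ubuntu nvidia-smi
# Expose specific devices by index
docker run --gpus '"device=0,1"' ubuntu nvidia-smi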
2. Use the Latest NVIDIA Docker Toolkit
Keep the NVIDIA Container Toolkit up to date to take advantage of the latest performance improvements, bug fixes, and security patches.
This ensures compatibility with the latest versions of Docker and CUDA.
sudo apt-get update && sudo apt-get install --only-upgrade -y nvidia-container-toolkit
3. Optimize Docker Images for GPU Workloads
Use lightweight base images that include only the necessary libraries and dependencies for your applications.
To use Docker with CUDA, use the official NVIDIA CUDA images; they are optimized for performance and ship with the required CUDA libraries (the GPU driver itself always comes from the host).
FROM nvidia/cuda
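In practice, pin a specific image tag and variant instead of relying on a default. A minimal Dockerfile sketch might look like this (the CUDA tag, requirements file, and script name are placeholders for your own project):
# The runtime variant is smaller than the devel variant and is enough to run CUDA apps
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04
WORKDIR /app
RUN apt-get update && apt-get install -y python3 python3-pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3", "train.py"]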
4. Implement Security Best Practices
Apply the principle of least privilege by running containers as non-root users whenever possible.
Learn more about Docker Security, which consists of different attack surfaces and actions that make your deployments more secure.
Docker Container Security Cheatsheet: Don’t Get Hacked🔐
These best practices help you maximize the performance and reliability of GPU-accelerated workloads in Docker.
Always refer to the latest official documentation to keep your configurations up to date.
FAQs About Docker and GPU
Why is Docker not recognizing my GPU?
This issue can arise due to several reasons:
NVIDIA Driver Issues: Ensure the NVIDIA drivers are correctly installed on your host machine. You can check this by running nvidia-smi on the host. If it doesn’t work, reinstall the drivers.
NVIDIA Container Toolkit: The NVIDIA Container Toolkit may not be installed or correctly configured.
How can I fix the error “Could not select device driver” when running Docker with GPUs?
This can happen if the NVIDIA Container Toolkit is not installed or configured correctly. Follow the steps in this guide to install the toolkit and configure the NVIDIA runtime for Docker.
How do I run a Docker container with GPU support?
To run a Docker container with GPU support, use the --gpus flag with the docker run command.
How can I check if Docker can access my GPU?
To verify if Docker can access your GPU, use the following command to check if the NVIDIA runtime is available:
docker info | grep "Runtimes"
How do I use a GPU in Docker?
To use a GPU in Docker, you’ll need to install the NVIDIA Container Toolkit.
Install NVIDIA Drivers: Ensure that your system has NVIDIA drivers installed for the GPU.
Install Docker: Set up Docker if it’s not already installed.
Install NVIDIA Container Toolkit: This allows Docker to interact with the GPU.
Run a GPU-Enabled Container: Start the container with the --gpus flag.
Does Docker support GPU acceleration?
Yes, Docker supports GPU acceleration through NVIDIA Container Toolkit, which allows Docker containers to access the host system’s GPU resources.
This is useful for workloads such as AI/ML, deep learning, and data processing that require GPU acceleration to enhance performance.
To enable GPU acceleration, your machine needs:
A CUDA-compatible GPU (e.g., NVIDIA GPUs)
The NVIDIA drivers and CUDA toolkit installed
Docker and the NVIDIA Container Toolkit configured properly
Which drivers do I need to use GPU with Docker?
To use GPU with Docker, you need the following drivers:
1. NVIDIA Drivers: The primary driver that allows Docker to access the GPU. Ensure you have the latest version of the drivers installed from NVIDIA’s official site.
2. CUDA Toolkit: This toolkit provides the necessary development environment for GPU-accelerated applications. You can install it by following NVIDIA’s installation guide.
3. NVIDIA Container Toolkit: This allows the Docker container to interact with the GPU.
Conclusion
Docker provides the flexibility to utilize GPUs for high-performance computing, making it easy to run data-intensive applications inside containers.
Whether you’re deploying machine learning models or conducting scientific research, Docker with GPU support can significantly enhance performance and efficiency.
Additional Reading
Mastering Docker: A Comprehensive Guide
10 Docker Network Best Practices: For Optimal Container Networking