If you’ve been working with Docker for a while, you’ve probably encountered this frustrating error message at least once: “no space left on device.” This error can appear during various Docker operations – building images, pulling from registries, or running containers – and it always seems to happen at the worst possible moment.
After dealing with countless Docker deployments and helping teams resolve storage issues, I’ve compiled this comprehensive guide to help you understand, fix, and prevent this common Docker problem. Let’s dive into the technical details and practical solutions that actually work.
1. ‘No Space Left on Device’ Error: Understanding the Root Cause
The “no space left on device” error occurs when Docker runs out of available disk space on the filesystem where it stores its data. By default, Docker stores all of its data in /var/lib/docker on Linux systems, and this directory can quickly fill up due to several factors:
Primary Causes:
- Accumulation of unused Docker images over time
- Stopped containers that haven’t been removed
- Orphaned volumes containing application data
- Large container logs without rotation
- Build cache from frequent image builds
- System-wide disk space shortage
Docker’s layered filesystem architecture means that even seemingly small operations can consume significant disk space. Each image layer is stored separately, and multiple containers can share the same base layers, but unused layers persist until explicitly cleaned.
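You can see this layering directly with docker history, which lists every layer of an image along with its size (nginx:latest below is just an arbitrary example image):
# List each layer of an image with its individual size
docker history nginx:latest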
2. Diagnosing the Problem
Before jumping into solutions, it’s crucial to understand exactly what’s consuming your disk space. Here are the essential diagnostic commands:
Check Overall System Disk Usage
df -h
This shows you the overall disk usage across your system. Pay special attention to the filesystem containing /var/lib/docker.
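To check just the filesystem that backs Docker’s data directory, you can pass the path to df directly:
# Report usage only for the filesystem containing Docker's data
df -h /var/lib/docker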
Analyze Docker-Specific Disk Usage
docker system df
This command provides a summary of space used by Docker components:
TYPE            TOTAL   ACTIVE  SIZE    RECLAIMABLE
Images          15      3       4.12GB  2.54GB (61%)
Containers      2       1       0B      0B
Local Volumes   10      2       1.3GB   1.0GB (76%)
Build Cache     5       0       2.1GB   2.1GB (100%)
For detailed breakdown, use:
docker system df -v
Check Container-Specific Usage
docker ps --size
This shows the disk usage of individual containers, including their writable layer size.
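If you have many containers, a custom format makes the output easier to scan (this is just one possible column selection):
# Show only container names and sizes
docker ps -a --size --format "table {{.Names}}\t{{.Size}}"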
3. Immediate Solutions for Critical Situations
When you’re in a crisis and need space immediately, here are the fastest solutions:
Quick Cleanup Command
docker system prune -a -f
Warning: This removes all unused containers, networks, images, and build cache. Only use this if you’re certain you don’t need any stopped containers or unused images.
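If volumes are also disposable in your environment, the prune can be extended to cover them as well. Treat this as a last resort, since volume data is unrecoverable once removed:
# Also removes all unused volumes - their data is lost permanently
docker system prune -a --volumes -f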
Conservative Cleanup (Safer Option)
# Remove stopped containers
docker container prune -f
# Remove unused images
docker image prune -a -f
# Remove unused volumes (be careful!)
docker volume prune -f
# Remove unused networks
docker network prune -f
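A middle ground between these two approaches is age-based pruning. The until filter keeps recent items and removes only older ones (168h below is an arbitrary one-week cutoff):
# Remove only images that are unused AND older than one week
docker image prune -a -f --filter "until=168h"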
Manual Selective Cleanup
If you prefer more control over what gets deleted:
# List and remove specific containers
docker ps -a
docker rm $(docker ps -a -q --filter "status=exited")
# List and remove specific images
docker images
docker rmi image_name:tag
# List and remove specific volumes
docker volume ls
docker volume rm volume_name
4. Advanced Solutions for Persistent Issues
Relocate Docker’s Data Directory
If your current disk partition is too small, you can move Docker’s entire data directory to a larger location:
Step 1: Stop Docker
sudo systemctl stop docker
Step 2: Copy the data
# Use rsync -aH rather than a plain mv: Docker's overlay2 storage relies
# on hardlinks, which a move across filesystems would not preserve
sudo rsync -aH /var/lib/docker/ /path/to/new/location/docker/
Step 3: Update Docker configuration
Create or edit /etc/docker/daemon.json:
{
  "data-root": "/path/to/new/location/docker"
}
Step 4: Restart Docker
sudo systemctl start docker
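Before deleting the old directory, verify that the daemon actually picked up the new location and still sees your images:
# Confirm the new data root is in effect
docker info | grep "Docker Root Dir"
# Sanity-check that existing images are still visible
docker images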
Configure Storage Driver Options
You can cap each container’s disk consumption through storage driver options. Note that overlay2.size limits the writable layer of each individual container, not Docker’s total usage, and it only works when /var/lib/docker sits on an xfs filesystem mounted with the pquota option:
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=20GB"
  ]
}
Set Up Log Rotation
Large container logs can consume significant space. Configure log rotation in your daemon.json (these defaults apply only to containers created after the daemon restarts; existing containers keep their old settings):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
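Because the daemon-wide defaults only affect new containers, rotation can also be set per container at run time (nginx here is just a placeholder image):
# Override log rotation for a single container
docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx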
5. Platform-Specific Solutions
Docker Desktop for Mac
On macOS, Docker stores data in a single file called Docker.raw. If this file grows too large:
- Open Docker Desktop Preferences
- Go to Resources → Advanced
- Note the “Disk image location”
- You can reset this by removing the Docker.raw file (Docker will recreate it, but all of your images, containers, and volumes will be lost)
Docker Desktop for Windows (WSL2)
For Windows users with WSL2:
# Check WSL disk usage
wsl --list --verbose
# Compact the virtual disk (run from an elevated prompt)
wsl --shutdown
diskpart
# In diskpart, point at Docker Desktop's data disk and compact it.
# The VHDX path may vary by installation:
#   select vdisk file="%LOCALAPPDATA%\Docker\wsl\data\ext4.vhdx"
#   attach vdisk readonly
#   compact vdisk
#   detach vdisk
Production Linux Servers
On production servers, consider:
- Setting up separate partitions for Docker data
- Using LVM for flexible disk management (see the example after this list)
- Implementing automated cleanup scripts
- Monitoring disk usage with tools like Prometheus
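As a sketch of the LVM approach mentioned above: if Docker’s data lives on its own logical volume, you can grow it online when space runs low (the volume group vg0 and volume docker-data are hypothetical names):
# Extend the logical volume by 20GB and resize its filesystem in one step
sudo lvextend -r -L +20G /dev/vg0/docker-data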
6. Prevention and Best Practices
Implement Automated Cleanup
Create a cron job for regular cleanup:
# Add to crontab (crontab -e)
0 2 * * 0 docker system prune -f > /var/log/docker-cleanup.log 2>&1
Use .dockerignore Files
Reduce build context size and prevent unnecessary files from being copied:
.git
.gitignore
README.md
Dockerfile
.dockerignore
node_modules
npm-debug.log
Optimize Dockerfile Practices
# Use multi-stage builds to reduce final image size
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
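After building, comparing image sizes confirms the savings (myapp is a hypothetical tag):
docker build -t myapp .
# Compare the final size against a single-stage build
docker images myapp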
Monitor Disk Usage
Set up monitoring with alerts:
#!/bin/bash
# Simple monitoring script: warn when root filesystem usage exceeds 80%
USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$USAGE" -gt 80 ]; then
  echo "Warning: Disk usage is at ${USAGE}%"
  docker system df
fi
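Saved somewhere like /usr/local/bin/docker-disk-check.sh (a hypothetical path) and made executable, the script can run from cron alongside the cleanup job:
# Check disk usage every 30 minutes
*/30 * * * * /usr/local/bin/docker-disk-check.sh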
7. Troubleshooting Specific Scenarios
When Cleanup Doesn’t Work
If docker system prune doesn’t free enough space:
- Check for large log files:
# The glob must expand as root, since /var/lib/docker is not world-readable
sudo sh -c 'du -sh /var/lib/docker/containers/*/*-json.log'
- Look for orphaned data:
sudo sh -c 'du -sh /var/lib/docker/overlay2/*'
- Check inode usage:
df -ih
Inode Exhaustion
Sometimes the issue isn’t disk space but inode exhaustion:
df -ih
If inode usage is at 100% while df -h still shows free space, the problem is too many small files rather than large ones. Pruning unused images and build cache frees inodes as well as bytes, since a single image layer can contain thousands of files.
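One rough way to locate inode-heavy directories is to count files per top-level directory; this sketch simply tallies paths under /var/lib/docker:
# Count files per immediate subdirectory to find inode hotspots
sudo sh -c 'for d in /var/lib/docker/*/; do echo "$(find "$d" -xdev | wc -l) $d"; done | sort -rn'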
Build Context Issues
Large build contexts can cause problems:
# Check build context size before building
du -sh .
8. Monitoring and Alerting Setup
Using Docker System Events
Monitor Docker events in real-time:
docker system events --filter type=container --filter type=image
Prometheus Monitoring
For production environments, use cAdvisor with Prometheus:
version: '3'
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
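Once the file is saved as docker-compose.yml, cAdvisor’s metrics become available on port 8080:
docker compose up -d
# Web UI at http://localhost:8080; Prometheus scrape endpoint
# at http://localhost:8080/metrics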
9. Recovery Procedures
When Docker Won’t Start
If Docker fails to start due to disk space:
- Free up space manually outside of Docker
- Clear Docker’s temporary files:
sudo rm -rf /var/lib/docker/tmp/*
- Check Docker daemon logs:
journalctl -u docker.service
Emergency Space Recovery
In critical situations:
# Find largest files in Docker directory
sudo find /var/lib/docker -type f -size +100M -exec ls -lh {} \; | sort -k5 -hr
# Remove specific large files if safe
sudo rm /var/lib/docker/tmp/very-large-file
The “no space left on device” error is almost always preventable with proper management and monitoring. By implementing the strategies outlined in this guide, you can maintain a clean Docker environment and avoid unexpected downtime.
Remember that Docker cleanup should be part of your regular maintenance routine, not something you only do when problems arise. Start with automated cleanup scripts, monitor your usage regularly, and educate your team about best practices.