# Sandboxing AI CLIs with Docker
This tutorial walks you through setting up an isolated development environment for AI codegen CLIs like Claude Code, Gemini CLI, OpenAI Codex, and GitHub Copilot using the aidevtools Docker image.
## Why Sandbox AI Tools?
AI-powered CLIs can execute bash commands on your behalf. While incredibly productive, they can also hallucinate commands that:
- Delete or modify files unexpectedly
- Fill your disk with unwanted data
- Install dependencies globally
- Access sensitive system resources
By running these tools inside a Docker container, you get full isolation without sacrificing productivity.
## Prerequisites
- Docker Desktop installed and running
That's it. No IDE extensions or special configuration needed.
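A quick way to verify that prerequisite: `docker info` talks to the daemon and fails fast when Docker Desktop is installed but not running.

```bash
# `docker info` exits non-zero when the daemon is unreachable,
# so this distinguishes "installed" from "actually running".
if docker info >/dev/null 2>&1; then
  echo "Docker is ready"
else
  echo "Docker is not running - start Docker Desktop first"
fi
```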
## Step 1: Pull the Image
The pre-built image is available on GitHub Container Registry:
```bash
docker pull ghcr.io/sugarfreebytes/aidevtools:latest
```
Alternatively, you can build it yourself with only the tools you need:
```bash
git clone https://github.com/sugarfreebytes/aidevtools.git
cd aidevtools
docker build --build-arg INSTALL_JAVA=false --build-arg INSTALL_RUST=false -t aidevtools .
```
### Available Build Args
You can disable any component you don't need:
| Category  | Build Arg         | Default |
|-----------|-------------------|---------|
| Languages | `INSTALL_JAVA`    | `true`  |
|           | `INSTALL_NODE`    | `true`  |
|           | `INSTALL_PYTHON`  | `true`  |
|           | `INSTALL_GO`      | `true`  |
|           | `INSTALL_RUST`    | `true`  |
|           | `INSTALL_DENO`    | `true`  |
| AI CLIs   | `INSTALL_CLAUDE`  | `true`  |
|           | `INSTALL_CODEX`   | `true`  |
|           | `INSTALL_COPILOT` | `true`  |
|           | `INSTALL_GEMINI`  | `true`  |
| Dev Tools | `INSTALL_GH`      | `true`  |
|           | `INSTALL_NEOVIM`  | `true`  |
|           | `INSTALL_TMUX`    | `true`  |
|           | `INSTALL_DIRENV`  | `true`  |
## Step 2: Get the Wrapper Script
Download the `aidevtools.sh` wrapper script into your project:
```bash
curl -fsSL https://raw.githubusercontent.com/sugarfreebytes/aidevtools/main/aidevtools.sh -o aidevtools.sh
chmod +x aidevtools.sh
```
Or clone the repo and symlink it:
```bash
git clone https://github.com/sugarfreebytes/aidevtools.git ~/aidevtools
ln -s ~/aidevtools/aidevtools.sh /usr/local/bin/aidevtools
```
## Step 3: Run It
Navigate to your project directory and run:
```bash
# Interactive shell with all tools available
./aidevtools.sh

# Launch a specific AI CLI directly
./aidevtools.sh claude
./aidevtools.sh gemini
./aidevtools.sh codex

# Run any command
./aidevtools.sh ls -la
```
On first run, the script will create a `.ai/` directory in your project to persist AI tool configurations.
## Step 4: Persistent AI Configuration
The .ai/ folder structure keeps your AI tool configurations persistent between container runs:
```
my-project/
├── .ai/
│   ├── claude-home/     # ~/.claude in container
│   │   └── claude.json  # ~/.claude.json in container
│   ├── gemini-home/     # ~/.gemini in container
│   ├── codex-home/      # ~/.codex in container
│   └── copilot-home/    # ~/.config/github-copilot in container
└── ... your project files
```
This means:
- Your API keys and auth tokens survive between runs
- Conversation history and context are preserved
- You can switch between projects and resume where you left off
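The wrapper creates this layout automatically on first run, but you can also pre-create it by hand, e.g. to seed config files before the first launch. The directory names below are taken from the tree above; adjust them if the script's conventions change:

```bash
# Pre-create the per-tool config directories the wrapper persists
# (names taken from the .ai/ tree above).
mkdir -p .ai/claude-home .ai/gemini-home .ai/codex-home .ai/copilot-home
```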
Add `.ai/` to your `.gitignore` to avoid committing sensitive configuration:

```bash
echo ".ai/" >> .gitignore
```
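If you run this step more than once, the plain `echo` appends a duplicate line each time; a guarded variant stays idempotent:

```bash
# Append .ai/ to .gitignore only if it is not already listed.
grep -qxF '.ai/' .gitignore 2>/dev/null || echo '.ai/' >> .gitignore
```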
## Step 5: Optional Host Mounts
By default, the container is fully isolated. If you need to share host configurations (SSH keys, Git config, etc.), edit the `OPTIONAL_MOUNTS` array in `aidevtools.sh`:
```bash
OPTIONAL_MOUNTS=(
  -v ~/.ssh:/home/devuser/.ssh:ro
  -v ~/.gitconfig:/home/devuser/.gitconfig:ro
  # -v ~/.gnupg:/home/devuser/.gnupg:ro
  # -v ~/.kube:/home/devuser/.kube:ro
  # -v /var/run/docker.sock:/var/run/docker.sock
  # -p 3000:3000
  # -p 8080:8080
)
```
Mount credentials as read-only (`:ro`) to prevent the container from modifying them.
## Step 6: Custom Image
If you want to use a custom image (e.g., one you built with fewer components):
```bash
./aidevtools.sh -i my-custom-image
# or
IMAGE=my-custom-image ./aidevtools.sh
```
## Isolation Details
The container provides several layers of isolation:
- **Non-root user** - Runs as `devuser` with no sudo access
- **Ephemeral** - The container is removed after each session (`--rm`)
- **Scoped mounts** - Only your project directory is mounted
- **No network restrictions** - AI CLIs need internet access for API calls, but the container still has no access to your host filesystem beyond the mounted directory
## Optional: Size-Limited Volume (macOS)
For an extra layer of protection, you can create a size-limited disk image for your workspace:
```bash
# Create a 20GB sparse disk image
hdiutil create -size 20g -fs APFS -type SPARSE -volname "AIWorkspace" ~/AIWorkspace.sparseimage

# Mount it
hdiutil attach ~/AIWorkspace.sparseimage

# Work from the mounted volume
cd /Volumes/AIWorkspace
```
Add the mount command to your shell profile for convenience:
```bash
alias mount-ai="hdiutil attach ~/AIWorkspace.sparseimage"
```
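`hdiutil detach` is the counterpart to `attach`, so a matching alias makes cleanup symmetric (the volume path assumes the `AIWorkspace` name used above):

```bash
alias unmount-ai="hdiutil detach /Volumes/AIWorkspace"
```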
## Summary
You now have a sandboxed AI development environment with:
- **Isolation** - The container can't touch your main OS
- **No root** - Runs as a non-root user with no sudo
- **Persistence** - AI configs survive between runs via `.ai/` mounts
- **Flexibility** - Customize the image with only the tools you need
- **Simplicity** - One script, one command