Air-Gapped/Offline Installation

Advanced ⏱ 60 minutes 📅 Updated Feb 2026

Install and run openclaw.ai in completely isolated environments without internet access using local LLMs.

Security-First

Ideal for secure environments, classified networks, or locations without internet connectivity.

Step 1: Prepare Packages (Internet Machine)

On a machine WITH internet access, download all required packages:

```bash
# Create staging directory
mkdir -p ~/openclaw-offline
cd ~/openclaw-offline

# Download the openclaw npm package as a tarball
npm pack openclaw@latest

# Download Ollama (for local LLM)
curl -L https://ollama.com/download/ollama-linux-amd64 -o ollama

# Download an LLM model file (example: Llama 2 7B, Q4 quantization)
# from Hugging Face or another trusted source.
# Example: llama-2-7b-chat.Q4_K_M.gguf (~4 GB)

# Note the package tarball filename for the offline install
ls *.tgz
```

Docker Alternative

```bash
# Pull the images, then save them to tar files
docker pull openclaw/openclaw:latest
docker pull ollama/ollama:latest

docker save openclaw/openclaw:latest -o openclaw-image.tar
docker save ollama/ollama:latest -o ollama-image.tar
```
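Before packaging the bundle, it can help to record a checksum manifest so the air-gapped side can verify the transfer was not corrupted. A minimal sketch, assuming the `~/openclaw-offline` staging directory from above (the `SHA256SUMS` filename is just a convention, not required by openclaw):

```shell
#!/bin/sh
# Record a SHA-256 manifest of every staged file so the air-gapped
# machine can detect corruption or tampering after the transfer.
cd ~/openclaw-offline
find . -type f ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS
cat SHA256SUMS
```

The manifest travels inside the bundle; the receiving machine replays it with `sha256sum -c`.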
Step 2: Transfer to Air-Gapped Machine

Transfer files via USB drive, secure media, or internal network:

```bash
# Archive the staging directory from its parent, so extraction
# recreates the openclaw-offline/ directory on the other side
cd ~
tar -czvf openclaw-offline-bundle.tar.gz openclaw-offline/

# Copy to USB drive
cp openclaw-offline-bundle.tar.gz /media/usb-drive/
```
⚠️ Security Review

Follow your organization's data transfer procedures for air-gapped systems.
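On the receiving side, it is worth confirming the bundle arrived intact before installing anything from it. A sketch assuming a `SHA256SUMS` manifest was generated in the staging directory before it was archived; if you did not create one, compare a checksum of the tarball itself recorded out-of-band instead:

```shell
#!/bin/sh
# Extract the bundle, then verify every file against the manifest
# before installing anything from it.
tar -xzvf openclaw-offline-bundle.tar.gz
cd openclaw-offline
sha256sum -c SHA256SUMS && echo "Bundle verified"
```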

Step 3: Install on Air-Gapped Machine

Standard Installation

```bash
# Extract bundle
tar -xzvf openclaw-offline-bundle.tar.gz
cd openclaw-offline

# Install openclaw from the local tarball
npm install -g ./openclaw-*.tgz
openclaw onboard --install-daemon
```

Docker Installation

```bash
# Load Docker images
docker load -i openclaw-image.tar
docker load -i ollama-image.tar

# Verify the images are available
docker images
```
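Once both images are loaded, they need to reach each other without any outbound network access. One way to wire them together is a small Compose file; this is a sketch, not an official openclaw artifact — the service names, volume name, and `OPENCLAW_*` variables mirror the environment configuration used later in this guide:

```shell
#!/bin/sh
# Write a minimal docker-compose.yml wiring the two loaded images
# together on Compose's default internal network. The openclaw
# container reaches Ollama by its service name ("ollama").
cat > docker-compose.yml << 'EOF'
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama
  openclaw:
    image: openclaw/openclaw:latest
    environment:
      OPENCLAW_LLM_PROVIDER: ollama
      OPENCLAW_LLM_MODEL: llama2-local
      OLLAMA_HOST: http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama-models:
EOF
```

Start the pair with `docker compose up -d`; since both images are already loaded, no pull is attempted.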
Step 4: Set Up a Local LLM

For air-gapped operation, you MUST use a local LLM:

Install Ollama Binary

```bash
# Install the Ollama binary and start the server
chmod +x ollama
sudo mv ollama /usr/local/bin/
ollama serve &
```
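Backgrounding `ollama serve` works for a quick test, but on a long-lived air-gapped host you will likely want it to survive logouts and reboots. A minimal systemd unit sketch (the unit name, `User`, and paths are assumptions — adjust to your environment):

```shell
#!/bin/sh
# Install a basic systemd unit so Ollama starts at boot and restarts
# on failure, instead of running as a backgrounded shell job.
sudo tee /etc/systemd/system/ollama.service > /dev/null << 'EOF'
[Unit]
Description=Ollama local LLM server
After=network.target

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ollama
```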

Import Model

```bash
# Create a Modelfile pointing at the transferred GGUF file
cat > Modelfile << EOF
FROM ./llama-2-7b-chat.Q4_K_M.gguf
EOF

# Import the model into Ollama
ollama create llama2-local -f Modelfile

# Verify
ollama list
```

Configure openclaw.ai

```bash
export OPENCLAW_LLM_PROVIDER=ollama
export OPENCLAW_LLM_MODEL=llama2-local
export OLLAMA_HOST=http://localhost:11434

openclaw run
```
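Exported this way, the variables vanish when the shell exits. To make the configuration persistent, they can be appended to a login-shell profile — a sketch assuming your shell reads `~/.profile` (adjust for bash/zsh setups):

```shell
#!/bin/sh
# Persist the openclaw/Ollama settings across logins by appending
# them to the login-shell profile, then load them into this shell.
cat >> ~/.profile << 'EOF'
export OPENCLAW_LLM_PROVIDER=ollama
export OPENCLAW_LLM_MODEL=llama2-local
export OLLAMA_HOST=http://localhost:11434
EOF
. ~/.profile
```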
Completely Offline!

openclaw.ai is now running with a local LLM, requiring no internet connectivity.