Want the ChatGPT experience without sending your conversations to OpenAI? Open WebUI is an open-source, self-hosted web interface that works with any LLM — including local models via Ollama, or cloud APIs like OpenAI and Anthropic. Here’s how to set it up on your own VPS.

Recommended VPS Specifications

  • CPU: 2 vCPU minimum
  • RAM: 4GB minimum (8GB recommended for multiple users)
  • Storage: 40GB+ SSD
  • OS: Ubuntu 22.04 LTS or 24.04 LTS
  • Docker: Docker Engine + Docker Compose plugin

What is Open WebUI?

Open WebUI (formerly Ollama WebUI) is a feature-rich, self-hosted AI chat interface. Think of it as your own private ChatGPT that you control completely. Features include:

  • ChatGPT-like interface — familiar and easy to use
  • Multi-model support — switch between GPT-4, Claude, Llama, Mistral, and more
  • Document uploads — chat with PDFs, Word docs, and text files
  • Code execution — run Python code in sandboxed environments
  • Image generation — integrate with Stable Diffusion or DALL-E
  • User management — multi-user with role-based access
  • Conversation history — stored locally, not in the cloud

What You’ll Need

For this guide, we recommend a Cloud VPS with at least 8GB RAM if you plan to run local models. If you’re only connecting to cloud APIs (OpenAI, Anthropic), 4GB is sufficient.

Canadian Web Hosting’s Cloud VPS plans give you the flexibility to start small and scale up. Canadian data centres mean your conversations stay in Canada — important for privacy-conscious users and regulated industries.

Option 1: Quick Install with Docker (Recommended)

The fastest way to get started is with Docker:

# Install Docker if you haven't
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for group changes to take effect

# Run Open WebUI with Ollama integration
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Visit http://your-server-ip:3000 and create your admin account. That’s it — Open WebUI is running.
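If you prefer Docker Compose, the same container can be described declaratively. This is a sketch mirroring the docker run command above; the service and volume names are our choices:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```

Save it as docker-compose.yml and start the service with docker compose up -d.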

Option 2: Connect to Cloud APIs

If you don’t want to run local models, connect Open WebUI to OpenAI or Anthropic:

# Run with OpenAI support
docker run -d -p 3000:8080 \
  -e OPENAI_API_KEY=sk-your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

For Anthropic Claude:

docker run -d -p 3000:8080 \
  -e ANTHROPIC_API_KEY=sk-ant-your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Once running, go to Settings → Connections in the Open WebUI interface to configure additional providers.
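Open WebUI also reads an OPENAI_API_BASE_URL environment variable, which lets it talk to any OpenAI-compatible endpoint (a local proxy, a gateway, or another provider's compatibility layer). A sketch, with a placeholder URL rather than a real service:

```shell
# Point Open WebUI at an OpenAI-compatible endpoint (placeholder URL)
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://api.example.com/v1 \
  -e OPENAI_API_KEY=your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```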

Option 3: Full Local AI with Ollama

For completely offline AI, install Ollama alongside Open WebUI:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama service
ollama serve &

# Download a model (Llama 3.2 is excellent)
ollama pull llama3.2

# Run Open WebUI connected to Ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -v ollama:/root/.ollama \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Hardware note: Running local models requires significant RAM. Llama 3.2 3B needs about 4GB. Larger models like Llama 3.1 70B need 40GB+ and typically require GPU acceleration.
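As a rough planning aid: a 4-bit quantized model needs about half a byte per parameter for the weights alone, with extra headroom on top for context, the runtime, and the OS (which is why a 3B model ends up needing roughly 4GB in total). A back-of-the-envelope check in shell, using that rule of thumb (an approximation, not an exact figure):

```shell
# Rule of thumb: ~0.5 bytes per parameter at 4-bit quantization.
# Takes a parameter count in billions, prints estimated weight size in GB.
estimate_weights_gb() {
  awk -v p="$1" 'BEGIN { printf "%.1f\n", p * 0.5 }'
}

estimate_weights_gb 3    # Llama 3.2 3B  -> 1.5 GB of weights
estimate_weights_gb 70   # a 70B model   -> 35.0 GB of weights
```

Add a few gigabytes of headroom to each figure, and the numbers line up with the hardware note above.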

Add SSL with Nginx

For production use, put Nginx in front with SSL:

sudo apt install -y nginx certbot python3-certbot-nginx

# Create Nginx config
sudo nano /etc/nginx/sites-available/openwebui
server {
    listen 80;
    server_name your-domain.com;

    client_max_body_size 100M;  # For large file uploads

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 86400;
    }
}
# Enable and get SSL
sudo ln -s /etc/nginx/sites-available/openwebui /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo certbot --nginx -d your-domain.com
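certbot sets up automatic renewal via a systemd timer on Ubuntu; it's worth confirming renewal will actually succeed before the certificate approaches expiry:

```shell
# Simulate a renewal without touching the live certificate
sudo certbot renew --dry-run

# Confirm the renewal timer is scheduled
systemctl list-timers | grep certbot
```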

Securing Your Installation

Open WebUI has built-in authentication, but there are additional steps to lock it down:

  • Enable authentication — Settings → Admin → Toggle “Enable Authentication”
  • Restrict signups — Disable open registration and invite users manually
  • Set up fail2ban — Protect against brute force login attempts
  • Firewall — Only allow ports 80, 443, and 22
# Basic firewall setup
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
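The fail2ban step can be as simple as installing the package, which on Ubuntu ships with an SSH jail enabled out of the box (a minimal sketch; tune bantime and maxretry in a jail.local override to taste):

```shell
sudo apt install -y fail2ban
sudo systemctl enable --now fail2ban

# Confirm the SSH jail is active
sudo fail2ban-client status sshd
```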

Keeping Open WebUI Updated

# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Stop and remove the old container
docker stop open-webui
docker rm open-webui

# Start with the new image (use the same command as before)
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Your data persists in the open-webui Docker volume, so you won’t lose conversations or settings.
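Before updating, it's cheap insurance to snapshot that volume. A sketch using a throwaway Alpine container; the tarball name is our choice:

```shell
# Archive the open-webui volume to the current directory
docker run --rm \
  -v open-webui:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/open-webui-$(date +%F).tar.gz -C /data .
```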

Troubleshooting

“Connection Refused” Errors

Check that the container is running with docker ps. If it isn't, inspect the logs:

docker logs open-webui

Ollama Models Not Appearing

Make sure Ollama is running on the host (ollama serve) and the Docker container can reach it via host.docker.internal.
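You can check both sides of that link from the command line, assuming Ollama's default port of 11434 (and that curl is available inside the image):

```shell
# On the host: is Ollama answering?
curl -s http://localhost:11434/api/tags

# From inside the container: can it reach the host?
docker exec open-webui curl -s http://host.docker.internal:11434/api/tags
```

If the first command works but the second doesn't, the --add-host=host.docker.internal:host-gateway flag is usually what's missing from the docker run command.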

Out of Memory with Local Models

Use a smaller model (e.g., Llama 3.2 1B or 3B instead of a 70B model) or upgrade your VPS RAM. Check available memory with free -h.

Why Self-Host Instead of Using ChatGPT?

  • Privacy — your conversations never leave your server
  • Cost control — no per-message fees, just your VPS cost
  • Customization — tweak prompts, add custom tools, integrate with your systems
  • Compliance — data sovereignty for regulated industries
  • No rate limits — use it as much as you need

With a Cloud VPS from Canadian Web Hosting, you get a reliable foundation for your AI infrastructure. Our Canadian data centres, full root access, and 24/7 support mean you can focus on using AI, not managing servers.

Resources