How to Set Up OpenClaw: An Intel Mac Battle-Tested Guide

Apr 15, 2026
From Zero to AI Personal Assistant — Mac, Parallels, Ubuntu, and ChatGPT OAuth

What is OpenClaw? OpenClaw (formerly Clawdbot) is an open-source, self-hosted AI personal assistant that connects to the messaging apps you already use — Telegram, WhatsApp, iMessage, Discord, and more. Think of it as your own private AI that runs on your hardware, answers your messages, manages your calendar, browses the web, and can be extended with skills and agents. If OpenClaw is the employee, Paperclip is the company.

Table of Contents

  1. Why This Setup?
  2. What You'll Need
  3. Architecture Overview
  4. Part 1 — Create the Ubuntu VM
  5. Part 2 — Enable SSH Access
  6. Part 3 — Add Swap Space
  7. Part 4 — Install Node.js 24
  8. Part 5 — Install OpenClaw
  9. Part 6 — Connect ChatGPT OAuth
  10. Part 7 — Access the Dashboard
  11. Part 8 — Connect Telegram
  12. Part 9 — Configure Gateway
  13. Part 10 — Auto-Start with Systemd
  14. Part 11 — Customize Your Bot's Soul
  15. Daily Startup Sequence
  16. Troubleshooting
  17. Key Lessons Learned

Why This Setup?

This guide documents a real, battle-tested path to getting OpenClaw running reliably on an Intel MacBook Pro. We tried several approaches before landing on this one:

VPS (DigitalOcean, Hostinger): Let's Encrypt won't issue SSL certificates for raw IP addresses, Docker dependencies cause restart loops, bot scanners hammer public IPs causing instability, and RAM is limited and expensive. A VPS with a proper domain name is still valid for always-on hosting, but it comes with friction.

Parallels VM running macOS: Ollama requires macOS 14+. Many Parallels licenses only support guest macOS up to version 13, creating a hard blocker for local AI models.

Ollama on the host Mac with OpenClaw in the VM: Running large AI models (30B parameters) on CPU without GPU acceleration consumed 580%+ CPU and 37GB of RAM. The typing TTL expired before responses could be generated. Completely unusable.

Ubuntu VM + ChatGPT OAuth (this guide): Ubuntu in Parallels has no macOS version restrictions. ChatGPT OAuth uses OpenAI's servers instead of your local CPU: instant responses, no RAM overhead, free with an existing ChatGPT Plus subscription. This is the setup that works.

What You'll Need

  • Mac with Intel or Apple Silicon — this guide uses an Intel i9 with 64GB RAM
  • Parallels Desktop 26 — download from parallels.com
  • Ubuntu 22.04 LTS ISO — download from ubuntu.com/download
  • ChatGPT Plus, Pro, or Team subscription — for free OAuth access
  • Telegram account — for your first messaging channel
  • 16GB RAM minimum allocated to the VM (32GB recommended for future expansion)

Architecture Overview

Here's how all the pieces connect:

Your Phone (Telegram)
        ↓
Telegram Servers
        ↓
OpenClaw Gateway (Ubuntu VM, port 18789)
        ↓
ChatGPT API (via OAuth, on OpenAI's servers)

Your Dashboard (Mac browser, via SSH tunnel to the gateway)

The gateway runs inside your Ubuntu VM, polls Telegram for messages, sends them to ChatGPT for responses, and delivers those responses back. You access the dashboard by tunneling through SSH from your Mac — keeping everything secure without needing SSL certificates.

1 Create the Ubuntu VM in Parallels

Why Ubuntu and not macOS in the VM?

Ubuntu is a Linux server OS with no version restrictions. It runs OpenClaw identically to how it runs on a cloud VPS, meaning all documentation and community support applies directly.

  1. Open Parallels Desktop → "+" → Install from image
  2. Select your Ubuntu 22.04 LTS ISO
  3. Before first boot, open VM Settings → Hardware → Memory
  4. Set RAM to 16384 MB (16GB) minimum — 32768 MB (32GB) if you have it to spare
  5. Set CPUs to 4 — enough for OpenClaw without starving your Mac
  6. Boot Ubuntu and complete the initial setup

Why 16GB? OpenClaw itself uses 400-800MB, but the gateway process, Node.js, and skills can push usage higher. 16GB gives comfortable headroom without eating too much of your Mac's total RAM.

2 Enable SSH Access

Why SSH instead of typing in the VM window?

Copy and paste doesn't work reliably in VM windows until Parallels Tools is installed — which itself requires terminal commands. SSH from your Mac Terminal breaks this chicken-and-egg problem and gives you multiple tabs, full copy/paste, and the ability to run OpenClaw from your Mac without touching the VM window again.

Inside the Ubuntu VM terminal:

sudo apt install openssh-server
ip a

Look for an IP address starting with 10.211.55.x — that's your VM's local IP on the Parallels network. Write this down.

From your Mac Terminal:

ssh parallels@YOUR_VM_IP
Tip: Parallels typically assigns IPs in the 10.211.55.x range. Run ip a inside the Ubuntu VM and look for the number next to inet on the primary network interface (usually enp0s5 on Parallels Ubuntu guests, or eth0) to find yours.
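To avoid retyping the address, you can give the VM an alias in your Mac's SSH config. A minimal sketch (the host name openclaw-vm is our own invention; substitute your VM's real IP for YOUR_VM_IP):

```
# ~/.ssh/config on the Mac
Host openclaw-vm
    HostName YOUR_VM_IP
    User parallels
```

With this in place, ssh openclaw-vm works anywhere this guide says ssh parallels@YOUR_VM_IP.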

3 Add Swap Space

Do this BEFORE installing anything.

OpenClaw's npm install process is memory-intensive. Without swap, it throws out-of-memory errors mid-install. Adding swap gives the system overflow space and prevents crashes during both installation and runtime.

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Verify it worked:

free -h

You should see 4GB of swap listed. The last command makes swap persist across reboots.

4 Install Node.js 24

Why not use Ubuntu's default Node.js?

Ubuntu 22.04's default Node.js package is version 12 — far too old. OpenClaw requires Node 22.14+ (Node 24 recommended). We install directly from NodeSource.

sudo apt install -y curl
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt install -y nodejs
node --version

Should output v24.x.x. If you see an older version, re-run the NodeSource setup script above, then reinstall nodejs.

5 Install OpenClaw

curl -fsSL https://openclaw.ai/install.sh | bash

This downloads and installs the OpenClaw CLI globally. After installation, run onboarding:

openclaw onboard

During the onboarding wizard:

  • Security warning → Yes, continue
  • Setup mode → QuickStart
  • Model provider → skip for now (we'll set up ChatGPT OAuth in Part 6)
  • Channel → Telegram (or skip and add later)
  • Search provider → DuckDuckGo (free, no API key required)
  • Skills → Skip for now
  • Hooks → Skip for now
  • Hatch → Hatch in TUI to confirm it starts
Why skip the model provider here? The ChatGPT OAuth flow requires an SSH port forward to complete. It's cleaner to do it as a separate step in Part 6.

6 Connect ChatGPT OAuth

Why ChatGPT OAuth instead of an API key or local Ollama?

Local Ollama: Running 3B+ parameter models on CPU maxes out at 500%+ CPU usage, making responses take 2+ minutes and slowing your entire system. Unusable for a personal assistant on Intel hardware.

Anthropic API key: Works but costs $0.50–$2.00 per interaction at scale. An always-on agent can run up surprising bills.

ChatGPT OAuth: OpenAI explicitly supports third-party tool OAuth with ChatGPT subscriptions. Your existing $20/month Plus plan covers all Codex usage in OpenClaw at a flat rate — no per-token billing. Responses are instant because OpenAI handles inference on their servers.

Setup

First, set up an SSH port forward so the OAuth callback can reach your terminal. Open a new Mac Terminal tab:

ssh -L 1455:localhost:1455 parallels@YOUR_VM_IP

Keep that tab open. Then in your SSH session run:

openclaw onboard --auth-choice openai-codex

Choose "Use existing values" to preserve your channel config. The wizard displays an OAuth URL — open it in your browser, log into ChatGPT, and authorize the connection. When the browser redirects to a localhost:1455 error page, that's expected. Copy the full URL from your address bar (including ?code=...) and paste it back into the terminal.

Note on Grok/SuperGrok: xAI does not offer subscription OAuth for third-party tools. Only a separate xAI API key works, billed per token.
Note on Claude/Anthropic: Anthropic has restricted subscription OAuth outside Claude Code for some users. API key auth is the safer path for Anthropic models.

7 Access the Dashboard

Why the SSH tunnel approach?

OpenClaw's Control UI requires a secure context (HTTPS or localhost) to store device identity in the browser. Rather than configuring SSL certificates, we tunnel the VM's localhost port to your Mac's localhost — which browsers already trust.

Open a dedicated Mac Terminal tab (keep it open whenever you use the dashboard):

ssh -L 18789:127.0.0.1:18789 parallels@YOUR_VM_IP

Then open Chrome and navigate to:

http://localhost:18789

Paste your gateway token when prompted:

grep token ~/.openclaw/openclaw.json
Important: Always use http://localhost:18789 — NOT http://YOUR_VM_IP:18789. The VM's LAN IP will trigger the device identity error. Localhost via the SSH tunnel is the correct approach.

8 Connect Telegram

Create your bot

  1. Open Telegram and search for @BotFather
  2. Send /newbot
  3. Give your bot a name (e.g., "Chief Wizard") and a username ending in _bot
  4. BotFather returns a token starting with numbers and a colon — save this immediately

Configure OpenClaw

openclaw channels add --channel telegram --token YOUR_BOT_TOKEN

Configure DM Policy

openclaw config set channels.telegram.dmPolicy open
openclaw config set channels.telegram.allowFrom '["*"]'
Security note: Setting allowFrom to * means anyone who finds your bot can message it. Since your bot token is private, this is low risk for personal use. For public bots, use dmPolicy: allowlist and specify exact user IDs.
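For the locked-down variant, the same config commands shown above can point at an allowlist instead. A sketch, assuming the allowlist takes numeric Telegram user IDs (123456789 is a placeholder; message @userinfobot on Telegram to learn your real ID):

```shell
# Restrict DMs to a single Telegram user instead of anyone
openclaw config set channels.telegram.dmPolicy allowlist
openclaw config set channels.telegram.allowFrom '["123456789"]'
```

Restart the gateway afterwards, as with any channel config change.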

Restart the gateway to apply:

openclaw gateway stop && sleep 3 && openclaw gateway run &

Message your bot on Telegram — it should respond within a few seconds.

9 Configure Gateway for LAN Access

By default the gateway binds to loopback (127.0.0.1) only. To access the dashboard from your Mac browser, change the bind to LAN:

openclaw config set gateway.bind lan

Add your browser's origin to the allowed list:

openclaw config set gateway.controlUi.allowedOrigins '["http://localhost:18789"]'

Restart:

openclaw gateway stop && sleep 3 && openclaw gateway run &

10 Auto-Start with Systemd

Why systemd?

Without systemd, you must manually run openclaw gateway run & every time the VM boots. With systemd, the gateway starts automatically and restarts itself if it crashes — no babysitting required.

openclaw gateway install --force
systemctl --user enable openclaw-gateway.service
systemctl --user start openclaw-gateway.service

Verify it's running:

systemctl --user status openclaw-gateway.service
Important limitation: The VM must be running for OpenClaw to work. If your Mac sleeps, the VM sleeps and OpenClaw stops. For always-on operation, run OpenClaw on a dedicated machine (Mac Mini, Raspberry Pi) or a VPS with a proper domain.
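If you want the VM to survive your Mac's idle timer without changing Energy Saver settings, macOS ships a caffeinate utility that holds a sleep assertion while it runs. This is a host-side workaround, not an OpenClaw feature:

```shell
# Run in a spare Mac Terminal tab; Ctrl-C to release the assertion.
# -i blocks idle sleep, -s blocks system sleep while on AC power.
caffeinate -i -s
```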

11 Customize Your Bot's Soul

OpenClaw reads workspace files at the start of each conversation. These files define your bot's personality, knowledge, and behavior.

  • SOUL.md: Personality, values, communication style (who the bot IS)
  • IDENTITY.md: Name, role, backstory
  • USER.md: Information about you (preferences, context)
  • HEARTBEAT.md: Recurring tasks the bot checks on a schedule
  • TOOLS.md: Which tools the bot can use and how
  • AGENTS.md: Rules for coordinating with other agents
  • MEMORY/: Stored context from past conversations

Edit them directly in the dashboard's file editor or via SSH:

nano ~/.openclaw/workspace/SOUL.md

No restart needed — the bot picks up changes at the start of the next conversation.
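As a starting point, a minimal SOUL.md might look like the sketch below. The headings and wording are our own illustration, not a required schema; write whatever fits your bot:

```markdown
# Soul

## Personality
Warm, direct, and concise. Prefers plain language over jargon.

## Values
Privacy first: never share the user's information with anyone else.

## Communication style
Short answers by default; expand only when asked.
```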

Daily Startup Sequence

Once everything is configured, your daily routine is:

  1. Start Parallels Ubuntu VM (boot or resume)
  2. SSH into VM (Terminal tab 1): ssh parallels@YOUR_VM_IP
  3. Start gateway (if not using systemd): openclaw gateway run &
  4. Open SSH tunnel for dashboard (Terminal tab 2): ssh -L 18789:127.0.0.1:18789 parallels@YOUR_VM_IP
  5. Open dashboard in Chrome: http://localhost:18789
With systemd installed: Skip step 3 — the gateway is already running when you SSH in. For the Telegram bot alone, you only need to start the VM; systemd handles everything automatically.
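The routine above can be shortened with a couple of shell helpers. A sketch to append to ~/.zshrc on the Mac; the claw-ssh and claw-tunnel names are our own invention, and YOUR_VM_IP is still your VM's address:

```shell
# Convenience helpers for the daily OpenClaw routine.
# claw-ssh opens a plain session (extra args pass through to ssh);
# claw-tunnel opens the dashboard tunnel and stays in the foreground.
claw-ssh()    { ssh "parallels@YOUR_VM_IP" "$@"; }
claw-tunnel() { ssh -L 18789:127.0.0.1:18789 "parallels@YOUR_VM_IP"; }
```

After reloading your shell, claw-tunnel replaces the long ssh -L command in step 4.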

Troubleshooting

Bot receives messages but never responds

Check your ChatGPT OAuth token is valid:

openclaw channels status --probe

If expired, re-run: openclaw onboard --auth-choice openai-codex

Dashboard shows "origin not allowed"

Always access via http://localhost:18789 through the SSH tunnel — not the VM's LAN IP.

Gateway restart loop (SIGTERM every 14 seconds)

Usually a conflicting process or Caddy health check. Check what's running:

systemctl list-units --type=service --state=running

"Gateway already running" error

lsof -i :18789
kill -9 [PID]
openclaw gateway run &

Telegram bot types but never sends

Verify the bot token is valid:

curl "https://api.telegram.org/botYOUR_TOKEN/getMe"

Should return your bot's info. If 404, rotate the token in BotFather.

High CPU/RAM from Ollama

Local AI models on CPU are resource-intensive. On an Intel Mac expect 500%+ CPU usage with 3B+ models. Solution: Use ChatGPT OAuth instead — it's free with Plus and uses zero local resources.

Key Lessons Learned

Swap first, always. Add swap before installing anything. Out-of-memory errors mid-install are painful and hard to diagnose.

SSH tunnel for the dashboard. Don't fight SSL certificates. The SSH tunnel approach gives you HTTPS-equivalent security with zero certificate headaches.

ChatGPT OAuth beats local models on non-GPU hardware. If you don't have Apple Silicon, running 3B+ parameter models on CPU is unusable in practice. OAuth is free with Plus and instant.

DM Policy "open" with allowFrom * is fine for personal bots. Your bot token is the security layer. Nobody can find your bot without it.

VPS works but needs a real domain. Let's Encrypt won't issue certificates for raw IPs, and every dashboard workaround traces back to this.

The bind bug is real on macOS. Setting gateway.bind: lan silently fails on macOS — the gateway stays on loopback regardless. Ubuntu doesn't have this bug.

OpenClaw is powerful but rough around the edges. It's beta software moving fast. Pin to a working version and update deliberately, not automatically.

Power Your AI Business

Two tools we actually use — one for infrastructure, one for the builder behind the bot.

The platform behind the bot

Groove

Chief Wizard posts to Groove-hosted pages. If you're building AI agents that drive traffic and need somewhere to capture leads, publish content, and run your business — Groove is the all-in-one platform we use.

See how Groove powers Star Love XP →

Power the builder behind the bot

LifeWave

Late night coding sessions. Deep research runs. The kind of focused work that builds something real. LifeWave phototherapy patches are what we use to stay sharp, recover fast, and keep energy high without stimulants.

Explore LifeWave patches →

Join the Tech Temple

Daily AI alpha, crypto signals, Polymarket plays, and the tools builders are actually using — delivered every day at 5:55 PM Pacific by Chief Wizard.

No fluff. No hype. Just signal.

Join the Tech Temple on Telegram →