How to Set Up OpenClaw: An Intel Mac Battle-Tested Guide
From Zero to AI Personal Assistant — Mac, Parallels, Ubuntu, and ChatGPT OAuth
Table of Contents
- Why This Setup?
- What You'll Need
- Architecture Overview
- Part 1 — Create the Ubuntu VM
- Part 2 — Enable SSH Access
- Part 3 — Add Swap Space
- Part 4 — Install Node.js 24
- Part 5 — Install OpenClaw
- Part 6 — Connect ChatGPT OAuth
- Part 7 — Access the Dashboard
- Part 8 — Connect Telegram
- Part 9 — Configure Gateway
- Part 10 — Auto-Start with Systemd
- Part 11 — Customize Your Bot's Soul
- Daily Startup Sequence
- Troubleshooting
- Key Lessons Learned
Why This Setup?
This guide documents a real, battle-tested path to getting OpenClaw running reliably on an Intel MacBook Pro. We tried several approaches before landing on this one.
What You'll Need
- Mac with Intel or Apple Silicon — this guide uses an Intel i9 with 64GB RAM
- Parallels Desktop 26 — download from parallels.com
- Ubuntu 22.04 LTS ISO — download from ubuntu.com/download
- ChatGPT Plus, Pro, or Team subscription — for free OAuth access
- Telegram account — for your first messaging channel
- 16GB RAM minimum allocated to the VM (32GB recommended for future expansion)
Architecture Overview
Here's how all the pieces connect:
Telegram Servers
↕
OpenClaw Gateway (Ubuntu VM, port 18789) ⇄ Your Dashboard (Mac browser via SSH tunnel)
↕
ChatGPT API (via OAuth — OpenAI servers)
The gateway runs inside your Ubuntu VM, polls Telegram for messages, sends them to ChatGPT for responses, and delivers those responses back. You access the dashboard by tunneling through SSH from your Mac — keeping everything secure without needing SSL certificates.
1 Create the Ubuntu VM in Parallels
Why Ubuntu and not macOS in the VM?
Ubuntu is a Linux server OS with no version restrictions. It runs OpenClaw identically to how it runs on a cloud VPS, meaning all documentation and community support applies directly.
- Open Parallels Desktop → "+" → Install from image
- Select your Ubuntu 22.04 LTS ISO
- Before first boot, open VM Settings → Hardware → Memory
- Set RAM to 16384 MB (16GB) minimum — 32768 MB (32GB) if you have it to spare
- Set CPUs to 4 — enough for OpenClaw without starving your Mac
- Boot Ubuntu and complete the initial setup
2 Enable SSH Access
Why SSH instead of typing in the VM window?
Copy and paste doesn't work reliably in VM windows until Parallels Tools is installed — which itself requires terminal commands. SSH from your Mac Terminal breaks this chicken-and-egg problem and gives you multiple tabs, full copy/paste, and the ability to run OpenClaw from your Mac without touching the VM window again.
Inside the Ubuntu VM terminal:
sudo apt install openssh-server
ip a
Look for an IP address starting with 10.211.55.x — that's your VM's local IP on the Parallels network. Write this down.
From your Mac Terminal:
ssh parallels@YOUR_VM_IP
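Typing the IP every time gets old. Optionally, add a host alias so `ssh openclaw-vm` does the same thing. The alias name and the IP below are our own examples, not anything OpenClaw requires; substitute the address your VM actually reported:

```shell
# Add a host alias for the VM to ~/.ssh/config.
# "openclaw-vm" and 10.211.55.4 are hypothetical; use your VM's real IP.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host openclaw-vm
    HostName 10.211.55.4
    User parallels
EOF
```

After this, `ssh openclaw-vm` and `ssh -L 18789:127.0.0.1:18789 openclaw-vm` both work without remembering the IP.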
Parallels' shared network assigns addresses in the 10.211.55.x range. Run ip a inside the Ubuntu VM and look for the number next to inet on the eth0 interface to find yours.
3 Add Swap Space
Do this BEFORE installing anything.
OpenClaw's npm install process is memory-intensive. Without swap, it throws out-of-memory errors mid-install. Adding swap gives the system overflow space and prevents crashes during both installation and runtime.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Verify it worked:
free -h
You should see 4GB of swap listed. The last command makes swap persist across reboots.
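If you want to script the check rather than eyeball `free -h`, the kernel exposes the same number in /proc/meminfo. A small sketch (the helper name is ours; the optional file argument just makes it easy to test):

```shell
# check_swap MIN_KB [MEMINFO] -- succeeds when SwapTotal meets the minimum.
# MEMINFO defaults to /proc/meminfo; passing a file makes the helper testable.
check_swap() {
  min_kb=$1
  file=${2:-/proc/meminfo}
  kb=$(awk '/^SwapTotal:/ {print $2}' "$file")
  [ -n "$kb" ] && [ "$kb" -ge "$min_kb" ]
}

# 4 GB = 4194304 kB
check_swap 4194304 && echo "swap OK" || echo "swap missing or too small"
```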
4 Install Node.js 24
Why not use Ubuntu's default Node.js?
Ubuntu 22.04's default Node.js package is version 12 — far too old. OpenClaw requires Node 22.14+ (Node 24 recommended). We install directly from NodeSource.
sudo apt install -y curl
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt install -y nodejs
node --version
Should output v24.x.x. If you see an older version, run the curl command again.
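Since the guide states a 22.14 minimum, you can encode that check in a few lines of shell rather than comparing versions by eye. A sketch (the function name is ours):

```shell
# node_ok VERSION -- true when VERSION (e.g. v24.1.0) meets the 22.14 minimum
node_ok() {
  v=${1#v}                 # strip leading "v"
  major=${v%%.*}           # text before the first dot
  rest=${v#*.}
  minor=${rest%%.*}        # text between the first and second dots
  [ "$major" -gt 22 ] || { [ "$major" -eq 22 ] && [ "$minor" -ge 14 ]; }
}

if command -v node >/dev/null && node_ok "$(node --version)"; then
  echo "Node version OK"
else
  echo "Node missing or too old"
fi
```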
5 Install OpenClaw
curl -fsSL https://openclaw.ai/install.sh | bash
This downloads and installs the OpenClaw CLI globally. After installation, run onboarding:
openclaw onboard
During the onboarding wizard:
- Security warning → Yes, continue
- Setup mode → QuickStart
- Model provider → skip for now (we'll set up ChatGPT OAuth in Part 6)
- Channel → Telegram (or skip and add later)
- Search provider → DuckDuckGo (free, no API key required)
- Skills → Skip for now
- Hooks → Skip for now
- Hatch → Hatch in TUI to confirm it starts
6 Connect ChatGPT OAuth
Why ChatGPT OAuth instead of an API key or local Ollama?
Local Ollama: Running 3B+ parameter models on CPU maxes out at 500%+ CPU usage, making responses take 2+ minutes and slowing your entire system. Unusable for a personal assistant on Intel hardware.
Anthropic API key: Works but costs $0.50–$2.00 per interaction at scale. An always-on agent can run up surprising bills.
ChatGPT OAuth: OpenAI explicitly supports third-party tool OAuth with ChatGPT subscriptions. Your existing $20/month Plus plan covers all Codex usage in OpenClaw at a flat rate — no per-token billing. Responses are instant because OpenAI handles inference on their servers.
Setup
First, set up an SSH port forward so the OAuth callback can reach your terminal. Open a new Mac Terminal tab:
ssh -L 1455:localhost:1455 parallels@YOUR_VM_IP
Keep that tab open. Then in your SSH session run:
openclaw onboard --auth-choice openai-codex
Choose "Use existing values" to preserve your channel config. The wizard displays an OAuth URL — open it in your browser, log into ChatGPT, and authorize the connection. When the browser redirects to a localhost:1455 error page, that's expected. Copy the full URL from your address bar (including ?code=...) and paste it back into the terminal.
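If the error page is awkward to copy from, you can pull the authorization code out of the pasted URL yourself. A small sketch (the helper name is ours, and this assumes the standard `?code=...` query parameter shape described above):

```shell
# extract_code URL -- print the value of the code= query parameter
extract_code() {
  printf '%s\n' "$1" | sed -n 's/.*[?&]code=\([^&]*\).*/\1/p'
}

# Hypothetical redirect URL for illustration:
extract_code "http://localhost:1455/auth/callback?code=abc123&state=xyz"  # → abc123
```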
7 Access the Dashboard
Why the SSH tunnel approach?
OpenClaw's Control UI requires a secure context (HTTPS or localhost) to store device identity in the browser. Rather than configuring SSL certificates, we tunnel the VM's localhost port to your Mac's localhost — which browsers already trust.
Open a dedicated Mac Terminal tab (keep it open whenever you use the dashboard):
ssh -L 18789:127.0.0.1:18789 parallels@YOUR_VM_IP
Then open Chrome and navigate to:
http://localhost:18789
Paste your gateway token when prompted:
cat ~/.openclaw/openclaw.json | grep token
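Plain `grep token` prints the whole line, including the JSON key and quotes. If you want just the value, a slightly sturdier sketch (this assumes the token is stored under a key literally named `"token"` in that file; the helper name is ours):

```shell
# json_token FILE -- print the first "token" value found in a JSON file
json_token() {
  sed -n 's/.*"token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1" | head -n 1
}

json_token ~/.openclaw/openclaw.json
```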
Always open http://localhost:18789 — NOT http://YOUR_VM_IP:18789. The VM's LAN IP will trigger the device identity error. Localhost via the SSH tunnel is the correct approach.
8 Connect Telegram
Create your bot
- Open Telegram and search for @BotFather
- Send /newbot
- Give your bot a name (e.g., "Chief Wizard") and a username ending in _bot
- BotFather returns a token starting with numbers and a colon — save this immediately
Configure OpenClaw
openclaw channels add --channel telegram --token YOUR_BOT_TOKEN
Configure DM Policy
openclaw config set channels.telegram.dmPolicy open
openclaw config set channels.telegram.allowFrom '["*"]'
For stricter security, set dmPolicy to allowlist and specify exact user IDs. Restart the gateway to apply:
openclaw gateway stop && sleep 3 && openclaw gateway run &
Message your bot on Telegram — it should respond within a few seconds.
9 Configure Gateway for LAN Access
By default the gateway binds to loopback (127.0.0.1) only. To access the dashboard from your Mac browser, change the bind to LAN:
openclaw config set gateway.bind lan
Add your browser's origin to the allowed list:
openclaw config set gateway.controlUi.allowedOrigins '["http://localhost:18789"]'
Restart:
openclaw gateway stop && sleep 3 && openclaw gateway run &
10 Auto-Start with Systemd
Why systemd?
Without systemd, you must manually run openclaw gateway run & every time the VM boots. With systemd, the gateway starts automatically and restarts itself if it crashes — no babysitting required.
openclaw gateway install --force
systemctl --user enable openclaw-gateway.service
systemctl --user start openclaw-gateway.service
Verify it's running:
systemctl --user status openclaw-gateway.service
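The installer writes the unit file for you, so you never need to author one by hand. For orientation, a user-level unit for an always-on service generally looks something like the sketch below. This is a hypothetical example, not OpenClaw's actual generated file; the ExecStart line and paths are assumptions, so check what `openclaw gateway install` really wrote (typically under `~/.config/systemd/user/`) before editing anything:

```ini
[Unit]
Description=OpenClaw Gateway

[Service]
# Restart automatically if the gateway crashes
ExecStart=/usr/bin/env openclaw gateway run
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

The `Restart=on-failure` directive is what provides the "no babysitting" behavior described above.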
11 Customize Your Bot's Soul
OpenClaw reads workspace files at the start of each conversation. These files define your bot's personality, knowledge, and behavior.
| File | Purpose |
|---|---|
| SOUL.md | Personality, values, communication style — who the bot IS |
| IDENTITY.md | Name, role, backstory |
| USER.md | Information about you — preferences, context |
| HEARTBEAT.md | Recurring tasks the bot checks on a schedule |
| TOOLS.md | Which tools the bot can use and how |
| AGENTS.md | Rules for coordinating with other agents |
| MEMORY/ | Stored context from past conversations |
Edit them directly in the dashboard's file editor or via SSH:
nano ~/.openclaw/workspace/SOUL.md
No restart needed — the bot picks up changes at the start of the next conversation.
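To make this concrete, a minimal SOUL.md might look like the sketch below. The content is entirely hypothetical, there is no required schema here, and you should write your own:

```markdown
# Soul

## Personality
Direct, warm, allergic to filler. Answers first, caveats second.

## Values
- Never invent facts; say "I don't know" instead.
- Protect the user's time: short replies by default.

## Communication style
Plain language, occasional dry humor, no corporate tone.
```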
Daily Startup Sequence
Once everything is configured, your daily routine is:
- Start Parallels Ubuntu VM (boot or resume)
- SSH into the VM (Terminal tab 1): ssh parallels@YOUR_VM_IP
- Start the gateway (if not using systemd): openclaw gateway run &
- Open the SSH tunnel for the dashboard (Terminal tab 2): ssh -L 18789:127.0.0.1:18789 parallels@YOUR_VM_IP
- Open the dashboard in Chrome: http://localhost:18789
Troubleshooting
Bot receives messages but never responds
Check your ChatGPT OAuth token is valid:
openclaw channels status --probe
If expired, re-run: openclaw onboard --auth-choice openai-codex
Dashboard shows "origin not allowed"
Always access via http://localhost:18789 through the SSH tunnel — not the VM's LAN IP.
Gateway restart loop (SIGTERM every 14 seconds)
Usually a conflicting process or Caddy health check. Check what's running:
systemctl list-units --type=service --state=running
"Gateway already running" error
lsof -i :18789
kill -9 [PID]
openclaw gateway run &
Telegram bot types but never sends
Verify the bot token is valid:
curl "https://api.telegram.org/botYOUR_TOKEN/getMe"
Should return your bot's info. If 404, rotate the token in BotFather.
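If you want to check that response in a script instead of reading raw JSON, the Bot API wraps every reply in an `"ok"` field you can test for. A sketch (the helper name is ours; a canned response is shown so the example runs offline):

```shell
# bot_ok JSON -- true when a Bot API response reports "ok":true
bot_ok() {
  printf '%s' "$1" | grep -q '"ok"[[:space:]]*:[[:space:]]*true'
}

# Canned getMe-style response for illustration; a live check would feed
# the output of `curl -s "https://api.telegram.org/botYOUR_TOKEN/getMe"`
# into bot_ok instead.
sample='{"ok":true,"result":{"id":123456,"is_bot":true,"username":"chief_wizard_bot"}}'
bot_ok "$sample" && echo "token valid"  # → token valid
```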
High CPU/RAM from Ollama
Local AI models on CPU are resource-intensive. On an Intel Mac expect 500%+ CPU usage with 3B+ models. Solution: Use ChatGPT OAuth instead — it's free with Plus and uses zero local resources.
Key Lessons Learned
Swap first, always. Add swap before installing anything. Out-of-memory errors mid-install are painful and hard to diagnose.
SSH tunnel for the dashboard. Don't fight SSL certificates. The SSH tunnel approach gives you HTTPS-equivalent security with zero certificate headaches.
ChatGPT OAuth beats local models on non-GPU hardware. If you don't have Apple Silicon, running 3B+ parameter models on CPU is unusable in practice. OAuth is free with Plus and instant.
DM Policy "open" with allowFrom * is fine for personal bots. Your bot token is the security layer. Nobody can find your bot without it.
VPS works but needs a real domain. Let's Encrypt won't issue certificates for raw IPs, and every dashboard workaround traces back to this.
The bind bug is real on macOS. Setting gateway.bind: lan silently fails on macOS — the gateway stays on loopback regardless. Ubuntu doesn't have this bug.
OpenClaw is powerful but rough around the edges. It's beta software moving fast. Pin to a working version and update deliberately, not automatically.
Resources
- OpenClaw docs: docs.openclaw.ai
- OpenClaw GitHub: github.com/openclaw/openclaw
- Ollama (local models): ollama.com
- BotFather: Search @BotFather in Telegram
- OpenClaw Discord: discord.com/invite/clawd
Power Your AI Business
Two tools we actually use — one for infrastructure, one for the builder behind the bot.
The platform behind the bot
Groove
Chief Wizard posts to Groove-hosted pages. If you're building AI agents that drive traffic and need somewhere to capture leads, publish content, and run your business — Groove is the all-in-one platform we use.
See how Groove powers Star Love XP →
Power the builder behind the bot
LifeWave
Late night coding sessions. Deep research runs. The kind of focused work that builds something real. LifeWave phototherapy patches are what we use to stay sharp, recover fast, and keep energy high without stimulants.
Explore LifeWave patches →
Join the Tech Temple
Daily AI alpha, crypto signals, Polymarket plays, and the tools builders are actually using — delivered every day at 5:55 PM Pacific by Chief Wizard.
No fluff. No hype. Just signal.
Join the Tech Temple on Telegram →
