I want to share my experience installing and experimenting with OpenClaw on a Raspberry Pi.

What is OpenClaw?
OpenClaw (formerly ClawdBot/Moltbot) was released in late January as an open-source, self-hosted autonomous AI agent. In simple terms, it’s an AI assistant that runs on your own hardware and can perform tasks on your computer and across the web. Think: a local AI that can install software, configure services, create files, browse the web (with the right APIs), and automate workflows.
That’s powerful. It’s also potentially dangerous.
A Quick Warning
This is not something you should install casually. Giving an AI agent control over a machine, especially one connected to the internet, comes with real security risks. If misconfigured, it could delete files, expose credentials, or run tasks you didn’t intend.
That’s why I installed it on a Raspberry Pi 400 that had no personal data on it. It’s essentially a sandbox machine. I also plan to power it off when I’m not around and avoid long-running autonomous tasks. Ironically, long-term autonomy is one of the most powerful features, but also where things can go sideways.
The Hardware
I used a Raspberry Pi 400 (essentially a Raspberry Pi 4 built into a keyboard). The current generation is the Raspberry Pi 5, but the 400 is what I had available.
I installed Raspberry Pi OS following the official instructions on the Raspberry Pi website. That part was straightforward. Once it was booted and connected to the internet, I ran the install command from the OpenClaw website:
curl -fsSL https://openclaw.ai/install.sh | bash
It ran for a while and completed without issues.
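A side note on that install command: piping a remote script straight into bash means running code you have never seen. A more cautious pattern is to download the script, skim it, and only then execute it. The verify_and_run helper below is my own sketch of that pattern, not part of OpenClaw, and as far as I know the project does not publish a checksum; it just shows the idea.

```shell
# Download first so you can actually read what you're about to run:
#   curl -fsSL https://openclaw.ai/install.sh -o install.sh
# Then run it only if it matches a hash you trust.
verify_and_run() {
  file=$1
  expected=$2
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
    # bash "$file"   # uncomment to actually execute the installer
  else
    echo "checksum MISMATCH -- not running"
  fi
}
```

Usage would be `verify_and_run install.sh <expected-sha256>`; with no published hash, at least read the script before running it.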
Initial Setup
During setup, OpenClaw prompts you to choose a model. I went with Google’s Gemini models, mainly because Google offers a generous free trial (at the time of writing, $300 in API credits for cloud services). Gemini 3 Pro is also a strong model.
At one point, OpenClaw asked how I wanted it to handle memory. After some research, I selected:
- session-memory
- boot-md
This combination is generally recommended because it gives a good balance of contextual awareness and proactive behavior without unnecessarily increasing API costs.
There were additional options that I left at defaults for now.
Connecting via Telegram
To interact with OpenClaw remotely, I chose Telegram, since it seemed the fastest and simplest option. Note that this step is entirely optional; it’s just a convenience.
Here’s what that process looked like:
- Created a Telegram account.
- Messaged BotFather (yes, that’s actually what it’s called).
- Created a new bot.
- Received an API key.
- Copied that API key into the OpenClaw setup on the Raspberry Pi.
At first I wasn’t sure why Telegram was necessary, but it provides a convenient remote interface. You can interact with your agent from your phone without being physically at the Pi.
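One thing that tripped me up: it’s easy to paste the wrong string from BotFather. Telegram bot tokens look like `<numeric bot id>:<secret>`. The helper below is my own quick shape check (the exact secret length isn’t guaranteed here); it just catches obvious paste errors before you hand the token to OpenClaw.

```shell
# Rough sanity check on a BotFather token before pasting it into setup.
looks_like_bot_token() {
  echo "$1" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]{20,}$' \
    && echo "plausible token" \
    || echo "malformed token"
}
```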
Adding Skills (Important)
By default, OpenClaw has limited abilities. You’ll want to add APIs to extend what it can do.
A few important ones:
- Brave Search API – Allows it to search the web.
- Firecrawl – Enables scraping and structured extraction from websites.
- Gemini (or other AI API) – Provides the reasoning engine.
Without Brave Search, for example, it can’t meaningfully browse the web.
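To demystify what a search skill actually does: it boils down to an HTTP call. The endpoint and `X-Subscription-Token` header below are from Brave’s public Search API; the `BRAVE_API_KEY` variable name is my own choice, not something OpenClaw defines.

```shell
# Minimal sketch of a web search against Brave's API.
brave_search() {
  curl -s --get "https://api.search.brave.com/res/v1/web/search" \
    --data-urlencode "q=$1" \
    -H "X-Subscription-Token: $BRAVE_API_KEY"
}
```

The agent wraps calls like this in a skill; seeing the raw request makes it clearer why the API key has to be configured before web browsing works.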
The “Soul” File
When you first interact with OpenClaw, it asks you to define its identity. This is stored in a file called soul.md.
This file defines how it behaves: its tone, personality, constraints, and priorities. For example:
- “You will be concise.”
- “You will be polite.”
- “You will ask for clarification when needed.”
It’s an interesting concept. You’re essentially shaping the behavioral layer that sits on top of the model.
What I’ve Tried So Far
I haven’t pushed it too far yet, but I did test a few practical tasks.
One useful test: I had OpenClaw install the Apache web server.
That alone was helpful. Apache installation can involve multiple steps, and having the agent handle it saved time.
I also had it create a basic website, set up folders correctly, and create the required MySQL database tables. Could I have done this manually or used a coding assistant? Yes. But having everything created and placed correctly in one go was convenient.
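For scale, the website part of that task is only a few commands by hand. In this sketch, SITE_ROOT is a stand-in for Apache’s docroot (/var/www/html on Raspberry Pi OS, where you’d need sudo), and the database setup is omitted.

```shell
# Manual equivalent of the "create a basic website" step.
SITE_ROOT="${SITE_ROOT:-./mysite}"
mkdir -p "$SITE_ROOT"
cat > "$SITE_ROOT/index.html" <<'HTML'
<!doctype html>
<h1>Hello from the Pi</h1>
HTML
echo "created $SITE_ROOT/index.html"
```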
That said, for pure coding tasks, it might be cheaper to use a coding model separately and avoid agent-based API costs.
Multiple Agents (Very Useful)
One feature I found particularly useful is the ability to define multiple agents using different models.
This can save money.
For example:
- A “smart” agent that uses a more powerful (and expensive) model.
- A “fast” agent that uses a lighter, cheaper model for routine tasks.
Here’s a simplified example of what that configuration looks like inside openclaw.json:
"list": [
{
"id": "smart",
"default": true,
"name": "Smart",
"model": {
"primary": "google/gemini-3-pro",
"fallbacks": [
"google/gemini-2.5-flash"
]
}
},
{
"id": "fast",
"name": "Fast",
"model": {
"primary": "google/gemini-2.5-flash",
"fallbacks": [
"google/gemini-2.5-flash-lite"
]
}
}
]
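If you edit openclaw.json by hand, it’s worth validating it before restarting the gateway, since one stray comma breaks the whole config. The check_config helper below is my own, built on Python’s standard-library JSON parser.

```shell
# Report whether a config file is syntactically valid JSON.
check_config() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "config OK"
  else
    echo "config BROKEN"
  fi
}
# check_config ~/.openclaw/openclaw.json
```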
You can then switch between them depending on the task. You only need to give a short command in the chat:
/agent smart
Useful Commands
Here are some commands I’ve found helpful:
- openclaw configure – Update settings interactively.
- nano ~/.openclaw/openclaw.json – Manually edit configuration (only if you know what you’re doing).
- openclaw gateway restart – Restart the session.
- openclaw gateway stop – Stop the session.
- openclaw gateway start – Start the session.
- openclaw models status – View your configured models and fallbacks.
- openclaw tui – Opens the terminal UI and enables the browser interface at http://localhost:18789.
Inside the chat interface, there are also slash commands such as:
- /exit
- /session new
- /agent smart
Final Thoughts (For Now)
OpenClaw is powerful. Running an autonomous agent locally on your own hardware feels like a glimpse into where computing is headed.
But it’s not plug-and-play. You need to understand APIs, configuration files, model selection, and security implications. This is not something I would put on a machine with sensitive data.
I’ll continue experimenting and will update this post as I try more advanced use cases. For now, I’m cautiously impressed.