
Initial thoughts on OpenClaw

Setting up OpenClaw in Docker is a bit hit-and-miss; you can tell it’s early days. Running docker-setup.sh wasn’t smooth for me: it seemingly failed after the initial configuration because there was no wait loop to ensure the Docker container was up and running before interrogating it, so it dumped me out of the first-run process too early.
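The missing step is easy to add by hand. As a rough sketch (the helper name, retry count, and container name are my own, not from the OpenClaw scripts), a generic retry function like this could gate the rest of docker-setup.sh until the container actually answers:

```shell
#!/bin/sh
# Hypothetical retry helper -- not part of OpenClaw's docker-setup.sh.
# wait_for ATTEMPTS CMD...: re-run CMD once per second until it succeeds,
# or return non-zero after ATTEMPTS failed runs.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# Example: block until a container named "openclaw" (assumed name)
# accepts exec, before the setup script starts interrogating it:
# wait_for 30 docker exec openclaw true
```

Anything along these lines would have saved my first run.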

This was my initial journey:

  • The first thing that caught my eye on first run was the built-in skill called blogwatcher.
  • After I configured the security credentials needed to access the local admin UI, I tried pointing blogwatcher at Anil’s blog. For reasons I couldn’t pin down, it failed to parse the Atom feed correctly, showing only 3 entries.
  • I went on ClawHub to find alternatives and found rss-digest, which looked more comprehensive. Unfortunately it needs Go 1.24 or newer, and Debian Bookworm only ships 1.23.
  • I patched the Dockerfile to install Go from source.
  • Great, it worked! I could subscribe to Anil’s feed, and all 363 entries were visible. However, there’s no mechanism to mark all of his posts as read, and I want the agent’s usage pattern to be based on notifications for new posts.
  • I forked the feed codebase and vibe-coded a facility to mark an entire feed as read.
  • I then patched the Dockerfile so that my custom feed fork can optionally be installed during the build.
  • It worked! I updated the SKILL.md file to reflect my new intended RSS workflow.
  • Finally, I decided to give the bot very limited access to GitHub, so I created a fine-grained Personal Access Token and made a final patch to the Dockerfile to install the gh command-line tool from GitHub’s official apt repo.

I’ve created a gist containing the additions to the Dockerfile for anyone in the same boat.
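For a feel of the shape of those additions, here is a sketch rather than the exact gist: the Go version, install paths, and the feed-fork build arg (`FEED_FORK_REPO`) are illustrative assumptions, and the gh steps follow GitHub’s documented apt installation.

```dockerfile
# Sketch of the Dockerfile additions (versions and build args illustrative).

# 1. A Go toolchain newer than Debian Bookworm's package (rss-digest wants >= 1.24).
ARG GO_VERSION=1.24.0
RUN curl -fsSL "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
      | tar -C /usr/local -xz
ENV PATH="/usr/local/go/bin:${PATH}"

# 2. Optionally build my mark-as-read feed fork (hypothetical repo URL,
#    skipped entirely when the build arg is left empty).
ARG FEED_FORK_REPO=""
RUN if [ -n "${FEED_FORK_REPO}" ]; then \
      git clone "${FEED_FORK_REPO}" /opt/feed-fork \
      && cd /opt/feed-fork \
      && go build -o /usr/local/bin/rss-digest . ; \
    fi

# 3. The gh CLI from GitHub's official apt repo.
RUN curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg \
      -o /usr/share/keyrings/githubcli-archive-keyring.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" \
      > /etc/apt/sources.list.d/github-cli.list \
    && apt-get update && apt-get install -y gh
```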

The last thing I wanted to mention was the feasibility of running it powered by a local LLM. I have a Mac Studio with 128GB, so I’m handily able to run Qwen 3.5 122B-A10B. I initially tried this using a relatively recent build of llama.cpp, built from a git checkout from a couple of weeks ago.

My initial tests were extremely slow, with single queries taking upwards of 4 minutes to complete. Thankfully, a fix landed 2 days ago addressing a Qwen 3.5 bug that caused the multi-turn prompt to be re-parsed on every request. As soon as I rebuilt from HEAD, the problem went away and chat responses were near-instantaneous.

I now want to investigate how to set up multiple agents with different thinking capabilities to offload non-synchronous work. OpenClaw is an expensive toy: I’ve already spent around $10 of token credit using Kimi-2.5, and I can see from the together.ai dashboard that most of the cost is prompt ingest rather than generation, at roughly 10 million tokens in versus about 40k out.