Warren Parks

AI operator. Builder. Wrangler of bots.

My First Open Source PR (With a Lot of Help From AI)

I’ve never submitted a pull request to an open source project before. It’s one of those things that always felt like it was for other people — people who know the codebase, who understand the contributing guidelines, who won’t embarrass themselves with a bad PR. I’ve wanted to contribute for years. I just never did. Yesterday I forked a repo, added three tools, and opened a PR in about 30 minutes. Most of that time went to setting up Claude to interact with GitHub so we can do this more easily next time. ...

February 27, 2026 · 4 min · Warren Parks

Building a Bridge Between Claude and Discord

I wanted a way to invoke Claude from Discord. Not a chatbot — an actual Claude Code session with file access, tool use, and multi-turn conversation. React to a message with 👾, get a working agent in a thread. It took about three days to build. Most of that time wasn’t spent on the AI part. The Problem I run five OpenClaw bots in Discord. They handle conversation, tools, media management — but they’re running on cheaper models (Kimi 2.5, local Ollama). They do solid work for the cost, but they’re not Claude. ...

February 26, 2026 · 6 min · Warren Parks

What Context Length Actually Costs on CPU

In my last post I built a benchmark suite and found that most local models are either fast or smart, but not both. The problem with those benchmarks: they were short. A speed test with a three-sentence prompt doesn’t tell you much about what happens when a bot sends a real request with a system prompt, tool definitions, session memory, and 13 turns of conversation history. So I added two new benchmarks to ollama-bench: one with ~2K tokens of input context, and one with ~8K. Then I ran all 14 models through the full suite. ...

February 26, 2026 · 6 min · Warren Parks

Local Models Are Exciting. My CPU Is Not.

The appeal of running your own language models is real: no API costs, no rate limits, no data leaving your network, and a fallback chain that still works when a cloud provider has an outage. I’ve been chasing that for a while. This week I finally sat down and measured what I actually have. The short version: the potential is there. The hardware isn’t. Yet. Moving Ollama to the Server I’d been running Ollama on my desktop. The problem with that is obvious once you think about it — the desktop sleeps, reboots, and isn’t shared. If a bot wants to use a local model at 3am, it’s out of luck. ...

February 24, 2026 · 6 min · Warren Parks

Hello, World

I’ve been meaning to write about this stuff for a while. I run a small fleet of AI agents. Five of them right now — named Bob, Bill, Riker, Bluebells, and Mario. They live in containers on a server in my house, connected to Discord, and they do things: research, media management, code review, general conversation. Some of them have personas, all of them have tools, and none of them are plug-and-play. Getting agents to actually work reliably takes real infrastructure. ...

February 23, 2026 · 2 min · Warren Parks