Ralph Wiggum is the name of a “technique” that felt like a joke when I first read about it. Geoffrey Huntley originated it, I believe, and the idea is dead simple: persist. Make your AI agent run the same prompt in a loop, a fresh conversation each time, over and over. The theory is that, by doing so, the agent will eventually “get it right”.

while :; do openhands -f PROMPT.md --headless ; done

I didn’t believe it would work. My strong intuition was that the LLM would just go astray, inventing who-knows-what useless stuff, and the end result would be worse than one attempt.

Well, not so fast.

I tried it one night around 2 a.m., maybe because I was too tired to think straight; otherwise I might have dismissed it as too silly to waste tokens on. I set up an OpenHands SDK agent with a script running the conversation in a loop: same prompt, almost no guardrails. In other words: I did a Ralph.

The task was to create a CLI for working with LLM profiles: loading, switching, and the like. Neither complex nor super-simple, but with a few footguns the agent could’ve stepped on.

Then I went to sleep.

… but it kinda works

The next day I found a handy CLI waiting. No issues, no bugs; it just worked. The agent had been running tests along the way, and it had fixed some minor but real bugs in the reliability of the existing tests and in the use of the OpenHands terminal via tmux; fixes I PR-ed to the upstream repo separately. For the rest, it kept focus. It didn’t go off the rails.

Maybe the task was too easy? Maybe! It wasn’t difficult, but it was more than the agent could have one-shot, I think.

Importantly, my prompt was bad: a wall of text dumped at 2 a.m., not a clear spec. Also, the iterations had no real finish condition, other than a hard maximum of 50 iterations and a sentence in the prompt along the lines of “if you’re done, you can exit.” So it ran all 50 iterations, and the last 30-plus (!) were no-ops.
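Concretely, the outer loop looked something like this (a sketch, with the same flags as the one-liner above and 50 as the hard cap; note that the agent ending its conversation doesn’t break the outer loop, which is why all 50 iterations ran):

# hard cap of 50 iterations; no other stopping condition
i=0
while [ "$i" -lt 50 ]; do
  openhands -f PROMPT.md --headless   # fresh conversation, same prompt
  i=$((i + 1))
done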

Once the agent thought it was finished, each new iteration did nothing: it checked the code, found no issues, maybe tweaked a word in a comment, and exited. This was with GPT-5.0. I harbor a tiny suspicion that Sonnet 4 would have found new “issues” to “fix” forever, but GPT-5.0 was pragmatic.

With a clear finish condition, and a better prompt that accounts for the iterative process… I feel like that way lies some usefulness I haven’t wrapped my head around yet.
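For example, the prompt could tell the agent to create a sentinel file when the task is genuinely complete, and the loop could stop on it. A minimal sketch; the DONE file is a convention I’m inventing here, not anything OpenHands provides:

# stop early once the agent signals completion by creating DONE
rm -f DONE
i=0
while [ "$i" -lt 50 ] && [ ! -f DONE ]; do
  openhands -f PROMPT.md --headless
  i=$((i + 1))
done

That would have saved the 30-plus no-op iterations, at the cost of trusting the agent’s own judgment about when it’s done.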

Anyway, Ralph Wiggum is silly, but it kinda works. Worth messing with on something more complex than current LLMs can handle in one go.