It’s that time again.

Model capabilities have jumped a couple of milestones, AI agents are everywhere, PRs and reviews are melting and molding into a continuous improvement process of sorts. OpenHands is changing too.

When this happens, I need to step back and think. Change is natural and necessary, but to understand it, I need to go back to basics. What does OpenHands need to be? What does it not need to be?

A lens from 2023

Recently, an Anthropic internal document from June 2023 surfaced in a lawsuit. It’s an amazing read, even today, and more so for the middle of 2023. It made me think — and it crystallized something I’d been idly pondering for a long while.

The document lays out three visions for transformative AI: the Robot (an autonomous virtual employee), the Cyborg (a personal augmenter, “your” AI), and Org Infrastructure (pervasive intelligence woven into how organizations work).

My first reaction was: “We’re on two of those at OpenHands… the robot and the cyborg, actually.” Once I wrote that, I stopped and stared at it for a while. Is it true? Yes, it is.

OpenHands, formerly known as OpenDevin, was born in the wake of the Devin splash-marketing campaign from March 2024 — “the AI software engineer of the future is coming.” It was sleek and it was exciting and it was… concerning. Because that future was incredibly fun and empowering, but it empowered only a few, since it was closed, proprietary, accessible to some and not to others. The community response was immediate and incredible: academia and software engineering and Joe Schmuck got together to build an Open Source AI software engineer. As a statement, a protest, and a project. OpenDevin was born.

But we’ve always had a tension in the project: what belongs in Open Source and what doesn’t? Saying “the AI software engineer could partly be OSS” doesn’t tell us enough. The Anthropic document helped me see why.

The document itself acknowledges that the AI employee (their “robot”) shares a common agentic core with the human augmenter (“cyborg”). The AI employee is a software engineer. The human augmenter is a software engineer. They’re siblings. Maybe the AI engineer-employee is simply the way the AI engineer gets taken into the enterprise.

The Robot archetype implies a bunch of things — many agents for many users, enterprise rules, org-level orchestration — while the Cyborg implies something different: accessible wherever you are, with memory of what you care about when reviewing PRs, of how you work, and of the context you’re working in.

That distinction is, I think, a very good lens to help us understand the different use cases.

What the Personal AI Engineer needs

So what does the concept of the AI-engineer-as-human-augmenter (the “cyborg”) imply?

  • Local-first. The AI engineer runs on your machine. It has access to your files, your tools, it can run code locally. But “local” doesn’t exactly mean “local-only.” OpenHands has always been called “local,” and it is — but the better word is local-first. Your machine is home base, but you can access it from the internet just like you access it from your iPad, just like you access the old laptop in the corner that you gave to the agent. That’s remote access, a basic version of it. Local-first means your machine is the center of gravity, not that you’re chained to it.
  • Your choice of model. Not locked in. Models are improving at an incredible pace, and the best one for you might not be the same as the best one for me. You should be able to mix and match and choose, and change your mind.
  • Memory. Conversations aren’t fully disconnected, unless you want them to be. The agent knows the project(s), it knows what you’ve worked on before, it adapts to your style through what it learns from working with you. Memory has very high lock-in potential, and LLM companies have already started down the lock-in path: OpenAI has an encrypted compaction endpoint, Anthropic built in context awareness and is messing with the model’s context behind the scenes, and most of this is still an unsolved research problem. Your agent’s memory must be yours.
  • Accessible wherever you are. On the iPad in my garden. On the phone in the train. If it’s your augmenter, your extension, then it needs to be with you. Device doesn’t matter.
  • Meets you where you work or play. GitHub, Discord, Slack — your agent should be reachable from the surfaces you already use, not limited to a single UI. Using it with the tools you already live in unlocks great potential.

This last point connects back to local-first. When you trigger your agent from a GitHub PR comment, or from Discord, you’re reaching your own machine from the outside. It’s remote ingress to your Personal AI Engineer. The iPad in the garden, the phone in the train, the GitHub integration — they’re all the same thing: you, talking to your computer back home.

I happen to believe Claude Code won because it suddenly became possible to talk to your computer. A tiny terminal window, and you speak in English, and your computer writes code and runs commands and does the thing. The same applies here — only, as LLM capabilities improve, it naturally moves from the terminal to the other places where you are.

What it doesn’t need

Notice what is not on that list. Multi-user is obvious. Enterprise dashboards and controls and audits, you name it. Sandbox fleet orchestration. These aren’t bad things; they’re just not this. The Personal AI Engineer doesn’t need them. It talks to a sandbox, not a sandbox fleet. It authenticates or otherwise secures access for you.

How about OpenDevin’s Web UI?

The Web UI is a bit of a special case, for a simple reason: it’s the Devin-like UI, exposing the Devin functionality that the community wanted to build in the open. It was pretty and it felt good.

But the Web UI has always been a bit of an odd fit for the Open Source project. It has features that are, strangely, mostly more limited than what your own machine already offers:

  • VSCode web: on my machine, I have VSCode desktop, and it’s more powerful and more integrated with my workflow than the web version (for obvious reasons)
  • read-only terminal: both you and your agent have a terminal on your machine, and it’s not read-only. A read-only view can be useful when an agent is running in a sandbox or remotely, but I don’t know; for months now, one of my agents has been debugging the other. Just the other day, I opened the OpenHands CLI and complained that my little feline agent wasn’t responding on WhatsApp anymore. I think this feature assumes humans debug terminal output… that’s so early 2025.
  • diff viewer: this one is a bit funny to me. Do we really need another diff viewer, especially a read-only one? I’ve expressed my doubts about it before; I honestly don’t know what justifies its ongoing maintenance. More so because, as agents’ capabilities increase, we are moving away from nitpicking every bit of code, and the diff will be on GitHub or in your versioning system anyway. Even code review is changing, which is one more step away.
  • browser and app tabs: well, my agent has its own Chrome these days. Even if I ran it remotely, it would have its own browser. The capabilities have been there for a while now.

Maybe these make sense for enterprise use cases, though even that I’m not so sure about anymore. In December 2025, I had CLI AI agents working for 18 days (with breaks, but still), communicating with each other while I was sleeping. I… think we have stepped into a different world, where UIs are melting and agents love CLIs and APIs. Text fields for humans in a browser? I don’t know; my agent uses Slack in its own browser. People don’t like it when I say this, but I think no one knows the “right” UI/UX for agents, and we all need to try things and scrap them and learn.

The Web UI is there because it was the first thing we built, but it’s been about a year now since we started wondering if we could deprecate it. A lot of OpenHands bug reports come from it: incessant issues with Docker, people even losing their work; a lot of cruft, special cases, and bad code design still live in it. I haven’t used it in a long time. I don’t know how long it still makes sense to maintain it, whether we should just focus on the CLI and the integrations with the other UIs where people already are, or whether we should replace it with something else.

The Personal AI Engineer needs to be Open Source

Anthropic is clearly moving fast on all three visions from that 2023 document. I believe they will release a “Claude as Cyborg” soon — a Personal AI Engineer of their own. I believe it will obviously be proprietary, with an obnoxious license, as we’ve seen from them for a while.

An AI engineer at your fingertips is deeply transformative power, and Anthropic is sure as hell not content to be a model company letting others build it; they encroach on the application market (their own customers!) at an accelerated pace. If the Personal AI Engineer exists as proprietary products from a handful of companies, this transformative power gets locked away behind ridiculous terms of service and licensing restrictions.

Maybe that means that what OpenHands is is changing, because it cannot be tied to a specific UI. That, too, has been coming for a while. OpenHands is now an ecosystem around the agent, and the Open Source sub-projects feel like they’re growing into what could make the Personal AI Engineer useful and accessible.

I believe all of this — the agentic core, its surrounding features as the Personal AI Engineer — needs to be Open Source. Free as in freedom; not tied to a single point of failure. I don’t know how to express the urgency of this, but I feel it in my bones. It’s just too powerful to be locked away. An AI engineer of your own.