Anthropic Surpasses OpenAI Run Rate: $30B After 16 Months

This is the part of the AI race that should make you a little uneasy: not that Anthropic is “doing well,” but how fast the numbers are moving. When a company goes from roughly $1B to $30B in a revenue run rate in sixteen months, that’s not just growth. That’s a land grab.

Based on what’s been shared publicly, Anthropic has now passed OpenAI on revenue run rate. The post I saw put OpenAI at roughly $25B and Anthropic at roughly $30B. It also claimed Anthropic has pulled off a 10x revenue increase annually for three straight years, and floated a projection that it could hit a $100B run rate by the end of next year. Maybe all of that holds, maybe some of it is hype, but the direction is clear: money is pouring in, and fast.
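The "10x a year for three straight years" claim is easy to sanity-check with simple division. A quick back-of-the-envelope sketch (the figures are the post's claims, not audited numbers, and the literal 10x-per-year reading is an assumption):

```python
# Back-of-the-envelope check of the growth claim.
# Assumption: "10x annually for three straight years" taken literally,
# ending at the reported ~$30B run rate.
final_run_rate = 30e9    # reported current run rate, in dollars
growth_factor = 10       # claimed multiplier per year
years = 3

implied_start = final_run_rate / growth_factor ** years
print(f"Implied run rate three years ago: ${implied_start / 1e6:.0f}M")
```

Working backwards, that implies a run rate of only about $30M three years ago, which is what makes the curve so startling if the claim holds.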

Here’s my take: this isn’t a simple “better product wins” story. It’s a power story.

Revenue run rate, typically the most recent month's or quarter's revenue extrapolated out to a full year, is a weird metric to build a victory lap around, because it can reflect real demand, or it can be one moment in time that doesn't last. If you sign huge deals, or pricing shifts, or a few customers go all-in at once, your run rate spikes. It can also fall just as fast. But even with that caveat, $30B is not a rounding error. It suggests big buyers are choosing a side, or at least hedging hard.
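To make the noisiness concrete, here's a minimal sketch of the usual annualization, using hypothetical monthly figures (the $2B and $2.5B months below are invented for illustration, not Anthropic's actual numbers):

```python
# Minimal sketch of why run rate is a noisy metric.
# Assumption: run rate computed as latest month's revenue x 12;
# all dollar figures are hypothetical.
def run_rate(monthly_revenue: float) -> float:
    """Annualize a single month's revenue."""
    return monthly_revenue * 12

steady = run_rate(2.0e9)  # a typical $2B month
spike = run_rate(2.5e9)   # one month inflated by a few big deals

print(f"Steady month implies: ${steady / 1e9:.0f}B/yr")
print(f"Spike month implies:  ${spike / 1e9:.0f}B/yr")
```

One unusually good month moves the headline number by $6B a year, which is exactly why a single snapshot can look like a trend.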

And once big buyers choose, the rest of the market follows. Not because they love the brand, but because procurement likes “safe,” managers like “approved,” and teams like “whatever already works.”

The most interesting detail in the post wasn't the $30B number. It was the claim about Claude Code and GitHub commits: that it's behind roughly 4% of all commits, and could rise to 20% by December. If that's even close to true, the implication is huge. Code is not a toy use case. It's the factory floor. If AI gets into the daily act of writing and changing software, it doesn't just sell subscriptions. It changes who can build, how fast they can ship, and what "good enough" starts to mean.

Imagine you run a small startup. Two engineers, tight budget, brutal timeline. If AI tools make your team feel like six engineers, you don’t debate it for long. You pay. Then your competitor pays. Then investors expect it. Then “not using it” starts to look like negligence.

Now imagine you’re a big company. You have rules. You have audits. You have legal teams. You care about where your code goes and what gets logged. A vendor that sells “we’re safer, we’re more controlled, we’re more enterprise-friendly” doesn’t need to be loved. It needs to be allowed. If Anthropic is winning revenue, I suspect a lot of that is this boring, powerful thing: being the choice that gets approved.

So what’s at stake? A lot more than which chatbot feels nicer.

If one model becomes the default coworker for writing code, the software world could tilt toward the patterns that model likes. Not because the model is “right,” but because people are tired, deadlines are real, and the autocomplete is fast. Over time, that changes the texture of software: more same-y code, more copied structure, more “works on my machine” logic that no one fully understands because it arrived in a neat block.

That’s not a moral panic. It’s a plain consequence of speed. When you reward speed, you get speed. You also get shortcuts.

There's also the labor question nobody wants to say out loud. If AI-driven coding really becomes a big share of commits, a lot of companies will quietly adjust what they hire for. They won't say "we're replacing junior developers." They'll say "we're hiring fewer entry-level roles" or "we're raising the bar." The result can be the same. The bottom rung of the ladder gets pulled away, and then people wonder why there are fewer seniors five years later.

And yes, there’s an optimistic version. AI that writes code could be the biggest unlock for small teams in a long time. It could reduce bugs, improve tests, and help people who aren’t traditional engineers build useful tools. It could make software cheaper and more available. If the tools are good and the incentives are right, that’s real progress.

But incentives are the whole game here. When revenue grows at this speed, pressure shows up. Pressure to ship faster. Pressure to bundle. Pressure to keep big contracts happy. Pressure to measure success in dollars instead of trust. And when the stakes are that high, companies tend to choose the move that wins this quarter, not the move that keeps things healthy for five years.

I also don’t fully trust the commit-share claim without more detail. “Commits” can mean a lot of things. A tiny change counts. Automated formatting counts. One team can generate a flood of commits. It could still be meaningful, but it’s easy to overread. The risk is people start treating a flashy percentage like proof that “the winner is decided,” and then they build plans around a story that isn’t stable.

Still, even if the numbers are messy, the trajectory matters. Money and habit reinforce each other. If developers get used to one assistant, if companies standardize on it, if training and workflows lock in around it, switching later gets painful. That’s how winners stay winners.

So yes, Anthropic passing OpenAI in run rate is a headline. But the deeper question is what we’re rewarding: the best tool, or the fastest tool to become a default. If we keep cheering on raw growth as the scorecard, are we actually choosing better outcomes, or just choosing whoever is best at capturing the moment?