North Korean Hackers Drain $286M From Drift Protocol in 10 Seconds

This is the part of crypto that never gets fixed: the money moves at the speed of software, but the trust still runs on vibes.

A North Korea–linked hacking group allegedly drained about $286 million from Drift Protocol in around 10 seconds. Ten seconds. That’s not “they slowly siphoned funds for months.” That’s “the door was unlocked and someone backed a truck up to it.”

From what’s been shared publicly, the attack wasn’t just brute force. It was clever and cruel. The hackers created a fake collateral market around a worthless token, then used that setup to get around Drift’s safety systems. They disabled circuit breakers that are supposed to stop abnormal behavior, then ran rapid transactions using pre-signed agreements. In other words, the system did what it was allowed to do—just not what it was meant to do.
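The pattern described above can be sketched in a few lines. This is a hypothetical toy model, not Drift's actual code: the names (`Market`, `LendingPool`, `list_collateral`, `borrow`) and numbers are invented to show how a worthless token with an attacker-influenced price can pass a purely mechanical collateral check.

```python
# Toy model of the reported failure pattern. All names and values are
# invented for illustration; this is not Drift's implementation.

class Market:
    def __init__(self, token, oracle_price):
        self.token = token
        self.oracle_price = oracle_price  # attacker-influenced price feed


class LendingPool:
    def __init__(self):
        self.collateral_markets = {}
        self.reserves = {"USDC": 300_000_000}

    def list_collateral(self, market):
        # Flaw: anything listed as a market is trusted as collateral.
        # The code checks that a market exists, not that it is real.
        self.collateral_markets[market.token] = market

    def borrow(self, token, amount_deposited, want):
        market = self.collateral_markets[token]
        value = amount_deposited * market.oracle_price
        if value >= want:  # "permission from code": the check passes
            self.reserves["USDC"] -= want
            return want
        return 0


pool = LendingPool()
junk = Market("JUNK", oracle_price=1_000.0)  # worthless token, fake price
pool.list_collateral(junk)
stolen = pool.borrow("JUNK", amount_deposited=300_000, want=286_000_000)
# 300,000 tokens x a spoofed $1,000 price "covers" a $286M withdrawal.
```

Every line here does exactly what it was allowed to do. The exploit lives in the gap between "allowed" and "meant."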

And that’s the uncomfortable truth: most of these “hacks” are really about permission. Not permission from a human, but permission from code. If the code can be convinced something is real collateral, then it’s real enough to be used against you.

People will argue about whether this was “sophisticated” or “obvious in hindsight.” I think both can be true. It can be advanced, and it can also be the kind of failure that keeps happening because the incentives are upside-down. The industry rewards speed, complexity, and new features. It punishes caution. Nobody gets famous for not shipping.

Here’s the part I don’t think enough crypto builders want to say out loud: if your safety circuit breakers can be disabled by an attacker, they aren’t safety circuit breakers. They’re decoration. They might reduce risk in normal times, but normal times are not when you need them.
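To make that concrete, here is a minimal sketch (invented names, not Drift's code) of a circuit breaker whose disable switch sits on the same unprivileged path an attacker can reach. In normal times it blocks abnormal outflows; the moment an attacker can call `disable()`, it is decoration.

```python
# Hypothetical illustration: a "safety" limit with an unprivileged off switch.

class CircuitBreaker:
    def __init__(self, max_outflow_per_block):
        self.max_outflow = max_outflow_per_block
        self.enabled = True

    def disable(self):
        # Flaw: no access control -- any caller can flip this off.
        self.enabled = False

    def allow(self, outflow):
        if not self.enabled:
            return True  # the limit no longer applies to anyone
        return outflow <= self.max_outflow


breaker = CircuitBreaker(max_outflow_per_block=1_000_000)
blocked_before = breaker.allow(286_000_000)  # False: the limit holds
breaker.disable()                            # the step the reporting describes
allowed_after = breaker.allow(286_000_000)   # True: the limit is gone
```

The fix isn't a better limit. It's making the disable path unreachable from the code an attacker can drive.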

Imagine you’re just a regular user with funds parked in Drift because the yields looked good or the trading felt smooth. You didn’t sign up to be part of a geopolitical money pipeline. But that’s what this turns into. If the reporting is right and this is tied to North Korea, that money doesn’t just disappear into a meme wallet. It can feed a state that is very open about using cybercrime as a funding strategy. That changes the moral temperature of the room fast.

And yet, the crypto world often reacts to these events like bad weather. “Storm hit. Anyway.” That shrug is the real vulnerability.

Because the consequences aren’t limited to one protocol. When $286 million can be taken in seconds, the blast radius spreads outward. Users lose money, sure. But more than that, people lose the feeling that any of this is stable. The next time a founder says “funds are safe,” it lands differently. The next time regulators look at the space, they don’t see innovation; they see a machine that can be robbed at will and then used to move money across borders.

A lot of the defense of crypto goes like this: “Traditional finance gets hacked too.” True. But when a bank gets hit, there are usually layers: fraud teams, reversible transfers, insurance, courts, police, time. In these attacks, time is exactly what you don’t have. Ten seconds is basically instant. You can’t “notice unusual activity” in ten seconds. You can’t wake up the on-call engineer in ten seconds. You can’t even understand what’s happening in ten seconds.

That’s why I’m not impressed when people say, “Well, the attacker was very smart.” Okay. Then what? Are we building a financial system where “smart attacker” means “inevitable wipeout”? Because if that’s the baseline, then the product isn’t finance. It’s gambling with extra steps.

To be fair, there’s a serious counter-argument: these systems are still young, and stress is how they improve. Every exploit teaches the next protocol what not to do. That’s the optimistic version, and I don’t think it’s totally wrong. Security does get better when failures are public and painful.

But here’s what bothers me: the learning is paid for by users who didn’t agree to be crash-test dummies. If you’re a developer, a venture investor, or a power trader, you may understand the risk. If you’re a normal person who saw an app, connected a wallet, and clicked a few buttons, you probably don’t. The industry likes to talk about “self-custody” and “personal responsibility,” but it quietly depends on people underestimating how sharp the edges still are.

The other consequence is strategic. If these attacks are part of a broader trend, and a state-backed group is doing it repeatedly, then this stops being only a security problem. It becomes an arms race. That pushes protocols toward tighter controls, heavier monitoring, and more gatekeeping. Which might reduce theft, but it also drags crypto toward the same permissioned world it claimed to replace. Some people will cheer that. Others will call it betrayal. Either way, the direction changes.

So what should happen now? If Drift’s safety systems could be bypassed by a fake market and worthless collateral, then the entire idea of “automated trust” needs a hard reality check. Not a tweet thread. Not a post-mortem full of vague promises. A real shift in what gets prioritized: fewer features, slower releases, and brutal assumptions about what an attacker can fake.
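What "brutal assumptions" might look like in code is a collateral gate that treats every listing as fake until each invariant holds: privileged listing, multiple independent price sources, a hard deposit cap. This is a sketch of the stance, not a complete risk engine; every name and threshold is an assumption.

```python
# Hedged sketch of default-deny collateral acceptance. Names, fields, and
# thresholds are invented to illustrate the stance, not a real protocol API.

from dataclasses import dataclass


@dataclass
class CollateralListing:
    token: str
    governance_approved: bool  # listed by a privileged process, not any caller
    oracle_sources: int        # independent price feeds backing the token
    deposit_cap: int           # hard ceiling on how much can be posted


def accept_collateral(listing: CollateralListing, deposit: int) -> bool:
    # Assume the listing is an attack until every check passes.
    if not listing.governance_approved:
        return False
    if listing.oracle_sources < 3:  # one feed is one point of spoofing
        return False
    if deposit > listing.deposit_cap:
        return False
    return True


junk = CollateralListing("JUNK", governance_approved=False,
                         oracle_sources=1, deposit_cap=0)
rejected = accept_collateral(junk, 300_000)  # False: fails every invariant
```

None of these checks is exotic. The point is the default: deny first, and make an attacker defeat all of them at once instead of any one of them.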

And users need to stop treating “decentralized” as a synonym for “safe.” It can mean resilient in one way, and fragile in another. Code doesn’t care that you’re careful, and it doesn’t care that you’re honest. It only cares what it can be made to accept.

If hacks like this can drain hundreds of millions in seconds and keep happening, what would actually convince you that this whole system is becoming safer rather than just getting better at pretending?