
Neuralink and ElevenLabs Restore an ALS Patient’s Natural Voice


This is one of those stories that sounds like pure good news—until you sit with it for more than a minute. Giving someone their voice back, especially someone with ALS, is deeply human. It’s also the kind of “help” that comes with strings we don’t like to talk about when the headline is emotional enough.

Based on what’s been shared publicly, Neuralink says patient #3, Brad, who has ALS, has been able to communicate again using a version of his natural voice. The way they got there is the bigger point: a brain-computer interface paired with AI voice cloning technology built with ElevenLabs. It’s part of a clinical trial called VOICE, aimed at people who’ve lost the ability to speak.

Let me be blunt: the benefit here is real. Anyone who has watched a person lose speech knows it’s not just the loss of words. It’s the loss of speed, tone, timing, joking, arguing, flirting, apologizing—basically the parts of a person that ride on top of language. If the only thing you can do is type with your eyes or choose words slowly from a screen, you start living in a world where everyone talks over you without meaning to. So yes, restoring a familiar voice is not some gimmick. It can give dignity back in a way “communication device” doesn’t capture.

But I don’t love how quickly we’re sliding from “assistive tech” into “identity tech.”

A cloned voice isn’t just sound. It’s social proof. It’s how your partner knows it’s you in the next room. It’s how your kid hears comfort. It’s how your friends read your mood. When a system can produce “your” voice on demand, you’re not just giving a patient a tool. You’re creating a new version of them that can be replayed, edited, and—if things go wrong—used without them.

Even if everything is done with consent, consent gets messy fast when the situation is desperate. Imagine you’re Brad and you’re losing speech. Someone offers you a way to talk again, in your own voice. Are you really going to negotiate the fine print? Are you going to say, “Actually, I’d like limits on where my voice model is stored, who can access it, what happens if the company is sold”? Most people will say yes and deal with the rest later. And “later” is usually when problems show up.
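To make that concrete, here is a minimal sketch of what those fine-print terms could look like if they were explicit instead of buried: a consent record with fields for storage, access, expiry, and what happens if the company changes hands. Every name in it is invented for illustration; nothing here reflects Neuralink’s or ElevenLabs’ actual terms.

```python
# Hypothetical sketch of machine-readable consent for a cloned voice model.
# All field names are assumptions made up for this post, not any vendor's API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VoiceModelConsent:
    patient_id: str
    allowed_uses: set[str]          # e.g. {"live_speech"}; not "training", not "posthumous"
    authorized_operators: set[str]  # who may trigger synthesis at all
    storage_region: str             # where the model weights are allowed to live
    survives_acquisition: bool      # does consent transfer if the company is sold?
    expires_at: datetime | None     # None means open-ended, which is itself a decision
    revoked: bool = False           # the patient can withdraw at any time

    def permits(self, use: str, operator: str, now: datetime) -> bool:
        """Allow a use only when every condition holds; the default answer is no."""
        if self.revoked:
            return False
        if self.expires_at is not None and now > self.expires_at:
            return False
        return use in self.allowed_uses and operator in self.authorized_operators

consent = VoiceModelConsent(
    patient_id="patient-3",
    allowed_uses={"live_speech"},
    authorized_operators={"patient-3"},
    storage_region="on_device_only",
    survives_acquisition=False,
    expires_at=None,
)
print(consent.permits("marketing_demo", "vendor", datetime.now()))  # False
```

The point isn’t the code. It’s that every one of those fields is a decision someone will make, with or without the patient in the room.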

There’s also a quieter consequence: families will start to prefer the smooth version of communication. A slow typing interface forces patience. A fast voice can bring conversation back, but it can also create pressure. If the system can talk at normal speed, people will expect normal speed. If it can sound upbeat, people will expect upbeat. The tech that’s meant to reduce suffering can accidentally turn into a new standard the patient is judged against.

And then there’s the question nobody wants to ask out loud: who is speaking, exactly?

If a brain signal is used to select words, and an AI generates the voice, that’s already a layered process. Depending on how it’s designed, it might involve prediction, auto-correct, maybe even smoothing or suggesting phrases. That can be helpful. It can also blur authorship. If the system “helps” you speak by guessing what you meant, it might sometimes guess wrong—and the output still carries your voice. In normal life, if I put words in your mouth, people can see my lips moving. Here, the lips don’t exist. Only your voice does.
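A toy sketch makes the layering visible. The stage names below (decode_intent, smooth, synthesize) are my own invention, not the trial’s actual pipeline; the point is that several components touch the text before it becomes audio, and only an explicit record separates what the patient selected from what the system changed.

```python
# Hypothetical pipeline, assuming three stages: decode, smooth, synthesize.
# None of this is the real system; it illustrates where authorship can blur.

def decode_intent(brain_signal: bytes) -> str:
    """Stand-in for the BCI stage: maps neural activity to selected words."""
    return "no"  # what the patient actually chose

def smooth(text: str) -> str:
    """Stand-in for a prediction/autocorrect layer: its guess, not the patient's words."""
    rephrased = {"no": "No, thank you."}  # a small, polite, consequential shift
    return rephrased.get(text, text)

def synthesize(text: str, voice_model: str) -> str:
    """Stand-in for voice cloning: whatever comes in leaves in the patient's voice."""
    return f"[audio in {voice_model}'s cloned voice]: {text}"

def speak(brain_signal: bytes, voice_model: str) -> tuple[str, dict]:
    selected = decode_intent(brain_signal)
    final = smooth(selected)
    audio = synthesize(final, voice_model)
    # Provenance: without this record, the gap between 'selected'
    # and 'final' is invisible to everyone listening.
    return audio, {"patient_selected": selected, "system_output": final}

audio, provenance = speak(b"...", voice_model="patient")
print(audio)       # [audio in patient's cloned voice]: No, thank you.
print(provenance)  # {'patient_selected': 'no', 'system_output': 'No, thank you.'}
```

Notice that the listener only ever hears the final line. Unless the provenance record exists and someone checks it, the smoothing layer’s guess and the patient’s choice are indistinguishable.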

Picture a real moment: a patient is exhausted, frustrated, and trying to tell a caregiver “stop.” The system outputs something softer. Or the patient wants to say “I’m scared,” and the system turns it into “I’m okay.” Even small shifts like that matter, because the whole point of this is agency. If we get the agency part wrong, we’re not restoring a person. We’re replacing them with a convenient version.

To be fair, there’s another side that deserves respect: this is a clinical trial, and the goal is medical help, not entertainment. People with ALS have been trapped behind walls for too long. If this works reliably, it could change day-to-day life in the most concrete way: ordering food without a middleman, calling your spouse by name, telling your doctor exactly where it hurts, saying “I love you” in the voice your family remembers. That’s not small. That’s life.

Still, I don’t think we should clap so hard that we ignore the power shift. The company that holds the model holds a piece of the person. The system that generates the voice can, in theory, generate it when the person isn’t present. And once society accepts “your voice can be rebuilt,” people will push the boundary: after someone dies, after someone loses capacity, after someone can’t easily say no.

I want this for patients. I also want hard limits around it, because the same thing that heals can be used to fake, pressure, and control.
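What could a hard limit actually look like? One sketch, again purely hypothetical: synthesis that refuses to run unless the patient has actively confirmed presence within the last few seconds, so the voice cannot be generated while they are absent or unable to object.

```python
# Hypothetical presence gate; no vendor implements this as written.
import time

PRESENCE_WINDOW_SECONDS = 10.0

class PresenceGate:
    """Tracks the last moment the patient actively confirmed they are speaking."""

    def __init__(self) -> None:
        self._last_confirmation: float | None = None

    def confirm_presence(self) -> None:
        # In a real system this would be tied to a live BCI signal or a
        # deliberate physical action, not a method anyone can call.
        self._last_confirmation = time.monotonic()

    def synthesize(self, text: str) -> str:
        now = time.monotonic()
        if (self._last_confirmation is None
                or now - self._last_confirmation > PRESENCE_WINDOW_SECONDS):
            raise PermissionError("No live presence confirmation; refusing to speak.")
        return f"[patient's voice]: {text}"

gate = PresenceGate()
gate.confirm_presence()
print(gate.synthesize("I'm here."))  # allowed: the confirmation is fresh
# Ten-plus seconds later, the same call would raise PermissionError.
```

A real safeguard would need far more than this, but even a toy version shows the principle: the default answer is no, and only the patient can flip it.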

So here’s the line I keep coming back to: if your voice can be restored by a device and an AI model, who should have the right to use that voice, and under what conditions?