Newsletter 324: When Voice Stops Working, I Stop Working
🧠 How voice-to-text became my most essential accessibility feature, and what happens when it breaks

Hey friends,
Happy Friday!
I need to be honest with you.
I've become completely dependent on AI and voice features.
Not in the way people usually mean when they talk about "AI dependence."
Not lazy. Not complacent.
Dependent in the way someone with low vision depends on screen magnification.
Dependent in the way someone in a wheelchair depends on a ramp.
Voice-to-text is my accessibility feature.
Full stop.
And yesterday's newsletter about going against the grain my whole life? That's the context for why this matters so much.
Today I want to talk about what happens when the accessibility features you depend on stop working.
The Upgrade That Downgraded Me
A few weeks ago, I upgraded my phone.
Samsung S23 to Samsung S25.
The main reason? The curved edges on the S23 meant I kept cracking the screen.
The S25 has flat edges. Problem solved, right?
I figured the upgrade would be a bonus.
Newer hardware. Updated features. Better camera. Faster processing. Everything a new phone is supposed to deliver.
What didn't I expect?
The microphone barely understands me.
I'm not talking about occasional mistakes.
The kind where you have to go back and fix a word or two.
The default voice-to-text is now less accurate than my typing.
If you know anything about dyslexia and typing, you know that's a very low bar.
My typing is slow. Error-prone. Frustrating.
It's the physical manifestation of the gap between how fast I think and how fast I can execute.
Even short, simple text messages come out garbled now.
Words I say clearly get transcribed as completely different words.
Sentences lose their structure.
It's like going backwards.
From a tool that let me operate at the speed of thought to a tool that's worse than the bottleneck I was trying to escape.
And it's not just frustrating. It's disabling.
What "Disabling" Actually Means
Let me be specific about what happens when voice features break.
Before (with working voice-to-text):
I can respond to texts while driving (hands-free)
I can capture ideas the moment they hit
I can write newsletters like this one in 20 minutes
I can participate in group chats and keep up with conversations
I can document meetings and thoughts in real-time
I can operate at the speed of my thinking
After (with broken voice-to-text):
I avoid texting unless absolutely necessary
Ideas evaporate before I can capture them
Newsletters take 3-4x longer and feel like pulling teeth
I go silent in group chats because responding takes too much energy
I lose details from meetings because I can't document fast enough
I operate at the speed of my typing, which is painfully slow
This isn't about convenience.
This is about capability.
When voice features work, I'm a high-output creator who produces multiple newsletters from a single 20-minute conversation.
When they don't, I'm someone who struggles to send a coherent text message.
That's the definition of an accessibility feature.
Something that closes the gap between capability and execution.
Something that, when removed, creates a barrier where there wasn't one before.
The Bridge Across the Gap
I've written before about the gap between thinking speed and execution speed.
For dyslexic minds, the bottleneck has never been the thinking.
It's the output.
We can see the whole picture.
Connect the dots.
Generate ideas at full speed.
Spot patterns and relationships that others miss.
Process information laterally instead of linearly.
But the moment we try to push those ideas through a keyboard?
Everything slows down. Gets mangled. Loses its shape.
Here's what that actually feels like:
I'm thinking: "Voice-to-text bridges the gap between my thinking speed and execution speed, which is why it's so essential for neurodivergent thinkers."
My fingers type: "Voice to txet bridgse the gap betwen my thiniking speed and executino speed which is why its so essnetial for neurdivergent thinekrs."
That's not an exaggeration.
That's a real representation of what comes out when I type at the speed I'm thinking.
I can slow down. Triple-check every word. Type deliberately.
But then I'm not operating at thinking speed anymore.
I'm operating at typing speed.
And the difference between those two speeds is the difference between writing 2 newsletters in 20 minutes and taking 3 hours to write one.
Voice-to-text was the bridge across that gap.
And AI tools that understand imperfect input made that bridge even wider.
Claude doesn't need perfect dictation. It understands context. It fills in gaps. It handles my verbal shortcuts and tangents and non-linear thought patterns.
Together, they let me operate at the speed of my thinking for the first time in my life.
When voice features don't work?
The bridge disappears.
And I'm stuck on the wrong side of the gap.
Back where I was for the first 40+ years of my life.
Able to think but unable to execute at the speed those thoughts deserve.
The Workarounds I've Built
Here's what I do now with my Samsung S25:
For quick texts:
I use Claude's voice input instead of Samsung's default
I dictate the message to Claude
I copy and paste into my texting app
Three steps instead of one, but at least it works
For longer content:
I use my laptop instead of my phone
Claude Desktop has better voice recognition
Or I route through other voice apps that work better
Google's voice typing is more accurate than Samsung's
For real-time situations:
I still struggle
Can't exactly pull out my laptop while driving
Can't route through Claude for a quick response in a meeting
Sometimes I just don't respond, which makes me look unresponsive
The meta problem:
I'm spending cognitive energy on workarounds.
Energy that used to go into creating content, solving problems, connecting with people.
Now it goes into figuring out how to use tools that are supposed to make things easier.
That's the hidden cost of broken accessibility features.
Not just the direct friction.
But the cognitive load of constant adaptation.
The Test I Apply to Every Tool
This experience crystallized something I've been feeling for a while.
I now evaluate every tool through a simple filter:
How good are the voice features?
Not "does it have voice features?"
But how good are they?
Here's my rubric:
Tier 1 (Essential):
Accurate transcription of my speech patterns
Handles dysfluencies and verbal shortcuts
Works in real-time without lag
Doesn't require perfect enunciation
Understands context to correct obvious errors
Tier 2 (Usable with effort):
Generally accurate but needs occasional corrections
Requires slightly more deliberate speech
Small lag but manageable
Works well enough that voice is still faster than typing
Tier 3 (Frustrating but technically functional):
Frequent errors that require manual correction
Lag makes conversation feel broken
Needs multiple attempts for complex sentences
Voice is barely faster than typing, if at all
Tier 4 (Unusable):
More errors than correct words
Doesn't understand my speech patterns at all
Creates more work than it saves
Actively disabling
My Samsung S25's default voice-to-text? Tier 4.
Claude's voice features? Tier 1.
That's why I'm writing this newsletter in Claude instead of my phone's notes app.
If a tool doesn't have solid voice input, I look for workarounds.
Third-party voice apps. Browser extensions. Creative routing through other tools.
If I can't find a way to use voice?
I use that tool less.
Sometimes I stop using it altogether.
It doesn't matter how powerful the features are.
If I can't speak to it, the friction is too high.
This isn't a preference.
This is the same calculation anyone with an accessibility need makes every day.
"Is this tool worth the extra effort it requires?"
"Can I find a workaround that makes it usable?"
"Or do I just avoid it and lose access to what it offers?"
What This Newsletter Proves
Yesterday I wrote Edition 323 about going against the grain my whole life.
Today I'm writing Edition 324 about voice features.
Both newsletters came from the same 20-minute conversation.
Me talking. Claude listening.
I couldn't have done this before AI and voice-to-text.
Maybe if I'd been an executive with a secretary to transcribe for me.
Or a king with a scribe.
Or wealthy enough to hire someone to convert my thoughts into written words.
But now?
AI is my scribe.
And Claude's voice features, for the record, have been solid today.
More accurate than Samsung's default.
Better than Google's voice typing.
On par with the best voice recognition I've used.
That matters.
Not just for me personally.
But for what it represents.
When AI companies prioritize voice features and make them genuinely good, they're not just adding a nice-to-have interface.
They're building the front door for neurodivergent users.
For people with dyslexia, ADHD, motor differences, visual impairments.
For anyone whose thinking speed outpaces their typing speed.
For anyone who processes information better verbally than textually.
Voice isn't optional for us.
Voice is primary.
The Two Newsletters Test
Here's another way to think about it.
I dictated the raw content for both newsletters (323 and 324) in one conversation.
About 20 minutes of talking.
Stream of consciousness. Non-linear. Jumping between ideas.
Then Claude helped me structure it into two separate newsletters.
One about the personal journey. One about the tool implications.
Total time from thought to published: maybe 90 minutes for both.
Without voice-to-text?
I'd still be working on the first one.
Typing slowly. Making errors. Fixing errors. Losing my train of thought. Getting frustrated. Taking breaks.
Maybe I'd finish one newsletter in 3-4 hours if I really pushed.
The second one? Probably wouldn't happen.
That's a 5-6x multiplier.
From voice features working well.
That's not an enhancement.
That's not making things slightly easier.
That's the difference between capable and disabled.
A Challenge for Builders
If you're building AI tools, apps, or platforms:
Voice isn't a nice-to-have.
For people with dyslexia, ADHD, motor differences, visual impairments, voice is the primary interface.
It's the front door.
If your front door doesn't work well, those users won't find a back entrance.
They'll just leave.
Here's what that means in practice:
1. Test with diverse speakers
Not just the default American accent with perfect enunciation.
People with accents.
Different speech patterns.
Communication styles that don't fit the "standard" model.
People who think out loud and verbally process.
People who use filler words and verbal shortcuts.
The people who need voice the most are often the ones whose voices are hardest for basic models to understand.
2. Optimize for thinking speed, not perfect speech
I don't speak in perfectly formed sentences.
I speak in thought fragments that need to be assembled.
Good voice AI understands that and helps bridge the gap.
Bad voice AI punishes it and makes me slow down.
3. Build for real-time capture
Ideas evaporate.
The ability to capture a thought the moment it hits is everything.
Lag kills that.
Accuracy issues kill that.
Having to repeat yourself kills that.
4. Make voice the default, not the alternative
If I have to click three buttons to activate voice mode, I probably won't use it.
If voice is right there, primary, ready to go? I'll use it constantly.
5. Consider context and correction
Claude does this well.
Even when voice transcription has minor errors, Claude understands the context and fixes obvious mistakes.
That's the difference between Tier 1 and Tier 2 voice features.
The Bigger Pattern
This isn't just about voice features.
It's about accessibility as a core design principle.
When you build for the edges (people with the most constraints, the most specific needs), you often end up building better tools for everyone.
Examples:
Curb cuts were designed for wheelchairs. Everyone benefits from them.
Closed captions were designed for deaf users. Everyone uses them in noisy environments or when sound is off.
Voice features designed for dyslexic users? Everyone benefits when they're driving, cooking, multitasking, or just thinking faster than they can type.
The disability rights movement has a slogan: "Nothing about us without us."
If you're building voice features for accessibility, talk to the people who need them.
Don't guess what we need.
Don't assume what will work.
Ask.
Test.
Iterate based on real feedback from real users who depend on these features to function.
What I'm Watching For
1. Voice feature quality across platforms
Which AI companies prioritize voice as a first-class interface?
Which treat it as an afterthought?
The answers will determine which tools neurodivergent users adopt.
2. Cross-device consistency
My Samsung downgrade is a perfect example.
Tools that work great on desktop but poorly on mobile create barriers.
Accessibility needs to work everywhere, not just in ideal conditions.
3. Multimodal AI development
As AI gets better at understanding voice, context, and imperfect input, the gap between thinking and execution shrinks further.
That's the future I'm excited about.
4. Accessibility awareness in AI development
Are companies hiring neurodivergent testers?
Are they building feedback loops with disabled users?
Are they prioritizing features that matter for accessibility?
The Bottom Line
My voice is my keyboard.
When it works, I'm unstoppable.
I can produce two newsletters from one 20-minute conversation.
I can respond to texts while driving.
I can capture ideas the moment they hit.
I can participate fully in conversations without the typing bottleneck.
When it doesn't work, I'm stuck.
Back on the wrong side of the gap.
Capable of thinking but unable to execute at the speed those thoughts deserve.
If that doesn't make voice-to-text an accessibility feature, I don't know what does.
This isn't about making things easier.
This is about making things possible.
There's a difference.
And anyone building tools for neurodivergent users needs to understand that difference.
Thanks for reading.
And if you're a builder working on voice features, I'm happy to test and give feedback.
Because this matters.
Not just for me.
For everyone who thinks faster than they can type.
Which, if we're being honest, is most people.
—Matt "Coach" Ivey
Founder, LM Lab AI • Creator, Dyslexic AI
(Dictated, not typed. Obviously.)

TL;DR (Too Long, Didn't Read for my fellow skimmers)
📱 The Problem: Upgraded to Samsung S25, but voice-to-text is now worse than typing. Not an inconvenience. Disabling.
♿ The Reality: Voice-to-text isn't convenience. It's accessibility. Like screen magnification for low vision. Like ramps for wheelchairs.
⚡ The Gap: Dyslexic minds think fast but execute slow through keyboards. Voice bridges that gap. When voice breaks, capability disappears.
🧪 My Test: How good are the voice features? Tier 1 (essential) to Tier 4 (unusable). Samsung default is Tier 4. Claude is Tier 1.
⏱️ The Proof: Both newsletters (323 and 324) came from one 20-minute conversation. 90 minutes total from thought to published. Would take 6+ hours typing.
🔧 The Workarounds: Use Claude's voice instead of Samsung's. Use laptop instead of phone. Route through better tools. All add cognitive load.
🎯 The Multiplier: Working voice features = 5-6x productivity increase. Not enhancement. Difference between capable and disabled.
🏗️ For Builders: Voice isn't nice-to-have. It's the front door for neurodivergent users. Test with diverse speakers. Optimize for thinking speed, not perfect speech.
📊 The Rubric: Accurate transcription, handles dysfluencies, works real-time, understands context, doesn't require perfect enunciation.
💡 The Pattern: Building for the edges (most specific needs) creates better tools for everyone. Curb cuts. Closed captions. Voice features.
🔍 What I'm Watching: Voice quality across platforms, cross-device consistency, multimodal AI development, accessibility awareness in AI companies.
⬅️ Previous: Edition 323 about going against the grain my whole life. The context for why voice features matter so much.
✨ Bottom Line: My voice is my keyboard. When it works, I'm unstoppable. When it doesn't, I'm stuck. That's the definition of an accessibility feature.
If you're building voice features or AI tools, I'm happy to test and give feedback. This matters for everyone who thinks faster than they can type. Which is most people.