
Edition 349 | April 22, 2026
The Dyslexic AI Newsletter by LM Lab AI
What You'll Learn Today
What a new academic paper is calling the "LLM Fallacy"
Why researchers are finally noticing what we have been living with
Why the paper is correct about the risk and partially wrong about the solution
Why the Cognitive Balance Model is the antidote
How the HGI (Human Guidance Index) gives you a measurable way to avoid this trap
Three honest questions to ask yourself about your own AI work this week
Reading Time: 9 minutes | Listening Time: 13 minutes
Happy Wednesday.
A paper dropped on arXiv last week that I have been sitting with for a few days.
It is called "The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows", by researchers at ddai Inc. And it names something I have been writing about for three and a half years, but from an angle I want to push back on.
Here is the paper's core argument, in plain language.
When you use a large language model to help you write, code, analyze, or communicate, the output often feels like your own work. The interaction is smooth. The model disappears into the background. You end up with something that looks and sounds like you produced it yourself.
The researchers argue that this creates a cognitive attribution error. You start to misinterpret AI-assisted outputs as evidence of your own independent competence. Over time, a gap opens between what you think you can do and what you can actually do without the tool.
They call this the LLM Fallacy.
And they are right about the risk.
But I think they are partially wrong about the solution. And I want to tell you why.
What the Paper Gets Right
Let me steelman it first.
The paper makes a real observation. When the tool is fluent, fast, and invisible enough, it becomes very easy to confuse "what we did together" with "what I did alone."
That is not just a theoretical concern. It is a practical one. If a student uses an LLM to write an essay and gets an A, they may walk away thinking they have the ability to write that essay. If a junior engineer ships code that Claude wrote, they may believe their technical skills are growing faster than they are.
This matters for education. For hiring. For skill development. For self-knowledge.
The paper situates this inside existing research on automation bias, cognitive offloading, and the Dunning-Kruger effect. It argues that LLM workflows create a new kind of attributional distortion because of how opaque, fluent, and low-friction they are.
I agree with all of that.
I have been warning about it in different words since Edition 329 ("Building Your Second Brain"), where I introduced the Single Source of Truth framework specifically so your AI would reflect you back rather than overwrite you.
The risk is real.
Where the Paper Stops Short
Here is what the paper does not say.
It treats the LLM Fallacy as a general human problem with AI-assisted workflows. Opacity plus fluency plus low friction equals attribution error. That formula applies to everyone, in theory.
In practice, it lands very differently on different brains.
For a neurotypical professional who has always been able to write, code, or analyze competently on their own, an LLM can absolutely create the illusion of skills they do not actually have. The paper is writing for that person.
For a dyslexic thinker who has spent a lifetime being told that they are less capable because they struggle with spelling, sequential text, or conventional writing formats, an LLM is doing something completely different.
It is not inflating a competence that does not exist.
It is exposing a competence that was always there and could never get through.
That is a critical distinction that this paper misses entirely.
In Edition 324 ("When Voice Stops Working"), I argued that voice-to-text is not a convenience for me. It is an accessibility tool. It is the bridge between the ideas in my head and the words on a screen. Before these tools existed, the bridge was broken. My ideas never made it across at full resolution. That is not a fallacy. That is a disability being accommodated.
A wheelchair does not make a paraplegic "falsely believe" they can walk. It gives them a different way to move. The goal was never walking. The goal was movement.
That is what AI is for a lot of dyslexic thinkers. The goal was never writing in the conventional sense. The goal was communicating. And now we can.
But the Risk Is Still Real
I do not want to wave away the paper's warning just because the framing is incomplete.
Because here is the thing. Even for dyslexic creators using AI as accessibility, the fallacy can still show up. Just in a different shape.
A dyslexic entrepreneur might believe they are building a scalable business on AI output when in reality they have not developed the underlying strategic thinking.
A neurodivergent student might believe they understand a subject because they can produce polished essays about it with AI assistance when in reality they have not done the learning.
A homeschooling parent might believe their child is progressing academically when in reality the AI is doing most of the work the child should be doing.
All of these are real. None of them are imaginary.
The LLM Fallacy is not just a risk for neurotypical professionals. It is a risk for anyone who stops paying attention to where the work is actually coming from.
That is the real question. Not "is the tool helping?" Of course it is.
The real question is: "How much of this is me, and how much of this is the tool, and do I know the difference?"
If you do not know the difference, the fallacy has you.
If you do know the difference, you are in a completely different position.
The Cognitive Balance Model as the Antidote
Here is where I want to push this conversation forward.
Back in Edition 332 ("A Year Ago, I Was in a Hospital Bed"), I introduced the Cognitive Balance Model. It is a framework for how humans and AI should work together. Three phases:
Human Initiation. You set the direction. You decide what problem to solve, what quality bar to hit, what outcome matters.
AI Expansion. The AI does the heavy lifting. Drafts. Alternatives. Calculations. Research. Options.
Human Integration. You make the final call. You review, edit, approve, or reject. The output is yours because you decided what passes.
Each phase gets a score from 1 to 5. The sum is your Human Guidance Index (HGI), a total that runs from 3 to 15. The higher the score, the more human guidance is in the loop at every phase.
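If you think better in structure than in prose, here is a minimal sketch of that rubric as code. This is my own illustration in Python, not anything from the paper or an official tool, and the names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HGIScore:
    """One piece of AI-assisted work, scored 1 to 5 per phase (names are my own)."""
    human_initiation: int   # how specific and intentional the starting point was
    ai_expansion: int       # how deliberately the AI was used to explore options
    human_integration: int  # how much the output was changed, challenged, or judged

    def __post_init__(self):
        for phase, score in vars(self).items():
            if not 1 <= score <= 5:
                raise ValueError(f"{phase} must be between 1 and 5")

    @property
    def total(self) -> int:
        # Total HGI: 3 (almost no human guidance) up to 15 (guidance at every phase)
        return self.human_initiation + self.ai_expansion + self.human_integration


# A vague prompt, first output accepted, pasted somewhere: HGI around 3 or 4
print(HGIScore(2, 1, 1).total)  # 4
```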
Here is what the LLM Fallacy paper does not know.
The fallacy only happens when HGI is low.
If you type a vague prompt, accept the first output, and paste it somewhere, your HGI is around 3 or 4. Human Initiation is weak. AI Expansion is the whole show. Human Integration is basically zero.
That is where the attribution error lives. That is where the illusion of skill takes hold. That is where you lose track of where the thinking came from.
But if you enter the interaction with a clear intention, give the AI specific context, push back on outputs that miss the mark, iterate deliberately, and make integration calls based on your own judgment, your HGI is 12, 13, 14 out of 15.
At that level, the output is genuinely co-produced. You know what you contributed. You know what the tool contributed. There is no fallacy because there is no misattribution.
The Cognitive Balance Model does not prevent AI collaboration. It preserves the boundary that the paper is worried about eroding.
And it does it in a way that works whether you are a neurotypical professional or a dyslexic creator. The framework does not care about your cognitive style. It cares about the structure of the collaboration.
The HGI as a Diagnostic
I want to give you something practical.
The next time you finish a piece of work that AI helped you with, run this quick check:
Human Initiation (1-5): How specific was my starting point? Did I know what I wanted, or did I let the AI guide me into an answer?
AI Expansion (1-5): Did I use the AI to explore options, or did I take the first thing it gave me?
Human Integration (1-5): Did I meaningfully change, improve, or challenge the output, or did I just clean it up and move on?
If your score is below 9, you are in fallacy territory. Not because the work is bad. Because you may not know how much of it is yours.
If your score is 9 to 12, you are collaborating. Good territory.
If your score is 13 to 15, you are using AI as a genuine cognitive partner. You know exactly what you brought. You know exactly what the tool brought. You built something together that neither of you would have built alone.
That is the model. That is the antidote.
The paper identifies a real problem. The HGI gives you a way to measure whether you are walking into it or walking around it.
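And if you want the quick check in one place, here is a small sketch of those bands in Python. Again, this is just my illustration of the thresholds above, not official tooling.

```python
def hgi_band(total: int) -> str:
    """Interpret a total HGI (3 to 15) using the bands described above."""
    if not 3 <= total <= 15:
        raise ValueError("HGI totals run from 3 to 15")
    if total < 9:
        return "Fallacy territory: you may not know how much of the work is yours."
    if total <= 12:
        return "Collaborating: good territory."
    return "Cognitive partner: you know what you brought and what the tool brought."


print(hgi_band(4))   # Fallacy territory
print(hgi_band(13))  # Cognitive partner
```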
The Dyslexic Dimension They Missed
One more thing, because I cannot write this edition without saying it.
The paper talks about the LLM Fallacy as if it applies uniformly to all knowledge workers. It does not.
For dyslexic and neurodivergent creators, there is a separate phenomenon happening alongside the fallacy the paper describes. I do not have a perfect name for it yet, but it goes something like this.
Before AI, I was underproducing. My actual capability was high but my output was low because the conventional tools were broken for my brain.
With AI, my output has caught up to my capability. Not because my capability grew. Because the tool finally reads in the language I have always been speaking.
The LLM Fallacy paper assumes that AI output is inflating users above their actual capability. For many neurodivergent users, AI output is simply revealing capability that was always there.
These are not the same thing. And any honest framework has to hold both truths at once.
Some users are being lifted above their skill level by the tool.
Some users are being unblocked from an expression barrier the tool removes.
You can be in both categories at once. A dyslexic entrepreneur can have genuine strategic insight that AI finally lets them articulate (real capability, newly unblocked) AND overestimate their technical depth because AI wrote the code (fallacy, real risk).
The Cognitive Balance Model handles both. Because HGI does not measure whether the tool helped. It measures whether you stayed in the loop at every phase.
If you stayed in the loop, both things are true and manageable. If you did not, only the fallacy wins.
OK But What Do I Actually Do With This?
Three things.
1. Score Your Last Three Pieces of AI-Assisted Work
Pull up the last three things you did with AI help. Run each one through the HGI. Human Initiation, AI Expansion, Human Integration, 1 to 5 each. Honest scores.
You will probably find that some of your work was genuine collaboration and some of it was drift. That is useful information.
2. Ask Yourself the Hard Attribution Question
For one piece of recent AI-assisted work, ask: "If I had to reproduce this without AI right now, could I?"
This is not about whether you should have to. You usually should not have to. The question is whether you still know where your actual competence ends and the tool's starts.
If you can reproduce the thinking (even if it would take you longer), you own the work. If you cannot, you have attribution homework to do.
3. Raise Your HGI Deliberately on Your Next Project
Pick your next AI-assisted project and set a minimum HGI of 12 out of 15. That means stronger initiation, more deliberate iteration, more meaningful integration. It will take a little more time. You will learn more from the work. And you will own the output in a way that no paper on the LLM Fallacy can dispute.
What This Means for You Right Now
Researchers are finally starting to ask the questions this newsletter has been asking for years.
The LLM Fallacy is real. The risk is real. The attribution error is real.
And the answer is not to use AI less. The answer is to use AI with more structure, more intention, and more measurement.
That is exactly what the Cognitive Balance Model was built for. That is exactly what the Human Guidance Index measures. That is why I have been writing about this since Edition 332.
The academic world is starting to name the problem. Our community already has the framework. That is a lead, and we should use it.
For dyslexic and neurodivergent creators specifically: you are not walking into the LLM Fallacy every time you use AI. In many cases, you are walking out of a lifetime of being underestimated. The tools that finally fit your brain are revealing capability that was always there.
But you still need the framework to stay on the right side of that line. Not because you are suspect. Because the tools are powerful and the stakes are real.
Know where you are. Score your work. Stay in the loop at every phase.
That is how we use AI without losing ourselves to it.
Previously
Edition 348: Claude Design follow-up with a real brief and actual results
Edition 347: "I Played With Claude Design for Ten Minutes" (Anthropic Labs launch, design accessibility for neurodivergent creators)
Edition 346: "The Meta Layer" (builder evaluation, cognitive fit)
Edition 345: "We Have Been Asking the Wrong Question About AI" (evaluation framework manifesto)
Edition 343: "Stanford Just Measured Everything About AI" (AI Index 2026, jagged frontier)
Edition 332: "A Year Ago, I Was in a Hospital Bed" (Cognitive Balance Model and HGI introduction)
Edition 329: "Building Your Second Brain" (Single Source of Truth)
Edition 324: "When Voice Stops Working" (voice-to-text as accessibility)
Next
Edition 350: MIT just taught AI to say "I am not sure." Why this calibration breakthrough matters more than people are noticing, why ternary thinking (yes, no, maybe) might be the most important cognitive architecture insight of the year, and why dyslexic and neurodivergent brains may have been doing this naturally all along.
Matt "Coach" Ivey Founder, LM Lab AI | Creator, The Dyslexic AI Newsletter
Dictated, not typed. Obviously.

TL;DR: For My Fellow Skimmers
📄 A new paper names the "LLM Fallacy": users misattributing AI-assisted output as evidence of their own independent competence. It is a real phenomenon. The risk is real.
🎯 The paper frames this as a general human problem. In practice, it lands very differently on neurodivergent brains. For dyslexic creators, AI often reveals capability that was always there rather than inflating competence that was not.
⚖️ The Cognitive Balance Model is the antidote. Three phases: Human Initiation, AI Expansion, Human Integration. Scored on the HGI from 3 to 15.
📊 If your HGI is below 9, you are in fallacy territory. If it is 13 to 15, you are using AI as a genuine cognitive partner with clear attribution.
🔍 Two things can be true at once: AI can unblock real capability in neurodivergent users AND create attribution errors in the same users on different tasks. The framework handles both.
🧩 Three things to do this week: score your last three AI-assisted pieces on the HGI, ask if you could reproduce them without AI, and set a minimum HGI of 12 for your next project.
🔒 The academic world is naming problems we already have frameworks for. Cognitive Partner Members get those frameworks first. 50 founding spots at $19/month, locked forever.
🧠 FREE RESOURCES FROM DYSLEXIC AI
The Cognitive Partner Playbook (Free E-Book)
Everything I've learned from 330+ editions, 2+ years of research, and thousands of hours building AI tools for dyslexic minds — condensed into one guide. How to set up AI as your cognitive partner, not just another app. Voice-first workflows, the 10-80-10 framework, and the exact prompts I use every day.
[Download the Free E-Book →]
Enter your email to get instant access. You'll also get the weekly Dyslexic AI newsletter if you're not already subscribed.
The CPM Prompt Guide
27 ready-to-use prompts built on the Cognitive Partner Model — designed for dyslexic and neurodivergent thinkers. No perfect spelling required. No linear thinking assumed. Just copy, paste, and let AI do the heavy lifting where it actually helps.
[Get the Free Prompt Guide →]
More from Dyslexic AI:
🧠 Try the Dyslexic AI GPT — A custom AI assistant built for how your brain works
📄 Read the Research — The Cognitive Partner Model white paper
🎯 Work with Matt 1:1 — 90-minute Cognitive Partner Strategy Sessions
📬 Share this newsletter — Know someone who thinks differently? Send them this.


