
Edition 345 | April 18, 2026
The Dyslexic AI Newsletter by LM Lab AI
What You'll Learn Today
Why most people are asking the wrong question about AI tools
Why dyslexic and neurodivergent thinkers need a completely different benchmark
What a real dyslexic AI evaluation framework should measure
Why businesses should build their own internal AI scorecard instead of copying everyone else
Why families, homeschoolers, and non-traditional learners need their own version too
How one evaluation engine could serve three very different users
Why evaluation itself might be the most important skill of the AI era
Reading Time: 11 minutes
Listening Time: 15 minutes
Happy Sunday.
This morning I found myself going down a rabbit hole that felt very familiar.
It started with a simple question.
Which AI tool is best?
That sounds like a smart question. It sounds practical. It sounds like what everyone wants to know right now.
But the more I sat with it, the more I realized it is still the wrong question.
Because best for who?
Best for what?
Best in what situation?
Best for a dyslexic thinker? Best for a parent trying to homeschool? Best for a small business owner trying to save time? Best for a team that needs structure and compliance? Best for a kid who learns better by talking than typing? Best for a founder whose brain works in loops, patterns, and voice notes instead of neat little spreadsheets?
That is where this whole thing started opening up.
And once I followed that thread, I realized we may be building not just one idea, but an entire framework.
The Real Problem Is Not the Tool
The real problem is that most people are trying to choose AI tools before they even know how they should evaluate them.
That is backward.
Right now, most people are buying AI the same way people used to buy supplements, software, or shiny new productivity hacks. They hear the hype. They see a cool demo. They try the newest thing. They compare headlines. They ask who has the smartest model.
But that still skips the most important part.
What does "good AI" actually mean for your life, your work, your brain, your child, your business, or your family?
Because raw intelligence alone is not enough.
A model can be powerful and still be a bad fit. It can be too text-heavy. Too cluttered. Too rigid. Too hard to steer. Too weak on voice. Too generic. Too overwhelming. Too complicated for real life.
So the deeper question is not "which AI is smartest?"
The better question is:
Which AI is the best fit for how you think, learn, work, and live?
That is a very different category. And I think it matters a lot.
Dyslexic Thinkers Need a Different Benchmark
This is the part that matters most to me personally.
As a dyslexic thinker, I know firsthand that the smartest tool is not always the most usable tool.
That may sound obvious, but it is not how most people talk about AI.
Most AI rankings focus on reasoning, coding, speed, benchmark scores, model size, and enterprise features. Those things matter. In Edition 343 ("Stanford Just Measured Everything"), we looked at the AI Index and saw exactly what the industry measures: capability on academic tests, adoption rates, patent filings, investment dollars.
But from a dyslexic AI perspective, I would add a completely different set of questions.
Does the tool reduce cognitive load? Does it let me speak instead of type? Does it help me organize messy thinking? Does it make complex information easier to understand? Does it chunk ideas clearly? Does it work with nonlinear thought instead of fighting it? Does it help me feel more capable, not less? Does it support confidence, clarity, and action?
That is a different kind of benchmark.
This is not a benchmark for intelligence. This is a benchmark for cognitive fit.
That phrase matters. Because I believe the future of AI is not just about how intelligent the tool is. It is about how usable that intelligence is for different types of minds.
A tool can be brilliant and still be exhausting.
A tool that reduces overwhelm, supports voice, tolerates messy prompts, and helps people stay in motion may actually be the better tool for many dyslexic and neurodivergent users, even if it does not win every mainstream benchmark.
That is a huge shift. And it connects directly to what Stanford called the jagged frontier in Edition 343. The idea that AI is brilliant at some tasks and bafflingly bad at others. If that is true of the tools, it is doubly true of the fit between a tool and a specific brain.
What a Dyslexic AI Benchmark Should Actually Measure
If I were building a benchmark specifically for dyslexic thinkers (and as I mentioned in Edition 341, I have already started), I would not just rank models by "best answer." I would score them on how well they support real human thinking.
Some of the categories I would want to measure:
1. Voice-First Support. Can I talk to it naturally? Can it understand me well? Can it help me think out loud? Can it read things back to me clearly?
2. Reading Load. Does it give me a giant wall of text? Or does it break ideas into usable chunks?
3. Writing Support. Can it help me take messy thought and turn it into clear language without flattening my voice?
4. Error Tolerance. Does it handle typos, fragmented prompts, and nonlinear input well?
5. Clarity. Can it explain something in normal human language instead of jargon soup?
6. Adaptability. Can it adjust to the way I like to learn, process, and communicate?
7. Cognitive Load Reduction. Does it save me mental energy, or does it create more work?
8. Confidence Support. After using it, do I feel more capable and more willing to keep going?
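For the builders and tinkerers reading this, here is a rough sketch of what that rubric could look like as a simple data structure. The eight categories come straight from the list above. The weights and the wording are placeholders I invented for illustration, not a finished spec.

```python
# A minimal sketch of a cognitive-fit rubric. The category names come
# from the list above; the weights are invented placeholders you would
# tune for your own brain.

COGNITIVE_FIT_RUBRIC = {
    "voice_first_support":      {"weight": 0.20, "question": "Can I talk to it naturally and hear it back clearly?"},
    "reading_load":             {"weight": 0.15, "question": "Does it chunk ideas, or dump walls of text?"},
    "writing_support":          {"weight": 0.10, "question": "Does it clarify my language without flattening my voice?"},
    "error_tolerance":          {"weight": 0.10, "question": "Does it handle typos and fragmented prompts well?"},
    "clarity":                  {"weight": 0.10, "question": "Plain language, or jargon soup?"},
    "adaptability":             {"weight": 0.10, "question": "Can it adjust to how I learn and communicate?"},
    "cognitive_load_reduction": {"weight": 0.15, "question": "Does it save mental energy, or create more work?"},
    "confidence_support":       {"weight": 0.10, "question": "Do I feel more capable after using it?"},
}

# Weights sum to 1.0 so a total score stays on the same 0-10 scale as the ratings.
assert abs(sum(c["weight"] for c in COGNITIVE_FIT_RUBRIC.values()) - 1.0) < 1e-9
```

The code is not the point. The point is that once your criteria are written down, they stop being vibes and become something you can actually score against.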
This is where I think Dyslexic AI has a real opportunity.
Not just to talk about AI. But to help define how AI should be evaluated for people whose brains do not work in the default factory setting the world tends to optimize for.
If you have been following since Edition 332 ("A Year Ago, I Was in a Hospital Bed"), you know the Cognitive Balance Model was built to respect how neurodivergent minds actually work with AI. The Human Guidance Index measures collaboration quality. These frameworks already exist. What is missing is a standardized evaluation lens applied to every tool on the market through this filter.
That is the opportunity.
Then the Idea Got Bigger
Once I started thinking about benchmarks for dyslexic thinkers, I realized this same logic applies to businesses too.
Because businesses are asking the same wrong question.
They keep asking which model is best. Which AI platform they should use. Which tool they should buy. Which agent they should install.
But they are still skipping the first step.
Before a business chooses AI tools, it should build its own internal evaluation system.
It should ask: What are our actual use cases? Where are we wasting time? What are our biggest bottlenecks? What kind of outputs do we need? What matters most to our workflow? What risks do we care about? What kind of team are we supporting? What does success actually look like here?
Every business should have its own AI evaluation framework. Not a generic list from the internet. Not a copy of what another company is doing. Not a hype-driven guess.
A real scorecard based on its own needs.
The Business Version
This is where LM Lab AI comes in.
The business version of this concept would help a company create its own internal benchmark for AI adoption. In simple terms, it would help a business define what "good AI" means for them before they waste time or money choosing tools.
That system could guide them through questions like: What department are we evaluating for? Sales, operations, customer service, HR, education, leadership? Are we trying to save time, improve quality, reduce overload, or increase consistency? Do we need speed, accuracy, explainability, privacy, integrations, or ease of use? What tasks should we test? How should we compare tools? What weights should we give each category?
Then the software or consulting process could generate custom evaluation criteria, weighted scorecards, model comparison templates, test scenarios based on real workflows, recommendation reports, and pilot rollout suggestions.
That turns AI adoption into something much more thoughtful and useful.
Not hype. Not chaos. A framework.
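For anyone who wants to see the bones of that, here is a minimal sketch of the scoring core in Python. To be clear, this is a hypothetical first pass, my guess at how a v1 could work, not a description of any shipping LM Lab AI product. The criteria, weights, and scores are all invented.

```python
# A sketch of the weighted-scorecard core. Everything here is a
# hypothetical example, not a real product or a real tool comparison.

def score_tool(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-10 ratings. Assumes the weights sum to 1.0."""
    return sum(ratings[criterion] * weight for criterion, weight in weights.items())

# Example: a customer-service team that cares most about accuracy and privacy.
weights = {"speed": 0.15, "accuracy": 0.35, "privacy": 0.30, "ease_of_use": 0.20}

tools = {
    "Tool A": {"speed": 9, "accuracy": 6, "privacy": 5, "ease_of_use": 8},
    "Tool B": {"speed": 6, "accuracy": 8, "privacy": 9, "ease_of_use": 7},
}

for name, ratings in sorted(tools.items(), key=lambda t: score_tool(t[1], weights), reverse=True):
    print(f"{name}: {score_tool(ratings, weights):.2f} / 10")

# Tool B wins (7.80 vs 6.55) even though Tool A is "faster," because
# this team weighted accuracy and privacy higher. The weights ARE the framework.
```

Notice what happened there. The flashier tool lost, because the weights encoded what this team actually cares about. That is the whole point of building your own scorecard instead of copying someone else's.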
In Edition 334 ("The Data Is In"), we looked at the Anthropic labor market study that showed cognitive flexibility as the new job security. Businesses with evaluation frameworks are more cognitively flexible by design. They are not locked into one tool or one vendor. They have a repeatable process for assessing new options as the landscape changes. And it changes a lot. Edition 341 covered the insane pace of releases just in the last two weeks.
Most businesses do not need more AI noise. They need structure, literacy, priorities, and a way to make better decisions.
This Is Not Just a Business Tool
Then I realized something else.
This same process could work for families too.
And maybe that is the part that hits closest to home.
Because in my daily life, working through tools for my own kids, my own family, and my own educational experiments, I know how confusing this space can be.
Families have more freedom than schools in a lot of cases. But they also have less guidance. They can choose almost any AI tool they want, but they still do not know which ones are actually helpful, which ones are good for learning, which ones are good for reading support, which ones are safe enough, which ones work well for voice, which ones can support executive function, which ones are age appropriate, or which ones help without creating more screen addiction or mental clutter.
That is a real problem.
Especially for homeschool families. Neurodivergent learners. Dyslexic kids. Parents trying to use AI intentionally. Non-traditional education settings. Families that want more agency and customization.
So now this idea is not just a business tool. It is also a family and learning fit tool.
If you have been here since Edition 325 ("My 14-Year-Old Daughter Just Proved Me Wrong"), you know this is personal. My daughter Makena is homeschooled. I build her tools. I make these evaluation decisions every week. And every week I see how much harder it is than it needs to be.
Families Need Their Own Version of "Best"
A family should not evaluate AI the same way a corporate operations team does.
That sounds obvious. It matters more than most people realize.
A family or homeschool version of this framework would ask different questions.
Does this tool help my child understand concepts? Does it support reading differences? Does it help with writing without taking over? Does it work well with voice? Is it easy for a parent to manage? Does it support curiosity and confidence? Can it adapt to different ages and subjects? Does it help with executive functioning, routines, and planning? Does it fit our values, our daily rhythm, and our educational style?
Very different evaluation system. Same core engine. Different lens.
That may be one of the most useful applications of all of this. Schools are often limited by policies, budgets, restrictions, and slow decision-making. Families have more autonomy. That means they can become one of the earliest proving grounds for better AI fit.
The Stanford dyslexia research we covered in Edition 343 showed that the right tools, applied the right way, physically rewire the dyslexic brain. That finding cannot stay in a lab. It needs to reach homes. And it needs a framework for choosing those tools intentionally.
One Engine, Two Tracks
At one point this morning, I had to stop and ask myself a real question.
Are these two separate products? Or is this one product with two lines of business?
The answer I came to is this.
It is one core product with two front-end tracks.
Pathway 1: Family, Personal, Homeschool.
Pathway 2: Business, Team, Organization.
Under the hood, both would be doing the same thing. Understanding goals. Identifying pain points. Building evaluation criteria. Assigning weights. Comparing tools. Generating recommendations. Creating an action plan.
That is the core engine. What changes is the language, onboarding, templates, scoring priorities, and outputs.
That feels like a very strong product direction. Not two totally different companies. Not two giant separate software projects. One evaluation engine. Two applied markets.
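In code terms, here is roughly what I mean by one engine with two lenses. The Lens structure and both example configs are assumptions for illustration, not a product spec.

```python
# Sketch: one scoring engine, two "lenses." The Lens class and both
# example configs are invented to illustrate the architecture.

from dataclasses import dataclass

@dataclass
class Lens:
    name: str
    criteria_weights: dict[str, float]  # what this audience cares about
    onboarding_tone: str                # how the questions get phrased

FAMILY_LENS = Lens(
    name="Family / Homeschool",
    criteria_weights={"safety": 0.30, "reading_support": 0.25,
                      "voice": 0.25, "parent_manageability": 0.20},
    onboarding_tone="plain language, one question at a time",
)

BUSINESS_LENS = Lens(
    name="Business / Team",
    criteria_weights={"accuracy": 0.30, "privacy": 0.25,
                      "integrations": 0.25, "ease_of_use": 0.20},
    onboarding_tone="workflow and department focused",
)

def evaluate(ratings: dict[str, float], lens: Lens) -> float:
    """The same engine for every lens: a weighted average on a 0-10 scale."""
    return sum(ratings[criterion] * weight
               for criterion, weight in lens.criteria_weights.items())
```

Same evaluate function. Different weights, different language. That is the entire architectural idea.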
This also ties directly to what I was building in Edition 344 ("I Woke Up at 4AM With a Random AI Idea"). The Cognitive Partner OS. The underlying engine for understanding how you actually use AI. That tool and this evaluation framework are siblings, not rivals. One maps your current usage. The other helps you choose tools that fit.
Both are powered by the same core insight. Fit matters. And you cannot measure fit without knowing yourself first.
The Meta Layer (A Cliffhanger)
Just when I thought we had the framework, I realized there was one more level to it.
If I am going to build a software product that helps businesses and families evaluate AI tools, then I also need a way to evaluate the AI tools I use to build that software.
Yes, I know. That is very meta.
It is also true. And it is a whole edition of its own.
I will get into that recursive layer another time. For now, just know that the same principles apply. Speed to prototype. Logic flexibility. Cognitive fit for my own brain. The tools we use to build the evaluator need to pass the evaluator's own test.
Save that thought. We will come back to it.
The Bigger Point
This is what I am circling around.
The future is not going to belong only to the people who know how to use AI.
It is going to belong to the people who know how to evaluate it well.
That means knowing what matters. Knowing what to test. Knowing what outcomes you actually care about. Knowing how your brain works. Knowing how your team works. Knowing how your child learns. Knowing what "fit" means in context.
That is a much more useful skill than chasing the newest tool every week.
And I think this is where Dyslexic AI and LM Lab AI both have a lane. Not just teaching people how to prompt. Not just reviewing tools. But helping define the standards by which tools should be judged in the first place.
That is a more important game.
OK But What Do I Actually Do With This?
Three things. This week.
1. Define Your "Best" Before You Shop
Before you try another AI tool, write down five to ten things that "good AI for me" actually means. Voice-first? Clear outputs? Low cognitive load? Specific to your work? Good memory? Pick the ones that matter and write them down. This is the seed of your personal evaluation framework.
If you already have a Single Source of Truth from Edition 329, add a section called "What Good AI Looks Like for Me." Three bullet points. Done.
2. Score One Tool You Already Use
Pick an AI tool you use regularly. Score it from one to ten on each of your criteria. Be honest.
You might be surprised. You might find out your favorite tool is actually not serving you well on the things you said matter most. That is useful data. (If you want to see what this looks like with actual numbers, there is a quick worked example after step 3.)
3. Apply the Lens Somewhere It Matters
If you run a business, ask your team what criteria they would use to evaluate AI tools for your specific workflow. Not what vendors are selling. What you actually need.
If you are a parent, do the same for your family or homeschool. What does good AI for your kid look like? Now you have a framework, not a hunch.
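And as promised, here is step 2 with actual numbers plugged in. A tiny worked example: the criteria, the scores, and the tool are all invented.

```python
# Step 2 with invented numbers: scoring one tool you already use
# against your own criteria. Equal weights keep the math simple.

my_criteria = ["voice_first", "low_reading_load", "clarity", "memory", "low_cost"]
my_scores   = [9,             4,                  7,         3,        8]  # honest 0-10 ratings

fit = sum(my_scores) / len(my_scores)
print(f"Overall fit: {fit:.1f} / 10")  # Overall fit: 6.2 / 10

# The average is not the insight. The 4 and the 3 are the insight:
# this tool fails on reading load and memory, two things I said matter most.
```

A 6.2 sounds fine until you look at where the points were lost. That is the useful data.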
What This Means for You Right Now
AI is moving so fast that people are understandably overwhelmed.
Too many tools. Too many claims. Too many demos. Too many experts telling everyone to jump in.
But very few people are stopping to ask the foundational question.
What should we actually be measuring?
That question matters for dyslexic thinkers. It matters for families. It matters for homeschool environments. It matters for schools. It matters for businesses. And it even matters for those of us trying to build the next generation of tools ourselves.
For me, this is one of those moments where a bunch of threads I have been circling for a long time suddenly started snapping into focus.
Dyslexic AI is not just about using AI. It is about helping define what better AI fit looks like for different kinds of minds.
LM Lab AI is not just about building tools. It is about helping people and organizations make smarter, more human-centered decisions about the tools they choose.
And the deeper I get into this work, the more I believe this may become one of the most important conversations in the next phase of AI.
Not which tool is smartest.
But which tool actually fits.
And how do we know?
That is the question I want to keep building around.
Next
Edition 346: The meta layer. How to evaluate the AI tools you use to build the tools that evaluate AI tools. Yes, really. This is the recursive problem underneath everything I am working on right now, and it is weirder and more important than it sounds.
Matt "Coach" Ivey
Founder, LM Lab AI | Creator, The Dyslexic AI Newsletter
Dictated, not typed. Obviously.

TL;DR: For My Fellow Skimmers
❓ Most people are asking the wrong AI question. "Which tool is best?" skips the real question: best for whom, for what, and for how you actually think and work.
🧠 Dyslexic thinkers need a different benchmark. Not raw intelligence. Cognitive fit. Voice support. Low reading load. Error tolerance. Clarity. Confidence support.
🏢 Businesses are making the same mistake. They should build their own internal evaluation framework before choosing tools. Not copy someone else's scorecard.
👨‍👩‍👧 Families, homeschoolers, and non-traditional learners need their own version too. Same engine. Different lens. Different priorities.
⚙️ One evaluation engine. Two tracks. Personal/family on one side. Business/team on the other. Same core. Different applications.
🔮 The future belongs to the people who know how to evaluate AI well, not just the people who use it. This might be the most important skill of the next phase.
🧩 Three things to do this week: define what "good AI" means for you in five to ten bullet points, score one tool you already use against those criteria, and apply the lens somewhere it matters.
🧠 FREE RESOURCES FROM DYSLEXIC AI
The Cognitive Partner Playbook (Free E-Book)
Everything I've learned from 330+ editions, 2+ years of research, and thousands of hours building AI tools for dyslexic minds — condensed into one guide. How to set up AI as your cognitive partner, not just another app. Voice-first workflows, the 10-80-10 framework, and the exact prompts I use every day.
[Download the Free E-Book →]
Enter your email to get instant access. You'll also get the weekly Dyslexic AI newsletter if you're not already subscribed.
The CPM Prompt Guide
27 ready-to-use prompts built on the Cognitive Partner Model — designed for dyslexic and neurodivergent thinkers. No perfect spelling required. No linear thinking assumed. Just copy, paste, and let AI do the heavy lifting where it actually helps.
[Get the Free Prompt Guide →]
More from Dyslexic AI:
🧠 Try the Dyslexic AI GPT — A custom AI assistant built for how your brain works
📄 Read the Research — The Cognitive Partner Model white paper
🎯 Work with Matt 1:1 — 90-minute Cognitive Partner Strategy Sessions
📬 Share this newsletter — Know someone who thinks differently? Send them this.


