
Edition 344 | April 17, 2026
The Dyslexic AI Newsletter by LM Lab AI
What You'll Learn Today
The random 4AM thought that kicked off this whole project
Why I want a tool that analyzes my own AI history (and why you might too)
The three engines I think this platform could eventually include
What I actually got working this morning (and what I did not)
Why this matters for dyslexic, neurodivergent, and lateral thinkers
Three things you can do with your own AI history this week
Reading Time: 8 minutes | Listening Time: 12 minutes
Sometimes my best newsletter idea is just telling you what I actually worked on.
Today is one of those days.
I did not wake up with a polished topic. I woke up at 4AM with one of those half-awake, half-dreaming thoughts that hits you out of nowhere and will not leave you alone.
The thought was basically this.
What if I could build a tool that takes all of my ChatGPT history, all of my conversations, all of my back-and-forth thinking, and actually helps me understand how I use AI?
Not just search it. Not just scroll through old threads.
Really analyze it.
What topics do I come back to over and over?
How often am I using AI to brainstorm versus write versus research versus build?
What words do I repeat most?
What concepts keep resurfacing?
What unfinished ideas am I clearly obsessed with?
And maybe even more importantly: what does all of that say about how I think?
That one random thought turned into a couple hours of work this morning. And by the end of it, I had already built the first version of the idea in two different tools.
One in Google AI Studio. One in Codex. I have not tried it in Claude Code yet because my MacBook Air was already running hot with too many things open at once. That will probably be version three.
But I did get both versions to upload the JSON file from my ChatGPT history.
That may sound small. It was a huge first step. The concept is real enough to start.
The Original Question
At first, this started as a simple question.
If these AI platforms are already learning from how we interact with them, and all these companies are already tracking clicks, searches, prompts, time spent, behavior, and patterns, why should we not be able to use that same kind of logic for ourselves?
Why should all of that behavioral value belong only to platforms, advertisers, algorithms, and recommendation engines?
Why can I get custom ads, custom feeds, custom recommendations, and custom content ranking from companies trying to influence me, but not have a custom system that helps me understand myself better?
That is what really got me going.
In Edition 342 ("The Weight in My Chest"), I talked about wanting autonomy, agency, and sovereignty over how AI shapes my life. This is that. Not in the abstract. In code. This morning.
Because I do not want infinite customization. I do not think most people do. Most people do not want to build their own software from scratch any more than they want to code their own website.
They want something that learns them. Adapts to them. Helps them without drowning them in settings.
That is the difference. And that led me to a bigger product idea.
The Bigger Vision
The working name for this, and I want to stress that it is a working name that could change, is Cognitive Partner OS. It is related to but distinct from the Cognitive Partner Membership I launched in Edition 333. Think of it as the underlying engine rather than the community.
The idea is one platform with multiple engines inside it. Not a million random features. Not a giant mess. A real system with modular parts.
Engine 1: Conversation Intelligence
This is the first engine and the one I started with today.
Its job is to take exported AI history, ingest the JSON file, normalize the data, and analyze it through what I am calling a cognitive partner lens. It would help me answer questions like:
What projects have I returned to the most?
What words and concepts do I repeat?
How often do I use AI for writing, strategy, research, or technical building?
What ideas keep resurfacing?
What is exploratory thinking versus explanatory writing?
How much of my interaction is me versus the tool?
That last one is especially important.
I want to separate my inputs from the tool's outputs. My words. My prompts. My ideas. My phrasing. Versus what the model gives back.
That becomes incredibly valuable if I want to build books, white papers, newsletters, podcasts, and all the rest from my own thinking. This is the natural extension of the Single Source of Truth framework from Edition 329. Not just a document that tells your AI who you are. A system that shows you who you are based on everything you have already said to your AI.
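To make that last question concrete, here is a rough sketch of the kind of separation Engine 1 would do. The message shape below is simplified (real ChatGPT exports nest messages inside a "mapping" tree per conversation, and field names can vary by export version), but the core move, splitting my words from the model's words and counting what repeats, is the same:

```python
from collections import Counter
import re

# Simplified message shape. Real ChatGPT exports nest messages inside a
# "mapping" tree per conversation, but the role/content split is the same idea.
messages = [
    {"role": "user", "content": "How do I turn my newsletter archive into a book?"},
    {"role": "assistant", "content": "You could group editions by theme first."},
    {"role": "user", "content": "Which newsletter themes keep coming back?"},
]

# Keep only my side of the conversation: my words, not the model's.
my_text = " ".join(m["content"] for m in messages if m["role"] == "user")

# Count repeated words (longer than three letters) as a crude theme signal.
words = re.findall(r"[a-z']+", my_text.lower())
themes = Counter(w for w in words if len(w) > 3)

print(themes.most_common(3))
```

On a real export you would filter stop words and look at phrases, not just single words, but even this crude version surfaces what you keep circling back to.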
Engine 2: Publishing Intelligence
If Engine 1 helps me understand my thinking, Engine 2 helps me publish it.
This would be a publishing team of agents working together across books, white papers, newsletters, podcasts, audiobooks, and eventually video.
The publishing engine could pull recurring ideas, my strongest phrases, quote candidates, research themes, unfinished concepts, and connections between the book and newsletter.
Instead of starting from scratch every time, it would help me build from my own existing thinking and content ecosystem. Three years. 344 editions. That is a lot of thinking to mine.
Engine 3: Workflow Calibration
This is a longer-term idea, but it may be the most interesting.
What if the system could observe how I actually work? What tabs I open. What tools I use. How I move through a workflow. How I research, write, rewrite, post, and publish.
Not in a creepy way. Not spying. Not stealth.
A tool I own and turn on and off. An observe mode. A learning layer. A personal assistant that watches how I do my newsletter or content workflow and helps me recreate parts of that later.
Not just understanding my conversations with AI. Understanding my workflow with AI.
That is where this gets really powerful. And it ties directly back to Edition 340, where we built the self-improving loop. This is the same idea at a system level.
What I Actually Got Working This Morning
I need to be honest here.
This is still early. I do not have a magical finished product.
What I got done today was the beginning.
I built two versions of the idea. I got both versions to the point where they could finally upload the JSON file from my ChatGPT history. That took a few iterations. A little debugging. A few moments of "why is this not working?" And then finally, both versions could see the file and begin the process.
That matters. Because this is how these things start.
The first win is not perfection. The first win is proving the concept has a pulse. And today, I got it to breathe.
It was not yet doing enough with the data to give me all the insights I want. It was not yet surfacing enough specifics or value. But it could ingest the file and start the process.
That is the bridge from idea to product.
And all of it started from a random thought between sleep and wakefulness at 4AM. That still kind of blows my mind.
Why This Matters for How I Think
This project connects a lot of the things I have been thinking about for a long time.
Voice-first tools. Cognitive partner AI. Adaptive systems. Data sovereignty. Neurodivergent accessibility. Publishing. Personal dashboards. Agents. Custom software that adapts to the human instead of forcing the human to adapt to the software.
That is the larger story here.
I do not just want better prompts. I do not just want another chatbot. I want a system that helps me understand how I think, how I work, what I keep coming back to, and where I should go next.
And I think a lot of other people would want that too.
Especially dyslexic thinkers. Especially neurodivergent thinkers. Especially people whose minds are full of patterns, ideas, loops, unfinished projects, and a thousand tabs open at once.
This kind of system could help turn that chaos into structure without killing the creativity.
That is the sweet spot.
In Edition 343, we talked about Stanford's "jagged frontier" finding. The idea that AI capability is spiky, not smooth. Brilliant at some things. Bad at others. And I pointed out that dyslexic minds have been living on a jagged frontier our whole lives.
A tool like this could help us map our own jagged frontier. Where are our peaks? Where are our valleys? What do we keep coming back to that we have not finished? What patterns in our thinking are invisible to us but obvious once the data is organized?
That is not a feature. That is a mirror.
The Part I Keep Coming Back To
The more I think about it, the more I believe the real value is not just importing data.
It is taking back the algorithm for ourselves.
That might be the simplest way to say it.
Big platforms already analyze how we use their tools. They already use our behavior to shape feeds, ads, recommendations, and content.
What I want is the reverse.
I want to use my own data to shape my own cognitive environment. My own workflow. My own learning. My own publishing. My own dashboard. My own AI partner.
That starts to feel like a real future.
Not just more software. Better software. More human software. More personalized software. More adaptive software. More useful software for people who think differently.
In Edition 342, I said I refuse to wait for someone else to tell me how AI changes my life. This is what that looks like in practice. It looks like waking up at 4AM and building the first version of a tool by noon.
OK But What Do I Actually Do With This?
You do not need to build software. But you can start the same kind of self-analysis today with three simple steps.
1. Export Your AI History
Most AI platforms let you export your data. In ChatGPT, go to Settings and then Data Controls. In Claude, check your account settings for data export. The JSON file is usually easy to download and contains a lot of what you have actually said to your AI over time.
You do not have to do anything with it yet. Just have it. It is your data. Keep a copy.
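If you do want a quick sanity check that the export worked, a few lines of Python will tell you how many conversations you have and what they were about. The sample below fakes the file's shape, a list of conversation objects with titles; to run it on your real file, swap the sample for `json.load(open("conversations.json"))`. Field names may vary between export versions.

```python
import json

# A trimmed sample shaped like ChatGPT's conversations.json export:
# a list of conversation objects. Field names may vary by export version.
# For your real file: sample = json.load(open("conversations.json"))
sample = json.loads("""
[
  {"title": "Newsletter brainstorm", "create_time": 1713330000},
  {"title": "Book outline help", "create_time": 1713416400}
]
""")

print(f"{len(sample)} conversations in this export")
for convo in sample:
    print("-", convo.get("title") or "(untitled)")
```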
2. Ask Your AI to Analyze a Small Sample
Pick a recent conversation. Paste it into a new chat and ask this:
"Based on this conversation, what patterns do you notice in how I think, what I focus on, and how I communicate? What themes keep coming up? What does this tell you about my strongest interests and my blind spots?"
You will be surprised what shows up. This is a mini version of Engine 1, no code required. And if you have been using your Single Source of Truth from Edition 329, you have a head start.
3. Notice What You Keep Returning To
For one week, at the end of each day, write down what topics you opened AI for. Just a list. No judgment.
By the end of the week, you will see patterns. The real ones. The ones you are too close to notice in the moment.
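And if you keep that end-of-day list in a plain text file, a few lines of Python can do the tallying for you at the end of the week. The topics below are made-up examples:

```python
from collections import Counter

# One line per topic, jotted at the end of each day for a week.
# These topics are made-up examples; use your own list.
week_log = """
newsletter draft
book outline
newsletter draft
research stanford study
newsletter draft
podcast notes
book outline
""".strip().splitlines()

tally = Counter(line.strip() for line in week_log)
for topic, count in tally.most_common():
    print(f"{count}x  {topic}")
```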
That is the beginning of your own Cognitive Partner OS. No engineers required.
Question for You
If you could upload your full AI history into a tool and have it show you how you think, what would you want to know first?
Your most repeated ideas? Your biggest unfinished projects? Your strongest phrases? Your blind spots? Your best insights?
Hit reply and tell me. I genuinely want to hear. Because this kind of tool is going to matter a lot more in the future than most people realize. And the people who help shape what gets built are the ones who describe what they actually need.
What This Means for You Right Now
I did not have a newsletter topic this morning.
Then I had a random 4AM thought.
A couple hours later, I had two prototype versions of a tool I have wanted for a long time without fully realizing it.
That is the kind of thing AI makes possible now. You can go from a half-formed idea to a working first version in a morning. Not finished. Not polished. Not ready for the world. But real enough to test. Real enough to improve. Real enough to believe in.
That is where I am today. And honestly, I love days like this.
Because this is what building in public looks like for me. Messy. Fast. A little obsessive. A little chaotic. Deeply exciting. And very, very real.
If this makes sense to you, if the idea of taking back your own algorithm lights something up, you are exactly the kind of person this is being built for.
Previously
Edition 343: "Stanford Just Measured Everything About AI. They Forgot to Measure Us." (AI Index 2026, jagged frontier, Stanford dyslexia research)
Edition 342: "The Weight in My Chest" (being understood, autonomy, sticktoitness, sovereignty)
Edition 341: "I Have Never Seen Anything Like This Before" (state of AI, ArtQuest, building in chaos)
Edition 340: "I Have Four of the Five Layers. Time to Close the Loop." (self-improving AI loop)
Edition 339: "Your AI Just Forgot Everything. Again." (Karpathy, five-layer stack, memory architecture)
Edition 333: "25 Tools. Zero Memory." (Cognitive Partner Membership launch)
Edition 329: "Building Your Second Brain" (Single Source of Truth)
Next
Edition 345: We have been asking the wrong question about AI. "Which tool is best?" skips the real question entirely. This is the manifesto for why evaluation is the most important skill of the AI era, and why dyslexic thinkers, businesses, and families all need their own framework. Plus a cliffhanger about the meta layer underneath everything.
Matt "Coach" Ivey Founder, LM Lab AI | Creator, The Dyslexic AI Newsletter
Dictated, not typed. Obviously.

TL;DR: For My Fellow Skimmers
⏰ Woke up at 4AM with a random thought: what if I could analyze all my AI history to understand how I actually think? By noon I had two working prototypes.
🧠 The working name is Cognitive Partner OS. Three engines: Conversation Intelligence (understand how I use AI), Publishing Intelligence (turn my thinking into books, newsletters, podcasts), and Workflow Calibration (learn how I actually work).
🔒 The core idea: take back the algorithm. Big platforms already analyze how we use their tools. This is the reverse. Use your own data to shape your own cognitive environment.
🛠️ Built in Google AI Studio and Codex this morning. Claude Code version coming when my Mac cools down. Both versions ingest the JSON file. Not finished. Real enough to start.
🧩 For dyslexic and neurodivergent thinkers: this is a tool for mapping your own jagged frontier. Where are your peaks? Your valleys? What do you keep returning to that you have not finished?
📨 Three things you can do today: export your AI history, ask your AI to analyze a small sample for patterns, and track what you keep coming back to for a week.

