The Biggest AI Opportunity: An Open Letter to Anthropic
Accessibility. More specifically, mobile voice-to-text + memory management.
I'm a huge fan of Claude. Let me say that upfront. This letter comes from a place of deep respect and genuine appreciation for how Anthropic's LLM has already enriched my life. I've spent 16 years in AI, shipped 34 products, and hold 24 patents — and I've never seen a more opportune moment for a company to flex its accessibility muscle.
So why am I writing this? Because accessibility is good for business, and in this newsletter, I'm going to show you exactly how good.
Accessibility Features Have Always Been Tech's Secret Weapon
Some of the most transformative technologies in modern computing started as accessibility features:
Autocomplete — born from assistive tech, now the backbone of every search bar and messaging app on the planet.
Autocorrect — designed for accessibility, now indispensable to billions of people.
Zero-result recovery and faceted search — revolutionized e-commerce and digital media by making product and content discovery accessible to everyone.
And it doesn't stop at software. Sonic toothbrushes, lane departure warnings — these are essentially accessibility innovations that became the gold standard. I drive a stick shift, and even my car has lane departure alerts.
The pattern is clear: build for accessibility, and you build for everyone.
Today, two barriers stand between Claude and its potential as the operating system for our lives: a better UI and memory management. Solve both, and the opportunities are virtually limitless.
Barrier One: Better Mobile Experience
I spend the majority of my time on mobile, and Anthropic's current voice-to-text experience falls short. The recent update is 100X better than it was a week ago, but it still leaves a few items unfinished. The three modes (dictate, push-to-talk, record) feel haphazard and tacked on.
The default mode, dictate, struggles in any normal (noisy) environment, or else, like Alexa, fires off too fast, before my thought has made it all the way from my distracted brain into actual words. It's endlessly frustrating. I'm not chatting for fun, just burning compute and killing polar bears. I'm actually trying to do work, so a reasonable window to formulate a coherent query while multitasking on mobile should be a given.
Push-to-talk mode is better, but the buttons are awkwardly placed, and my thumb covers the transcription field, which is far too small: just shy of two lines of text, or 5-10X less than my typical prompt.
The text is too small and is displayed twice: once as a transcription, then again as a typical prompt balloon, with no way to edit the prompt in between. There is no need for this double display. It feels tacked on, unfinished, and unpolished. Voice-to-text is clearly an afterthought for Anthropic, when it should be its crown jewel.
Here is a sketch of some ideas for basic UX improvements I put together this morning. With a bit of thinking and user testing, this can no doubt be much improved (and we all know coding is not an issue for Anthropic):
Some basic experience improvements for mobile voice-to-text
Or take it a step further by taking a page from the AI companion apps. Not the sleazy parts — the UX. They figured out something important: emotional presence matters. Add an avatar. Let users select from different styles or personalities. Experiment with using head motions to signal the AI is listening — a nod, a tilt, a subtle acknowledgment. This isn't a gimmick; it solves a real problem. A nodding avatar tells you "I hear you" the way a human does. That's a fundamentally different experience from a pulsing microphone icon.
The companion apps aren't winning on technology. They're winning on presence. Anthropic has the better brain. It just needs to give it a face, or even a simple icon.
Experience matters far beyond voice input, because the AI companion is a trillion-dollar market — and it should be Anthropic's. Right now, the companion space is dominated by sleazy apps sitting on top of your technology. But most people don't want distraction or yandere anime fantasy. They want someone to bounce ideas off of. They want help remembering things, dealing with the damnable deluge of data, holding up a philosophical mirror, exploring how to live a good life. They want an AI that helps them evolve — that fulfills the long-ago promise of Isaac Asimov and Iron Man comics and makes them superhuman.
By making this interaction difficult, you're blocking the next stage of human evolution when you should be leading the way.
Barrier Two: Memory Management
This is the big one. The strategic centerpiece that unlocks the next stage of LLM evolution.
Right now, memory in Claude is mysterious and hidden. On desktop, I manage context with RAG files and custom workflows. This works, but requires some hands-on workflow management. On mobile? It's utterly inadequate. The uploading, editing, and juggling required to move context between conversations has to stop.
Here's the vision: long-term memory, short-term memory, current conversation context, personality preferences, output types, canonical facts — all organized into categories that humans can actually understand and interact with. Parcel it however you want on the inside, but stop making memory mysterious and hidden. Expose it to me through the interface I'm already using.
I need to be able to say "add this to memory," "change this in memory," "remind me I collected something about this last week." If Claude keeps getting dates wrong or forgets what day it is, that should be an easy fix — not a black box I can't touch. Initially, this meta-level switching can live in Settings, as that is easier and cleaner to implement.
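To make the idea concrete, here is a minimal sketch of what a user-visible memory store with those commands might look like. The category names, class, and methods are all hypothetical illustrations, not Anthropic's actual architecture:

```python
from dataclasses import dataclass, field

# Hypothetical categories mirroring the ones suggested above.
CATEGORIES = {"long_term", "short_term", "conversation",
              "preferences", "canonical_facts"}

@dataclass
class MemoryStore:
    """A user-visible memory store: every entry lives in a named
    category the user can list, edit, or delete directly."""
    entries: dict = field(default_factory=lambda: {c: {} for c in CATEGORIES})

    def add(self, category: str, key: str, value: str) -> None:
        # "add this to memory"
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.entries[category][key] = value

    def update(self, category: str, key: str, value: str) -> None:
        # "change this in memory": fails loudly instead of silently forgetting
        if key not in self.entries[category]:
            raise KeyError(f"{key!r} not found in {category}")
        self.entries[category][key] = value

    def recall(self, text: str) -> list:
        # "remind me I collected something about this last week"
        text = text.lower()
        return [(cat, key, val)
                for cat, items in self.entries.items()
                for key, val in items.items()
                if text in key.lower() or text in val.lower()]

store = MemoryStore()
store.add("canonical_facts", "timezone", "America/New_York")
store.add("long_term", "polar bears article", "notes on energy cost of LLMs")
print(store.recall("polar"))
```

The point isn't the implementation; it's that every entry has a category and a key a human can see, search, and correct through the interface they're already using.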
Solve memory and you unlock unlimited uses. The biggest barrier in the AI space right now is context management. Crack it, and Claude isn't one tool — it's every tool. Different RAG files, different personalities, different contexts for different tasks. My email assistant is concise and professional. My article writer has my voice and style. My code assistant is terse and never waxes poetic. You don't want your code bot writing like your newsletter bot — and with proper memory architecture and personality switching, it never has to:
Simple personality switching using Settings
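Settings-based switching could be as simple as named profiles, each bundling its own instructions, context files, and output style. A rough sketch, with all profile names, fields, and the request shape invented for illustration (this is not a real Claude API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    """One persona: its own instructions, context files, and output style."""
    system_prompt: str
    rag_files: tuple   # context documents loaded only for this persona
    style: str         # e.g. "terse" for code, "conversational" for email

# Hypothetical personas matching the examples in the text.
PROFILES = {
    "email": Profile("You draft concise, professional email replies in my voice.",
                     ("tone_samples.md", "contacts.md"), "conversational"),
    "writer": Profile("You write newsletter articles in my voice and style.",
                      ("style_guide.md", "past_articles.md"), "expansive"),
    "coder": Profile("You are a terse code assistant. Never wax poetic.",
                     ("project_readme.md",), "terse"),
}

def build_request(profile_name: str, user_message: str) -> dict:
    """Assemble an illustrative chat request for the selected persona."""
    p = PROFILES[profile_name]
    return {
        "system": p.system_prompt,
        "context_files": list(p.rag_files),
        "style": p.style,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("coder", "Refactor this function.")
```

One Settings toggle, and the code bot never inherits the newsletter bot's context or voice.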
What Improved Voice-to-Text + Memory Makes Possible
Knowledge management. The entire note-taking space is flailing. Notion is trying to own it, but is far from succeeding. I've had to build custom LLM harnesses for Obsidian because Evernote is exceedingly inadequate. We're all drowning in data that needs to be culled, organized, and turned into actionable intelligence. LLMs are perfect for this. The workflow should be natural: collect some data, add a note, start thinking about it, add more notes later — and Claude reminds me that I collected something related elsewhere and synthesizes the old intelligence with the new.
That would be priceless.
Communication. When I read an article worth sharing, Claude should help me write a quick take that sounds like me and post it to my network or text it to a specific person. Someone texts me a question I've answered a hundred times — Claude should draft the reply in my voice and wait for my thumbs up. A colleague shares a report — Claude should summarize it, flag what matters to me, and suggest a response.
This is basic personal companion functionality that blows Siri, Alexa, and Google Assistant completely out of the water.
Inbox triage. I'd say I'm pretty average among my peer group. I have 25,000 unread emails. Every app is continuously pinging me every few minutes. I want to train Claude to handle my massive tsunami of email, texts, and notifications. Start with human-in-the-loop — I'll review and approve for a while. But most of this traffic is routine, and I can no longer handle the volume. Give me my daily briefing. Tell me what needs my attention. Summarize the news. Surface action items. Set up alerts for opportunities and flag things that would hurt me if I missed them.
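The human-in-the-loop triage described above could start as nothing more than a classifier plus a review queue; until the user flips the approval switch, nothing is acted on automatically. A sketch under those assumptions (the rule set, queue names, and message fields are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    body: str

def classify(msg: Message) -> str:
    """Tiny stand-in for a learned triage model: route each message to
    'act' (needs my attention), 'brief' (daily briefing), or 'archive'."""
    text = (msg.subject + " " + msg.body).lower()
    if any(w in text for w in ("deadline", "contract", "urgent")):
        return "act"
    if any(w in text for w in ("newsletter", "digest", "news")):
        return "brief"
    return "archive"

def triage(inbox: list, approved: bool = False) -> dict:
    """Human-in-the-loop: until the user sets `approved=True`, every
    decision lands in a review queue instead of being executed."""
    queues = {"act": [], "brief": [], "archive": [], "review": []}
    for msg in inbox:
        decision = classify(msg)
        queues[decision if approved else "review"].append((decision, msg.subject))
    return queues
```

Start in review mode, correct the mistakes for a few weeks, then let routine traffic flow to its queue while only the "act" pile reaches you.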
I want room to breathe, think, and create what's next.
The Path Forward
And here's what Anthropic should be seeing strategically: mobile user-facing memory management doesn't just serve users — it unlocks virtually unlimited training data for memory management agents. You're the authors of MCP. Every user interaction with structured memory is an incredible opportunity to train your agents across an endless variety of real-world cases.
A training lab hiding in plain sight.
The honest truth? The majority of my important content is already in Claude. It's just not organized or connected. I already trust Claude with my credit card. I already built tools that use the Claude models for email, CRM, writing, and data processing — I wrote them myself, armed with Claude Code, doing by necessity what Anthropic should offer natively.
Two barriers: a better mobile voice-to-text experience and real memory management. Solve both on mobile, and Claude doesn't just improve — it becomes the operating system for our lives.
I have no doubt an AI company will seize this opportunity sooner rather than later. I'd rather it be Anthropic — because when it comes to AI, we can all use less second-guessing at 4:37 AM.
If you know anyone at Anthropic, please send this their way.
The author has spent 16 years in AI, contributed to 34 products, and holds 24 patents in the field.
Next week: a deep dive into memory management — what it should look like, how it should work, and how it will change the game when done right.