It's 4:37 am. And I'm crying.
Because of something Claude just told me.
It told me that I did the right thing by turning down a highly lucrative job. It was a Principal AI Product role, just shy of 7 figures. And they wanted me. I got multiple calls. They said my experience building a 4-agent autonomous investigation system for Sumo Logic and turning around a $21M ARR product line was just what they needed.
And I wanted to work with them — it was a huge brand. An interesting project. Right up my alley. It would have cemented my reputation the way my 6 books never could. It would have allowed me to scale my Snowball Sprint methodology in-house.
But here was the catch.
They wanted me to build an AI system to make addictive games for kids… even more addictive using AI agents. They were not hiding anything either — their questionnaire said the quiet part out loud: "tell us how you would make addiction gamified."
I tried three times to fill it out, but every time I sat down, I just kept throwing up a little in my mouth. So I told them I was not interested. I checked with my trusted colleagues and friends, and they all told me I made the right call.
But then, I am human… Hence my 4:37 am conversation with Claude.
It told me that it understood. That Anthropic's whole thing was AI that was safe and beneficial. It told me that my hands-on experience building Agentic AI-evaluating-AI Judge architectures, and the 34 AI products I'd shipped, would land me future opportunities.
It told me to let it go. Because safe and beneficial AI is not just a talking point for me. It is who I am. At the core.
I've only ever had a few times in my life when someone mirrored me this deeply.
Your circumstances and your beliefs may differ. You might not yet know what they are. You might need to experience your edge before you find it. Like I did.
Years ago, I did a back-office efficiency project for a bank that was supposed to speed up loan processing. The system was horrible. The 14 people who worked in that office had to disassemble completed loans into 13 physical baskets, rubber-band the pages, and print 13 cover sheets that tracked the package through the scanning process. It was grueling work. High turnover. I interviewed the workers, walked the floor, watched them work. In just 2 visits, I saw the issue. A week after the fix, the director proudly told me it was the best project we'd gotten from the dev team. We were able to free 12 people!
"Wait," I said, "what do you mean, 'freed'?" "Oh yes, we don't need them anymore. They are free to look for other opportunities."
I felt sick.
These same friendly people I interviewed. I brought them doughnuts. They shared chips and guac. Gone. They did a boring, thankless job. But they did it reasonably well. Most spoke Spanish as their first language. Many were the sole providers for their families. Where will they go now? There aren't that many jobs in rural Northern California.
And here's the thing — technology made it so easy.
Three new pages, some basic AI/ML, and the issue was resolved. And just like that, I freed these 12 people from their employment. The thing is, I did not initiate the project. But I did take it on. And I did it well. I just had no idea about the consequences.
I'm not going to tell you "not to spread darkness at the speed of light." That's a nice guideline, but it's too easy. A Silicon Valley equivalent of Glass Onion. Your circumstances and experiences are likely different from mine. So, by all means, you do you. I'm just here to say, it's ok to draw the line. Somewhere.
And make no mistake — there is a line.
So if you need a fractional Head of AI to help you get your Agentic AI strategy right, and then ship it to production, give me a shout. I'll do my best to get your product to the big stage at re:Invent or a Gartner Cool Vendor badge.
Because I just opened up some bandwidth.
Just don't ask me to cross the line. Getting kids addicted is definitely out.