OpenAI: Sam Altman is Gone. Now What?
My take on the future of the world's top AI company.
In a stunning piece of news this afternoon, Sam Altman has been ousted by the board as OpenAI’s CEO: https://www.nytimes.com/2023/11/17/technology/openai-sam-altman-ousted.html
We explore what this means for OpenAI based on the direction of the company up to this point, last week’s San Francisco DevDay talks, and its partnership with Microsoft.
Better, Cheaper, Faster?
In Sam’s last keynote as CEO, OpenAI announced some significant advances in its technology. The biggest surprise for me was that the advances arrived faster, and made the technology cheaper, than anyone had predicted:
We knew that the AI tech would be cheaper… it is 3x cheaper.
We knew there would be an allowance for a larger volume of input… it is 16x the previous input length – a 128K-token context window, about 300 pages, enough to fit the complete text of most books and websites.
We knew they would be adding some new input/output modalities… they added ALL of them, completing the magic trifecta of input/output for text, voice, and pictures.
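As a rough sanity check on that 300-page figure, here is some back-of-the-envelope token math. The 430 tokens-per-page figure is my own assumption (it varies a lot with layout and language); the 128K context window is the number announced at DevDay:

```python
# Back-of-the-envelope check of the "~300 pages" claim for GPT-4 Turbo.
CONTEXT_TOKENS = 128_000   # 128K-token context window announced at DevDay
TOKENS_PER_PAGE = 430      # assumed rough average for a page of English text
pages = CONTEXT_TOKENS / TOKENS_PER_PAGE
print(round(pages))        # lands right around the ~300-page claim
```

The same math also explains the "16x" figure: the original GPT-4 shipped with an 8K window, and 8K × 16 = 128K.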
The Mood All Changed?
This new iteration of OpenAI is a radical departure from the previous “play around and find out” “toy” model. As Eminem famously said in “Lose Yourself”: “I was playin' in the beginning, the mood all changed…” AI is entering the next phase of critical differentiation and aggressively “eating the world” in every modality: text, visuals, voice, location, relationships, and more. And the pace keeps increasing, because AI can leverage the internet content built up over the last 30 years.
To help AI do that, Sam announced a suite of brand-new tools and improvements.
Take just one example: the Code Interpreter tool that OpenAI has just released as part of its new Assistants API. This feature is incredible: given pretty nebulous user requirements, GPT-4 Turbo writes its own code and runs it. In the example below, the presenter uploads a spreadsheet of financial transactions and asks for a spending graph, and it spits out an aggregate bar chart ordered by day of the week. ChatGPT wrote all the math to do these computations, picked a chart type, and called the appropriate charting API, passing the computed data to it. Start at 30 minutes:
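To make the trick concrete, here is a hypothetical sketch of the kind of code Code Interpreter might generate behind the scenes for that demo. The column names and transaction data are invented; the point is the shape of the work the model does for you (parse dates, aggregate by day of week, keep calendar order, then chart):

```python
# Hypothetical sketch of the spending-by-day-of-week aggregation from the demo.
# The data and column names are invented for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "date": pd.to_datetime(["2023-11-06", "2023-11-07", "2023-11-07", "2023-11-10"]),
    "amount": [42.50, 19.99, 7.25, 88.00],
})

# Aggregate spend per day of week, in calendar order rather than alphabetical.
transactions["day"] = transactions["date"].dt.day_name()
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
by_day = (transactions.groupby("day")["amount"]
          .sum()
          .reindex(order, fill_value=0.0))

print(by_day)
# by_day.plot.bar()  # the charting call the generated code would make
```

None of this is hard for a developer, but the model wrote the equivalent from a one-sentence natural-language request, which is the impressive part.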
Anyone who has tried to make a computer create a chart knows how difficult this is. Or used to be. Plus, OpenAI now has threads and completely stateful runs with multiple function calls. This makes Alexa and Siri look like the ’90s student projects they were.
#1 and #2 Problems, Solved? Private Secure Data Handling and Up-to-Date Data
OpenAI has correctly identified a key barrier to entry for businesses: the need for completely private, secure data handling. Sam announced that OpenAI is addressing this need with Enterprise features, custom models, and direct partnerships, all aimed at securely handling private data and providing a potentially significant competitive advantage: complete BPR (Business Process Re-engineering) with AI LLMs at the core of the enterprise.
They also addressed copyright protections by reducing liability with the Copyright Shield feature, under which, in Sam’s words, OpenAI will help defend you in court if you are accused of copyright violations.
Plus, anyone can now easily upload private PDF documents that are (according to the company, so don’t take my word for it, please!) guaranteed to stay private and will not be used to train the ChatGPT model.
And BTW, it’s now almost completely up to date (knowledge cutoff: April 2023), so no more September 2021 restriction. Coming soon: near-real-time data ingestion, where ChatGPT can read and interpret the news.
GPTs Apps — a Giant Step Toward Independent AI Agents?
One of Sam’s biggest feature announcements was GPTs, basically mini-assistant chat apps, which are, according to his announcement, “precursors of independent AI agents.” The GPTs can be configured with a stateful personality, as well as unique voices with very natural inflections, practically indistinguishable from real people.
And this is just for starters.
GPTs can now be configured with proprietary data, so the AI will draw on specialized knowledge to make decisions, give answers, and take action. That’s right: GPTs can now be easily configured to take lots of different actions on behalf of the user. In the demos, we saw everything from adding a calendar entry and reading and writing to-dos in Asana to deciding to honk a horn based on a comment: “Someone just cut me off.” All in a stateful conversational space with full auditory, visual, and auditory-digital (text) context awareness. This solves the two biggest problems with Alexa and Siri, which have the attention span of an ADHD poodle on crack. (See the “Designing Digital Assistants” chapter of my last book, Smashing Book 6.)
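For the curious, here is a hedged sketch of how an action like the horn honk could be declared using OpenAI’s function-calling tool format. The honk_horn function, its parameter, and the sample model reply are all hypothetical; the outer {"type": "function", …} shape follows the tool-definition format OpenAI documents:

```python
# Hypothetical "action" declaration in OpenAI's function-calling tool format.
# honk_horn and its schema are invented for illustration.
import json

honk_tool = {
    "type": "function",
    "function": {
        "name": "honk_horn",
        "description": "Honk the car horn, e.g. when the driver says someone cut them off.",
        "parameters": {
            "type": "object",
            "properties": {
                "duration_ms": {
                    "type": "integer",
                    "description": "How long to honk, in milliseconds.",
                },
            },
            "required": ["duration_ms"],
        },
    },
}

# The model never runs the function itself; it returns a JSON arguments blob
# that your own code parses and dispatches to real hardware or a real API.
model_reply = '{"duration_ms": 500}'  # what a model tool call might contain
args = json.loads(model_reply)
print(args["duration_ms"])
```

This division of labor is the safety-relevant design choice: the model only proposes a structured call, and your code decides whether and how to execute it.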
Code Talks, BS Walks?
Although design was not mentioned directly in the keynote (besides Sam coining the phrase “UI harmony with AI,” which I quite liked, actually), UX design, use-case selection, and user empathy were the consistent themes running through the entire developer presentation. This UX-first theme was highly reminiscent of the Apple developer conferences run by Steve Jobs.
With one caveat: a focus on execution. Sam’s vision for OpenAI was of a powerful, super-cool tool for all kinds of creators.
As part of the demo, we saw that ChatGPT, now armed with multiple modalities, can handle incredibly diverse use cases:
Custom GPT to answer technical questions on a set of documents, with a Salty Pirate personality, unique voice, logo, name, and Disney-level “Pirates of the Caribbean” quality. All with a few voice prompts.
“Check the inside of my fridge and tell me what I’m missing for making tacos,” and have the AI read the list of ingredients back to you and order them. And BTW, AI knows you can make salsa from tomatoes, onions, chile, and cilantro.
Low-code embed of an interactive map into the UI, which zooms based on a query and can create map pins that correspond to the contextual text output.
Create a poster for an event with Canva based on a text query, and keep chatting to get more iterations of the design (a use case we have often mentioned previously: https://www.uxforai.com/p/how-to-create-figma-tables-for-ai-designs-in-less-than-3-minutes ).
Third-party: Humane’s “AI Pin” (https://hu.ma.ne/), which observes the world, comments on things in context, and recognizes a variety of hand gestures, à la William Gibson’s “The Peripheral.” (NOTE: According to this article, Sam is a majority shareholder in Humane.)
Third-party: Instacart AI Assistant helps you craft recipes to fit your nutritional and taste requirements and order the ingredients.
Where do we go from here?
Regardless of where the leadership changes at OpenAI lead, one thing is clear: the last year of OpenAI’s operations has left behind a world that is irrevocably changed. What OpenAI has started is truly nothing short of a technical and societal revolution.
I’ve been in AI for 20 years, and I cannot believe OpenAI made this much progress in a single year. This new iteration of OpenAI has me “trembling with greed” (to use Warren Buffett’s expression) – in my case, trembling with excitement for all of the amazing things we can help usher into the world… But I am also quaking in my shoes because I am honestly terrified of the unforeseen ways this tech can be misused. If, as presenter Michelle Pokrass said, “most great products start as toys,” then OpenAI is the equivalent of a toy nuclear reactor… that has now grown up and become a smart nuclear bomb.
Sam’s vision was focused on using AI for an incredible level of individual empowerment and agency at a scale never seen before. As Sam said, he was building “superpowers on demand,” and I fervently hope that the new leadership will continue developing the company with a similar focus.
Because, in Sam’s words, “it’s all going to look very quaint compared to what’s coming.”
Are you Ready?
Which brings us to the most important topic: you.
Do you know how to watch out for the pitfalls that might sink your next AI project?
How about which key UX for AI design patterns and data visualizations to use?
Have you made a GPT bot?
What are you waiting for?
UX for AI: A Framework for Product Design
3-day virtual workshop
December 6–8, 2023, 9:30am–11:30am PT
IMAGE SOURCE: UX STRAT UX for AI Workshop, Boulder, CO 091123
Discover the path to AI project success in this practical workshop. Drawing insights from 32 UX for AI projects, Greg will equip you with advanced UX Design skills to lead your next AI project:
Identify the right AI use case
Deploy effective research techniques for AI applications
Create use-case storyboards
Leverage the “Value Matrix” to train AI models with human values
Harness established UX for AI design patterns like Digital Twin and Chat
Deploy advanced Data Visualizations to communicate AI predictions and insights
Structure your team for optimal project success
Get the practical tools you need to succeed in leading your next UX for AI project.
And because you are a loyal reader of this Newsletter, you get $75 off each ticket. Just use the code:
Did I mention your workshop ticket should be fully reimbursable? Yes, in most cases it is; please check with your HR department.
See you soon,
Greg (with Daria’s creative editing)
What do you want to read about next?