The Rise of AI-First Products

If Mobile-First thinking has revolutionized the UX Design industry, AI-First is promising to be an even more spectacular kick in the pants.

For likely the first time in history, we can build products and services truly centered around functional AI. The next wave of AI products promises to combine LLMs with mobile, audio, video, vision, movement, and much more. This is giving rise to a new class of products that can be called “AI-First.”

And many of the design “rules” are going out the window.

As a concept, AI-First Design was introduced to me by Itai Kranz, the Head of UX at Sumo Logic, who wrote a nice article on it: "AI-First Product Design." One of the earliest mentions of the concept online appears to be Masha Krol’s Medium.com article “AI-First: Exploring a new era in design,” published July 13, 2017.

However, AI-First is not exclusively the domain of designers. As Neeraj Kumar helpfully explains in his LinkedIn article “The AI First Approach: A Blueprint for Entrepreneurs”:

In an AI-first company, AI is not an afterthought or a tool to be tacked on later for incremental efficiency gains. Instead, it is an integral part of the company's DNA, influencing every decision, from the problem the company chooses to solve, the product it builds, to the way it interacts with its customers.

Well said.

Co-pilot is not an AI-First Design

The first wave of LLM-enabled products has largely been add-ons, what are now called “co-pilots.” We explored various co-pilot design patterns at length and even sketched a few that have not yet been made into products on our blog, UXforAI.com, in "How to Design UX for AI: The Bookending Method in Action." Essentially, the idea behind a co-pilot is to retrofit an existing product with a side panel that works with an LLM engine and the information on the main screen to produce some acceleration or insight. A nice recent example of this is the Amazon Q integration with QuickSight.

The Amazon Q co-pilot panel answers natural-language questions, explains dashboards, creates themed reports, and more. While this is pretty impressive and useful, it is not an AI-First approach. It is a way to retrofit an existing product (QuickSight) with some natural-language-processing accelerators.
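To make the pattern concrete, here is a minimal sketch in Python (with hypothetical names, not Amazon Q’s actual API) of what a co-pilot side panel does behind the scenes: it bundles the context already visible on the main screen with the user’s question, sends both to an LLM, and renders the answer back in the panel.

```python
from dataclasses import dataclass

@dataclass
class DashboardContext:
    """Whatever the main screen already knows: charts, filters, a data summary."""
    title: str
    visible_metrics: dict[str, float]
    active_filters: dict[str, str]

def build_copilot_prompt(ctx: DashboardContext, question: str) -> str:
    """Bundle the main-screen context with the user's question for the LLM."""
    return (
        f"You are a co-pilot panel for the dashboard '{ctx.title}'.\n"
        f"Visible metrics: {ctx.visible_metrics}\n"
        f"Active filters: {ctx.active_filters}\n"
        f"User question: {question}\n"
        "Answer using only the dashboard context above."
    )

def copilot_answer(ctx: DashboardContext, question: str, call_llm) -> str:
    """call_llm is a stand-in for whatever LLM API the host product actually uses."""
    return call_llm(build_copilot_prompt(ctx, question))

# Usage sketch: the panel only reads what the screen already shows.
# answer = copilot_answer(current_dashboard, "Why did sales dip in March?", call_llm=my_llm)
```

Note that the co-pilot never changes the host product’s architecture; it just reads the main screen and bolts an LLM conversation onto the side. That is exactly why it is a retrofit rather than an AI-First design.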

We tried AI-First with Alexa

We’ve seen a few attempts at AI-First products in the past: Amazon Echo with Alexa, for example. However, Alexa suffered and continues to suffer from a lack of context, as I wrote about in my 5th book, Smashing Book 6.

Echo with Alexa also lacks access to essential secure services that would allow the product to actually “do stuff” outside of Amazon’s own ecosystem. If you ask Alexa to add your dog’s food to your Amazon shopping cart, it will do it quite well. However, don’t expect Alexa to manage ordering a pizza, much less execute a complex multi-step flow like booking a trip. In fact, any multi-step experience with Alexa is borderline excruciating.

Alexa “Skills” (Amazon’s name for voice-activated apps) are the platform’s worst failure, in my opinion. Greg wrote extensively about this previously (https://www.smashingmagazine.com/2018/09/smashing-book-6-release/), but it comes down to lengthy invocation utterances, the inability to pass context, clunky entry and exit strategies, and the inability to show system state (are you inside a Skill or inside Alexa?). And the worst part is that you have to say everything very, very quickly and concisely, or else Alexa’s minuscule patience will time out, and you’ll have to start all over again.

I once did a pilot project spike for GE where I created an Alexa Skill called Corrosion Manager to report on the factory assets that were about to rust out and thus posed an increased risk. (See our UXforAI.com article, "Essential UX for AI Techniques: Vision Prototype.") The easiest Alexa Skill invocation command we could come up with was something like: “Alexa, ask Corrosion Manager if I have any at-risk assets in the Condensing Processor section in the Northeastern ACME plant.” (Try to say that five times fast. Before Alexa times out. Before your morning coffee. I can tell you my CPO at the time was not impressed when he tried it.)
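To see why the utterance gets that long, here is a rough, hypothetical sketch (a Python dict that mirrors the general shape of an Alexa Skills Kit language model; the names are illustrative, not the real GE skill) of the intent-and-slots definition such a Skill needs. The invocation name, the intent phrasing, and every slot value all have to be spoken in one breath:

```python
# Hypothetical interaction model for a "Corrosion Manager" Skill.
# The structure mirrors an Alexa Skills Kit language model; names are illustrative.
corrosion_manager_model = {
    "invocationName": "corrosion manager",
    "intents": [
        {
            "name": "GetAtRiskAssetsIntent",
            "slots": [
                {"name": "section", "type": "PLANT_SECTION"},
                {"name": "plant", "type": "PLANT_NAME"},
            ],
            # The user must speak the wake word, the invocation name, the intent
            # phrasing, AND every slot value before Alexa's patience times out:
            # "Alexa, ask corrosion manager if I have any at-risk assets
            #  in the Condensing Processor section in the Northeastern ACME plant."
            "samples": [
                "if I have any at risk assets in the {section} section in the {plant} plant",
            ],
        }
    ],
}
```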

Alexa Skills don’t just fail the smell test for serious SaaS applications. One memorable experience came from trying to introduce a nice middle-aged religious couple, friends of mine, to the Bible Skill on Alexa. Let’s just say they did not have the prerequisite patience of a saint and, therefore, failed to successfully invoke even a single Bible Skill task. (They eventually forgave me for introducing a satanic device into their home. Yes, we are still friends. Barely.)

Humane AI Pin

The Humane AI Pin was arguably the first commercially available AI-First product of the new generation. We already discussed the issues with the AI Pin at length in the UXforAI.com article "Apps are Dead." Among the problems were awkward I/O and controls. While it seemed able to mimic Alexa’s functions on the go, it was hard to see people doing real work on this device, even something relatively simple like ordering a pizza. Booking a trip was definitely out of the question. However, this device helped show that the new paradigm is about the unabashed and uncompromising death of the app store.

We wrote about that extensively in a past issue of our column, “Apps are Dead,” here: https://www.uxforai.com/p/apps-are-dead (It’s a quick read, and I highly recommend a refresher, as it will help put this next product in the proper perspective.)

rabbit r1

This new AI-First product, the r1 from rabbit, launched just three short days ago, on January 9th, 2024. The r1 is part of the next wave of AI products promising to combine LLMs with a mobile form factor, voice, and vision capabilities. The r1 looks like a smaller version of a cell phone with a touch screen and a scroll wheel, somewhat reminiscent of late Crackberry designs. (Have you seen the movie BlackBerry? It’s excellent. A must-watch for all mobile design nerds.)

The most prominent feature of the r1 device is what it does NOT have: apps.

All of the usual apps are available instead as permanent integrations embedded behind the scenes in the ChatGPT voice-assistant interaction: ask r1 for a pizza, for example, and it orders one end to end by voice, with no pizza app anywhere in the flow.

Rules for Rule-Breakers

AI-First is hard.

However, while there is still much to learn, we can already deduce a few rules for this new AI-First design paradigm. Like the pirate code from Jack Sparrow’s famous adventures,

“The code is more what you'd call 'guidelines' than actual rules.”

Captain Barbossa

But here’s what we’ve got to go on so far:

  1. Smooth, simple, seamless: The AI-First experience must feel much simpler and smoother than the current app paradigm. This is where r1 takes a hit by requiring another device (a computer with a keyboard and a large screen) to set up all the app integrations. We already do everything on our phones. Not being able to do everything on the AI-First device is a step backward and just will not work. The sub-second LLM response speed is nice, though.

  2. Personalization: The AI assistant must learn my preferences quickly. It must know whether I like pepperoni, want vegan cheese, or need a gluten-free crust. It should know where I live and what I prefer at what hour of the day, above and beyond the app preferences. For example, the Amazon app keeps trying to make me return my packages at a Kohl’s two towns over when I have a UPS Store next door. This nonsense simply must cease.

  3. Data privacy: With this intimate knowledge of my life across all of the apps, I must know that data about my personal habits will not be used to enslave me and sell me down the river. AI is powerful enough that I would pay extra to have my interests served first, not to be turned into another piece of rabid rabbit robot food.

  4. Use the existing phone, watch, earbuds, glasses, tablet, headphones, etc.: Please, please, please – I mean it! Use the same device if possible. I already have too many devices. There is no new interaction in the r1 to warrant me owning yet another device. None. I don’t need a smaller screen; that’s a bad idea. I already have two cameras on my phone, and I’m used to that, so there is no need to go back to one camera. That’s another bad idea.

  5. Security of transactions: We are going to be doing everything with our AI-First device, so use established high-security methods like facial recognition and fingerprints. I like what r1 is doing with the transaction confirmation dialog, but it needs to be more secure, like the double-click-plus-facial-recognition confirmation the Apple iPhone provides.

  6. Non-voice is more important than voice: Both the r1 and the AI Pin are missing the most important lesson from Mobile-First. Voice is not going to be the primary UI; voice control is just too public. Imagine saying your password out loud, like in Star Trek! (That’s “O-U-C-H,” Capt’n.) Mobile devices are used in both quiet (doctor’s offices, meetings) and noisy (metro, bus, cafe) environments. Text input via keyboard is a primary, not secondary, use case.

  7. Avoid cutesy form factors: Be friendly without being cloying. You don’t need to invoke the Adventures of Edward Tulane – that story is creepy enough to be left alone! Avoid bright colors, especially orange (even if the CEO really seems to like it). Designers, please try to talk your executives out of making crazy color choices. Orange is a warning. Or a rescue craft. Or a child’s toy. This thing is none of those.

Again,

AI-First is hard. These products are still baby steps. Remember that the first iPhone did not have cut and paste. And the first Facebook “app” was actually just a website and only allowed reading and liking of messages. It took over a year for the first true mobile Facebook app to be ready.

Baby steps.

Time will, of course, be as unkind as it can possibly be to any new product named “rabbit” with hardware designed by a company called “Teenage Engineering” (if the disabled comments on the YouTube launch video, only two days out, are any guide…). However, this author is of the opinion that the r1 is a very clever ChatGPT wrapper built on top of the usual phone-OS-plus-apps play, which has remained basically unchanged since the first release of the iPhone in 2007, almost 17 years ago!

Apps Must Die

Recall that we recently discussed how InVision failed to implement the key strategy for the age of AI: a “simple end-to-end experience that worked reliably and consistently, together with simple pricing.” (See "InVision Shutdown: Lessons from a (Still) Angry UX Designer" on UXforAI.com.) AI-First products like the rabbit r1 are early attempts at this 3S: a Smooth, Simple, Seamless experience.

One thing that rabbit r1 emphatically demonstrates is that under the pressure of LLMs, apps must die.

Think of your phone now not as a collection of Mobile-First UI designs but as a platform for AI-First experiences.

The APIs and services that apps deliver will, of course, remain alive and well. What must be allowed to pass away, however, is the need for the customer to go in and out of a specific UI silo (or a voice silo, if we are talking about Alexa Skills).

With AI-First design, we simply ask the assistant for what we want, as frictionlessly as possible, and the assistant, armed with deep knowledge of our preferences and inner desires, goes into whatever specific services it needs to accomplish the task. LLMs like ChatGPT are making this shift away from apps not just possible but imperative.
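As a thought experiment, here is a minimal sketch in Python (entirely hypothetical names and services, not rabbit’s actual implementation) of what that structure looks like: the assistant maps a plain-language request onto a registered service integration and carries the user’s stored preferences along, with no app UI anywhere in the loop.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UserProfile:
    """Preferences the assistant has learned -- the 'deep knowledge' above."""
    home_address: str
    dietary: list[str] = field(default_factory=list)  # e.g. ["vegan cheese", "gluten-free crust"]
    payment_confirmed: bool = False                    # e.g. confirmed via face or fingerprint

# Registry of behind-the-scenes service integrations (no app UI silos).
# Each entry is just a callable; a real product would wrap actual service APIs.
INTEGRATIONS: dict[str, Callable[..., str]] = {}

def integration(name: str):
    def register(fn):
        INTEGRATIONS[name] = fn
        return fn
    return register

@integration("order_pizza")
def order_pizza(profile: UserProfile, size: str = "large") -> str:
    """Hypothetical pizza-service integration that respects stored preferences."""
    if not profile.payment_confirmed:
        return "Please confirm the payment (face or fingerprint) to place the order."
    toppings = ", ".join(profile.dietary) or "pepperoni"
    return f"Ordered a {size} pizza ({toppings}) delivered to {profile.home_address}."

def assistant(request: str, profile: UserProfile) -> str:
    """Stand-in for the LLM step: map a natural-language request to an integration."""
    if "pizza" in request.lower():
        return INTEGRATIONS["order_pizza"](profile)
    return "Sorry, I don't have an integration for that yet."

# Usage: the customer never opens a pizza app; preferences ride along automatically.
me = UserProfile(home_address="123 Main St", dietary=["vegan cheese"], payment_confirmed=True)
print(assistant("Get me a pizza for dinner", me))
```

The point of the sketch is the shape, not the code: the “apps” survive only as callables behind the assistant, and the customer never sees them.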

We see the AI-First Design movement quickly becoming the avalanche that will sweep away the outdated siloed app environments in favor of 3S: Smooth, Simple, Seamless experiences that bring together various app capabilities and content under the umbrella of an AI-First approach.

So, enough talk! Go forth and design some cool AI-First sh*t.

We can’t wait to see it!

Greg & Daria
