
Navigating the Abyss: The Dark Side of Synthetic AI User Research Tools

In this series of 5 articles, we’ll review the advances AI is making in critical areas of the UX industry and make educated guesses about what’s right around the corner.  Part 1: The Bizarre: these UX tools should be abandoned on the trash heap of history!


This series of articles was inspired by the commendable AI for UX fireside chat, in which Jakob Nielsen called AI “the forklift for the mind” and said that in his studies, people who used AI in their work became 40% more productive. Jakob Nielsen’s discussion with Kate Moran echoes many of the themes in this article series, including our industry’s recent bizarre obsession with replacing user research with talking to robots, which we explore further below. If you have an hour to invest, I highly recommend watching this insightful and thoroughly entertaining interview in its entirety: https://youtu.be/T2NcRBlh3Dc?si=eumFWL_M4WBH9cbX

You have surely heard of “The Good, the Bad and the Ugly” organizing principle, of Clint Eastwood fame. In this series of articles, we will attempt to classify AI tools for the UX industry into 5 sections, according to the use cases AI is addressing:

  1. The Bizarre: these UX tools should be abandoned on the trash heap of history

  2. Use with Caution: AI is useful in these cases, but tread with care

  3. Embrace Totally: these offer fantastic productivity gains; you should already be doing this with AI

  4. Rapidly Evolving: expect rapid, industry-shaking changes shortly

  5. Future: what’s right around the corner

Let’s get into it now with our first installment:

The Bizarre: these UX tools should be abandoned on the trash heap of history

AI tools for UX that cover the following use case seem particularly egregious:

As a UXer, I hate talking to my customers. I already know what they want (even when they tell me they don't want what I designed and they couldn't figure out how it works.) I want AI to quickly give me cherry-picked "feedback" to bring to my PM/devs so they'll build what I want without pushing back.

We are witnessing a veritable barrage of AI tools and research papers that impersonate users and generally aim to eliminate the need to talk to anyone alive who knows anything about how the product will be used. This category of AI “tools” includes companies like Synthetic Users (https://www.syntheticusers.com/), who proudly advertise:

“User research. Without the Users.”

There are also numerous papers, like the one on the supposed “success” of AI-based card sorting: https://www.linkedin.com/feed/update/urn:li:activity:7159259196268404736/ 

I guess the desire to hide away and only talk to robots should not be surprising in the wake of the COVID pandemic and the ongoing epidemic of loneliness and isolation we are experiencing now as a species. However,

If you are only talking to robots while designing a system, you will end up designing a system for robots to use.

Or, as Daniel Clamp famously said in the movie Gremlins 2:

“It wasn't a place for people anyway. It was a place for things. You make a place for things... things come.”

The whole point of talking to people in the first place is to figure out what PEOPLE would find useful and joyful. What, for the love of the Matrix, is the point of talking to a robot to do your research instead of a real person? As we wrote about here, doing "research" against AI-generated personas is akin to sending a robot on vacation instead of going yourself:  https://www.uxforai.com/p/ai-and-ux-research

Sending a robot to the Bahamas may be cheaper and faster than going yourself – this is true! But what is the point of doing so exactly?

The idea of ongoing heavy VC investment into researching people’s needs against AI models as a “cure” for doing research with real people crosses my personal idiocy threshold.

When I lead a UX team, I start by creating a panel of 30-50 near-ideal customers whom we interview monthly, so that a minimum of 6-8 customer conversations happen every week, set up on an ongoing and largely automated basis. Over time, the UX team develops a close personal relationship with these customers, who help us build the product that fits their needs. Add the close personal relationships I encourage the UX team to develop with account executives, support, and other front-line customer-facing internal folks, and you have a comprehensive 360-degree view of your product. 

If the dev team in India or Europe has an unexpected urgent question, the UX team can brainstorm and mock up a few ideas in the morning, then test the alternatives with 2-3 customers and 4-6 internal folks during the day, and deliver a complete, thoroughly user-tested solution that very afternoon. I think that should be your goal as a UX professional – a lean, direct path of communication with your customers and stakeholders that you utilize and renew on an ongoing basis. 

Your customers should be at the center of your innovation process.

You don’t have to be an expert interviewer like Oprah, but talking to users with empathy, compassion, and a certain creative service mindset is a staple of our industry. It is what makes the best use of human beings. This is the fine line that separates us from machines. Please stop trying to cross that line. Let the machines handle what they do best, and let humans handle empathy, compassion, and creative listening. 

Now, there is a narrow band of designers who do not need to talk to customers: visual design people, brand specialists, DesignOps pros, Design System tooling specialists, and the like. There is no need for these folks to talk to customers in order to bring about successful product outcomes if they don’t want to (though it’s always a good idea!). These professionals are not whom we are talking about here. We are talking about generalists: UX designers and researchers whose job is to talk to customers but who, for some reason, dislike having actual conversations with them.

If you are a UX Designer who doesn’t like talking to customers, consider that you might be in the wrong line of work. Rather than getting deeper into a hole, I suggest getting immediate help from a mentor, a friend, or a professional.

Talking to AI instead of people is NOT the way to design products for humans. 

Using AI for user research is not only unproductive but actively harmful to the experience, as this paper from the Baymard Institute demonstrates: Testing ChatGPT-4 for ‘UX Audits’ Shows an 80% Error Rate & 14–26% Discoverability Rate: https://baymard.com/blog/gpt-ux-audit 

Jakob Nielsen, one of the leading UXers tackling the topic of AI tooling, had this to say in a recent blog post:

“AI cannot substitute for user research with real users. It can give you plentiful ideas for issues to look for in a usability study, but it can’t predict what your customers will actually do. For better or worse, humans are very unpredictable beings. Even more important, these AI tools are currently mimicking their understanding of ‘typical’ human behaviors. Your specific user groups likely have very different backgrounds, needs, and motivations than the ‘typical human’ — that’s the whole reason we conduct research with our own users.”

Source: Jakob Nielsen “Getting Started with AI for UX” https://jakobnielsenphd.substack.com/p/get-started-ai-for-ux

And as Pavel Samsonov so eloquently says in his recent UX Collective article: 

“There is one more very important difference between an LLM and a customer: The LLM can’t buy your product.”


Source: Pavel Samsonov “No, AI user research is not “better than nothing”—it’s much worse” https://uxdesign.cc/no-ai-user-research-is-not-better-than-nothing-its-much-worse-5add678ab9e7 

As far as I’m concerned, the line of tool development that allows people to “escape” talking to other humans for the purposes of user research should be completely abandoned. No further investment should be made into this bizarre area of inquiry, and we will not further speak of it here.

Tune in next week for our second installment of “AI Tools for UX, Part 2: Use with Caution,” where we’ll discuss the special use cases where AI can be useful but needs to be tempered with special care.

Did you talk to your customers today? If not, why?

Do you or someone you know use AI tools instead of talking with customers? Please let us know why in the comments. We promise not to flame you -- we are genuinely curious. 

Greg Nudelman with Daria Kempka (Contributing Editor)  

P.S. Want some real humans to help you succeed in your next AI project? Don’t miss these 5 unique learning opportunities:

3 Workshops

Workshop: AI in Search UX: A Framework for Product Design (on sale now)
04/22/24 Haystack Search Conference, Charlottesville, Virginia

2 Presentations

Plenary: How to F*ck up Your AI Project
03/02/24 World IA Day Conference Richmond (online)

Panel: Writing in the Age of AI (get your ticket)
04/03/24 UX Salon WORDS 2024 (online)

P.P.S. If none of these events are in your area (or dates do not work for you), please consider sharing them with your colleagues and friends. If you share using the Referral Program link below, I will know that I can send you a free copy of my most popular book, $1 Prototype, as my way of saying “thank you!”

FAQs

How do AI tools for UX differ from traditional user research methods, and what are the potential benefits of incorporating AI into UX processes? AI tools for UX introduce automation and predictive capabilities that traditional methods lack. Unlike conventional user research, AI can analyze vast amounts of data rapidly, offering insights into user behavior patterns and preferences. Incorporating AI into UX processes can lead to improved efficiency, personalized experiences, and better-informed design decisions, ultimately enhancing product usability and satisfaction. However, none of this negates the need to talk to real people first, second, and last. Synthetic personas are not a replacement for empathy, compassion, and creative listening.

Are there any successful case studies or examples where AI-driven user research has actually improved product outcomes despite the concerns raised in the article? There are numerous case studies on both sides of the debate. One thing is clear: companies using AI demonstrate the efficacy of AI-driven experience design in enhancing product outcomes. For instance, companies like Netflix and Spotify leverage AI algorithms to analyze user interactions and recommend personalized content, leading to increased user engagement and satisfaction. Such success stories highlight the potential of AI to help optimize UX design processes and deliver superior user experiences.

What are some alternative approaches or strategies for UX professionals who may struggle with or dislike engaging in direct customer conversations? Although talking to people directly is one of the best ways to get value from user research, UX professionals who prefer indirect approaches to user research can explore alternative methods for understanding customer needs and behaviors. These may include observational studies, usability testing, analytics data analysis, A/B testing, and watching remote user testing done by someone else. By diversifying research methods, UX professionals can gain valuable insights into user preferences and behaviors without relying solely on direct customer conversations.

Are there any ethical considerations or guidelines for using AI in UX research, especially in terms of data privacy and algorithm biases? Ethical considerations are paramount when integrating AI into UX research. Guidelines emphasize the importance of informed consent, data privacy protection, transparency in algorithmic decision-making, and mitigation of biases in AI models. Adhering to ethical principles ensures that AI-driven UX research respects users' rights, maintains trust, and minimizes potential harms associated with data misuse or algorithmic discrimination. However, one can avoid all this toil and trouble and just talk to real people instead!

How do industry experts and thought leaders view the future trajectory of AI in UX, particularly in terms of balancing the use of AI tools with maintaining meaningful human connections in the design process? Industry experts foresee AI playing an increasingly significant role in UX design, offering opportunities for automation, personalization, and predictive analytics. However, they emphasize the importance of maintaining meaningful human connections in the design process. Balancing AI tools with human-centered design principles is crucial to ensure that user experiences remain empathetic, inclusive, and aligned with user needs and values. By integrating AI strategically while preserving human empathy and creativity, UX professionals can leverage the best of both worlds to deliver exceptional user experiences.
