

AI in UX Research: Benefit or Detriment? [Expert Roundup]

Marek Strba
28.08.2023
Welcome to the first part of our deep dive into AI in UX research. In this episode, we ask UX experts whether the rise of AI is a benefit or a detriment to user research.

In the first 4 episodes of the AI in UX research series, we will be asking industry experts questions about the AI-UX research relationship. Make sure not to miss the following episodes and the final report from our AI in UX research survey.


In this first episode, we will look at the answers to the question: 

“Overall do you think the rise of AI use is a benefit or a detriment to UX research? Please explain your stance.”

Before we get into the answers, we first need to explain why we decided to ask this question. It’s no secret that AI has been one of the biggest buzzwords in the whole IT industry for some time. In the UX field, it has been used mainly for the analysis of results (for example transcripts, translations, sentiment analysis, etc.) or to generate and estimate more complex outputs such as saliency maps or heat maps. Some of these approaches can be deemed more “expert” level.

However, with the rise of ChatGPT and other similar language models, the use of AI in UX research has become more approachable. We wanted to learn more about how this “reputation change” of AI in UX, from an expert tool to more day-to-day use, will impact the state of UX research. We had some opinions ourselves, but we wanted to prepare a real community-wide dive and get industry experts’ opinions for you.

In the next paragraphs, we will list the answers we managed to gather and at the end, we will let you in on our stance as well.


Debbie Levitt, MBA

Lead UX Research and CXO at Delta CX. You can find Debbie on her LinkedIn or YouTube channel.

I have two opinions here. One is: do I think AI will be a benefit in the future? I do. I think eventually AI will be good at some things that researchers and our teammates wish would take less time, like some of the research analysis and possibly some of the synthesis.

But for my second opinion, I would say where machine learning is now, it is not a replacement for our work. And in that sense, it’s become a detriment because you have people who believe that we as individuals or our work as Researchers can be replaced by this machine. So in that sense, it’s definitely working against us. And I wonder if it has contributed to some of the layoffs that we’ve seen in 2023. 

There are so many people giving messages that research needs to be fast and cheap and just have AI take care of these things for you. But time after time, whenever somebody actually uses AI for these purposes and shares that in an article or a LinkedIn Post, it always looks quite poor to me. They usually complain that AI didn’t do what they wanted it to. They talk about asking AI to summarize things and AI leaves out important points, findings, or insights. They ask AI to find certain patterns in the data. AI misses some very clear and important patterns. The other day, I asked Google Bard if there were anything in Zoom’s terms and conditions that I should be worried about, considering Zoom is so controversial right now with their updated terms and conditions. And it mostly told me that I should be careful that you can’t sue them. They prefer arbitration. Hey, you’re missing a couple of really key points here! And it’s a great reminder that AI doesn’t replace a lawyer. 

At this time, AI doesn’t replace a Researcher, especially when we’re talking about qualitative research, where it’s about human interaction, human observation, and human conversations between two or more people. So I would say AI or what we are calling AI, which is really machine learning, is currently a detriment because it’s making people think they don’t need Researchers or good research, and that we can just feed some stuff into these tools, whether it’s feeding some of our existing data or if it’s just feeding a prompt, and that we’re going to get back something great. 

We forget that good research is supposed to be guiding a company’s strategies, initiatives, decisions, products, and services. And if we’re going to let the machine guide that, then number one, be prepared to not innovate or disrupt because your competitor could put a similar prompt into a similar system and be told you should make the same product or feature. And number two, be prepared to become quickly out of touch with your users, especially given some of the problems around AI being generally considered racist, sexist, ableist, and a number of other unpleasant “-ists”. 

Do we really want to ask the robot that has proven to be these negative things and not yet great at the tasks we wish it would do? Do we really want to put our company’s success in its hands just because we think that’s going to be faster or cheaper? If your company doesn’t care about quality, outcomes, customer satisfaction, or customer loyalty, then absolutely, you should pick fast over quality every time. But if your company actually cares about your own company values, how you serve your customers, being better than your competition, and really understanding your customers so that you can meet or exceed their needs, this is currently not done well with AI.

Darren Hood, MSUXD, MSIM, UXC

UX Research and Design Educator, Mentor. You can find Darren on his LinkedIn or on The World of UX podcast.

I think it’s (currently) a detriment, for several reasons. First of all, UX and UXR are not in a good place currently. There are too many people (stakeholders, practitioners, and budding practitioners alike) who don’t understand what UX is, what UXR truly is, and how to represent or value UX operations.

Secondly, the common denominator among stakeholders and budding practitioners is the desire to accelerate and templatize the work that we do. This strips us of the expertise needed to excel, which directly impacts the value that we bring and can bring. And finally, the potential AI engagement also deters some practitioners from working to develop the expertise needed to excel at the craft, diluting our presence and value proposition.

Caitlin D. Sullivan

Founder of User Research Consultancy and UX Research Advisor. You can find Caitlin on her LinkedIn.

I think it can be a huge benefit. I’ve spent a lot of time lately testing as many tools as I can for research that use AI in at least some part, for some use cases. The potential time saving for me is really impressive so far. 

I know AI feels scary to a lot of people. Many wonder if it’s going to replace us. But change is inevitable, and I tend to try to embrace it. I believe that learning how to use AI tools in customer discovery can make the repetitive parts of the process easier for us, and can leave us with more time for the really important, human parts that often become rushed. Our role is to deeply understand people, to take time to analyze what we’ve heard and observed and make sense of it. There are at least a few research process steps that I see a lot of people rush through, and I think AI can actually help us make more time for those important parts, by streamlining the rest.

Joel Barr 

Joel is a Lead User Researcher. You can find him on his LinkedIn.

Mixed bag, but mostly detrimental. Oh sure, it’s fun to make AI pictures and it’s fun to talk philosophy with it and see how far the limits of it can go. But in a professional AI-meets-UX way? It’s dangerous and potentially harmful and could set back UX many, many years of progress of helping Enterprise understand their users. 

Enterprise has a habit of glomming onto shiny pennies and is as amused with them as children are with new toys. So they’ll shift everything over into AI all at once without the foresight that maybe, to measure humans, one might need a human. The truth is, these AIs aren’t really general AIs – they are narrow AI, one- or two-trick ponies – which means they’re programmed by humans for humans, which means they have the same biases and prejudices built into them that humans do. They also serve a handful of useful functions like coding and image generation… and video… and that’s all pretty cool stuff. I’ve heard about them being able to write whole movies and things. And that’s swell.

But don’t tell that to big enterprises, who will cut staff first, invest in some newfangled technology, only to realize they have to hire back the same staff they let go at a higher price because they’re poaching from smaller competitors who saw wisdom in not following the big guys.

 “AI” is SO new. We’re in the just-discovering-fire days of it. Pre-prehistoric times of it. Everyone has stars in their eyes as though Buckminster Fuller walked back into life from 1983 moaning the word “Ephemeralization” and everyone is flopping over from astonishment. We’re not there yet. It’ll take another 40+ years before we are – and I’ll be long retired and sunning myself on the beaches of Goa by the time it becomes a major issue.

But right now we can’t get AI to *not* equate “thug,” “terrorist,” etc., with darker skin tones. That’s a huge, glaring issue. Enterprise is really good at glossing over the glaring details in favor of selling the bigger picture and promising to fix it after release, but you know as well as I do: they never, ever do. This is one of those times when humans need to be every part of the building of a human-like AI, but only select humans with very sharp biases are building these things. And that makes them influentially dangerous to every human living on this planet, especially as we are starting to catch journalists having AI write their stories for them and influencing what gets written on social media.

And just so I am not just negative and pessimistic – I’m a proud futurist and look forward to the day of being able to work with neural networks and self-sustaining, planet friendly, energy efficient AI that runs everything from flights to logistics in moving goods from one place to another so we can basically be on vacation full time while AI does all the complicated work for us.  

I have clients right now who are in the middle of designing and building AI solutions for their business. First question out of my mouth, “How do you know this is a problem your user faces that you need to spend all this time, money, and sweat equity on building a GPT-clone for your business?”  Crickets.  They don’t know because they’ve never asked.  They are like the overbearing partner – they want to give the user everything, without understanding what their user needs in the first place: to solve their problems effectively by listening and then driving toward an actionable solution.

Dr Gyles Morrison MBBS MSc

Clinical UX Strategist and UX Mentor. You can find Gyles on his LinkedIn.

I think AI is overall a detriment to UX research, only because there are too many people who don’t know the true limitations of AI. There is still a lack of knowledge of what good UX research looks like.

Stéphanie Walter

UX Research & UX Design Consultant. You can find Stéphanie on her LinkedIn or website.

There are a lot of AI tools that can do a lot of different things. Like any tool, if we use them properly and with care, there can be a lot of benefits. We can reduce repetitive tasks, for example, and save time that we can then allocate to more research. So I think that, in the long run, it will be beneficial. In the short term, though, it will be a mess in certain areas. I’ve seen people who want to create personas with ChatGPT and replace user research with “asking the AI tool”. This is not user research, and it will lead to poor product decisions and damage the user experience of that product. Would it be worse than not doing research at all? Probably. So, yeah, misusing the tools is going to be damaging.

Kevin Liang

UX Research Manager & Instructor at Zero to UX. You can find Kevin on his LinkedIn or his YouTube channel.

Both, it depends. AI is still a tool. The main benefits I see are efficiency, being able to handle large volumes of data, and the ability to serve as an inspiration. I won’t speak too much to its benefits, as the benefits are pretty apparent. From generating things based on a prompt, to analyzing large datasets, its benefits can help us be very productive. Even if AI isn’t perfect, even its mistakes can show us different possibilities, serving as inspiration. It reminds me of the DNA replication process; we consider mutations mistakes, but sometimes those mistakes are what helps us evolve into things we couldn’t think of before.

In regards to detriments, I see four main culprits: ethics, bias, overreliance, and hype.

With ethics, there was an article saying that DALL-E covertly modified prompts to include diversity keywords as a band-aid to produce more diverse photos, rather than actually fixing its algorithms. So that’s exhibit A in how NOT to build AI. Quite lazy and frankly deceptive. When you’re mired in controversy, calling something a beta is one way of protecting your reputation, but nonetheless, pressure testing is necessary. Another question is: what degree of transparency, explainability, and trustworthiness does the end user need?

For example, when I’m driving a car with an internal combustion engine, I need to know I’m not going to blow up driving to Costco. That’s trustworthiness. Am I thinking about how a combustion engine works during my drive? It’s nice if the information is there, but not really. 

Same for AI algorithms. It depends on what the AI is used for. If I’m creating images from MidJourney, maybe I want to know what went into the prompt to get the output, so I know what to change if I’m not satisfied with it. When I review my students’ work, I always want to see their thought process. What went into your answer? As a UXR, my answer is to always go back to the user, the human. What are our expectations of AI? How would AI regain trust when trust is broken?

In regards to bias: think data privacy, garbage in, garbage out, inclusion, and algorithmic biases (many of the large language models like ChatGPT are trained mostly on data from English-speaking Western countries, so outputs will reflect those cultures). Mitigating these biases starts at the beginning of development, and as UX researchers and designers, we should make sure to test with a diverse group of people. Don’t treat inclusion as an afterthought, but as a mandatory part of development.

To address the overreliance: a few months ago, I started an experiment where, once a week, I asked ChatGPT to write me a Likert scale for familiarity. It gave me a wrong answer, but it didn’t know it was wrong. It kept giving me the wrong answer for months, until recently.

The point is, if I had simply taken ChatGPT’s answer at face value, I would have used an erroneous scale for my survey. But I wouldn’t KNOW it was wrong unless I had fundamental knowledge of constructs. Try it; you can thank me for helping train ChatGPT on that prompt 🙂

So if you’re just starting out in UX, my suggestion to you is to focus on mastering the craft first. Forget the shiny stuff. And don’t mistake fundamentals for easy. Kobe Bryant worked on the fundamentals above all, and that’s what made him great; he didn’t try the fancy stuff until he had the fundamentals down. In my eyes, AI is a shiny object. It’s still a tool to help augment our human work, and a pretty damn good tool at that, I will admit. But if you can’t tell your left from your right, you can’t really start navigating. If you’re already doing good work and want to explore using AI, remember to test your work with diverse groups of people early and often as a means of gathering continuous feedback. AI augments YOU; it shouldn’t replace your critical thinking.

And, of paramount importance, please remember: instead of approaching problems by limiting yourself to asking “how can AI solve this?”, expand your horizon and ask “how can this problem be solved?”. This is a turning point at which we can either let AI take over and forgo critical thinking, or continue doing what we humans have done best for centuries: challenging the status quo.

Finally, the hype: there’s a lot of it around AI. AI has been around for decades, but generative AI like ChatGPT was a huge breakthrough. At the end of the day, remember: good design solves a problem. Don’t ask “how can AI solve this problem?” Ask “what IS the problem?” Don’t forget UX starts with PEOPLE, not AI.

Overall, AI can be a benefit that streamlines our work, but it can be a detriment if we stop using our critical thinking and over-rely on it.

Nikki Anderson-Stanier, MA

User Research Lead, Founder & Managing Director at User Research Academy. You can find Nikki on her LinkedIn.

I believe that it is up to us as an industry (product/tech, not just user research) to react appropriately in order to make AI a benefit to all of our roles. I think that mindset is the key to using AI in the best way that makes the most sense. We need to properly understand the limitations of AI tools versus when it makes sense to use AI to our advantage. By trying to skip ahead or replace user research with AI, we are potentially setting ourselves up for poor decision-making and misguided data.

Julian Della Mattia

UX Researcher Lead & ResearchOps Specialist. Founder of research operations agency the180. You can find Julian on his LinkedIn.

AI is a tool. And like with any other tool, the problem is how you use it. Some AI tools can indeed make your life easier if you use them properly. As a ReOps person, I’m always looking for ways to make researchers’ work more efficient, so I believe there’s a lot of potential in AI.

Kelly Jura

Vice President, Brand & User Experience at ScreenPal. You can find Kelly on her LinkedIn.

AI can be beneficial in UX research and in streamlining some of the time-consuming processes, making the process and people more efficient. AI can be a beneficial additive, but not a replacement for UX research. AI is only as valuable as the data it is built on and the precision of the prompts that are provided. UX researchers play a crucial role in examining experiences from a holistic perspective, determining the most suitable research methods, understanding nuance, and delivering actionable insights that connect with stakeholders based on business goals and user needs.

Ben Levin 

UX Researcher & Strategist. Managing Partner, Chamjari. You can find Ben on his LinkedIn. 

Like any new technology, AI is going to substantially impact all of the fields it touches. But to talk about this intelligently, we need to distinguish between different types of AI and think deeply about what they are fundamentally capable of.

That’s difficult to do because at this point we don’t actually know what AI is capable of, nor do we fully understand why the capabilities it does seem to exhibit exist.

If we, as researchers, are worth our salt, we’ll quickly dive into exploring how users interact with AI, because doing so will put us at the forefront of understanding how AI in general, and large language models in particular, work, and what they can do.

In that way, AI will be a boon to UX research in the way all new technologies are: it will give us much to study and learn about, particularly in the context of how its capabilities impact other fields.

And if past technologies are any guide to the future, there will be lots of really subpar implementations that need diagnosing and fixing.

Ben’s opinion was quoted from his article Is the rise of AI good or bad for UX research? You can read his full opinion there.

Our two cents


After hearing from experts from all around the UX industry, let us share our opinion with you as well. We align with the idea that AI has game-changing potential for the UX industry. However, we are strong advocates for accountability. If you let AI make your decisions, then who is accountable? You? The expert who trained it? Or the one who came up with the algorithm many years ago? You could take this question even further: according to which moral principles did the AI make the decision? A UX researcher knows that increasing your profit by designing a roach motel is wrong. AI, on the other hand, will only see the maximized profit.

Another key factor for us is being genuine. AI is not human; if created correctly, it can play human in a limited scope and be almost believable. Correct decisions in the UX process require nuance and an in-depth understanding of human behavior, preferences, and quirks. We simply don’t believe AI has arrived at a point where it is capable of genuinely performing on this level.

Here at UXtweak, we take research seriously, and this is where we see AI as the biggest help right now. When used as a tool, it can indeed help you with many mundane tasks such as transcripts, translations, sentiment analysis, and more. We just ask you to tread lightly and always remember that AI is a tool, not an all-knowing wizard. Always question the results you get from any AI analysis.
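To make the “always question the results” point concrete, here is a toy sketch of our own (not any specific tool’s algorithm): a naive keyword-based sentiment scorer of the kind that can hide behind a polished UI. It classifies an obvious comment correctly, yet a simple negation slips right past it, which is exactly the failure a human researcher would catch on review.

```python
# Toy illustration (our own example, not a real product's algorithm):
# a naive keyword-based sentiment scorer for user feedback comments.

POSITIVE = {"love", "great", "easy", "helpful", "intuitive"}
NEGATIVE = {"hate", "confusing", "slow", "broken", "frustrating"}

def naive_sentiment(comment: str) -> str:
    """Classify a comment by counting positive vs. negative keywords."""
    words = set(comment.lower().replace(",", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# An obvious case comes out right:
print(naive_sentiment("I love how easy the checkout is"))  # positive

# But negation fools the scorer: this clearly negative comment still
# matches the keyword "love" and is scored "positive".
print(naive_sentiment("I do not love this layout"))  # positive (wrong!)
```

Real models fail in subtler ways than this sketch, but the lesson is the same: the tool can streamline the analysis, while the researcher stays accountable for the interpretation.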

The final factor, one that is inherently bound into our company DNA, is curiosity. And it’s curiosity that allows us to end our opinion on a positive note. We couldn’t be more excited about all the possibilities that AI will inevitably bring. All we ask is that we all approach it with respect and a clear mind.


What to look forward to?

We hope you found the insights we have gathered as exciting as we did. The next episode will be focused on the second question:

“What would be the one aspect of UX research that you think is best compatible with using AI and why?”

After all the episodes are out, we will bring you a comprehensive report containing the results of the survey on how the UX community views the current state of AI. Stay tuned! 

Let us know your thoughts in the comments on our LinkedIn post here!
