In the first 4 episodes of the AI in UX research series, we ask industry experts questions about the AI-UX research relationship. Make sure not to miss the following episodes and the final report from our AI in UX research survey.
Our series covers the following topics:
- Episode 1: Is the rise of AI use a benefit or a detriment to UX research?
- Episode 2: What would be the one aspect of UX research that is best compatible with using AI?
- Episode 3: Can UX researchers remain market viable if they don’t choose to adopt AI?
- Episode 4: Thoughts on AI-generated responses / AI-based users
- Final report: Results of the AI in UX research survey
In this episode, we will look at the answers of industry experts to the question:
“What would be the one aspect of UX research that you think is best compatible with using AI and why?”
AI offers many different possibilities, and most of them are, to be honest, rather captivating. This can easily awaken our curious inner child, who wants to explore and try everything. But we all know it would take too much time and money to go through all of them. Therefore, we asked our expert respondents to share their insights on what is most worth exploring: which path could offer the most and put you ahead of everyone else?
Even the experts who took a more negative stance on the first question (in episode 1) identified some portions of the AI spectrum that they believe have at least a small chance to succeed.
Here are the industry experts and thought leaders we asked for their opinions:
- Debbie Levitt, MBA
- Darren Hood, MSUXD, MSIM, UXC
- Caitlin D. Sullivan
- Joel Barr
- Dr Gyles Morrison MBBS MSc
- Stéphanie Walter
- Kevin Liang
- Nikki Anderson-Stanier, MA
- Julian Della Mattia
- Ben Levin
- Kelly Jura
In the paragraphs below, we list the answers we gathered, and at the end we share our own stance as well.
Debbie Levitt, MBA
I think eventually AI will help us with data processing. I envision a future where we can feed in our recordings and videos of our interactions with our research participants. While AI probably won’t be great any time soon at anything that’s observational, AI should be good at processing what was said plus additional notes we’ve taken that we can feed into it. It would help us quickly find key insights, patterns, themes, allowing us to make some good actionable suggestions. We can write excellent problem statements because we really understand all of the users’ problems and the opportunities to serve them in good detail. I don’t think it’s there yet. That is still something I think is coming in the future. But I still think it’s not going to be: snap your fingers and it’s done.
There’s always going to be a need for observational research, where we watch someone do something and we take subtle cues from their body language, their voice, and their face. It’s a longer time down the road for AI to read that accurately. Even in my wilder dreams, I still think we’re a ways away from AI taking on more of that analysis and synthesis since in the near future, it won’t be able to really understand people the way a human does.
Darren Hood, MSUXD, MSIM, UXC
While I’d still be extremely hesitant to use it in my own practice, I’ve heard that AI can be used to analyze data quickly. It should be noted that being dependent on AI-based analysis can prove extremely detrimental AND (in an environment with a very low UX maturity level) can result in practitioners being displaced.
Caitlin D. Sullivan
Founder of User Research Consultancy and UX Research Advisor. You can find Caitlin on her LinkedIn.
I work with a lot of teams that are earlier in their product and company development, and they often struggle to argue the value of the time they think research takes. It can be challenging for them to document all the customer discovery work so that they can use it quickly, and reuse it later.
I’m hopeful that AI can support automating and better linking documentation, so that we can pull out relevant observations faster, and make sense of them faster. I’m hoping that analysis can also improve by improving how we document discovery work. But I don’t believe that AI will do the analysis for us, and replace us there.
Joel Barr
Joel is a Lead User Researcher. You can find him on his LinkedIn.
Background/contextual research, and maybe idea generation for gaps in a research record. Maybe personas if one has the real research to back it up with. Yes, it can be used for analyzing scripts and long form answers, but one doesn’t need an AI to do that – most of the office products have that functionality already – in other words, AI is just a more automated version of what we already have, with a narrower algo than a search engine for background research on a particular subject.
There are no neural networks yet, no generative thinking based on factual experience or probability of action or potential. It cannot self-sustain. It is essentially a search engine and virtual assistant that uses contextual NLP to talk back. A human naturally learns more about the world in their first year of living in it than an AI learns of truth and fact in the same time frame. Ever do the woodchuck test? Ask a 9-year-old American kid, “How much wood could a woodchuck chuck?” It’s a goofy little nursery rhyme – it’s fun. Ask an AI, and it’ll quote a Stanford study on the average amount of wood a woodchuck can consume in a given period, given several variables. You see? Nothing human about it at all. Boring!
Dr Gyles Morrison MBBS MSc
Clinical UX Strategist and UX Mentor. You can find Gyles on his LinkedIn.
AI, specifically machine learning, is very useful for analysing and summarising large sets of data. Sentiment analysis and identifying action items are also tasks AI completes well.
Stéphanie Walter
First, I think they can help with repetitive tasks – like glorified assistants. If you train a tool like ChatGPT on your previous emails for session scheduling and templates, you can have it create a lot of the templates that take time to write manually: everything from recruitment to emails to schedule sessions, thank-you notes, etc. I also think it could help with a first round of data analysis. But I would be very careful with that. AIs are quite biased, so use them as assistants and double-check the data.
Last but not least, if you work alone, maybe it can help bounce some ideas, and think outside your own box. If we think about wireframes, I think that tools that can generate multiple versions of the same component can be useful. They could help go beyond your own idea, and maybe reach a solution to a user problem that you didn’t think about on your own.
Again, using this while being aware of the limits, always second guessing the output is key.
Kevin Liang
Templates for study plans, rewording certain phrases, helping with recruitment, analysis, and report-writing summaries are all ways AI can effectively augment a UX researcher.
In my experience, AI has been helpful whenever I have writer’s block. It helps give you some ideas, but you first need the right goals in mind to prompt ideas. However, AI should not be the first resort. Focus on mastering fundamentals first. AI can only augment what you know.
Recently, I had my students experiment with Miro’s AI tool for analyzing qualitative data. I had them make a copy of the data from our client and analyze it themselves first, then let Miro AI try, and then compare the results.
What we found was that the AI wasn’t great at analysis. It summarizes the raw data, but it doesn’t do a great job of making inferences, at least not yet. Of course, this is just Miro, I know GPT-4 and other tools have started to make inferences. So, it wasn’t helpful with the raw dataset. But when I had my students do a first pass and then let Miro AI augment our work, it was helpful to have it summarize our themes. It wasn’t 100% correct, nor did we expect it to be. That wasn’t the point anyway. What it did help with was enhancing how we told the story of our data. How else could we think about framing our story?
In summary, do your due diligence first. It seems like we always gotta double check AI, but it can give us some creative ways to frame our work that we may not have thought of.
Nikki Anderson-Stanier, MA
I really believe that AI can be a wonderful thought partner for difficult situations and to gain multiple perspectives on an idea. For instance, if you are concerned about conveying a controversial finding or point to stakeholders, you can ask AI to create some counter-points to your argument.
I think that, with situations like this, AI can help prepare us to have more effective and efficient conversations and communication with our stakeholders because we come more prepared for these discussions, having thought through multiple perspectives.
I also believe AI can specifically help user research teams of one or small UXR teams who are already so stretched for time by automating certain tasks such as text-mining, sentiment analysis, analyzing large quantitative data sets, turning insights into visualizations. I think user researchers should still be able to complete these tasks without AI, but utilizing AI to save time on tasks that don’t require context, empathy, and nuance can be a great help for user researchers.
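As a toy illustration of the kind of lexicon-based sentiment analysis mentioned above (the word lists and feedback snippets below are made up for demonstration; production tools use far richer lexicons and handle negation and intensity):

```python
# Toy lexicon-based sentiment scorer. POSITIVE/NEGATIVE word lists and the
# sample feedback are invented for illustration only.

POSITIVE = {"love", "easy", "great", "clear", "fast", "helpful"}
NEGATIVE = {"confusing", "slow", "broken", "hate", "frustrating", "hard"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: (positive - negative) / matched words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

feedback = [
    "I love how easy the new checkout is",
    "The settings page is confusing and slow",
]
for f in feedback:
    print(f, "->", sentiment_score(f))
```

Even this crude version shows why a human still has to review the output: sarcasm, negation ("not helpful"), and domain jargon all slip straight past a word list.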
Julian Della Mattia
I’m experimenting with AI and automation for participant management, and I believe there is some potential there. The same goes for AI for transcription, advanced note-taking, and theme sorting – it can save researchers a considerable amount of time in the analysis phase.
Ben Levin
UX Researcher & Strategist. Managing Partner, Chamjari. You can find Ben on his LinkedIn.
AI’s impact on UX research will probably center around transcription and summarization – if we can get the ableist, English-first, Western-centric bias out of the system.
And yet, the evidence that tools can do this well at the moment (at least as far as ChatGPT 3.5, LLaMA, and Bard are concerned) is still uncertain. To understand why that’s the case, we need to think about what LLMs are, and what they can and cannot do.
LLMs are quite good at inferring relationships between seemingly unrelated things, and following linkages that may not be apparent on a surface level. Think of a simple spreadsheet listing types of animals as column headers (mammals, birds, reptiles, invertebrates, etc.) and a list of characteristics as rows (average height, average weight, number of legs, etc.).
An extremely simplistic LLM, looking at the data filling in this spreadsheet, will “notice” that mammals tend to have two or four legs, and run larger and heavier than invertebrates.
If you then tell the LLM that you’ve found a new animal that weighs 300,000 lbs and is about 100 feet long, and ask it, “What kind of animal is this?”, the LLM will probably guess “mammal” – without knowing anything else.
But what happened here? Does the LLM “understand” that such a large creature would, biomechanically, need a skeleton to support that much weight? Would it infer that this mammal lives in water? Would it “know” the animal lives in the present epoch, and is not extinct?
Not in the slightest. LLMs are essentially sentence fragment completion engines. So when we “ask” it to identify this new animal, what we’re really asking it to do is complete a sentence like, “an animal which is 100 feet long and weighs three hundred thousand pounds is likely to be…”
And then the LLM spits out: “a Blue whale.”
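The toy spreadsheet example can be sketched in code. The snippet below is our illustration, not Ben’s, and the profile numbers are rough, made-up averages – it simply picks the class whose known averages sit closest to the new animal’s measurements, which is the kind of surface-level pattern matching being described:

```python
import math

# (typical weight in lbs, typical length in feet) per class.
# These profile numbers are invented for illustration, not real data.
CLASS_PROFILES = {
    "mammal":       (150.0, 4.0),
    "bird":         (1.0, 0.8),
    "reptile":      (10.0, 2.0),
    "invertebrate": (0.1, 0.1),
}

def guess_class(weight_lbs: float, length_ft: float) -> str:
    """Return the class whose log-scaled profile is nearest to the input."""
    def dist(profile):
        pw, pl = profile
        # Log scale so a 300,000 lb weight doesn't swamp the length dimension.
        return math.hypot(math.log10(weight_lbs) - math.log10(pw),
                          math.log10(length_ft) - math.log10(pl))
    return min(CLASS_PROFILES, key=lambda c: dist(CLASS_PROFILES[c]))

# The mystery animal from the text: 300,000 lbs and about 100 feet long.
print(guess_class(300_000, 100))  # -> mammal
```

Note what the sketch does not contain: any notion of skeletons, water, or extinction. It matches numbers to numbers, which is exactly the limitation the following paragraphs describe.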
Likewise, when we ask an LLM to summarize a session, what we’re really asking it to do is complete the sentence: “An abbreviated version of this body of text would be:”
And then the LLM will essentially search its vector space and find a sequence of letters, spaces, and punctuation that resides “close” to the space occupied by the body of text – the transcript – that it’s been fed.
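As a rough illustration of what “close in vector space” means (ours, and deliberately simplified – real LLMs use learned embeddings over tokens, not word counts), even bag-of-words vectors with cosine similarity can pick the candidate text nearest to a transcript:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Represent text as a bag-of-words vector (word -> count)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Made-up transcript snippet and candidate summaries, for illustration.
transcript = "the participant struggled to find the export button and gave up"
candidates = [
    "the participant could not find the export button",
    "the participant enjoyed the colorful dashboard",
]
best = max(candidates, key=lambda c: cosine(vectorize(transcript), vectorize(c)))
print(best)
```

The word-overlap winner happens to be the right summary here, but notice that the code never understood frustration or intent – it only counted shared words, which is the gap the next paragraph points at.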
Just as the LLM can’t infer that the whale has a skeleton, there’s no semantic understanding of the intent of the words spoken by the participant, or the context that exists outside of what’s captured in the transcript; details which a skilled Researcher would immediately understand, and are vital to interpreting the meaning of what participants are saying.
Moreover, the transcript in all likelihood includes nothing about gestures, body movements, and nonverbal expressions made by the participant. All of these are vitally important to understanding what’s going on in our research.
This might point the way towards how AI might best be used in UX research: image recognition, facial recognition, emotion recognition, and speech-to-text could all be combined into a fairly accurate “session analysis.”
A possibly more fruitful area for UX researchers to work AI into their process centers around conducting sessions in a non-native language. With speech-to-text, text-to-text translation, and text-to-speech capabilities all vastly enhanced by artificial intelligence, I think we’re not far from a world where researchers can conduct one-on-one sessions with participants who do not speak the same language.
This opens up even more interesting areas of research, where researchers may find it easier to conduct sessions with participants who have a variety of disabilities that inhibit either movement, speech, or other methods of interaction.
Ben’s opinion is quoted from his article Where is AI Most Compatible with UX Research? Read his full opinion there.
Kelly Jura
AI can be beneficial in streamlining manual and time-consuming tasks, such as scheduling, transcribing, translating, and tagging interviews. AI may also help slice the data in different ways quickly, helping researchers use their time more efficiently and effectively while analyzing the data.
Our two cents
The keyword is augmentation. We agree with our experts that AI is not yet ready to work alone in an automatic fashion, bar a few examples such as transcripts or translations. To provide an example: do we believe that AI can generate a full survey for you? No – at least not without sizable input from you, which might warrant doing the survey yourself anyway (read more about this in our post: Creating a Survey with ChatGPT). But can AI help with the tone of a specific question? Yes. Can AI translate the question? Yes. Can it check your spelling, grammar, and sentence structure? Also yes.
Another type of task where we see great potential is mundane, repetitive work that doesn’t require too much cognitive effort. A great example could be copying a design from a screenshot into Figma elements. We bet that most of you remember a moment like this: “Just design how the new banner should fit on this page.” Easy enough, except the page is years old and doesn’t have a Figma counterpart… AI should be able to generate elements based on image recognition and do this task for you in a moment. All you need to do is “trim the edges” afterward. Alternatively, we see potential in generating additional wireframes and designs based on an established design library.
All in all, we see the biggest potential of AI in being a work minimizer and an efficient tool for mundane tasks. The key point for us is that AI shouldn’t be left alone to do the work. You should always treat it as a tool – advanced, intriguing, and state of the art, but a tool nonetheless. Always question the results you get from AI, double- and triple-check them, and don’t blindly believe them. It would be cool to see AI make complex inferences and analyses and come up with advanced reports. However, we believe it is still a little too artificial for that.
What to look forward to?
We hope you found the insights we have gathered as exciting as we did. If you haven’t yet, we recommend reading the first episode: Is the rise of AI a benefit or a detriment to UX research? The next episode will be focused on the third question:
“Do you think a UX researcher can remain market viable down the road if they don’t choose to adopt AI into their research process?”
After all the episodes are out, we will bring you a comprehensive report containing the results of the survey on how the UX community views the current state of AI. Stay tuned!
Let us know your answer to our question in the comments on our LinkedIn post here!