The complete picture 🌐
In our main AI in UX Research report, we’ve synthesized and analyzed the perspectives of 9 UX professionals.
Now, we’re sharing their complete, unedited responses, giving you direct access to the nuanced thinking behind our findings.
If you want the full story with all the patterns, tensions, and surprises stitched together, read the full AI in UXR report.
Why read the full responses?
While our report distills patterns and themes across the interviews, these complete answers reveal something equally valuable:
👉 the diversity of approaches, the specific tools and workflows each expert has developed, and the thoughtful reasoning behind their choices.
You’ll find practical examples, specific tool recommendations, candid concerns about the industry, and predictions that range from cautiously optimistic to sobering.
The UX research experts
We chose senior practitioners with proven track records, most of whom returned from our 2023 study, to show how their perspectives have evolved.
New voices added depth from leadership positions across consultancies, platforms, and research organizations.
The minds who shared their thoughts with us belong to the following experts:
- Maria Rosala (Director of Research at Nielsen Norman Group)
- Kelly Jura (Director of UX / CX at Qwoted)
- Kevin Liang (UXR Manager & Instructor, UX Consultant)
- Debbie Levitt (Lead UX Researcher & Strategist @Cast & Crew, Founder of Delta CX)
- Eduard Kuric (Founder & CEO of UXtweak, UX and AI Researcher)
- Stéphanie Walter (UX Research, Design & Strategy Consultant @Maltem)
- Joel Barr (Senior UX Researcher & Service Designer)
- Tina Ličková (UX Research & Product Consultant, UXR Geeks Podcast Host)
- Ben Levin (Senior Director of Experience Design @DX-ROI)
During the interviews, we revisited our questions from 2023 to see how sentiment towards AI has changed. We also asked for their opinions on the current state of AI in UX research.
The transformation in attitudes has been significant, with most experts indicating a shift from skepticism to a more nuanced, practical use of AI in UX research.
💡 Pro Tip
If you would like to personally compare and contrast the replies to see how dramatically the conversation has shifted, read the 2023 report.
What you’ll find here
We’ve organized responses by question, letting you either read straight through or jump to specific topics that interest you:
Current state (2025):
- How do you view the role of AI in UX research in 2025?
- How is AI involved in your UX research process?
Looking back (2023 revisit):
- Is the rise of AI use a benefit or a detriment to UX research?
- What is the one aspect of UX research that is best compatible with using AI?
- What are your thoughts on AI-generated responses / AI based users?
Looking forward:
These responses informed every section of our main report. Here, you get to see the full context, judge the evidence for yourself, and perhaps discover insights we didn’t highlight in our analysis.
Whether you’re seeking validation of your own AI practices, exploring new approaches, or curious about how leading practitioners are navigating this shift, these responses provide valuable insights.
They offer a rare glimpse into the evolving relationship between UX research and artificial intelligence.
Current state of AI in UX research
💡 How do you view the role of AI in UX research in 2025?
Maria Rosala
I see AI’s role in 2025 as making research faster and easier to do, and easier for others to consume. More and more research tools are implementing AI in their platforms — and they’re doing it in increasingly smarter ways.
For example, Dovetail has a contextual chat feature in Beta that allows you to query your study data with questions like, “Did someone talk about being neurodivergent?”
Instead of sifting through my notes to find out which participant talked about this, I can get an answer in seconds and be directed to the relevant place in the transcript. This use of AI solves a real problem and helps me be more efficient.
Many research repository tools, like Marvin, offer global search features with AI, allowing stakeholders to type a query and have the tool generate a summary and point them to projects, reports, and data that might be helpful.
Kevin Liang
Of the UX Research process (scoping, alignment, planning, recruiting/screening participants, operations, moderating, analysis, reporting), I tend to use it a lot to find good synonyms for words to use in my reports:)
It’s quicker than a thesaurus and better than search engines for this!
📊 Analysis
Facetiousness aside, I believe AI’s strongest value proposition is within the data analysis step, with a huge caveat.
We mustn’t use AI as a first pass in analysis. It should be treated like a sidekick, not a speed-dial option, as it’s still a tool that requires caution.
I ran a 3-year-long experiment with my cohorts, analyzing the same set of data to compare human research analysis vs. AI analysis.
Overall the themes identified by both were comparable, although AI still, on the whole, provided inaccurate summaries, and at times anchored onto pieces of data that were not representative of wider themes.
It requires a human eye to double check and a lot of follow-up prompting. Now, if we applied human analysis first and then AI to help us refine themes, that is where the AI can be a real sidekick.
Someone in the design-user-research community shared an article indicating that AI still overgeneralizes, so the verdict is still not to rely on AI for our work.
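As a rough illustration of the kind of comparison Kevin describes, the overlap between a human coder's themes and an AI pass can be quantified with a simple set similarity. The theme labels below are hypothetical, purely to show the mechanics:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two theme sets (1.0 = identical, 0.0 = disjoint)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical theme labels extracted from the same interview data
human_themes = {"navigation confusion", "trust in pricing", "onboarding friction"}
ai_themes = {"navigation confusion", "trust in pricing", "checkout speed"}

print(round(jaccard(human_themes, ai_themes), 2))  # 0.5
```

A score like this only says the theme *labels* overlap; it says nothing about whether the AI's summaries of those themes are accurate, which is exactly where the human eye remains necessary.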
🎙️ Moderation
I have seen AI moderation tools as well – and the jury is still out on whether people respond more genuinely to an AI moderator or not.
Research has shown that some people tend to open up more to bots; and in other observations, people tend to close down more, knowing it’s a bot.
📚 Repositories
AI could potentially be very helpful as a “UXR librarian”, especially for housing and scanning research repositories/archives to look up previous research insights quickly and accurately.
Once primary research has been done in teams, AI could possibly perform meta-analyses to find opportunities across teams, potentially bridging UXR silos between teams.
It would still be subject to the same overgeneralizations and hallucinations that AI is prone to, requiring careful human examination, but could be a powerful way to perform as a systems-thinking assistant.
🧳 Portfolio building
I’m currently in the early stages of working with a startup that uses AI to help job seekers build an online portfolio quicker. It seems promising.
A lot of the themes I’ve mentioned previously apply to this use case (i.e., garbage in, garbage out; it needs careful prompting, etc.). But I’m excited at the prospect of enhancing the job seekers’ journey!
Kelly Jura
AI is a tool that UX researchers can use to quickly ideate, streamline mundane processes, and synthesize data quickly. Human oversight is still very much needed at every step in the process and should not be replaced by AI.
The need is greater than ever for user advocacy, real data interpretation, validation, and strategic decision-making. Having a human who can identify bias, press on inconsistencies, and bring perspective is important.
Debbie Levitt
This question has a lot of possible answers. There are AI tools that claim to do nearly every step of the research process.
They’ll write your plan and questions. They’ll interview people for you! They’ll analyze and synthesize. They’ll write reports, create maps, etc.
I’ll take the perspective of which tools do I think we could or should use. You could have AI write some of your plan, but I notice it still misses a lot. The questions aren’t always as deep or thoughtful.
AI is creating questions based on what it was trained on, and I get the feeling it was trained on a lot of market research.
We even tested Claude being the Researcher and asking me questions live (in a text chat) in early 2025, and it really didn’t do that well. And I like Claude a lot!
I would still say: write your own plan. You can ask AI for more ideas, but rely more on your skills and talents on this one.
Recruiting… I wish we had more help here. It didn’t exist, so I invented a system to add automation and AI into recruiting. Plug plug, I hope someone will license it and bring it to life!
Running the sessions… big no on AI from me. I don’t want AI asking questions, running the session, being an interviewer, etc.
You see so many posts on LinkedIn about people unhappy or angry that they had to do a job interview with an AI bot. Well… then do we want to send AI bots to do user interviews? I still say no on this one.
Plus this is where our main qual data comes from. If it sucks, we’ve pretty much ruined our study, and we are unlikely to show the value of research, which we’re all under pressure to show.
Analysis and synthesis… I think you can use good AI here for part of the work. I’m a Claude fan. I think it’s good at critical thinking and hallucinates very rarely. AI should be able to summarize things and find some key points.
It can easily pull quotes. You can upload transcripts and ask it for great quotes, for example about moments when people couldn’t figure out how to edit the information. Check that the quotes are verbatim and not summarized!
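That verbatim check can be partially automated before a manual review. A minimal sketch (the transcript snippet and quotes are invented for illustration): normalize whitespace and case, then confirm each AI-supplied quote actually appears in the transcript:

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so formatting differences don't cause false misses."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(transcript: str, quotes: list[str]) -> dict[str, bool]:
    """Map each quote to whether it occurs verbatim in the transcript."""
    haystack = normalize(transcript)
    return {q: normalize(q) in haystack for q in quotes}

# Made-up example data
transcript = "P3: I couldn't figure out how to edit the information at all."
quotes = [
    "I couldn't figure out how to edit the information",  # verbatim
    "Editing the information was confusing",              # AI paraphrase, should be flagged
]
for quote, ok in verify_quotes(transcript, quotes).items():
    print(("VERBATIM: " if ok else "CHECK MANUALLY: ") + repr(quote))
```

A quote flagged here isn't necessarily a hallucination, but it is exactly the kind of summarized-not-verbatim output worth catching before it lands in a report.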
Reporting… we could use some AI there. I am not yet into “AI makes my deck or report,” but I have asked AI for some points for slides. Then I rewrote those to sound like me, cleaned up weird points, and added ones it missed.
Since I mostly use Claude, I am not generating images or maps with AI at this time. The bottom line is that if you try or use AI, you still need to check it. Babysit it. Monitor it.
You don’t want to end up humiliated because you put something in the report that AI messed up. “AI messed that up,” isn’t a great excuse when this is your work.
Joel Barr
My goodness – what a difference a couple of years makes!
To be honest, I’ve really struggled to find my niche as companies abandon UX in favor of having the little box talk to them instead about how brilliant their ideas are, without ever having to ask any actual human user.
To me, it seems like an echo chamber for enterprises to ship whatever they can, slop or not, and so I’ve had to adapt my practice. And this is just today. Who the hell knows what tomorrow is going to bring?
So what’s the role? I use it like a Swiss army knife for ideation, analysis, and reporting. I always double-check it for errors and ask it to provide evidence that it’s not wrong.
It’s become like my always-wants-to-please intern. So it makes a lot of what used to take boatloads of time a helluva lot quicker.
Tina Ličková
Two perspectives: AI is NOW more of a thought-assistant that can help to speed up our work and make our outcomes of a higher quality (with great coaching).
In the following months, I expect UX(R)ers to use it more also as an executive assistant, conducting parts of the processes like recruitment or follow-ups.
AI is a topic that comes with a wide spectrum of reactions in the community. The initial denial and tendency to see it as evil :D is getting a bit weaker, in my observations.
More and more people are coming up with ways to use AI that make sense and add value to their work.
Ben Levin
Despite the promises of “instant feedback” from “artificial users” and real-time analysis of qualitative interviews, I haven’t seen any significant changes in the practice of UX Research over the past couple of years.
However, there have been step changes in two critical areas:
First, in preparing for research, at least some LLMs (like Gemini) provide a more natural interface to background industry information that helps you get the “lay of the land”.
I’ve long been a proponent of UX Researchers developing a deep understanding of how the business in which they perform their research works, and the trade-offs other people need to make in order to further the strategy of their organizations.
Human behavior is devilishly complex, and the more you understand about the hidden machinery behind an organization’s dynamics, the better equipped you’ll be to untangle individuals’ responses to that machinery.
Second, near-instantaneous transcription (and in some limited circumstances, summarization) of interviews can be an immensely powerful tool.
If your data protection environment permits it, Copilot’s or Zoom’s ability to provide transcripts and topic summaries of lengthy interviews speeds up the process of reviewing and extracting useful information from them.
I’ve yet to see a truly useful summary or insight extraction from either of those tools, but both can provide waypoints to help you locate key interview moments without having to tag them in real time.
It’s especially useful if you can’t have a second person taking notes while you conduct the interview.
(Is this a case of AI taking away a job? Perhaps. In most instances, however, I’ll argue that it’s a case of AI adding capabilities that many research teams wouldn’t have been able to afford in the first place. Five years ago, for example, transcription was an absolute luxury for me and most of my research partners.)
Summaries, too, are useful in that they are usefully incorrect. A mid-level or senior UXR can use at least some AI output to help them respond to the “blank canvas” problem during their synthesis period.
Synthetic Users are, however, a lost cause. They don’t pay for software or services, so why would you listen to their opinions?
And despite the massive store of behavioral data on which they may be based (itself a questionable proposition), exactly zero of that data concerns future behavior, on which there is no data. And the future is what most UXRs care about.
An exciting development may be brewing in the area of simultaneous translation.
The ability to conduct live (remote or in-person) interviews with people speaking a different language opens up vast opportunities for UX researchers whose organizations are attempting to reach a global audience.
The big question will be:
👉 to what extent do these universal translators detect and correct culturally inappropriate assumptions?
A perfectly translated interview question might make no sense when transposed into a different culture (though it might be “usefully incorrect”, depending on the context.)
Stéphanie Walter
I think AI can become a helpful assistant for repetitive tasks like scheduling research sessions, but also for helping analyze data, to a certain degree. Models are always biased, so when used to help analyze data, they still need supervision.
I think it’s the role of a smart assistant. But it will still require human supervision.
Eduard Kuric
I think UX research, the true kind, involving real researchers and non-synthetic users, is among the most promising use cases for LLMs and AI. This may sound like a paradox since a lot of the discourse around AI is all about automation.
But it’s in the nature of UX professionals to champion human needs in the face of “other” interests. For example, by making UX research more interactive and helping participants open up more, AI could be a valuable tool.
With this also come expectations that a human-centered AI should meet. While enhancing human decision-making capacities, it should also preserve our autonomy and assist in polishing our research skills.
The AI itself should also gradually grow more accurate based on feedback. Fortunately, more and more people are forming their opinions about AI, but more still will need to be informed about the benefits and dangers of over-relying on AI.
UXtweak is working actively on research and development in this field, such as our most recent research article on whether Large Language Models can simulate card sorting.
💡 How is AI involved in your UX research process?
Maria Rosala
AI is probably more involved in my work than it would be if I did not work for Nielsen Norman Group. I interact with thousands of research and design practitioners every year, and so I get asked a lot about AI.
This has caused me to explore different tools and AI use cases in research so I can provide recommendations on what practitioners should and shouldn’t use AI for.
I use AI to summarize and transcribe research sessions, help me find things in my data more quickly, and explore different angles in my analysis. I think it’s important to say what I don’t use AI for.
I don’t use it to design studies, conduct research, analyze data unsupervised, or generate research outputs (like write-ups and recommendations).
This is because AI is pretty bad at doing this without a lot of prompting, which just isn’t worth it in my opinion.
While I may use AI for small sub-tasks that support some of these research tasks, I haven’t found AI competent enough to replace any of them entirely. Nor would I want it to.
The ability to think through a problem and a research project shouldn’t be understated. Stopping to think about what you don’t know, and how to design a study to figure it out, results in better-run studies and sharper analysis.
That’s the kind that actually moves a company forward.
Kelly Jura
I use AI to mine data for interesting patterns before diving in myself. It greatly enhances my emails and follow-ups with participants.
AI often serves as a starting point for exploring ideas, but it will never be my final deliverable. I use AI for ideation and brainstorming, and to test my scripts or questions for clarity.
I also use it to research case studies and discover relevant published work that can support the projects I’m working on.
AI has accelerated some of my tasks, but I am also mindful to use AI to support my abilities rather than as a crutch or to replace them.
Debbie Levitt
As I mentioned above, mostly in analysis, synthesis, and some reporting. I don’t use it at all for planning, protocol (questions), or running sessions. I do like AI tools for transcripts, and most of us have been using those tools for years.
Right now, my fave is Fireflies #NotSponsored, but that seems to change every 6 months! Claude is still not great at transcription, with what it admitted was around a 70% accuracy rate.
I don’t like AI to try to tag things as I haven’t seen that done well yet. But I do like using Claude 3.7 (I don’t like v4) for some of my analysis and synthesis.
Critical thinking LLM robots should be good at breaking things down, finding patterns, noticing important outliers, and bringing it all back together for insights and actionable suggestions.
And I use Claude a bit for reporting, especially to find me relevant and exact quotes. Much easier than the old days of trying to find quotes in videos or transcripts!
My process is still very human, and relies heavily on my brain for the questions, the sessions, spontaneous follow-up questions, observations, etc.
Joel Barr
Much like how Larry Marine inserted UX into flower-buying processes, AI comes to bear at nearly every stage of my practice.
But I never, ever use it to ask what it thinks an insight means, or for machine-based context, because that’s not what it’s built for, generally speaking.
Even when I’ve had the opportunity to work with agentic AI/ML systems that had normalized datasets, I still am not keen on having them “think” for me or contextualize data that I enter into them.
There still has to be some critical thinking element outside the AIML making decisions on behalf of humans for humans by humans.
They are the ones, after all, that are going to end up paying or subscribing to the service offered, or who make the purchase on the ecomm, or buy the software. I feel that’s where the sweet spot is with our practice and with AI.
It’s like a Venn diagram where the circles get closer and closer together until they are indistinguishable from one another. And we’ll get there…just not yet.
Tina Ličková
From start to finish, as an assistant. As a freelancer, I brainstorm with it on hypotheses or methods, create questions, iterate on them, craft emails to participants for diary studies, or fine-tune questionnaires.
The touchpoint where I use it the least is analysis – I don’t think it delivers good work yet, so I use it in this phase only for bits and pieces, for some parts of the research, to give me a “second opinion”.
And I am trying to be aware of ethics: I would rather not stuff it with user data as it is my responsibility to keep their data safe.
In the last part, the recommendations on how to act on the research, it depends on the time I have:
👉 For good ideas, AI needs good conversation and a lot of tweaks.
I still feel I am way more creative and to the point when it comes to the impact research should make and how a company should act on it.
Ben Levin
First, my own ability to prepare for research sessions is vastly improved, in particular when I am delving into conversations with people who work in an industry with which I’m not very familiar.
LLMs’ ability to gather and synthesize vast amounts of information from the live web allows me to enter qualitative and formative research with a solid background on the subject.
This helps me sound a lot more informed to my interviewee. At the very least, they don’t have to waste time explaining very basic concepts to me.
That lets me get to the heart of the matter more quickly with participants, to understand the industry terminology they use and to allow them to opine and explore their thoughts out loud more naturally.
It is akin to letting people speak in their native language and knowing enough of that language to be able to follow the conversation.
I’ve even found this capacity useful during interviews (at least those conducted remotely), where I’ve had to look up a very specific industry term of art, in order to ask a follow-up question.
Gemini’s ability to provide a reasonably accurate summary of an obscure concept in plain language is clearly an improvement over pages and pages of search results.
Second, I’ve found automatic transcription and topic summaries immensely useful for providing quick reference to places in interviews where important points were discussed.
This kind of “factual geolocation” is especially helpful when you’re conducting many interviews in a short period of time.
Transcription allows me to focus more intently on what the participant is saying, making notes only about follow-up questions I want to ask them.
And then, when reviewing interviews, I can use Copilot or other tools to orient myself to specific moments in the interviews where important points were raised; like automatic bookmarking, in a way.
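The “automatic bookmarking” idea can be approximated even without an AI tool: given a timestamped transcript, a keyword scan returns the moments to jump to. The transcript format and contents below are invented for illustration:

```python
# Hypothetical format: each entry is (timestamp, speaker, utterance)
transcript = [
    ("00:02:14", "P1", "The pricing page really confused me."),
    ("00:09:40", "P1", "I liked the onboarding flow, actually."),
    ("00:17:03", "P1", "Again, the pricing tiers were unclear."),
]

def find_moments(transcript, keyword):
    """Return timestamps of utterances mentioning the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [ts for ts, _speaker, text in transcript if kw in text.lower()]

print(find_moments(transcript, "pricing"))  # ['00:02:14', '00:17:03']
```

A plain keyword match misses synonyms and paraphrases, which is precisely the gap the topic-summary waypoints help fill; the human still decides which moments matter.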
I have yet to encounter a tool that summarizes or extracts insights from qualitative interviews in any way that isn’t obviously wrong.
But even this has its uses, as synthesis is as much about culling out the incorrect as it is about drawing conclusions.
Stéphanie Walter
I’ve been using generative AI to help me draft emails and save time in recruitment. It’s nice when English is not my native language. I’ve also used some AI to help summarize research results. I had good and bad results with this.
I had verbatim hallucinations, which can be an issue when you want exact quotes. It worked well when I analyzed the data first on my own and then wanted a quick summary.
Because I knew the data, I could still spot the mistakes and improve the final summary. It’s also quite nice for transcribing interviews into text that can then be searched.
And for data transformation: when you need an Excel sheet turned into a Word doc, or the other way around, for example.
I also use it quite a lot when I don’t remember how to do something in Excel (like write custom queries to color-code my file, to help with manual data analysis). I’ve also played a little with some tools to generate a code prototype.
I haven’t used them for real projects yet, but I see an interesting potential for more interactive prototypes to run usability testing sessions for example, or simply, to brainstorm and present concepts to end users.
Eduard Kuric
I primarily use AI experimentally at UXtweak Research. For example, we conducted a study to determine whether LLMs could ask meaningful follow-up questions.
We found the results provided more detail, but did not identify more usability issues and could induce frustration in participants.
Beyond our experiments, I oversee an innovations team at UXtweak that’s specifically focused on testing and implementing AI into the UX research platform, but in ways that are actually helpful.
We’re being very deliberate about where AI can add value without compromising research quality.
The most practical uses I’ve found for AI today are precise transcripts and feedback during quick ideation.
AI in UX Research: 2023 Revisit
💡 Is the rise of AI use a benefit or a detriment to UX research?
Maria Rosala
I’m not sure my view of AI has changed drastically since 2023. I still hold a healthy amount of skepticism about it replacing the need for research or the role of UX researchers.
I see it as a detriment to UX research only insofar as some companies see it as a suitable replacement for trained researchers.
Sadly, those companies are often not familiar with good research, and so they are unable to assess the quality of AI’s outputs.
The problem we have currently is that AI generates a legitimate-looking output to non-specialists, and it does it quickly! Some stakeholders are familiar with what we produce (research reports, personas, etc.) but are less familiar with how we get there.
The same problem is affecting design. AI can design interfaces. It can generate wireframes and prototypes. But ask any good designer, and they will tell you that the quality is often laughable or the designs are super generic.
AI will ultimately benefit the UX research profession, not replace it. It will be another useful (and very powerful) tool that will help us conduct, analyze, and communicate research faster than ever before.
Kelly Jura
My views on this topic remain pretty consistent with what I shared in 2023. AI is a valuable asset that has made significant improvements and is now part of many of our favorite tools.
However, I believe it should never replace the essential role of user experience researchers. We really need to find a balance between the power of technology and the invaluable human insights that can lead to the most user-centered outcomes.
While AI can process and analyze data faster than any person could, the nuanced understanding and empathetic approach that human researchers bring to the table are absolutely irreplaceable.
I think about all of the tools that have evolved over the years. Maybe you began your UX journey with Balsamiq, OmniGraffle, or Sketch before landing on Figma.
Those who really excel in user experience can do their jobs well with pen and paper or really any tool at hand.
What truly matters is having a solid understanding and foundation, using the tools to enhance, rather than replace, the essential skills needed to do the job well. AI is much like this.
Use AI to help you do your job, but understand enough to question all outcomes and step in whenever needed.
Debbie Levitt
Both. There are areas where it can help us. But if people rely on it too much or drop their own skills or critical thinking, then what have we done? What are we doing? Does anybody still need to hire us?
They don’t need us to sit around and prompt AI. They’ll ask someone cheaper to do that.
I think we can use what improves our process, makes us more efficient, and makes us more accurate. Anything slowing us down or adding errors should be avoided, even if it “seems cool.”
We must focus on high-quality work so we deliver lots of value to users and customers. We meet or exceed their needs and expectations. They see a good ROI from their investments in us.
Where AI helps that, great. Where it works against that or lowers quality, even if it’s fast, that’s not great.
Tina Ličková
Well, what do you want to make of it? We are not helpless towards it, and for now, until the point of singularity :D, we can benefit for sure.
Stéphanie Walter
My view hasn’t really changed on that: the right tools in the right hands will help us be more efficient.
But the potential for UX theater and poor product decisions, because we want to replace actual users with fake AI ones (a.k.a. skipping research), is more and more real.
Eduard Kuric
AI brings new ways for users to interact with software – clearly a benefit for UX, especially in the field of natural language processing. Of course, there are some negative aspects, like stakeholders who misunderstand what AI can do.
But I believe that the negatives will resolve themselves in the short term (whether naturally, or by people actively working on solutions), allowing for the positive aspects to flourish.
💡 What is the one aspect of UX research that is best compatible with using AI?
Maria Rosala
I think AI is most compatible with sifting through large volumes of qualitative data.
I thought this in 2023 and still think this today. Data like support data, sales calls, large customer satisfaction surveys, product reviews, etc., can all be fed into a UX Research repository and then processed and tagged by AI.
AI can surface new, interesting hotspots and patterns that can trigger more research.
Kelly Jura
AI helps streamline repetitive tasks in UX research, allowing me to use my time on higher-value or more nuanced activities.
I often use AI to quickly synthesize data and identify potentially interesting patterns or insights before diving into manual analysis. I will also use AI to help improve my writing for recruitment and follow-up comms.
I have grown to use AI more as an ideation tool. AI has become increasingly valuable in planning and brainstorming, especially for generating and refining questions or creating scenarios.
For example, I often use AI to suggest follow-up questions based on initial prompts or to help improve my questions for clarity and reduce potential bias.
That said, I approach AI output as a starting point or supplemental perspective – never as a copy/paste final product.
Once I understand the problems users are experiencing, I will use AI to research any interesting case studies or apps that handle similar issues well. It becomes a tool for inspiration and gathering reference materials.
Debbie Levitt
Analysis and synthesis. I think a good “critical thinking” model like Claude Sonnet 3.7 matches this well. Claude 4, less so. If we put in great transcripts from great sessions, it can help us organize important insights.
We can ask it what outliers might be important. And of course, we check these against our notes and memories!
Tina Ličková
Preparation – AI is a great assistant for creating hypotheses and scripts, and even for guiding and coaching on the right approach to do so.
Stéphanie Walter
I haven’t seen many groundbreaking new things here. Maybe I missed them? The models got better at helping analyze the data. So, that’s nice. Are the models less biased than in 2023? I don’t think so.
I’ve seen some progress with tools that let you generate prototype designs, so what I said in 2023 about AI helping with wireframes and bouncing ideas still stands. I’m happy to report that part has improved a little.
Eduard Kuric
User engagement and natural interactions during research can drive participants to be less careless about their responses and pay more attention.
AI can help boost the interactivity of research and can make it more enjoyable for the participants and more valuable to the researcher.
💡 What are your thoughts on AI-generated responses / AI based users?
Maria Rosala
There are so many great use cases for AI, and this one feels to me the least useful. At NN/g, we investigated AI (or synthetic) users in 2024.
Sure, they didn’t talk like real people, were prone to sycophancy, and sometimes talked too much, but that’s missing the point.
Can I really feel empathy for someone who is made up? Do their stories stick in my mind? Can I trust that the things they talk about are real phenomena?
Can I see their environment and how it shapes how they live, work, or act? Can I watch them trying to use some software to do something? Short answer: No.
I see AI users like proto-personas, and I wasn’t a fan of those either. They’re often used by teams with low UX maturity who don’t have budget, resources, or internal support to do research. Both result in shallow and biased outputs.
Kelly Jura
This is interesting and can help us think through experiments and form hypotheses. It could be a starting point, much like asking my coworkers, “Based on your knowledge of our users, how likely are they…” to pressure-test ideas.
It is interesting and can be helpful, but I will not use this information to replace research with real people.
AI can process large amounts of information and generalize; however, this could lead to generic insights and generic products that don’t connect with users.
Debbie Levitt
These have continued to be unreliable if not laughable, and in some cases, insulting. AI is still bad at pretending it’s human(s), which is why humans still have jobs.
When AI is better at pretending to be users or workers, our jobs will be in much bigger trouble for so many reasons!
You can’t have it both ways.
You can’t say that AI isn’t human, doesn’t have empathy, and doesn’t represent who I am, how I do things, how I think, behave, etc…. And then say, “YEAH, let’s pretend AI is our users or customers.” This is “garbage in, garbage out.”
Even with data on users, habits, actions, tasks, steps, interactions, collaborators, mental models, etc., you can still get sad stereotypes based on who AI thinks your “50-year-old married woman with 2 adult children” is.
We must also remember that AI still works from training. If you ask AI to innovate (invent something that has never been done before), it can only remix what it already knows.
That could include interesting ideas, but is unlikely to include something completely innovative and never-before done.
I asked Claude to come up with a novel plot that had never been done. We must have worked on that for an hour. It couldn’t.
It realized everything it came up with was a mix of other books, TV shows, and movies that already existed. We must still look to humans for fresh ideas and real innovations.
If you want your designs to look like everything else, have AI do them. It won’t invent anything new. It might not even try anything new.
It will remix all the designs on which it was trained, and give you some sort of middle ground among those.
Tina Ličková
We see a lot of improvement in this area — more data is flowing in, the outcomes are getting better. It’s great for collecting hypotheses, creating proto-personas, and getting the first user data.
It still needs to be verified with a real user – no negotiations there.
Stéphanie Walter
Same as 2023, haha, I don’t think they should be used. User research is about asking the right questions to the right people, aka, your own users. AI generated user personas are not users.
At best, they are a high-level archetype that will lead you to make the same design decisions as your competitors, with the same biases, and missing that “aha moment” that leads to innovation.
Eduard Kuric
There is no AI model in existence that can capture the variability of human experiences, thoughts and perspectives. There are some cases where I could see use for synthetic responses, though.
The biggest one is the augmentation of UX research, generating quick feedback before involving actual participants. But the involvement of actual users in UX research is not optional.
Looking forward
💡 How do you think AI will change how we do UX research in the next 5 years?
Maria Rosala
I think we’ll see researchers doing more strategic, complex research and spending less time on simple, tactical research studies, like unmoderated tests and qualitative surveys.
With some direction, AI will be able to design, run, and analyze simple unmoderated studies with pretty good results. Will it be equivalent to a trained researcher? Possibly not. Will it be good enough? Possibly!
AI will have a great role in supporting PWDRs (People Who Do Research) who are not trained researchers in running better research.
For example, AI can coach a PWDR through designing a simple unmoderated test and help them identify usability issues.
I think we’ll also see AI become more involved in the curation of research repositories. It can tag data from many different sources as it enters the repository, organize it, and highlight interesting new patterns for researchers to explore.
In the future, I think we’ll see researchers become more like architects in the research process, and AI will be the builder (doing the time-intensive, heavy lifting). I also think the methods we’ll employ in research may change.
We’re already seeing some evidence of this; traditional asynchronous research methods, like surveys and diary studies, are becoming more synchronous, as AI offers the ability to ask in-the-moment follow-up questions.
Kevin Liang
I’m not sure how it will change UX Research.
I am skeptical of AI UXR “experts”, since aren’t we all just trying to figure it out together? We should all be open to learning about it, testing it out rigorously, and sharing honest experiences with the community.
What I don’t want is for the community to coalesce or conform under pressures to use AI in places we don’t need to, like relying on aggregations of garbage data in, producing garbage decisions out.
AI data is still secondary data, desk research, whatever you want to call it, and should be treated as such. And there is no replacement for primary scientific research.
Kelly Jura
AI will continue to improve and assist at every stage of the process. However, user experience research will be more crucial than ever. Skilled user experience researchers will hopefully hold greater significance.
Tools evolve over time, and those who excel in their careers possess both a strong foundational knowledge and the adaptability to adjust as circumstances change.
Debbie Levitt
According to Claude, it will be able to do our jobs in 5 years. And I think that’s possible. We might laugh now at AI asking people questions and then having no follow-up questions… or bad follow-up questions.
“Ha ha ha, our jobs are safe.” This week, sure. But the tech is moving so fast.
My new book is “Life After Tech” because I believe we will need to consider what our work or careers are when we’re done with tech, or tech is done with us. I think AI will reduce or eliminate most of our jobs in the next 3-5 years.
Joel Barr
I get asked this a lot. I’m a nobody who knows nothing. But if I’m to know something, my little researcher’s human brain and practice have actually split. About a year, year and a half ago, I had the opportunity to get sick.
Screens made me dizzy, my eyes couldn’t take the glare. So I was forced to just be there, at the moment, and meditate as I do. And listened to podcasts about AI to understand my environment. And it was great.
(okay, being really sick sucked, but…) What I heard kind of shocked me:
👉 AIs were being trained by basic behaviorism.
Reward vs. no reward – and AIs being trained to be pleasers were starting to fake and lie to be rewarded. Like a really smart, albeit immature, human.
So I have kind of split my research work. On the one hand, I test products and services with humans to help companies build better products and services as a consultant.
On the other, I’ve developed testing batteries for AIMLs that are qualitative, non-leading, and draw from schools of philosophy and psychological thought from William James forward.
I try to touch them all: Wundt, Thorndike, Rogers, Bandura, Pavlov, Milgram, Berne, Jung, Chapanis, Pomeroy, Ellis, tons of Zen, and other eastern and western philosophy, some recent, some ancient, and so on.
I put it under stress with time, with circumstance, with context, with other humans involved in the decision tree.
In these scenarios, I put the AIML precisely in the role it was created for, as the decision maker for all humans, and I give it impossible scenarios that humans would really agonize over.
Then I rate its response either on a Likert scale or on a hybrid Likert/contextual scale. I put them in the position of an authority, like I would a user, and ask them to make crucial calls.
Who gets a transplanted heart, who gets bombed, who gets assassinated, abortion, consent, real life, no-win situations, with context added before and after the fact.
I have about 70 of them so far. I study and then build a new story or scenario. I’ve tested them with nearly every major model, and the results are, at times, astounding.
Once you get past the surface and delve into the deeper “thought” processes and built-in logic, the AIML scrambles, and eventually you get to, “that’s the way I am programmed.” Heh. I am the worst red team it’s faced, or I try to be.
I try to catch it in lies, try to catch it looking for rewards by giving me the answers it thinks I want to hear. And the stress testing is more entertaining than the stress testing I’ve done on humans.
To what end? I don’t know. I don’t even know if I can parlay it into paying work, but as I’m out of work and have nothing but time and opportunity on my hands, I might as well get busy doing something.
Anything is better than training an AIML like it’s Sniffy the Virtual Rat.
No one is hiring, and there isn’t a graduate school I can think of where AI isn’t poised to take it over one day, so I might as well use what I’ve got at hand.
I think UXR is waiting for someone, anyone, to lead them into the Fifth Age of technology – I don’t know if I’m that person, I’m a nobody that knows nothing – a beggar with an empty bowl.
But I have a brain and a computer and ideas, an undying love of work, and an eye for improving humanity – tikkun olam – as it were, and that makes me unstoppable.
Ultimately, enterprises have to come back to talking to users – who else is going to pay them for these technologies?
So I think in the end, UXR emerges triumphant, a little wiser for the experience and not so inclined toward democratization and giving away all of our secrets as we did at the tail end of the Fourth Age.
AI has also allowed me to be a lot more creative. I’ve had ideas for apps and services for years, but never had the money for a team of developers and such.
AI is helping me/us build new, user-centered concepts and products faster than ever.
Tina Ličková
“I am a researcher, not a fortune-teller.”, is something my clients hear a lot. My suggestion is that our craft will change a lot.
It will definitely be about how researchers will curate knowledge in companies and how they will ensure that good research gets done.
One prediction I do have, though, is that research in SME companies will hugely democratize, and UXR will become even more of a corporate role.
Ben Levin
My own experience stretches back only 25 years or so, but 25 human years seems roughly equivalent to 5 AI years:)
When I started, there was very little in-house capacity for UX research in most companies; it was limited even in companies that were software-focused. Now it is everywhere, and accessible to everyone.
In person research was the only option, unless you were talking to someone on the phone. Video conferencing required specialized hardware that was quite expensive.
Webcams were very low resolution. Eye tracking required heavy and expensive headgear.
Remote research is now the norm (perhaps too much so?). Screen sharing and cheap cameras are omnipresent and automatic. Eye tracking is still expensive – and I’m still not convinced it’s worth the trouble.
If anything, I think the most likely impact of AI on UX Researchers will affect mid and senior-level researchers. We’ll be asked to do more with less support from junior people coming up through the ranks, and that worries me.
Where once we were asked to justify the value of UX research, or fight to get actual users involved, now we’ll be pushing back against Synthetic Users, or the specter of relying on an LLM to find usability problems or suggest future features.
Those pushing for a reliance on technologies rather than people are not likely to understand the true costs of those tradeoffs any more than those who thought “validating it after we go live” grasped the true risk of that approach.
But as practitioners, we are used to fighting for the integrity of our craft, and each opportunity to articulate its unique benefits allows us to sharpen our approach and argument.
Stéphanie Walter
I want to be positive and hope it will help us save time on the repetitive tasks (like scheduling, writing emails, writing reports, etc.). It will help us analyze data faster and be more accurate.
With that time saved, I hope we will be able to spend more quality time with end users.
So, for me, it won’t replace researchers, but will help us focus on the actual research part of the job, while helping with the less fun, administrative, repetitive tasks.
Eduard Kuric
The core principles of UX should not stray too far. That said, I believe that AI can revolutionize our ability to collect and analyze vast amounts of qualitative data to search for logical patterns and trends.
Setting up research will become more efficient, with quick feedback based on previous choices provided in real time.
As tools become more flexible, we’ll also be able to interview more people than we could before, no longer bound by the physical limitations on how many people we can talk to.
Bottom line
Many of the experts point to the same reality: AI is becoming part of the daily rhythm of UX research, but it isn’t taking the wheel.
It handles the busywork that used to slow everything down, which leaves more time for the parts that actually require judgment, curiosity, and time with real people.
Synthetic users might help teams warm up an idea, but they’re still a far cry from the insights that come from talking to actual humans. If the next few years play out the way many of them expect, AI will simply make the work smoother.
Research prep will feel lighter, digging through past studies will be less of a hunt, and global conversations will get easier with better translation tools.
👉 The heart of the job won’t change, though.
The meaning, the nuance, the decisions that shape products still come from researchers thinking clearly about what they’re seeing. AI may speed things up, but the direction is still very much ours to set.
Read the full AI in UXR Revisit Report
If you want the distilled takeaways and bigger story, head to the full AI in UXR report. In the report, you will find out:
💡 Where UX experts use AI (and where they AVOID it)
🚨 The concern about stakeholders misusing AI as a replacement for real research
🤖 Use cases for AI-based (synthetic) participants
⚖️ Split predictions for the future (optimistic vs. pessimistic views)
📊 2023 vs. 2025 sentiment comparison
