What is this report about? 📚
In 2023, at the beginning of the still-ongoing AI frenzy, we asked expert UX researchers how they viewed the rise of AI in the UX Research industry. At that time, responses were a mix of curiosity, skepticism, and early experimentation.
Two years later, driven by rapid progress in AI adoption, we are revisiting this topic.
Our goal is to examine how stances on AI in UX Research have evolved in light of everyone’s deeper experiences with AI models and their enhanced capabilities.
We reconnected with 2023 participants, as well as some new faces, bringing in diverse perspectives.
Now, let’s see how their perspectives have changed as AI continues to make its mark on the industry.
The following experts shared their thoughts with us:
- Maria Rosala (Director of Research at Nielsen Norman Group)
- Kelly Jura (Director of UX / CX at Qwoted)
- Kevin Liang (UXR Manager & Instructor, UX Consultant)
- Debbie Levitt (Lead UX Researcher & Strategist @Cast & Crew, Founder of Delta CX)
- Eduard Kuric (Founder & CEO of UXtweak, UX and AI Researcher)
- Stéphanie Walter (UX Research, Design & Strategy Consultant @Maltem)
- Joel Barr (Senior UX Researcher & Service Designer)
- Tina Ličková (UX Research & Product Consultant, UXR Geeks Podcast Host)
- Ben Levin (Senior Director of Experience Design @DX-ROI)
During the interviews, we revisited our questions from 2023 to see how the sentiment towards AI has changed.
We also asked for their opinions on the current state of AI in UX research:
👉 How do you view the role of AI in UX research in 2025?
👉 How is AI involved in your UX research process?
👉 How do you think AI will change how we do UX research in the next 5 years?
The transformation in attitudes has been significant, with most experts indicating a shift from skepticism to a more nuanced, practical use of AI in UX research.
Where 2023 brought cautious experimentation and defensive skepticism, 2025 reveals a budding yet already sophisticated ecosystem of AI-augmented research practices.
💡 Pro Tip
If you would like to personally compare and contrast the replies to see how dramatically the conversation has shifted, read the 2023 report.
In this revised 2025 report, you can read about:
- The current state of AI in UX Research – How practitioners are using AI today, where it delivers real value, and what critical boundaries they’ve established.
- Where AI Fits in Research Workflows
- AI in UX Research between 2023 and 2025 – A comparison showing how sentiment and practice have evolved over two pivotal years of the AI boom.
- The AI-based users debate: 2023 vs. 2025
- Looking ahead – What UX experts predict about AI and its impact on UX Research and its future.
The Current State of AI in UX Research

Our takeaways can be summed up in the following three points:
- AI acts as an efficiency multiplier, not a replacement.
- It is most useful for speeding up analysis, knowledge management, and transcription.
- Researchers see AI as a “smart assistant” that requires human oversight.
Main insights

| Consensus (explicitly mentioned by) | Insight |
| --- | --- |
| 9/9 experts | When using AI in UXR, human oversight is non-negotiable due to bias, hallucinations, and wrong interpretation of data |
| 9/9 experts | AI is an augmentation tool, not a replacement |
| 8/9 experts | Quality validation is the researcher’s responsibility |
| 7/9 experts | Data analysis is the primary value area |
| 6/9 experts | Strategic planning remains in the human domain |
After analyzing perspectives from the top UX voices using AI in their work, a clear picture emerges:
👉 AI has moved from being an experimental curiosity to a practical tool.
The experts described their progression from initial skepticism to finding specific, valuable applications, while drawing clear boundaries around the work that must remain a human responsibility.
Here is a quote from Maria Rosala that articulates it nicely:
“I see AI’s role in 2025 as making research faster and easier to do, and easier for others to consume. More and more research tools are implementing AI in their platforms — and they’re doing it in increasingly smarter ways.”
AI as efficiency multiplier, not replacement
The most consistent theme across responses is that AI’s role in UX research is as an efficiency enhancer rather than a replacement for human judgment.
Kelly Jura captures this sentiment:
“Human oversight is still very much needed at every step in the process and should not be replaced by AI.”
This represents a mature understanding of AI’s current capabilities: researchers are using it strategically rather than wholesale.

As Joel Barr put it:
“So what’s the role? I use it like a Swiss army knife. Ideation, analysis, reporting, I always double-check for errors and for it to provide evidence that it’s not wrong. It’s become like my always-wants-to-please intern. So it makes a lot of what used to take boatloads of time, a helluva lot quicker.”
Where does AI deliver real value?

> Data processing and analysis
AI shows its strongest performance in helping researchers navigate large amounts of qualitative data. Maria Rosala describes a transformative use case:
“Instead of sifting through my notes to find out which participant talked about this, I can get an answer in seconds and be directed to the relevant place in the transcript.”
However, Kevin Liang describes how the reliability of AI used for analysis can be limited:
“I ran a 3-year long experiment with my cohorts, analyzing the same set of data where I would compare human research analysis vs. AI analysis. Overall, the themes identified by both were comparable, although AI still, on the whole, provided inaccurate summaries, and at times anchored onto pieces of data that were not representative of wider themes. It requires a human eye to double check and a lot of follow-up prompting.”
While AI can identify themes comparable to human analysis, it still produces inaccurate summaries and focuses on unrepresentative data points.
His recommendation? Conduct a careful first-pass analysis personally, then refine it with AI.
“If we applied human analysis first and then AI to help us refine themes, that is where the AI can be a real sidekick.”
> Knowledge management
The concept of AI as institutional memory shows promise. Kevin Liang envisions AI as:
“…housing and scanning research repositories/archives to look up previous research insights quickly and accurately,”
potentially connecting insights across team silos through meta-analysis capabilities.
> Documentation and transcription
Ben Levin and multiple other UX professionals identify transcription as one area showing genuine advancement:
“Near instantaneous transcription (and in some limited circumstances, summarization) of interviews can be an immensely powerful tool.”
This democratizes access to capabilities that were previously cost-prohibitive for many teams.

Research tools are adapting to AI too

The broader research tools ecosystem is evolving alongside practitioners’ growing sophistication in using AI. Platform developers are moving beyond basic automation to create genuinely useful features that solve real research problems.
Maria Rosala highlights how modern tools are implementing contextual AI in a helpful way:
“Dovetail has a contextual chat feature in Beta that allows you to query your study data with questions like, ‘Did someone talk about being neurodivergent?’ Instead of sifting through my notes to find out which participant talked about this, I can get an answer in seconds and be directed to the relevant place in the transcript.”
This shows a positive shift from generic AI features to research-specific applications. And research repository tools are following suit:
“Many research repository tools, like Marvin, offer global search features with AI, allowing stakeholders to type a query and have the tool generate a summary and point them to projects, reports, and data that might be helpful.”
The development approach is becoming more deliberate and research-focused.
Eduard Kuric describes UXtweak’s methodology:
“I oversee an innovations team at UXtweak that’s specifically focused on testing and implementing AI into the UX research platform, but in ways that are actually helpful. We’re being very deliberate about where AI can add value without compromising research quality.”
Tool quality is also becoming a differentiator.
Debbie Levitt emphasizes the importance of choosing capable AI:
“Analysis and synthesis…I think you can use good AI here for part of the work. I’m a Claude fan. I think it’s good at critical thinking and hallucinates very rarely.”
For interview transcripts and research prep, Ben Levin recommends Copilot and Gemini:
“In preparing for research, at least some LLMs (like Gemini) provide a more natural interface to background industry information that helps you get a ‘lay of the land’.
Near instantaneous transcription (and in some limited circumstances, summarization) of interviews can be an immensely powerful tool. If your data protection environment permits it, Copilot’s or Zoom’s ability to provide transcripts and topic summaries of lengthy interviews speeds along the process of reviewing and extracting useful information from them.”
Critical limitations and boundaries
⚠️ Planning remains a human territory
Despite AI’s assistance capabilities, experts emphasize that context-aware, logical, and strategic thinking and decision-making remain human responsibilities.
Debbie Levitt notes that AI-generated research plans lack depth:
“The questions aren’t always as deep or thoughtful. AI is creating questions based on what it was trained on, and I get the feeling it was trained on a lot of market research.”
⚠️ Human moderation is still important
At the time of our interviews, many contributors to this report stated explicitly or implied that core research activities that rely on human interaction, like moderation, should not be replaced by AI.
They require a human factor and oversight.
“I don’t use it to design studies, conduct research, analyze data unsupervised, or generate research outputs (like write-ups and recommendations).” – Maria Rosala
“Running the sessions… big no on AI from me.” – Debbie Levitt
“Human oversight is still very much needed at every step in the process” – Kelly Jura
“There still has to be some critical thinking element outside the AIML making decisions on behalf of humans for humans by humans.” – Joel Barr
Debbie Levitt draws a parallel to recruitment and explains the possible cost of overrelying on AI:
“I don’t want AI asking questions, running the session, being an interviewer, etc. You see so many posts on LinkedIn about people unhappy or angry that they had to do a job interview with an AI bot. Well… then do we want to send AI bots to do user interviews? I still say no on this one. Plus this is where our main qual data comes from. If it sucks, we’ve pretty much ruined our study, and we are unlikely to show the value of research, which we’re all under pressure to show.”
Additionally, current AI-moderated studies still work almost exclusively from transcripts, missing subtle cues in body language, voice, and facial expression.
This was an issue that Debbie Levitt identified and brought attention to back in her answers in 2023, where she mentioned:
“There’s always going to be a need for observational research, where we watch someone do something and we take subtle cues from their body language, their voice, and their face. It’s a longer time down the road for AI to read that accurately.”
⚠️ Validation and bias detection depend on human judgment
Kelly Jura emphasizes the continuing importance of
“having a human who can identify bias, press on inconsistencies, and bring perspective.”
This becomes particularly critical given AI’s documented tendency to overgeneralize.
Industry challenges and risks
Joel Barr provides a sobering perspective on broader industry trends:
“I’ve really struggled to find my niche as companies abandon UX in favor of having the little box talk to them instead about how brilliant their ideas are without ever having to ask any actual human user.”
This highlights a concerning trend where organizations might use AI as justification to skip user research entirely, creating what Barr describes as “an echo chamber for enterprises to ship whatever they can, slop or not.”
Emerging opportunities
> Global research capabilities
Ben Levin identifies simultaneous translation as a significant opportunity:
“The ability to conduct live (remote or in person) interviews with people speaking a different language opens up vast opportunities for UX researchers whose organizations are attempting to reach a global audience.”
> Operational efficiency
Tina Ličková observes AI’s evolution from analytical assistant to operational support, handling “parts of the processes like recruitment or follow-ups.”
This suggests expansion into time-consuming administrative tasks that don’t require human judgment.
Practical Recommendations
➡️ Implement AI strategically
Focus on high-impact applications where AI demonstrates clear value:
- data querying,
- transcription,
- repository search,
- synthesis assistance.
Avoid delegating responsibilities to AI in areas that require nuanced human judgment, like strategic planning and intricate person-to-person interaction.
➡️ Maintain quality standards
Debbie Levitt offers a crucial reminder:
“You don’t want to end up humiliated because you put something in the report that AI messed up. ‘AI messed that up,’ isn’t a great excuse when this is your work.”
This should go without saying, but make sure to validate any AI outputs that you work with.
➡️ Preserve human-centered principles
Eduard Kuric emphasizes that effective UX research involves “real researchers and non-synthetic users.” AI should enhance human capabilities while preserving researcher autonomy and skill development.
Bottom line

The UX research community’s relationship with AI has evolved significantly. Tina Ličková highlights this shift:
“The initial denial and seeing it as evil is getting a bit weaker, in my observations. More and more people are coming up with ways to use AI in ways it makes sense and makes their work more of a value.”
This maturation led to a more sophisticated understanding of AI’s role – neither dismissing it entirely nor accepting it uncritically.
Stéphanie Walter summarizes this balanced approach:
AI functions best as “a smart assistant” that requires human supervision.
Where AI Fits in Research Workflows

Researchers consistently use AI as a support tool. Tina Ličková describes it simply: AI works “from the start to the beginning, as an assistant.”
Kelly Jura reinforces this approach: “I use AI to support my abilities rather than as a crutch or to replace them.”
Clear boundaries emerge
What researchers don’t use AI for is as important as what they do.
Maria Rosala is explicit: “I don’t use it to design studies, conduct research, analyze data unsupervised, or generate research outputs.” The reasoning is practical – AI isn’t competent enough and the prompting required “just isn’t worth it.”
Debbie Levitt follows similar boundaries: “…I don’t use it at all for planning, protocol (questions), or running sessions.”
5 common use cases of AI in UX research
📋 Research preparation:
Ben Levin uses AI to get up to speed on unfamiliar industries:
“LLMs’ ability to gather and synthesize vast troves of information and organize it into a sensible briefing on a subject allows me to enter into qualitative and formative research with enough background knowledge.”
🧠 Data mining:
Kelly Jura uses AI to “mine data for interesting patterns before diving in myself.”
This creates a starting point for human analysis rather than replacing it.
🗣️ Quote extraction:
Debbie Levitt highlights a practical time-saver:
“I use Claude a bit for reporting, especially to find me relevant and exact quotes. Much easier than the old days of trying to find quotes in videos or transcripts!”
💬 Communication:
Stéphanie Walter finds AI helpful to “draft emails and save time in recruitment,” especially since “English is not my native language.”
📊 Analysis support:
Debbie Levitt uses AI for “some of my analysis and synthesis” because “critical thinking LLM robots should be good at breaking things down, finding patterns, noticing important outliers.”
Ethics and quality check
Tina Ličková raises an important consideration: “I am trying to be aware of ethics: I would rather not stuff it with user data as it is my responsibility to keep their data safe.”
Stéphanie Walter emphasizes the validation requirement: “It worked well when I analyzed the data first on my own, and then, wanted a quick summary. Because, I knew the data, so, I could still spot the mistakes.”
The human element remains central
Joel Barr captures the core principle:
“There still has to be some critical thinking element outside the AIML making decisions on behalf of humans for humans by humans.”
Maria Rosala emphasizes why this matters:
“The ability to think through a problem and a research project shouldn’t be understated. Stopping to think about what you don’t know, and how to design a study to figure it out, results in better-run studies and sharper analysis.”
Practical shift in sentiment: 2023 vs. 2025

Stéphanie Walter is blunt about what hasn’t changed: “Are the models less biased than in 2023? I don’t think so.”
The technology still has the same fundamental limitations it had in 2023: biased outputs, hallucinations, shallow analysis. What’s different is how the research community responds to those limitations.

Maria Rosala, Director of Research at NN/g, embodies this tension between pragmatism and caution.
While she anticipates AI will “ultimately benefit the UX research profession, not replace it” and become “another useful (and very powerful) tool,” her skepticism remains unchanged:
“I’m not sure my view of AI has changed drastically since 2023. I still hold a healthy amount of skepticism about it replacing the need for research or the role of UX researchers. I see it as a detriment to UX research only insofar as some companies see it as a suitable replacement for trained researchers.”
➡️ Others experienced more dramatic shifts.
Joel Barr went from calling AI “dangerous and potentially harmful” in 2023 to embracing it as his “Swiss army knife” by 2025: “I use it for ideation, analysis, reporting… It’s become like my always-wants-to-please intern.”
Whether maintaining cautious skepticism or embracing newfound utility, researchers aren’t pretending AI became safe or unbiased – they’re establishing clearer boundaries around when and how to use it.
Kelly Jura frames this evolution by comparing AI to familiar tool transitions:
“I think about all of the tools that have evolved over the years. Maybe you began your UX journey with Balsamiq, Omnigraffle, or Sketch before landing on Figma. Those who really excel in user experience can do their jobs well with pen and paper or really any tool at hand. What truly matters is having a solid understanding and foundation, using the tools to enhance, rather than replace, the essential skills needed to do the job well.”

AI has earned its place in research workflows not because the technology earned trust, but because researchers learned where it fits in their process and where it doesn’t.
Yet the concern Maria Rosala voices persists across the field: by 2025, anxiety about stakeholders not understanding AI’s limitations has only intensified.
That knowledge gap is creating exactly the problems researchers feared in 2023.
Persistent risk of AI misuse in UX
The problem of AI misuse in UX was rising already back in 2023. As Stéphanie Walter pointed out then:
“I’ve seen people who want to create personas with ChatGPT and replace user research with ‘asking the AI tool’. This is not user research, and will lead to poor product decisions.”
By 2025, we’re not just seeing isolated cases anymore; it’s becoming a trend:
“The potential for UX theater, poor product decisions, because we want to replace actual users with fake AI ones (aka, skip research) is more and more real.” – Stéphanie Walter
The core fear isn’t really about AI itself – it’s about stakeholders using AI as an excuse to skip real user research entirely while thinking they’re still being “research-informed.”
As Debbie Levitt puts it: “If people rely on [AI] too much or drop their own skills or critical thinking, then what have we done? What are we doing? Does anybody still need to hire us?”
The issues that concern UX professionals the most are:
- Stakeholders replacing real research with AI-generated fake research
- Companies abandoning UX research entirely
- Stakeholders not being able to tell good research from bad AI output
- The speed/quality trade-off
By 2025, the community has been more vocal about addressing these issues, especially the growing trend of replacing human-driven research with AI-generated insights.
Maria Rosala highlights this problem:
“AI generates a legitimate-looking output to non-specialists, and it does it quickly! Some stakeholders are familiar with what we produce (research reports, personas, etc.), but are less familiar with how we get there. The same problem is affecting design. AI can design interfaces. It can generate wireframes and prototypes. But ask any good designer, and they will tell you that the quality is often laughable or the designs are super generic.”
Debbie Levitt emphasizes the importance of not letting AI take over critical thinking and research integrity:
“They don’t need us to sit around and prompt AI. They’ll ask someone cheaper to do that. I think we can use what improves our process, makes us more efficient, and makes us more accurate. Anything slowing us down or adding errors should be avoided, even if it ‘seems cool.’ We must focus on high-quality work so we deliver lots of value to users and customers. We meet or exceed their needs and expectations. They see a good ROI from their investments in us.”
The faster AI gets at producing research-shaped outputs, the harder it becomes to explain why the slower, more expensive, human version matters.
The researchers who remain essential won’t be the ones with the best AI tools – they’ll be the ones who can prove why human judgment still matters when the stakes are high.
How to demonstrate research value in an AI-driven environment?
🔍 Make the process visible
Maria Rosala‘s concern that “stakeholders are familiar with what we produce but are less familiar with how we get there” points to the solution: show your work.
Document your methodology. Explain why AI-generated personas miss what a 30-minute conversation with a real user reveals.
Make the invisible visible.
🎯 Prove value through quality, not speed
Debbie Levitt‘s criterion is clear:
“We must focus on high-quality work so we deliver lots of value to users and customers. We meet or exceed their needs and expectations. They see a good ROI from their investments in us.”
When stakeholders see AI produce fast but cookie-cutter results, while researchers produce eye-opening insights that move the business forward, even if at a more methodical pace, the choice becomes obvious.
🤖 Position yourself as the AI expert who knows its limits
Kelly Jura‘s advice – “Use AI to help you do your job, but understand enough to question all outcomes and step in whenever needed” isn’t just about personal practice.
It’s about positioning. Be the person in the room who can explain exactly where AI helps and where it fails. Stakeholders need that expertise.
🧩 Use AI to do more research, not less
Ben Levin and Stéphanie Walter both point to the real opportunity: AI should free up time for more user contact, not replace it.
💡 Pro Tip
Use AI for transcription, email drafts, and data prep so you can run more sessions, talk to more users, and go deeper in analysis.
Shift in perceived purpose of AI
In our 2023 survey, when UX researchers talked about AI, they focused on offloading grunt work: “repetitive tasks like templates and emails.” By 2025, the use cases broadened.
Many researchers now see AI as a strategic research assistant, using it for more than just repetitive tasks.
Kelly Jura now sees it as a great ideation tool: “I have grown to use AI more as an ideation tool. AI has become increasingly valuable in planning and brainstorming, especially for generating and refining questions or creating scenarios.”
Joel Barr, whom we’ve mentioned earlier, perfectly captures this new positioning:
“I use it like a Swiss army knife. Ideation, analysis, reporting, I always double-check for errors and for it to provide evidence that it’s not wrong. It’s become my always-wants-to-please intern.”
Tina Ličková‘s framing would not have been welcomed two years ago: “AI is a great assistant for creating hypotheses, script and even guiding and coaching on the right approach to do so.”
In 2023, letting AI anywhere near research methodology felt risky. By 2025, some researchers are using it as a methodology sounding board.
The essential distinction is recognizing AI as exactly that kind of tool, rather than mistakenly projecting onto it the ability to make complex decisions.
👉 Yet not everyone has made the leap to using AI for ideation.
For some research professionals, AI remains firmly in its original lane of data analysis. For Stéphanie Walter, not much has changed: “I haven’t seen many groundbreaking new things here… The models got better at helping analyze the data.”
Whether expanding AI’s role or keeping it contained, researchers emphasize a clear principle: AI augments their work, it doesn’t replace it.
Kelly Jura makes this boundary explicit: “I approach AI output as a starting point or supplemental perspective, but never as a copy/paste final product.”
A clear workflow pattern is emerging: human researchers lead the work, AI assists with analysis, and humans verify the output before acting on it.

The AI-based users debate: 2023 vs. 2025

Two years transformed the conversation on AI-based users from wholesale rejection to cautious, conditional acceptance with clear boundaries. While 2023 responses focused on theoretical concerns, 2025 brought empirical evidence.
As Maria Rosala noted after NN/g’s investigation: “Sure, they didn’t talk like real people, were prone to sycophancy, and sometimes talked too much, but that’s missing the point.”
The core insight remained unchanged: empathy cannot be manufactured. Her question:
“Can I really feel empathy for someone who is made up?”
perfectly captures why AI users remain fundamentally limited despite technical improvements.
Even while playing the role of a persona, there is no underlying person to learn more about.
There’s just a simulation that can go many different ways while parroting information that could have been learned in a myriad other contexts.
Where AI users might work
Two specific use cases gained acceptance by 2025:
✅ Proto-personas, ideation and hypothesis generation
This emerged as the primary legitimate application.
“It’s great for collecting hypotheses, creating proto-personas and getting the first user data.” – Tina Ličková
“This is interesting and can help us think through experiments and form hypotheses. It could be a starting point, much like asking my coworkers, “Based on your knowledge of our users, how likely are they…” to pressure-test ideas.” – Kelly Jura
However, this, again, comes with non-negotiables for all the researchers we talked to:
“It still needs to be verified with a real user – no negotiations there.” – Tina Ličková
✅ Pre-research augmentation
Some UX researchers are using AI to generate quick feedback before actual testing with users; however, they still know where to draw the line.
Eduard Kuric articulated the boundary:
“There is no AI model in existence that can capture the variability of human experiences, thoughts and perspectives. There are some cases where I could see use for synthetic responses, though. The biggest one is the augmentation of UX research, generating quick feedback before involving actual participants. But the involvement of actual users in UX research is not optional.”
This represents the 2025 consensus: AI users can serve as research preparation, but definitely not research replacement.
Why researchers remain skeptical
Even though some researchers identify limited, appropriate applications of AI users, most of them are still skeptical and don’t believe AI can ever replace real human testing.
⛔️ AI’s generalization problem
UX professionals highlight AI’s tendency to generalize and the consequences this kind of testing can have on actual products.
“AI generated personas are not users. At best, they are a high level archetype, that will lead you to take the same design decisions as your competitors.” – Stéphanie Walter
“AI can process large amounts of information and generalize; however, this could lead to generic insights and generic products that don’t connect with users.” – Kelly Jura
⛔️ The innovation ceiling
The most significant 2025 revelation wasn’t about AI capabilities – it was about fundamental limitations.
Debbie Levitt‘s experience trying to generate novel content revealed a critical constraint:
“It couldn’t. It realized everything it came up with was a mix of other books, TV shows, and movies that already existed. We must still look to humans for fresh ideas and real innovations. If you want your designs to look like everything else, have AI do them. It won’t invent anything new. It will remix all the designs on which it was trained, and give you some sort of middle ground among those.”
The very unpredictability that makes human users challenging to research is what drives innovation. And synthetic users can’t replicate this.
⛔️ The bias amplification risk
Multiple experts raised concerns about AI perpetuating and amplifying existing biases.
Debbie Levitt provided the most comprehensive analysis, pointing out that AI systems inherit “problems of racism, sexism, ableism” and are built on “old-fashioned ways of looking at our users” rooted in outdated marketing demographics.
Kelly Jura reinforced this concern:
“If AI users are built on biased data, there is a risk of further excluding underrepresented groups when designing products.”
The growing risk of low UX maturity
With everything mentioned above, AI users continue to find adoption. As Maria Rosala pointed out: “They’re often used by teams with low UX maturity who don’t have budget, resources, or internal support to do research.”
This explains the continued market interest. AI users fill a gap in organizations that can’t or won’t invest in real research.
Teams with low UX maturity are thus especially vulnerable to making misguided decisions by over-relying on AI feedback.
Looking ahead: The next 5 years

When looking at the experts’ predictions, we identified 2 distinct perspectives on AI’s trajectory in UX research, and the disagreement itself tells a story.
👍 Optimistic view: Researchers as architects, AI as builders
Maria Rosala envisions a strategic shift: “I think we’ll see researchers doing more strategic, complex research and spending less time on simple, tactical research studies.”
She predicts AI will handle “unmoderated tests and qualitative surveys” with “pretty good results.”
This architectural model emerges clearly: “In the future, I think we’ll see researchers become more like architects in the research process, and AI will be the builder (doing the time-intensive, heavy lifting).”
Stéphanie Walter shares this optimistic view: “I hope we will be able to spend more quality time with end users. So, for me, it won’t replace researchers, but will help us focus on the actual research part of the job.”
👎 Pessimistic view: Shrinking teams and job threats
Ben Levin brings a historical perspective: “My own experience stretches back only 25 years or so, but 25 human years seems roughly equivalent to 5 AI years.”
His concern is practical:
“We’ll be asked to do more with less support from junior people coming up through the ranks, and that worries me.”
Kevin Liang emphasizes maintaining research integrity:
“AI data is still secondary data, desk research, whatever you want to call it, and should be treated as such. And there is no replacement for primary scientific research.”
Debbie Levitt offers the starkest prediction: “I think AI will reduce or eliminate most of our jobs in the next 3-5 years.” She acknowledges current AI limitations but warns that “the tech is moving so fast.”
“My new book is ‘Life After Tech’ because I believe we will need to consider what our work or careers are when we’re done with tech, or tech is done with us.”

Key transformations expected
🔄 Method evolution
Maria Rosala notes methods are already changing: “Traditional asynchronous research methods, like surveys and diary studies, are becoming more synchronous, as AI offers the ability to ask in-the-moment follow-up questions.”
🌐 The rise of democratization
Tina Ličková predicts, “SME companies will hugely democratize and UXR becomes even more a corporate role.”
Maria Rosala sees AI coaching non-researchers: “AI can coach a PWDR through designing a simple unmoderated test.”
📌 Note: PWDR (People Who Do Research) refers to non-specialists who carry out research activities alongside their main role.
📊 Scale capabilities
Eduard Kuric anticipates that AI “will revolutionize our ability to collect and analyze vast amounts of qualitative data” and let researchers “interview more people than we could before, due to the physical limitations.”
🛠️ Repository intelligence
Maria Rosala envisions AI becoming “more involved in the curation of research repositories. It can tag data from many different sources as it enters the repository, organize it, and highlight interesting new patterns.”
The bottom line
The predictions reveal a fundamental tension:
👉 Tactical research becomes increasingly automated while strategic research becomes more valuable.
Kelly Jura captures this balance:
“Skilled user experience researchers will hopefully hold greater significance. Tools evolve over time, and those who excel in their careers possess both a strong foundational knowledge and the adaptability to adjust.”
The real challenge ahead?
As Ben Levin notes, researchers will be “pushing back against Synthetic Users, or the specter of relying on an LLM model to find usability problems or suggest future features.”
At UXtweak, we believe the winners will be researchers who can articulate the unique value of human insight while leveraging AI for operational efficiency. 🐝
The next five years won’t just transform how we do research – they’ll force us to better understand why human-centered research matters.
Closing thoughts
The transformation from 2023 to 2025 reveals a divided field. UX researchers have moved from asking “Will AI replace us?” to “How do we use AI strategically?”
However, experts disagree sharply on outcomes: some see the researcher’s role being elevated, while others predict shrinking teams or outright job elimination within 3-5 years.
But all nine experts agree on boundaries: AI excels at data processing and analysis support, while humans remain essential for strategic thinking and participant interaction.
Want to see the full evolution? Read our 2023 baseline report to understand how the conversation has shifted.
Interested in more research on the intersection of AI and UX?
Check out the research papers published by our UXtweak Research team:
📚 Can Talking Avatars Make Surveys More Engaging? (arXiv preprint)
We tested whether photorealistic embodied conversational agents can improve the quality of online survey responses. With 80 participants and 2,200+ responses, the study shows both the promise and the challenges of making surveys feel more like real conversations.
We examine how AI-generated follow-up questions affect the quality of usability testing insights. In an experiment, we compared feedback across four conditions: no follow-up questions, static questions prepared by researchers, real-time GPT-4 questions, and a blend of static and AI-generated questions.
📚 Can Large Language Models like ChatGPT simulate card sorting? (arXiv preprint)
An in-depth study in which we experimented with data from 28 card sorting studies involving 1,399 participants from real projects across various industries. We tasked LLMs with simulating these studies using various prompting methods and then compared the results.
📚 AI-Assisted Usability Testing: Reality Or Fiction? (Smashing Magazine)
Discussion on the significance of context in the creation of relevant follow-up questions for usability testing, how an AI tasked with interactive follow-up should be validated, and the potential — along with the risks — of AI interaction in usability testing.
📚 Automated Detection of Usability Issues (arXiv preprint)
In a systematic review of 155 publications, we provide a comprehensive overview of the current state of the art for automated usability issue detection. We analyzed trends, paradigms, and the technical contexts in which they are applied.
See more on our Research by UXtweak section.
About this report
This report was written by Daria Krasovskaya with contributions by Tadeas Adamjak and reviewed by Peter Demcak. The answers of the experts were analyzed to identify key topics and patterns.
The findings reflect expert views and our interpretations, offering a focused snapshot rather than generalizable conclusions. Despite the small sample size, these qualitative insights from industry leaders provide valuable context for understanding AI’s evolving role in UX research.