
ChatGPT in User Research – an exciting collaboration or a dangerous temptation?

Capgemini
19 Apr 2023

User research is an empathetic discipline that puts humans at the centre of the practice. Can tools that claim to replicate human behaviours, like ChatGPT, enhance or hinder the practice?

Our main purpose, as user researchers, is to be the voice of the user. We use a variety of research methods to collect, analyse and present data and insights about people. However, these methods can be time-consuming, especially when analysing large volumes of data.

ChatGPT is designed to generate human-like responses to user inputs by analysing and understanding the patterns and structures of human language.

To illustrate this, we asked ChatGPT to write an introduction to this article. The previous two paragraphs contain parts of its response.

In one of our UR community crits, we discussed ChatGPT and explored how user researchers are considering using it in their practices and their personal lives. It became apparent that there are two distinct schools of thought, which can be represented by Moore’s Technology Adoption Curve.

Our community was split into ‘The Early Market’ and ‘The Mainstream Market’. ‘Early Market’ researchers have investigated ChatGPT and are learning how to formulate the right questions to ask. ‘Mainstream Market’ researchers were more cautious. They wanted to understand the impact that it may have on user research and witness further evidence of its effectiveness before adopting the tool.

We have collated the good, the bad and the future of what the UR community thinks about ChatGPT.

The Good

Learning from our ‘Early Market’ researchers, we identified the top three areas where ChatGPT has the potential to add value to the research practice.

1. Ideating to reduce blind spots – how can we generate hypotheses for further research?

ChatGPT can add value as a “sounding board” to generate potential ideas and hypotheses for further research. This additional perspective can help to reduce blind spots in research data and add layers to recurring user insights, which can be particularly valuable when there is only one user researcher on a project.

2. As the UR Planner – how can we create templates for user research artefacts more efficiently?

ChatGPT can be used to create templates for a variety of research activities, including but not limited to research plans and presentations for playbacks. ChatGPT can also be a powerful assistant for admin tasks, such as recruitment outreach or spellchecking. All these applications can give space for a researcher to focus on the creative aspects of their role without being burdened by repetitive admin tasks.
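To make this concrete, the sketch below shows one way a researcher might prompt ChatGPT for a reusable research-plan skeleton through the OpenAI Python SDK. It is purely illustrative: the prompt wording, the model name and the section headings are our own assumptions rather than a recommended workflow, and the generated draft would still need to be reviewed and adapted by the researcher.

```python
# Illustrative sketch only: assumes the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY in the environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Draft a reusable template for a user research plan. "
    "Include sections for objectives, research questions, method, "
    "participant criteria, timeline, and playback format. "
    "Return it as a Markdown outline."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The draft is only a starting point for the researcher to review and adapt.
print(response.choices[0].message.content)
```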

3. Driving faster quantitative analysis – how can we benefit from fast data processing functionalities to scale up research?

ChatGPT has the power to process large volumes of quantitative data and to perform complex analyses such as sentiment analysis or text classification. It makes these pattern-identification methods less time-consuming and more accessible, allowing user researchers to augment their quantitative analysis. However, this application of ChatGPT was controversial: there were concerns that it is a riskier use of the tool, one that requires structured governance and strict privacy processes.
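As a minimal illustration of that idea, the hypothetical helper below asks ChatGPT to label the sentiment of open-text survey responses via the OpenAI Python SDK. The model name, prompt and labels are placeholders of ours, not a prescribed pipeline, and, as the concerns above suggest, the labels would still need human validation and careful handling of participant data before informing any findings.

```python
# Illustrative sketch only: a hypothetical helper that asks ChatGPT to classify
# the sentiment of open-text survey responses. Assumes the OpenAI Python SDK and
# an OPENAI_API_KEY in the environment; model, prompt and labels are placeholders.
from openai import OpenAI

client = OpenAI()

def label_sentiment(response_text: str) -> str:
    """Classify one survey response as positive, neutral or negative."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Classify the sentiment of the survey response as "
                           "exactly one word: positive, neutral or negative.",
            },
            {"role": "user", "content": response_text},
        ],
    )
    return completion.choices[0].message.content.strip().lower()

survey_responses = [
    "The new booking flow saved me so much time.",
    "I couldn't find the help section at all.",
]

# The labels still need human validation before they feed into any findings.
for text in survey_responses:
    print(label_sentiment(text), "-", text)
```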

The Bad

Whilst there is evidence of its benefits, our ‘Mainstream Market’ researchers outlined concerns about the tool’s impact on the validity of the data and the conclusions subsequently drawn, as well as on the role of the researcher.

1. A risk of bias – how can we ensure that the data received is reliable?

ChatGPT is only as good as the data that it learns from. Without quality assurance of its training data, there is a risk that its outputs may be biased. We believe that if researchers use ChatGPT as a starting point in their practice, its output must be layered, caveated and, most importantly, validated before it is distributed. The value that we bring to our clients rests on our skills as researchers being augmented, not replaced, by tools like ChatGPT.

2. Machines are not sentient; we are – should we be learning from them about human-centred design?

ChatGPT is just lines of code; it does not have the emotional capacity to understand humans and our complexities. It uses a library of data to produce outputs and is not capable of quantifying the deeply subjective human mind and its connections. Should we then be using it to support research analysis, which can sometimes be just as subjective? We believe it is crucial that the algorithm does not leave researchers too far removed from the most important factor – their users. There is a risk that the intent behind the experiences users share is lost in an analysis conducted solely by AI. This is where the superpower of researchers is critical: making sure that people’s lived experiences are treated with empathy and dignity.

3. The ‘threat’ of AI – what will be the impact on the value of user research?

It is still not uncommon for researchers to have to ‘sell’ their practice. By crediting ChatGPT’s potential benefits in research, do we run the risk of diminishing the perceived value of the researcher themselves? Could this collaboration with AI be misinterpreted as a replacement? These are questions that we will have to navigate as ChatGPT enters our sphere, to ensure that UR is respected and valued as a human-centred practice. AI by itself will not replace humans doing their jobs; humans using AI, however, will take over those jobs.

The Future

ChatGPT presents an exciting opportunity to explore how artificial intelligence can be used in human-centric disciplines such as user research. As user researchers, we understand the caution around the use of ChatGPT: much of our craft revolves around identifying the non-verbal cues that people demonstrate. There is no doubt that ChatGPT can be a valuable tool for generating insights from large datasets efficiently, but it cannot replace a user researcher and their skill in translating human behaviours into actionable, evidence-based hypotheses.

Without genuine collaboration between ChatGPT and the researcher, there is a risk that relying too heavily on the algorithm’s output will mislead the research, which could be dangerous for future decisions. We are excited to see how AI tools can transform our craft and improve the products and services that we create.

Yovani Umavassee

User Research Consultant (Digital Factories)
Yovani Umavassee is a User Researcher at Capgemini Invent, with a background in artificial intelligence and engineering.

Kesta Kemp

User Research Consultant (Digital Factories)
Kesta Kemp is a User Researcher at Capgemini Invent, with a background in anthropology and entrepreneurship.