

Roosa Säntti
15 Mar 2023

Innovative technology as powerful as artificial intelligence (AI) is transforming the world. In recent years, the AI maturity of organizations has increased and the technology has grown in popularity, for obvious reasons. AI can help automate repetitive tasks, make predictions based on data and even make decisions for you. AI is at the core of more personalized services, smarter products and more efficient processes. However, as with any innovative technology, AI raises concerns about equity. This also applies to ChatGPT, an AI chatbot built on top of OpenAI’s GPT-3 family of large language models.

The GPT-3 model underlying ChatGPT has 175 billion parameters, making it one of the largest language models ever created. It is a powerful AI tool that can be used for a variety of purposes and has been the most talked-about topic in AI in recent months. Anyone who has tried ChatGPT – asking it for help with a question or simply having a conversation with it – knows that this tool can truly be called smart. It enables human-like conversations: the bot not only answers your question but responds empathetically, and often gives you more information than you knew to ask for, because it is designed to predict what comes next.

Based on this experience, you could easily imagine that this tool can tackle any topic. However, this is not true. The language model can only produce responses based on the facts it “knows” – which comes down to the data it was trained on. Since certain groups, such as children, women and other marginalized groups, are underrepresented in that data, the same biases we see elsewhere remain in the system, affecting ChatGPT’s functionality and its capability to advance equity in the world.

AI is only as unbiased as its data

What does this mean? The biases and cultural effects of the world are reflected in how ChatGPT functions, since it is trained on data from that biased world. Could it be that ChatGPT and other generative AI solutions reinforce these biases – instead of fighting against them? And are they driving us towards an even more discriminatory world?

Large technology companies have started to invest in this technology and are planning to embed its functionalities into the products and services they offer to their customers. One example is Microsoft, which launched its new AI-powered Bing search engine, with a chatbot powered by ChatGPT technology, on February 7th. First experiences show that the chatbot is a very powerful tool for answering complex questions and performing advanced tasks. Still, it is important to note that no matter how you use the chatbot, you should never blindly rely on its answers. People usually rely on search engines to deliver accurate, objective and unbiased information, but this might not be the reality when the data and algorithms are biased. It is not only the training data that causes bias: the algorithms that drive ChatGPT are designed to predict based on history, instead of truly checking facts. With the fast adoption of tools like ChatGPT, we could quickly enter a new era of AI-supported access to knowledge. Using biased training data will only strengthen the inequalities that already exist in the world.

Transparency is the key

A core principle when developing algorithms and AI-based solutions is that we always need to know and understand the data used to train the AI, and to analyze and acknowledge the biases in that data. If we still decide to use biased data, we need to understand how it affects the end results given by the algorithms.
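In practice, “knowing the data” can start with something as simple as measuring how groups are represented in a training set before using it. The sketch below is a minimal, hypothetical illustration of such an audit – the records, the `gender` attribute and the numbers are all invented for the example, not taken from any real corpus.

```python
from collections import Counter

def representation_report(records, attribute):
    """Return each value of a demographic attribute as a share of the dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy dataset: invented purely to illustrate the audit.
training_data = [
    {"text": "...", "gender": "male"},
    {"text": "...", "gender": "male"},
    {"text": "...", "gender": "male"},
    {"text": "...", "gender": "female"},
]

report = representation_report(training_data, "gender")
print(report)  # one group dominates this toy sample - a skew worth documenting
```

A real audit would of course cover many attributes and far larger datasets, but even this kind of simple share calculation makes a skew visible and documentable before training begins.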

However, good transparency and understanding of the biases in the data are not enough; the algorithms need to be understood as well. Transparency regarding algorithms means it must be clear how the model is built and which parameters are used to determine the results the model gives.

Classic examples of biased AI solutions are facial recognition systems, which are well documented to discriminate against Black people because the training data used to build them consists mostly of white faces. Another notable example of gender bias is that a translation program will often render the word “nurse” with a female-gendered word and “doctor” with a male-gendered word – again because of the data the AI was trained on.
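The nurse/doctor effect can be illustrated with word vectors: models represent words as points in space, and biased training data places “nurse” closer to female-associated words. The toy example below uses tiny, invented 2-dimensional vectors that exaggerate this skew – real models learn vectors with hundreds of dimensions from billions of words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented toy "embeddings" that mimic a gender skew learned from biased text.
toy_embeddings = {
    "he":     [1.0, 0.1],
    "she":    [0.1, 1.0],
    "doctor": [0.9, 0.3],
    "nurse":  [0.2, 0.95],
}

for word in ("doctor", "nurse"):
    sim_he = cosine(toy_embeddings[word], toy_embeddings["he"])
    sim_she = cosine(toy_embeddings[word], toy_embeddings["she"])
    leaning = "he" if sim_he > sim_she else "she"
    print(f"{word!r} sits closer to {leaning!r} in this toy space")
```

When a translation system has to pick a gendered word, these learned distances are exactly what tips it towards the stereotyped choice.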

Lack of women in the field

To make sure we are building sustainable and equal AI systems, we should truly focus on having diverse teams build these solutions. Unfortunately, there is still a lack of women in the field of IT and technology. Data and AI is a very male-dominated domain: according to some sources, almost 80% of AI developers are male. As long as AI and tech remain male-dominated, the solutions being built are also designed from a male perspective. With that in mind, it is difficult to expect AI tools to become more inclusive.

Four ways to lead the way

There are several ways to promote female representation in the tech and AI field:

  • Hire and promote with diversity in mind

To increase the number of female workers in tech companies, we need to embed this thinking into every hiring and promotion decision we make. We need to be aware of the unconscious biases that make us look for male-typical characteristics and behavior in the people we consider suitable or high-performing in tech roles. We also need to recognize the many other kinds of characteristics that can make a person well suited to the field, and to constantly re-evaluate the selection criteria we use to hire and promote.

  • Set targets for female representation – overall and in leadership positions

It is too common to think that setting female quotas for new hires and promotions would create inequality by forcing female employees into positions for which they are not suitable, since the population to select from is smaller than the pool of male candidates. My view is that there is still a lot to do to increase the female ratio in tech and AI, so this issue needs a push. If no official targets are set, this positive change will not gain enough momentum and will never happen. In the end, a more equal company culture and tech landscape is a win-win for everyone.

  • We need role models

As part of the challenge is that there are not enough educated and skilled women available to work in tech, we need to address the issue at its root. One way to do this is to support and promote role models – to make successful women and minority groups in tech visible. This way, younger generations can imagine themselves working in the tech field and can see that it is possible to build that career and even get all the way to the top.

  • Educate developers in AI ethics

When developing AI solutions for our clients, we need to make sure that our developers are familiar with AI ethics guidelines and with any other regulations regarding discrimination and equality. As an example, Capgemini has set seven principles of AI ethics, which are a good place to start. Everyone who has a touchpoint with AI should at least be aware of them.

In conclusion, ChatGPT and other AI tools have the potential to revolutionize communication and decision-making. However, it is necessary to understand AI’s equity implications and address them. By ensuring that AI is trained on diverse and representative datasets, and by being transparent about potential biases, we can help ensure that AI is used in a fair and equitable manner. Additionally, we need to make sure that the teams developing AI solutions are diverse; to increase the female ratio in tech and business leadership roles, role models and well-thought-out recruitment criteria are among the most important topics right now.


Roosa Säntti,

Head of Insights & Data Finland
Roosa Säntti heads the Insights & Data practice in Finland and is an active member of Capgemini’s global I&D Innovation Network. Roosa is a business builder at heart and believes that with data, we can truly drive businesses and society towards a more sustainable future. She is also a big supporter of diversity and sees it fueling innovation in her own teams too.