
ChatGPT in business: how AI is changing the game

Conor McGovern
27 Mar 2023

ChatGPT’s shockingly good conversational ability is headed for businesses, with the arrival of GPT-4 set to further propel the coming revolution.

OpenAI’s chatbot has taken the internet by storm, with version 3.5 reaching a million users in five days and more than 100 million in a month[1]. Now ChatGPT’s shockingly good conversational ability is headed for businesses, with the arrival of GPT-4 and Google’s BARD set to further propel the coming revolution.

ChatGPT, which offers human-like responses to tasks such as answering questions, producing text and summarising long documents, is the most famous of a new breed of “generative AI” tools. Others create realistic voice recordings, such as Resemble AI, or generate unique images from text descriptions, like DALL-E and Midjourney.

The pace of change accelerated again in March 2023 when OpenAI released GPT-4, the latest version of its AI language model, Microsoft announced Copilot, an AI assistant for Office 365, and Google promised a similar tool for its Workspace apps and released BARD, its experimental, conversational AI chat service. AI tools are shifting from the IT department to the desktop in record time and will profoundly transform businesses in far more visible ways than older technologies have. Generative AI is not without its challenges and limitations, but smart enterprises will be the ones that put it to work most quickly and effectively.

How does ChatGPT work?

ChatGPT’s responses have fooled people into thinking that it has achieved sentience. But what’s really going on is much simpler, and the clues are in the name. GPT stands for Generative Pre-trained Transformer and, put simply, it is a large language model (LLM) that generates the next word in a sequence based on probabilities learned from the data on which it has been pre-trained.
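Put concretely, next-word generation can be pictured as sampling from a probability distribution over candidate words. The toy probabilities below are invented purely for illustration; a real LLM derives them from billions of learned parameters:

```python
import random

# Toy illustration: given the text so far, a language model assigns a
# probability to each candidate next word, then samples from that
# distribution. These numbers are made up for the example.
next_word_probs = {
    "The cat sat on the": {"mat": 0.55, "sofa": 0.25, "floor": 0.15, "moon": 0.05},
}

def generate_next(context, probs=next_word_probs):
    """Sample the next word in proportion to its assigned probability."""
    candidates = probs[context]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print(generate_next("The cat sat on the"))  # most often "mat"
```

Chaining this step – append the sampled word to the context and repeat – is, in essence, how a model like GPT produces whole sentences.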

In ChatGPT’s case, it immediately dazzled users thanks to its underlying language model, GPT-3. GPT-3 is one of the largest language models ever created. The entirety of Wikipedia accounts for just 3 percent of its training data. Newly arrived GPT-4 has even more parameters and training data, though OpenAI has not revealed details.

LLMs can handle massive data sets and learn from new information because they use a type of AI called a transformer model, first developed in 2017. A transformer excels at processing sequential data, such as text or speech. It can identify and prioritise a task’s most important points and process multiple elements of a query at the same time. Unlike earlier models, such as recurrent neural networks, which had to be trained from scratch for every business use case, a transformer can be pre-trained – saving time and computational effort.

Though they are designed to work with big data, they are remarkably good at producing usable insights from smaller data sets, too. According to OpenAI, just 100 examples of domain-specific data, such as contract law, can substantially improve the accuracy and relevance of a transformer model’s output.
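As a rough illustration of what those domain-specific examples look like in practice, a small set of prompt/completion pairs might be prepared as a JSONL file – the format OpenAI’s fine-tuning endpoints accepted at the time. The file name and the contract-law examples below are hypothetical:

```python
import json

# Hypothetical domain-specific training examples (contract law), in the
# one-JSON-object-per-line (JSONL) format used for fine-tuning jobs.
examples = [
    {"prompt": "What is a force majeure clause?",
     "completion": " A clause excusing performance when extraordinary events beyond the parties' control occur."},
    {"prompt": "Define 'indemnity' in a commercial contract.",
     "completion": " A contractual promise by one party to compensate the other for specified losses."},
]

def write_jsonl(records, path):
    """Write one JSON object per line, as fine-tuning tools expect."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(examples, "contract_law_examples.jsonl")
```

In a real project this file would hold on the order of 100 such pairs and be uploaded to the fine-tuning service; the point is simply that the required input is modest and easy to assemble.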

All that processing comes at a cost: generative AI depends on high-performance cloud computing. ChatGPT runs on Microsoft’s Azure cloud platform, and Microsoft is a significant investor in OpenAI.

What capabilities does it offer?

The flexibility of LLMs means they have many potential applications, from summarising text to sentiment analysis, data extraction, compiling research and even writing computer code. These kinds of automation can directly improve productivity, enabling employees to focus on strategic work, as well as driving cost reduction.

For example, enterprise search – the search tools used to access data within an organisation – becomes more efficient when employees can write queries in natural language, which makes ChatGPT ideal. The tool’s ability to understand context and learn from a user’s search history makes it even easier to surface relevant results.
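To give a flavour, a natural-language search question could be packaged as a chat-style request. The sketch below follows OpenAI’s chat-completions message format, but the model name, system prompt and example queries are assumptions for illustration only:

```python
# Sketch: phrasing an enterprise-search question as a chat request.
# The "system" message steers the assistant; prior queries supply the
# conversational context that makes follow-up questions work.
def build_search_request(question, history=()):
    messages = [{"role": "system",
                 "content": "You answer questions using the company's internal documents."}]
    for past in history:
        messages.append({"role": "user", "content": past})
    messages.append({"role": "user", "content": question})
    return {"model": "gpt-3.5-turbo", "messages": messages}

request = build_search_request(
    "Which clients renewed their contracts last quarter?",
    history=["Show me last quarter's contract renewals."],
)
print(len(request["messages"]))  # prints 3
```

The payload would then be sent to the model’s API; the history list is what lets the assistant interpret an ambiguous follow-up like “and which of those are in retail?” in context.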

By embedding these tools into productivity suites, as Microsoft and Google are doing, employees gain an AI assistant that can summarise meetings, suggest email replies or produce first drafts of documents.

Human resources (HR) tasks such as onboarding, training and performance management, as well as employee queries and complaints, could also be automated. The ChatGPT model in this case would be trained on data from common employee complaints and enquiries, such as questions about benefits, holiday requests and even payroll issues. This would free up HR professionals for more complex matters.

In financial services, AI can assist with tasks such as compliance, credit risk management, investment research, thematic baskets for trading and processing legal documents. This means workers can get more done in the same amount of time and focus their expertise on more demanding tasks.

Externally, in place of human customer service, ChatGPT can advise and support customers, while collecting data that can be used to train human customer-service agents. Again, ChatGPT’s natural language capabilities mean it can understand a customer enquiry quickly and deliver an appropriate response without human assistance. This improves the customer experience and increases loyalty and retention.

By automating processes, businesses will be able to handle far greater volumes of queries than human employees can, without the cost of hiring additional staff. This is thanks to performance attributes of GPT models that go beyond those of older alternatives: they are time- and memory-efficient, can be easily integrated into a low-code environment, and are cost-effective, deployed on a pay-as-you-go model.

Finally, GPT can be deployed within an Azure subscription, providing easy Active Directory integration, private endpoints for security, service-level agreements (SLAs) and built-in responsible AI – a Microsoft standard intended to ensure AI is safe, trustworthy and ethical.

Challenges and limitations

Every technology comes with challenges and limitations, and ChatGPT is no exception. It lacks the critical thinking, creativity and strategic decision-making skills of a human worker. As a result, it is best used as a supplementary tool to automate simple or moderately difficult tasks, freeing up professionals for more complex and “human” tasks.

ChatGPT’s accuracy is also limited by the quality of its training data, which can include overly detailed explanations that the AI sometimes emulates, producing verbose responses. Detailed answers can be useful, but they become tedious in critical applications that require direct answers.

ChatGPT might also lack the latest information, because its training data currently stops at September 2021 – a limitation that Google’s BARD is designed to address.

Finally, ChatGPT tends to assume the user’s intention when it receives an ambiguous prompt, instead of asking clarifying questions.

While the model’s parameters can be fine-tuned or trained on specific datasets to minimise negative behaviours, it is unlikely that ChatGPT will ever be completely free from limitations because it reflects the biases and limitations of its creators and the data used to train it.

Regulations to mitigate risk

Ethics and regulations are among the biggest concerns with GPT-3 and other AI solutions. Specifically, there are five identified risk categories: ethical guidelines, control of data usage, transparency, monitoring performance and legal issues.

Many AI companies have already acknowledged the need for bias mitigation and ensuring fairness. For example, an HR department that uses AI to screen resumés would need to ensure that the system was not trained on data that included historic biases, such as a tendency to hire fewer women. And, as clients import their own data into AI models, they will have to consider the privacy of employees and customers.

To regulate this, guidelines should be produced to:

  • clarify limits on data collection and storage
  • require explicit user consent when data could be used in a way that defies customer expectation or regulation
  • ensure AI companies are transparent about how their systems work, how they’re trained and how imported data is used.

This will increase trust, as well as potentially reduce the risk of bias.

As these tools are deployed, regular evaluation will be necessary to ensure they meet performance, ethical and legal standards. This will help companies identify where errors appear and manage their liability for them.

Indeed, all these concerns are being considered by legal bodies worldwide. Within the UK, the Information Commissioner’s Office (ICO) has published its Guidance on AI and Data Protection, which underlines key data-protection questions. And, since 2016, the EU has been developing a range of legal instruments that shape how users interact with systems such as GPT-3.

Future developments

Initial reactions to GPT-4 suggest that it is better at mimicking human behaviour and speech than GPT-3, enabling it to infer human intentions more accurately. It also better understands context and is less prone to “hallucinations” – the mistakes that arise when AI makes up its own facts. OpenAI will continue to release updated models in the coming years.

In addition, OpenAI is developing a new, improved version of the DALL-E model, which can create realistic images and art from a natural language description. Sam Altman, co-founder of OpenAI, predicts that multimodal models like DALL-E, which can analyse multiple types of data input, including images, video and audio, will soon surpass text models in speech generation. He expects many multimodal models to be developed for specific domains such as education, law and medicine.

Microsoft recently began rolling out an LLM-powered version of its Bing search engine to challenge Google and other competitors in conversational search. Start-ups such as Character.AI, Metaphor and Perplexity have already demonstrated the potential of conversational search, which promises to improve efficiency for customers.

With the release of BARD, we can expect significant developments from Google. Its latest AI technologies – LaMDA, PaLM, Imagen and MusicLM – build on its strengths in language, image, video and audio applications. Opening its Generative Language API to enterprises, with a range of models over time, will accelerate the creation of innovative AI applications.

Finally, the rise of large language model operations (LLMOps) is expected due to the increasing demand for tools that help with foundational model fine-tuning, no-code LLM deployment, GPU access and optimisation, prompt experimentation, prompt chaining, and data synthesis and augmentation.

The transformative potential of ChatGPT and similar AI models is vast. Leading businesses are already exploring how to use them and we are exploring innovative use cases across sectors for our clients. There are precautions that must be taken and regulatory concerns to be addressed, but that is not a reason to hold back. The potential productivity and efficiency benefits are too great to ignore.


Conor McGovern

VP Analytics and Artificial Intelligence (A&AI) Capgemini Invent UK
Conor McGovern leads the Analytics and Artificial Intelligence (A&AI) practice in Capgemini Invent UK and Invent’s global Enterprise Data & Analytics practice. Conor and his team use data, analytics and AI to tackle the toughest business challenges for clients. They help drive strategic, real-time decision-making, eliminate repetitive tasks and enable new levels of efficiency.