{"id":1077715,"date":"2023-03-30T15:04:00","date_gmt":"2023-03-30T13:04:00","guid":{"rendered":"https:\/\/www.capgemini.com\/?p=913268"},"modified":"2025-03-24T06:37:37","modified_gmt":"2025-03-24T06:37:37","slug":"chatgpt-and-i-have-trust-issues","status":"publish","type":"post","link":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","title":{"rendered":"ChatGPT and I have trust issues"},"content":{"rendered":"\n<header class=\"wp-block-cg-blocks-hero-blogs header-hero-blogs\"><div class=\"container\"><div class=\"hero-blogs\"><div class=\"hero-blogs-content-wrapper\"><div class=\"row\"><div class=\"col-12\"><div class=\"header-title\"><h1>ChatGPT and I have trust issues<\/h1><\/div><\/div><\/div><\/div><div class=\"hero-blogs-bottom\"><div class=\"header-author\"><div class=\"author-img\"><img decoding=\"async\" src=\"https:\/\/www.capgemini.com\/wp-content\/uploads\/2023\/06\/Tijana.webp?w=200&amp;quality=10\" alt=\"\" loading=\"lazy\"\/><\/div><div class=\"author-name-date\"><h5 class=\"author-name\">Tijana Nikolic<\/h5><h5 class=\"blog-date\">30 March 2023<\/h5><\/div><\/div><div class=\"brand-image\"> <\/div><\/div><\/div><\/div><\/header>\n\n\n\n<section class=\"wp-block-cg-blocks-group undefined section section--article-content\"><div class=\"article-main-content\"><div class=\"container\"><div class=\"row\"><div class=\"col-12 col-md-1\"><\/div><div class=\"col-12 col-md-11 col-lg-10\"><div class=\"article-text article-quote-text\">\n<p id=\"04ec\"><strong><em>Disclaimer<\/em><\/strong><em>: This blog was NOT written by ChatGPT, but by a group of human data scientists:<\/em>&nbsp;<a href=\"https:\/\/medium.com\/@shahryar.masoumi1370?source=user_profile-------------------------------------\" target=\"_blank\" rel=\"noreferrer noopener\">Shahryar Masoumi<\/a>,&nbsp;<a href=\"https:\/\/medium.com\/@zirkzee?source=user_profile-------------------------------------\" target=\"_blank\" rel=\"noreferrer noopener\">Wouter Zirkzee<\/a>,&nbsp;<a href=\"https:\/\/medium.com\/@almira.pillay?source=user_profile-------------------------------------\" target=\"_blank\" rel=\"noreferrer noopener\">Almira Pillay<\/a>,&nbsp;<a href=\"https:\/\/medium.com\/@sven_hendrikx?source=user_profile-------------------------------------\" target=\"_blank\" rel=\"noreferrer noopener\">Sven Hendrikx<\/a>&nbsp;and <a href=\"https:\/\/www.linkedin.com\/in\/tijana-nikoli%C4%87-99b059110\/\" target=\"_blank\" rel=\"noreferrer noopener\">myself<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.capgemini.com\/wp-content\/uploads\/2023\/03\/unnamed.webp?w=960\" alt=\"\" class=\"wp-image-913320\"\/><figcaption class=\"wp-element-caption\">Stable diffusion generated image with prompt = \u201can illustration of a human having trust issues with generative AI technology\u201d<\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" 
class=\"wp-block-spacer\"><\/div>\n\n\n\n<p id=\"7cb0\">Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as&nbsp;DALL-e,&nbsp;<a href=\"https:\/\/openai.com\/blog\/gpt-3-apps\" target=\"_blank\" rel=\"noreferrer noopener\">GPT-3<\/a>, and, notably,&nbsp;<a href=\"https:\/\/openai.com\/blog\/chatgpt\" target=\"_blank\" rel=\"noreferrer noopener\">ChatGPT<\/a>, which racked up one million users in its first five days. Recently, on March 14th, 2023, OpenAI released&nbsp;GPT-4, which caused quite a stir, with thousands of people lining up to try it.<\/p>\n\n\n\n<p id=\"6545\">Generative AI can be used as a powerful resource to aid us in the most complex tasks. But as with any powerful innovation, there are some important questions to be asked\u2026 Can we really trust these AI models? How do we know if the data used in model training is representative, unbiased, and copyright safe? Are the implemented safety constraints robust enough? And most importantly, will AI replace the human workforce?<\/p>\n\n\n\n<p id=\"e7f4\">These are tough questions that we need to keep in mind and address. In this blog, we will focus on generative AI models, their trustworthiness, and how we can mitigate the risks that come with using them in a business setting.<\/p>\n\n\n\n<p id=\"aaf9\">Before we lay out our trust issues, let\u2019s take a step back and explain what this new generative AI era means. 
Generative models are deep learning models that create new data.&nbsp;<a href=\"https:\/\/vaclavkosar.com\/ml\/openai-dall-e-2-and-dall-e-1\" target=\"_blank\" rel=\"noreferrer noopener\">Their predecessors are chatbots, VAEs, GANs<\/a>, and transformer-based NLP models. These architectures can \u201cfantasize\u201d and create new data points based on the original data used to train them \u2014 and today, we can do all of this from just a text prompt!<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img decoding=\"async\" src=\"https:\/\/www.capgemini.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-28-140517.webp?w=457\" alt=\"\" class=\"wp-image-913292\"\/><figcaption class=\"wp-element-caption\"><em>The evolution of generative AI,&nbsp;<\/em><a href=\"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7042192042604531712?utm_source=share&amp;utm_medium=member_desktop\" target=\"_blank\" rel=\"noreferrer noopener\"><em>with 2022 and 2023 bringing about many more generative<\/em><\/a><em>&nbsp;models.<\/em><\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p id=\"4085\">We can consider chatbots as the first generative models, but looking back we\u2019ve come very far since then, with ChatGPT and DALL-e being&nbsp;<strong>easily accessible interfaces<\/strong>&nbsp;that everyone can use in their day-to-day. 
It is important to remember these are&nbsp;<em>interfaces<\/em>&nbsp;with generative pre-trained transformer (GPT) models under the hood.<\/p>\n\n\n\n<p id=\"b304\">The widespread accessibility of these two models has brought about a boom in the open-source community, where we see&nbsp;<a href=\"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7042192042604531712?utm_source=share&amp;utm_medium=member_desktop\" rel=\"noreferrer noopener\" target=\"_blank\">more and more models<\/a>&nbsp;being published, in the hopes of making the technology more user-friendly and enabling more robust implementations.<\/p>\n\n\n\n<p id=\"a779\">But let\u2019s not get ahead of ourselves just yet \u2014 we will come back to this in our next blog. What\u2019s that famous Spider-Man quote again?<\/p>\n\n\n\n<p id=\"eb5a\"><strong>With great power\u2026<\/strong><\/p>\n\n\n\n<p id=\"e7bb\">The generative AI era has so much potential in moving us closer to artificial general intelligence (AGI) because these models are trained on understanding language but can also perform a wide variety of other tasks, in some cases even exceeding human capability. This makes them very powerful in many business&nbsp;<a href=\"https:\/\/research.aimultiple.com\/generative-ai-applications\/\" rel=\"noreferrer noopener\" target=\"_blank\">applications<\/a>.<\/p>\n\n\n\n<p id=\"34fa\">Starting with the most common \u2014&nbsp;<strong>text application,&nbsp;<\/strong>which is fueled by GPT and&nbsp;<a href=\"https:\/\/towardsdatascience.com\/understanding-generative-adversarial-networks-gans-cd6e4651a29\" rel=\"noreferrer noopener\" target=\"_blank\">GAN<\/a>&nbsp;models. 
Including everything from text generation to summarization and personalized content creation, these can be used in&nbsp;<a href=\"https:\/\/research.aimultiple.com\/generative-ai-in-education\/\" rel=\"noreferrer noopener\" target=\"_blank\">education<\/a>,&nbsp;<a href=\"https:\/\/research.aimultiple.com\/generative-ai-healthcare\/\" rel=\"noreferrer noopener\" target=\"_blank\">healthcare<\/a>, marketing, and day-to-day life. The&nbsp;<strong>conversational application<\/strong>&nbsp;side of text applications powers chatbots and voice assistants.<\/p>\n\n\n\n<p id=\"0c42\">Next,&nbsp;<strong>code-based applications<\/strong>&nbsp;are fueled by the same models, with&nbsp;<a href=\"https:\/\/github.com\/features\/copilot\" rel=\"noreferrer noopener\" target=\"_blank\">GitHub\u2019s Copilot<\/a>&nbsp;as the most notable example. Here we can use generative AI to complete our code, review it, fix bugs, refactor, and write code comments and documentation.<\/p>\n\n\n\n<p id=\"9ced\">On the topic of&nbsp;<strong>visual applications<\/strong>, we can use&nbsp;<a href=\"https:\/\/openai.com\/blog\/dall-e\/\" rel=\"noreferrer noopener\" target=\"_blank\">DALL-e<\/a>,&nbsp;<a href=\"https:\/\/medium.com\/sogetiblogsnl\/an-introduction-to-stable-diffusion-efd5da6b3aeb\">Stable Diffusion<\/a>, and&nbsp;<a href=\"https:\/\/midjourney.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">Midjourney<\/a>. These models can be used to create new or improved visual material for marketing, education, and design. 
In the health sector, we can use these models for semantic translation, where&nbsp;<a href=\"https:\/\/www.sciencedirect.com\/topics\/computer-science\/semantic-image\" rel=\"noreferrer noopener\" target=\"_blank\">semantic images<\/a>&nbsp;are taken as input and a realistic visual output is generated.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2108.04476\" rel=\"noreferrer noopener\" target=\"_blank\">3D shape generation with GANs<\/a>&nbsp;is another interesting application in the video game industry. Finally,&nbsp;<a href=\"https:\/\/runwayml.com\/text-to-video\/\" rel=\"noreferrer noopener\" target=\"_blank\">text-to-video editing<\/a>&nbsp;with natural language is a novel and promising application for the entertainment industry.<\/p>\n\n\n\n<p id=\"dba8\">GANs and sequence-to-sequence automatic speech recognition (ASR) models (such as&nbsp;<a href=\"https:\/\/openai.com\/research\/whisper\" target=\"_blank\" rel=\"noreferrer noopener\">Whisper<\/a>) are used in&nbsp;<strong>audio applications<\/strong>. Text-to-speech applications can be used in education and marketing. 
Speech-to-speech conversion and&nbsp;<a href=\"https:\/\/google-research.github.io\/seanet\/musiclm\/examples\/\" target=\"_blank\" rel=\"noreferrer noopener\">music generation<\/a>&nbsp;have advantages for the entertainment and video game industries, such as game character voice generation.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img decoding=\"async\" src=\"https:\/\/www.capgemini.com\/wp-content\/uploads\/2023\/03\/GENERATIVE-AI-APPS.webp?w=604\" alt=\"\" class=\"wp-image-913294\"\/><figcaption class=\"wp-element-caption\"><em>Some applications of generative AI in industries.<\/em><\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p id=\"269e\">Although powerful, such models also come&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2102.02503.pdf\" rel=\"noreferrer noopener\" target=\"_blank\">with&nbsp;<em>societal limitations and risks<\/em><\/a>, which are crucial to address. For example, generative models are susceptible to unexplainable or faulty behavior, often because the data can have a variety of flaws, such as poor quality, bias, or just straight-up&nbsp;<em>wrong information.<\/em><\/p>\n\n\n\n<p id=\"34bd\"><strong>So, with great power indeed comes great responsibility\u2026 and a few trust issues<\/strong><\/p>\n\n\n\n<p id=\"8dda\">If we take a closer look at the risks regarding ethics and fairness in generative models,&nbsp;<a href=\"https:\/\/www.deepmind.com\/publications\/ethical-and-social-risks-of-harm-from-language-models\" rel=\"noreferrer noopener\" target=\"_blank\">we can distinguish multiple categories of risk.<\/a><\/p>\n\n\n\n<p id=\"b2c1\">The first major risk is&nbsp;<strong>bias<\/strong>, which can occur in different settings. An example of bias is the use of stereotypes around race, gender, or sexuality. This can lead to discrimination and unjust or oppressive answers generated by the model. 
Another form of bias is the model\u2019s word choice. Its answers should be formulated without toxic or vulgar content such as slurs.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"02e2\">One example of a language model that learned a wrong bias is&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Tay_(bot)\" rel=\"noreferrer noopener\" target=\"_blank\">Tay, a Twitter bot developed by Microsoft in 2016<\/a>. Tay was created to learn by actively engaging with other Twitter users: answering, retweeting, or liking their posts. Through these interactions, the model swiftly learned wrong, racist, and unethical information, which it included in its own Twitter posts. This led to the shutdown of Tay less than 24 hours after its initial release.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"8e4e\">Large language models (LLMs) like ChatGPT generate the most relevant answer within their constraints, but that answer is not always 100% correct and can contain&nbsp;<strong>false information<\/strong>.&nbsp;Currently, such models present their answers as confident statements, which can be misleading, as they may not be correct. Such events, where a model confidently makes inaccurate statements, are also called&nbsp;<strong>hallucinations<\/strong>.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"b81c\">In 2023, Microsoft released a&nbsp;<a href=\"https:\/\/openai.com\/blog\/chatgpt\" rel=\"noreferrer noopener\" target=\"_blank\">GPT<\/a>-backed model to&nbsp;<a href=\"https:\/\/blogs.microsoft.com\/blog\/2023\/02\/07\/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web\/\" rel=\"noreferrer noopener\" target=\"_blank\">empower their Bing search engine with chat capabilities<\/a>. However, there have already been multiple reports of undesirable behavior by this new service. 
It has threatened users with legal consequences or exposed their personal information. In another situation, it tried to convince a tech reporter he was not happily married and that he was in love with the chatbot (it also proclaimed its love for the reporter) and consequently should leave his wife (you see why we have trust issues now?!).<\/p>\n<\/blockquote>\n\n\n\n<p id=\"7c5a\">Generative models are trained on large corpora of data, which, in many cases, are scraped from the internet. This data can contain private information, causing a&nbsp;<strong>privacy risk<\/strong>&nbsp;as it can unintentionally be learned and memorized by the model. This private data does not only concern people; it can also include project documents, code bases, and works of art. When using medical models to diagnose a patient, it could also include private patient data. This also ties into&nbsp;<a href=\"https:\/\/www.smithsonianmag.com\/smart-news\/are-ai-image-generators-stealing-from-artists-180981488\/\" rel=\"noreferrer noopener\" target=\"_blank\">copyright<\/a>&nbsp;when this private memorized data is used in a generated output. For example, there have even been cases where image diffusion models have included slightly altered signatures or&nbsp;<a href=\"https:\/\/twitter.com\/kevin2kelly\/status\/1551964984325812224\" rel=\"noreferrer noopener\" target=\"_blank\">watermarks<\/a>&nbsp;they have learned from their training sets.<\/p>\n\n\n\n<p id=\"189f\">The public can also&nbsp;<strong>maliciously use<\/strong>&nbsp;generative models to harm\/cheat others. This risk is linked with the other mentioned risks, except that it is&nbsp;<em>intentional<\/em>. Generative models can easily be used to create entirely new content with (purposefully) incorrect, private, or stolen information. 
Scarily, it doesn\u2019t take much effort to flood the internet with maliciously generated content.<\/p>\n\n\n\n<p id=\"1552\"><strong>Building trust takes time\u2026and tests<\/strong><\/p>\n\n\n\n<p id=\"c083\">To mitigate these risks, we need to ensure the models are reliable and transparent through testing. Testing of AI models comes with some nuances compared to testing of traditional software, and these need to be addressed in an&nbsp;<a href=\"https:\/\/medium.com\/sogetiblogsnl\/mlops-for-those-who-are-serious-about-ai-a415d257cf4e\" target=\"_blank\" rel=\"noreferrer noopener\">MLOps setting<\/a>&nbsp;with&nbsp;<strong>data, model, and system tests<\/strong>.<\/p>\n\n\n\n<p id=\"f5ab\">These tests are captured in a test strategy at the very start of the project (<a href=\"https:\/\/medium.com\/sogetiblogsnl\/mlops-the-importance-of-problem-formulation-ee438f9987e\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>problem formulation<\/strong><\/a>). In this early stage, it is important to capture key performance indicators (KPIs) to ensure a robust implementation. In addition, assessing the impact of the model on the user and society is a crucial step in this phase. Based on that assessment, user subpopulation KPIs are collected and measured against, in addition to the performance KPIs.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"78be\">An example of a subpopulation KPI is model accuracy on a specific user segment, which needs to be measured on data, model, and system levels. There are open-source packages that we can use to do this, like the&nbsp;<a href=\"https:\/\/github.com\/Trusted-AI\/AIF360\" rel=\"noreferrer noopener\" target=\"_blank\">AI Fairness 360<\/a>&nbsp;package.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"d1be\"><strong>Data testing<\/strong>&nbsp;can be used to address bias, privacy, and false information (consistency) trust issues. 
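The subpopulation KPI idea above (accuracy measured per user segment) can be sketched in a few lines of plain Python. This is a minimal illustration, not the AI Fairness 360 API; the segment names, records, and the 0.7 alert floor are hypothetical:

```python
# Minimal sketch of a subpopulation KPI: accuracy per user segment.
# Segment names, records, and the 0.7 floor are hypothetical examples.
from collections import defaultdict

def accuracy_per_segment(records):
    # records: iterable of (segment, prediction, label) triples
    hits = defaultdict(int)
    totals = defaultdict(int)
    for segment, prediction, label in records:
        totals[segment] += 1
        hits[segment] += int(prediction == label)
    return {seg: hits[seg] / totals[seg] for seg in totals}

records = [
    ('group_a', 1, 1), ('group_a', 0, 0), ('group_a', 1, 0), ('group_a', 1, 1),
    ('group_b', 0, 1), ('group_b', 1, 1),
]
kpi = accuracy_per_segment(records)
# Flag segments whose accuracy falls below the (illustrative) 0.7 floor.
flagged = sorted(seg for seg, acc in kpi.items() if acc < 0.7)
```

A real pipeline would compute this same per-segment breakdown on the data, model, and system levels, as described above.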
We make sure these are mitigated through exploratory data analysis (EDA), with assessments on bias, consistency, and toxicity of the data sources.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"1938\">The data bias mitigation methods vary depending on the data used for training (images, text, audio, tabular), but they boil down to re-weighting the features of the minority group, oversampling the minority group, or under-sampling the majority group.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"0f72\">These changes need to be documented and reproducible, which is done with the help of data version control (DVC).&nbsp;<a href=\"https:\/\/dvc.org\/\" rel=\"noreferrer noopener\" target=\"_blank\">DVC<\/a>&nbsp;allows us to commit versions of data, parameters, and models in the same way \u201ctraditional\u201d version control tools such as git do.<\/p>\n\n\n\n<p id=\"09d6\"><strong>Model testing<\/strong>&nbsp;focuses on model performance metrics, which are assessed through training iterations with validated training data from previous tests. These need to be reproducible and saved with model versions. We can support this through open-source MLOps packages like&nbsp;<a href=\"https:\/\/mlflow.org\/\" rel=\"noreferrer noopener\" target=\"_blank\">MLflow<\/a>.<\/p>\n\n\n\n<p id=\"6da3\">Next, model robustness tests like&nbsp;<a href=\"https:\/\/towardsdatascience.com\/metamorphic-testing-of-machine-learning-based-systems-e1fe13baf048\" target=\"_blank\" rel=\"noreferrer noopener\">metamorphic<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/medium.com\/sogetiblogsnl\/toughen-up-building-more-robust-models-with-adversarial-training-fbecc9618fd3?source=user_profile---------3-------------------------------\" target=\"_blank\" rel=\"noreferrer noopener\">adversarial tests<\/a>&nbsp;should be implemented. These tests help assess if the model performs well on independent test scenarios. 
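A metamorphic test of the kind linked above can be sketched as an invariance check: a meaning-preserving change to the input should not change the model output. The tiny keyword-based sentiment model and the perturbations below are illustrative stand-ins for a real system, not an actual LLM test suite:

```python
# Toy metamorphic (invariance) test. The keyword-based sentiment model
# and the perturbations are illustrative stand-ins, not a real LLM check.
def toy_sentiment(text):
    positive = {'good', 'great', 'love'}
    negative = {'bad', 'awful', 'hate'}
    words = text.lower().replace('!', ' ').replace('.', ' ').split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return 'positive' if score >= 0 else 'negative'

def holds_invariance(model, text, perturbations):
    # The metamorphic relation holds if every perturbed input
    # keeps the label of the original input.
    base = model(text)
    return all(model(perturb(text)) == base for perturb in perturbations)

perturbations = [
    lambda t: t + '!',          # added punctuation
    lambda t: t.upper(),        # case change
    lambda t: '  ' + t + '  ',  # extra whitespace
]
ok = holds_invariance(toy_sentiment, 'I love this product', perturbations)
```

The same pattern scales up: for an LLM, the perturbations would be paraphrases or formatting changes, and the checked property might be answer consistency rather than an exact label.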
The usability of the model is assessed through user acceptance tests (UAT). Lags in the pipeline, false information, and interpretability of the prediction are measured on this level.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"7eaf\">In terms of ChatGPT, a UAT could be constructed around assessing whether the answer to a prompt matches the user\u2019s expectation. In addition, the explainability aspect is added \u2014 does the model provide the sources used to generate the expected response?<\/p>\n<\/blockquote>\n\n\n\n<p id=\"ced4\"><strong>System testing<\/strong>&nbsp;is extremely important to mitigate malicious use and false information risks. Malicious use needs to be assessed in the first phase, and system tests are constructed based on that assessment. Constraints are then programmed into the model.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"0a39\">OpenAI is aware of possible malicious uses of ChatGPT and has&nbsp;<a href=\"https:\/\/openai.com\/charter\" rel=\"noreferrer noopener\" target=\"_blank\">incorporated safety as part of their strategy<\/a>. They have described&nbsp;<a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\" rel=\"noreferrer noopener\" target=\"_blank\">how they try to mitigate some of these risks and limitations<\/a>. In a system test, these constraints are validated on real-life scenarios, as opposed to the controlled environments used in previous tests.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"4009\">Let\u2019s not forget about&nbsp;<a href=\"https:\/\/datatron.com\/what-is-model-drift\/\" rel=\"noreferrer noopener\" target=\"_blank\">model and data drift<\/a>. These are&nbsp;<a href=\"https:\/\/medium.com\/sogetiblogsnl\/mlops-monitoring-phase-df523cccb025\">monitored<\/a>, and retraining mechanisms can be set up to ensure the model stays relevant over time. 
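Drift monitoring like this is often implemented by comparing a reference (training) distribution against live data. A hedged sketch using a population stability index (PSI) follows; the bins, toy samples, and the rule-of-thumb 0.2 alert threshold are illustrative assumptions, not a specific product API:

```python
# Toy data-drift check with a population stability index (PSI).
# Bins, samples, and the 0.2 alert threshold are illustrative.
import math

def psi(expected, actual, bins):
    # Compare the share of values per bin between the two samples.
    def shares(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

bins = [0, 25, 50, 75, 100]
training = [10, 20, 30, 40, 55, 60, 70, 80]       # reference sample
live_ok = [12, 22, 33, 41, 52, 63, 72, 81]        # similar distribution
live_drifted = [80, 85, 90, 95, 96, 97, 98, 99]   # shifted distribution

# A PSI above roughly 0.2 is a common signal to investigate or retrain.
alert = psi(training, live_drifted, bins) > 0.2
```

In production, this comparison would run on a schedule over model inputs and outputs, feeding the retraining mechanisms mentioned above.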
Finally, the human-in-the-loop (HIL) method is also used to provide feedback to an online model.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"7291\">ChatGPT and&nbsp;<a href=\"https:\/\/bard.google.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">Bard<\/a>&nbsp;(Google\u2019s chatbot) allow human feedback through a thumbs up\/down. Though simple, this feedback is used to&nbsp;<a href=\"https:\/\/openai.com\/research\/instruction-following#sample3\" rel=\"noreferrer noopener\" target=\"_blank\">effectively retrain and align<\/a>&nbsp;the underlying models to users\u2019 expectations, providing more relevant responses in future iterations.<\/p>\n<\/blockquote>\n\n\n\n<p id=\"4d0e\"><strong>To trust or not to trust?<\/strong><\/p>\n\n\n\n<p id=\"0dfc\">Just like on the internet, truth and facts are not always a given \u2014 and we\u2019ve seen (and will continue to see) instances where ChatGPT and other generative AI models get it wrong. While it is a powerful tool, and we completely understand the hype, there will always be some risk. It should be standard practice to implement risk and quality control techniques to minimize the risks as much as possible. And we do see this happening in practice \u2014 OpenAI has been transparent about the limitations of their models, how they have tested them, and the&nbsp;<a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\" target=\"_blank\" rel=\"noreferrer noopener\">governance<\/a>&nbsp;that has been set up. <a href=\"https:\/\/www.capgemini.com\/about-us\/technology-partners\/google-cloud\/\">Google<\/a> also has&nbsp;<a href=\"https:\/\/ai.google\/principles\/\" target=\"_blank\" rel=\"noreferrer noopener\">responsible AI<\/a>&nbsp;principles that they have abided by when developing Bard. 
As both organizations release new and improved models, they also advance their testing controls to continuously improve quality, safety, and user-friendliness.<\/p>\n\n\n\n<p id=\"b504\">Perhaps we can argue that using generative AI models like ChatGPT doesn\u2019t necessarily leave us vulnerable to misinformation, but rather makes us more familiar with how AI works and its limitations. Overall, the future of generative AI is bright and will continue to revolutionize the industry&nbsp;<em>if<\/em>&nbsp;we can trust it. And as we know, trust is an ongoing process\u2026<\/p>\n\n\n\n<p id=\"84e1\">In the next part of our Trustworthy Generative AI series, we will explore testing LLMs (bring your techie hat) and how quality LLM solutions lead to trust, which, in turn, will increase adoption among businesses and the public.<\/p>\n\n\n\n<div style=\"height:40px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><em><a href=\"https:\/\/labs.sogeti.com\/chatgpt-and-i-have-trust-issues\/\" target=\"_blank\" rel=\"noreferrer noopener\">This article first appeared on SogetiLabs blog.<\/a><\/em><\/p>\n<\/div><\/div><\/div><\/div><\/div><\/section>\n","protected":false},"excerpt":{"rendered":"<p>Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one day. 
<\/p>\n","protected":false},"author":81,"featured_media":1131031,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"cg_dt_proposed_to":[],"cg_seo_hreflang_relations":"[]","cg_seo_canonical_relation":"","cg_seo_hreflang_x_default_relation":"{\"uuid\":\"b713fe14-0248-4e24-9c7e-35859e0113cf\",\"blogId\":\"\",\"domain\":\"\",\"sitePath\":\"\",\"postLink\":\"\",\"postId\":null,\"isSaved\":true,\"isCrossLink\":false,\"hasCrossLink\":false}","cg_dt_approved_content":true,"cg_dt_mandatory_content":false,"cg_dt_notes":"","cg_dg_source_changed":true,"cg_dt_link_disabled":false,"_yoast_wpseo_primary_brand":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","featured_focal_points":""},"categories":[1],"tags":[],"brand":[],"service":[245],"industry":[],"partners":[],"blog-topic":[287],"content-group":[],"class_list":["post-1077715","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","service-data-ai","blog-topic-data-and-ai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v22.8 (Yoast SEO v22.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>ChatGPT and I have trust issues - Capgemini India<\/title>\n<meta name=\"description\" content=\"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one day.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ChatGPT and I have trust issues\" \/>\n<meta 
property=\"og:description\" content=\"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one day.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\" \/>\n<meta property=\"og:site_name\" content=\"Capgemini India\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-30T13:04:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-03-24T06:37:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"959\" \/>\n\t<meta property=\"og:image:height\" content=\"479\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Tijana Nikolic\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"andreafedderson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\",\"url\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\",\"name\":\"ChatGPT and I have trust issues - Capgemini India\",\"isPartOf\":{\"@id\":\"https:\/\/www.capgemini.com\/in-en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp\",\"datePublished\":\"2023-03-30T13:04:00+00:00\",\"dateModified\":\"2025-03-24T06:37:37+00:00\",\"author\":{\"@id\":\"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/f15605d9e6e17233b6b0fe01b151d116\"},\"description\":\"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one 
day.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#breadcrumb\"},\"inLanguage\":\"en-IN\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-IN\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage\",\"url\":\"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp\",\"contentUrl\":\"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp\",\"width\":959,\"height\":479},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.capgemini.com\/in-en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"ChatGPT and I have trust issues\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/#website\",\"url\":\"https:\/\/www.capgemini.com\/in-en\/\",\"name\":\"Capgemini India\",\"description\":\"Capgemini\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.capgemini.com\/in-en\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-IN\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/f15605d9e6e17233b6b0fe01b151d116\",\"name\":\"andreafedderson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-IN\",\"@id\":\"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/51832d062c10fe46e4ed3450b55aee62a9b3c45fbc714f026f23966a41ed247a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/51832d062c10fe46e4ed3450b55aee62a9b3c45fbc714f026f23966a41ed247a?s=96&d=mm&r=g\",\"caption\":\"andreafedderson\"},\"url\":\"https:\/\/www.capgemini.com\/in-en\/author\/andreafedderson\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"ChatGPT and I have trust issues - Capgemini India","description":"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one day.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","og_locale":"en_US","og_type":"article","og_title":"ChatGPT and I have trust issues","og_description":"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one day.","og_url":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","og_site_name":"Capgemini 
India","article_published_time":"2023-03-30T13:04:00+00:00","article_modified_time":"2025-03-24T06:37:37+00:00","og_image":[{"width":959,"height":479,"url":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","type":"image\/webp"}],"author":"Tijana Nikolic","twitter_card":"summary_large_image","twitter_misc":{"Written by":"andreafedderson","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","url":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","name":"ChatGPT and I have trust issues - Capgemini India","isPartOf":{"@id":"https:\/\/www.capgemini.com\/in-en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage"},"image":{"@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage"},"thumbnailUrl":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","datePublished":"2023-03-30T13:04:00+00:00","dateModified":"2025-03-24T06:37:37+00:00","author":{"@id":"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/f15605d9e6e17233b6b0fe01b151d116"},"description":"Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as\u00a0DALL-e,\u00a0GPT-3, and, notably,\u00a0ChatGPT, which racked up one million users in one 
day.","breadcrumb":{"@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#breadcrumb"},"inLanguage":"en-IN","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/"]}]},{"@type":"ImageObject","inLanguage":"en-IN","@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#primaryimage","url":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","contentUrl":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","width":959,"height":479},{"@type":"BreadcrumbList","@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.capgemini.com\/in-en\/"},{"@type":"ListItem","position":2,"name":"ChatGPT and I have trust issues"}]},{"@type":"WebSite","@id":"https:\/\/www.capgemini.com\/in-en\/#website","url":"https:\/\/www.capgemini.com\/in-en\/","name":"Capgemini India","description":"Capgemini","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.capgemini.com\/in-en\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-IN"},{"@type":"Person","@id":"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/f15605d9e6e17233b6b0fe01b151d116","name":"andreafedderson","image":{"@type":"ImageObject","inLanguage":"en-IN","@id":"https:\/\/www.capgemini.com\/in-en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/51832d062c10fe46e4ed3450b55aee62a9b3c45fbc714f026f23966a41ed247a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/51832d062c10fe46e4ed3450b55aee62a9b3c45fbc714f026f23966a41ed247a?s=96&d=mm&r=g","caption":"andreafedderson"},"url":"https:\/\/www.capgemini.com\/in-en\/author\/andreafedderson\/"}]}},"blog_topic_info":[{"id":287,"name":"Data and AI"}],"taxonomy_info":{"category":[{"id":1,"name":"Uncategorized","slug":"uncategorized"}],"service":[{"id":245,"name":"Data &amp; AI","slug":"data-ai"}],"blog-topic":[{"id":287,"name":"Data and AI","slug":"data-and-ai"}]},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"ChatGPT and I have trust issues","url":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/","mainEntityOfPage":{"@type":"WebPage","@id":"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/"},"thumbnailUrl":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp?w=150&h=150&crop=1","image":{"@type":"ImageObject","url":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp"},"articleSection":"Uncategorized","author":[],"creator":[],"publisher":{"@type":"Organization","name":"Capgemini 
India","logo":""},"keywords":[],"dateCreated":"2023-03-30T13:04:00Z","datePublished":"2023-03-30T13:04:00Z","dateModified":"2025-03-24T06:37:37Z"},"rendered":"<meta name=\"parsely-title\" content=\"ChatGPT and I have trust issues\" \/>\n<meta name=\"parsely-link\" content=\"https:\/\/www.capgemini.com\/in-en\/insights\/expert-perspectives\/chatgpt-and-i-have-trust-issues\/\" \/>\n<meta name=\"parsely-type\" content=\"post\" \/>\n<meta name=\"parsely-image-url\" content=\"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp?w=150&amp;h=150&amp;crop=1\" \/>\n<meta name=\"parsely-pub-date\" content=\"2023-03-30T13:04:00Z\" \/>\n<meta name=\"parsely-section\" content=\"Uncategorized\" \/>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/capgemini.com\/p.js"},"jetpack_featured_media_url":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","archive_status":false,"featured_image_src":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","featured_image_alt":"","jetpack_sharing_enabled":true,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Capgemini 
India","distributor_original_site_url":"https:\/\/www.capgemini.com\/in-en","push-errors":false,"featured_image_url":"https:\/\/www.capgemini.com\/in-en\/wp-content\/uploads\/sites\/18\/2023\/03\/trusted_AI.webp","_links":{"self":[{"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/posts\/1077715","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/users\/81"}],"replies":[{"embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/comments?post=1077715"}],"version-history":[{"count":7,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/posts\/1077715\/revisions"}],"predecessor-version":[{"id":1146200,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/posts\/1077715\/revisions\/1146200"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/media\/1131031"}],"wp:attachment":[{"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/media?parent=1077715"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/categories?post=1077715"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/tags?post=1077715"},{"taxonomy":"brand","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/brand?post=1077715"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/service?post=1077715"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/industry?post=1077715"},{"taxonomy":"partners","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/partners?post=1077715"},{"taxonomy":"blog-topic","embedda
ble":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/blog-topic?post=1077715"},{"taxonomy":"content-group","embeddable":true,"href":"https:\/\/www.capgemini.com\/in-en\/wp-json\/wp\/v2\/content-group?post=1077715"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}