{"id":783158,"date":"2023-04-20T14:29:00","date_gmt":"2023-04-20T12:29:00","guid":{"rendered":"https:\/\/www.capgemini.com\/fr-fr\/?p=783158"},"modified":"2025-05-12T15:11:14","modified_gmt":"2025-05-12T13:11:14","slug":"ia-de-confiance-et-explicabilite-ouvrir-la-boite-noire","status":"publish","type":"post","link":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/","title":{"rendered":"Trusted artificial intelligence and explainability: opening the black box"},"content":{"rendered":"\n<header class=\"wp-block-cg-blocks-hero-blogs header-hero-blogs\"><div class=\"container\"><div class=\"hero-blogs\"><div class=\"hero-blogs-content-wrapper\"><div class=\"row\"><div class=\"col-12\"><div class=\"header-title\"><h1>Trusted artificial intelligence and explainability: opening the black box<\/h1><\/div><\/div><\/div><\/div><div class=\"hero-blogs-bottom\"><div class=\"header-author\"><div class=\"author-img\"><img decoding=\"async\" src=\"\/wp-content\/themes\/capgemini2020\/assets\/images\/cg-logo.png?w=200&amp;quality=10\" loading=\"lazy\"\/><\/div><div class=\"author-name-date\"><h5 class=\"author-name\">Capgemini<\/h5><h5 class=\"blog-date\">April 20, 2023<\/h5><\/div><\/div><div class=\"brand-image\"> <\/div><\/div><\/div><\/div><\/header>\n\n\n\n<section class=\"wp-block-cg-blocks-intro-para undefined section section--intro\"><div class=\"intro-para\"><div class=\"container\"><div class=\"row\"><div class=\"col-12 col-md-1\"><\/div><div class=\"col-12 col-md-11 col-lg-10\"><h2 class=\"intro-para-title\">Trusted artificial intelligence is a topic we work on every day at Capgemini Invent to provide our clients with high-quality algorithms. It is handled by the Trusted AI tribe, which maintains and applies the state of the art across the 8 themes of Trusted AI introduced in <a href=\"https:\/\/www.capgemini.com\/fr-fr\/insights\/expert-perspectives\/ia-de-confiance-comprendre-les-enjeux-de-la-prochaine-decennie\/\" target=\"_blank\" rel=\"noreferrer noopener\">this article<\/a>. Explainability is one of those 8 components.<\/h2><\/div><\/div><\/div><\/div><\/section>\n\n\n\n<section class=\"wp-block-cg-blocks-group undefined section section--article-content\"><div class=\"article-main-content\"><div class=\"container\"><div class=\"row\"><div class=\"col-12 col-md-11 col-lg-10 offset-md-1 offset-lg-1\"><div class=\"article-text article-quote-text\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"721\" width=\"1024\" src=\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/explicabilite.png?w=960\" alt=\"\" class=\"wp-image-783156\" srcset=\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/explicabilite.png 1127w, https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/explicabilite.png?resize=300,211 300w, https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/explicabilite.png?resize=768,541 768w, https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/explicabilite.png?resize=1024,721 1024w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-l-explicabilite-a-quoi-ca-sert-nbsp\">What is explainability for?<\/h2>\n\n\n\n<p>The race for AI model performance has made models more complex and less interpretable. Yet to trust how an AI works, one must be able to understand its reasoning. 
Explainability is the field of trusted AI concerned, on the one hand, with understanding how an AI generates its predictions and, on the other, with explaining those predictions to users.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-l-explicabilite-qu-est-ce-que-c-est-nbsp\">What is explainability?<\/h2>\n\n\n\n<p>Explaining an AI means: \u201cmaking a model or a prediction understandable to someone, clarifying them by providing the necessary elements; being a justification, constituting a sufficient reason; being the cause of something\u201d. This definition, adapted from the Larousse dictionary, shows that explainability in the context of AI has several dimensions:<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-les-explications-doivent-etre-pertinentes-nbsp\">Explanations must be relevant<\/h4>\n\n\n\n<p>An explanation must always answer a question of the form \u201cwhy\u2026?\u201d or \u201chow\u2026?\u201d (for example: why was this decision made? how can the prediction be changed?). It is therefore essential to identify all the typical stakeholders around an AI system, along with every question they ask about how it works. Each stakeholder (data scientist, user, regulator\u2026) may have different questions and a different level of expertise.<\/p>\n\n\n\n<p>For an AI system to be considered explainable, technical solutions must be found that answer all of these questions, and the answers must be presented in an understandable format suited to each audience.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-les-explications-doivent-etre-fiables\"><strong>Explanations must be reliable<\/strong><\/h4>\n\n\n\n<p>AI explainability tools often apply statistical methods as an overlay on top of a black-box model; for example, SHAP computes Shapley values. It is therefore not always guaranteed that the explanation provided is faithful to the model\u2019s actual behavior. In that case, the reliability of the explanations must be assessed with metrics such as accuracy, completeness, or stability.<\/p>\n\n\n\n<p>An alternative is to use glass-box models (explainable by design), whose explanations are by definition faithful to the model. 
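For intuition on the Shapley values mentioned above, here is a minimal, from-scratch sketch of the exact computation that tools such as SHAP approximate. The toy linear scoring function and the zero baseline are illustrative assumptions, not part of the original system:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x.

    Features outside a coalition are masked with the baseline value.
    Cost is exponential in the number of features, so this is only
    viable for small inputs; SHAP and similar tools approximate it.
    """
    n = len(x)

    def value(coalition):
        # Evaluate the model with only the coalition's features "present".
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        contrib = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                contrib += weight * (value(set(subset) | {i}) - value(set(subset)))
        phi.append(contrib)
    return phi

# Toy linear "credit score"; for a linear model the Shapley value of
# feature i reduces to weight_i * (x_i - baseline_i).
predict = lambda z: 3 * z[0] + 2 * z[1] - z[2]
phi = shapley_values(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # close to [3.0, 2.0, -1.0]
```

A useful sanity check is the efficiency property: the attributions sum to `predict(x) - predict(baseline)`, which is what makes them usable as faithful "who contributed what" explanations.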
Very capable <a href=\"https:\/\/www.quantmetry.com\/blog\/comment-creer-des-modeles-interpretables-sans-approximation\/\">glass-box models<\/a> now exist, and we strongly recommend using them for critical use cases.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-les-explications-doivent-s-appuyer-sur-une-comprehension-metier-des-relations-de-cause-a-effet\"><strong>Explanations must rest on a business understanding of cause-and-effect relationships<\/strong><\/h4>\n\n\n\n<p>Machine learning models can learn from any kind of data without making any causal assumption (for example, a model can predict the weather by counting the umbrellas in the street\u2026). The explanation of a model\u2019s prediction therefore cannot systematically be interpreted as its cause.<\/p>\n\n\n\n<p>When explanations are used to make decisions, it is important to identify the scenarios and the cause-and-effect links underlying the observed data, as well as the confounding factors, and to build the causal graph of your variables.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-l-explicabilite-comment-l-activer-nbsp\"><strong>How do you enable explainability?<\/strong><\/h2>\n\n\n\n<p>Producing an explainable AI system is therefore a technical, human, and organizational process. These three actions are the essential steps to follow:<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-interroger\">Question<\/h4>\n\n\n\n<p>The first step is to clearly define the explainability needs: identify all the typical stakeholders and the questions they may ask about the model. To do so, organize workshops with everyone who interacts with the AI\u2019s predictions, and list both the questions they ask and the levers they expect in order to make informed decisions. At this stage, it is also important to record the expected format of the answers (text, visual, interactive\u2026).<\/p>\n\n\n\n<p>The outcome of these workshops can be formalized in a framing document, which guides the choice of methods to implement and of their presentation format.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-visualiser\">Visualize<\/h4>\n\n\n\n<p>Once the needs have been clearly identified, the methods producing the relevant answers must be implemented. For each expressed need, identify one or more technical tools that can answer it, implement them, and present their results visually, in a form suited to the user\u2019s level of expertise.<\/p>\n\n\n\n<p>Wherever possible, it is recommended to use a model that is explainable by design, which can reliably answer a good share of the questions without stacking too many explainability overlays.<\/p>\n\n\n\n<p>Whatever the type of model and explanation used, it is important to involve users in the development of the presentation interfaces, to ensure that their needs are properly addressed. 
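One family of technical tools that answers the "how can I change the decision?" lever is counterfactual search. The sketch below is a deliberately naive brute-force version; dedicated libraries implement far more scalable strategies. The toy approval rule and the candidate value grids are assumptions for illustration only:

```python
from itertools import combinations, product

def find_counterfactuals(predict, x, feature_grids, max_changes=2):
    """Find minimal feature changes that flip a binary decision.

    Tries all combinations of up to `max_changes` features, drawing
    candidate values from `feature_grids`. Returns the counterfactuals
    found at the smallest number of changed features.
    """
    target = not predict(x)
    n = len(x)
    for k in range(1, max_changes + 1):
        found = []
        for idxs in combinations(range(n), k):
            for values in product(*(feature_grids[i] for i in idxs)):
                z = list(x)
                for i, v in zip(idxs, values):
                    z[i] = v
                if z != x and predict(z) == target:
                    found.append(z)
        if found:
            return found  # prefer the fewest changed features
    return []

# Toy approval rule (illustrative): approve when income covers half the
# requested amount plus a margin. z = [income, requested_amount].
approve = lambda z: z[0] - 0.5 * z[1] >= 10
x = [20, 30]                       # refused: 20 - 15 < 10
cfs = find_counterfactuals(approve, x, [[20, 25, 30], [10, 20, 30]])
print(cfs)  # [[25, 30], [30, 30], [20, 10], [20, 20]]
```

Each returned scenario changes a single feature and reverses the decision, which is exactly the kind of actionable answer a customer-facing interface can present.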
This is therefore an iterative process of development and feedback gathering, during which new needs may emerge. This step is closely intertwined with UX design.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-caracteriser\">Characterize<\/h4>\n\n\n\n<p>Beyond ensuring that the explanations are relevant, you must also ensure that they truly reflect how the model works. If explanations are validated only with users, it is easy to fall into the trap of confirmation bias.<\/p>\n\n\n\n<p>Depending on the type of task, several tests and metrics can verify the reliability of the explanations. For example, you can check whether several types of explanations agree with one another (comparison), whether two nearby points have similar explanations (stability), or whether the explanation allows the model\u2019s predictions to be recovered (accuracy). Once again, using a model that is explainable by design removes these explanation-reliability concerns altogether. Otherwise, <a href=\"https:\/\/towardsdatascience.com\/building-confidence-on-explainability-methods-66b9ee575514\">a reliability analysis<\/a> of the explainability system must be performed and documented.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-l-explicabilite-en-vrai-ca-donne-quoi-nbsp\">What does explainability look like in practice?<\/h2>\n\n\n\n<p>With the imminent arrival of the AI Act, regulatory bodies will have to ensure that high-risk AI systems placed on the market offer a satisfactory level of explainability. 
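The stability test described above ("two nearby points should have similar explanations") can be turned into a simple worst-case sensitivity estimate. This is a generic sketch, assuming an explanation is a vector of feature attributions; the function names and the toy constant explainer are illustrative:

```python
import random

def explanation_stability(explain, x, n_neighbors=20, eps=0.01, seed=0):
    """Estimate the local sensitivity of an explanation method.

    Perturbs x slightly, then returns the worst observed ratio between
    the movement of the explanation vector and the movement of the
    input. Large values flag unstable, hence unreliable, explanations.
    """
    rng = random.Random(seed)
    e_x = explain(x)
    ratios = []
    for _ in range(n_neighbors):
        z = [v + rng.uniform(-eps, eps) for v in x]
        d_input = sum((a - b) ** 2 for a, b in zip(x, z)) ** 0.5
        d_expl = sum((a - b) ** 2 for a, b in zip(e_x, explain(z))) ** 0.5
        if d_input > 0:
            ratios.append(d_expl / d_input)
    return max(ratios)  # worst-case local sensitivity

# A linear model has a constant gradient, so a gradient-based
# explanation of it is perfectly stable:
stable_explain = lambda z: [3.0, 2.0, -1.0]
print(explanation_stability(stable_explain, [1.0, 1.0, 1.0]))  # 0.0
```

Running the same check on a black-box explainer and comparing scores across methods gives a concrete, documentable input for the reliability analysis mentioned above.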
In this context, the ACPR (the French Prudential Supervision and Resolution Authority) organized a Tech Sprint in 2021 on explaining credit-granting models, which was won by the Capgemini Invent team!<br><br>To build an explainable AI system, the first step was to identify the 4 different stakeholders: the customer (whose credit is accepted or refused by the AI), the bank advisor (or middle office, who uses the AI tool), the designer (the data scientist), and the regulator (here, the ACPR). The explainability system is thus an interactive application with 4 pages containing the questions identified for each user, together with the proposed answers.<\/p>\n\n\n\n<p>For example, two questions were identified for the customer:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em>\u201cWhy was my credit refused?\u201d<\/em>: the answer, phrased as plain, understandable sentences, is generated from the computed importance of the input variables (using the <a href=\"https:\/\/github.com\/iancovert\/shapley-regression\">Shapley Regression<\/a> library, an alternative to SHAP).<\/li>\n\n\n\n<li><em>\u201cHow can I change the decision?\u201d<\/em>: alternative (counterfactual) scenarios, based on the customer\u2019s characteristics and able to reverse the model\u2019s decision, are offered to the customer (using the <a href=\"https:\/\/github.com\/interpretml\/DiCE\">DiCE library<\/a>).<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" 
decoding=\"async\" width=\"904\" height=\"550\" src=\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/laptop-streamlit.png?w=904\" alt=\"\" class=\"wp-image-783155\" srcset=\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/laptop-streamlit.png 904w, https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/laptop-streamlit.png?resize=300,183 300w, https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/04\/laptop-streamlit.png?resize=768,467 768w\" sizes=\"auto, (max-width: 904px) 100vw, 904px\" \/><\/figure>\n\n\n\n<p><em>Figure 1: Streamlit application presenting the explanations of the credit-risk model. Each explanation\/visualization was designed to answer the users\u2019 different questions and validated with partners from the banking sector. This page shows the prediction for one customer, the 2 identified questions, and the explanatory sentences.<\/em><\/p>\n\n\n\n<p>This explainability system addresses a binary classification model on tabular data. For other tasks, such as forecasting (time series) or segmentation (computer vision), the tools and visualizations will be very different. Nevertheless, the approach to follow (question, visualize, characterize) to obtain reliable and relevant explanations remains the same, whatever the use case!<\/p>\n<\/div><\/div><\/div><\/div><\/div><\/section>\n","protected":false},"excerpt":{"rendered":"<p>Trusted artificial intelligence is a topic we work on every day at Capgemini Invent to provide our clients with high-quality algorithms. 
It is handled by the Trusted AI tribe, which maintains and applies the state of the art across the 8 themes of Trusted AI. Explainability is one of those 8 components.<\/p>\n","protected":false},"author":12386,"featured_media":783311,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"cg_dt_proposed_to":[],"cg_seo_hreflang_relations":"[]","cg_seo_canonical_relation":"","cg_seo_hreflang_x_default_relation":"","cg_dt_approved_content":true,"cg_dt_mandatory_content":false,"cg_dt_notes":"","cg_dg_source_changed":false,"cg_dt_link_disabled":false,"_yoast_wpseo_primary_brand":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","featured_focal_points":""},"categories":[3],"tags":[174],"brand":[],"service":[49],"industry":[],"partners":[],"blog-topic":[359],"content-group":[],"class_list":["post-783158","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-innovation","tag-ia-et-data","service-ia-data","blog-topic-ia-data"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v22.8 (Yoast SEO v22.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>IA de confiance et explicabilit\u00e9 - Capgemini France<\/title>\n<meta name=\"description\" content=\"Explicabilit\u00e9 en IA : comprendre et expliquer les pr\u00e9dictions des algorithmes pour plus de transparence et de fiabilit\u00e9.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Intelligence artificielle de confiance 
et explicabilit\u00e9\u00a0: ouvrir la bo\u00eete noire\" \/>\n<meta property=\"og:description\" content=\"Explicabilit\u00e9 en IA : comprendre et expliquer les pr\u00e9dictions des algorithmes pour plus de transparence et de fiabilit\u00e9.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\" \/>\n<meta property=\"og:site_name\" content=\"Capgemini France\" \/>\n<meta property=\"article:published_time\" content=\"2023-04-20T12:29:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-12T13:11:14+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/Wepreview-Quantmetry-6.png\" \/>\n\t<meta property=\"og:image:width\" content=\"640\" \/>\n\t<meta property=\"og:image:height\" content=\"360\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Capgemini\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Tarek Edde Gomez\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\"},\"author\":{\"name\":\"Tarek Edde Gomez\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#\/schema\/person\/286d7d6ac3401df4d6874916bd234aaa\"},\"headline\":\"Intelligence artificielle de confiance et explicabilit\u00e9\u00a0: ouvrir la bo\u00eete noire\",\"datePublished\":\"2023-04-20T12:29:00+00:00\",\"dateModified\":\"2025-05-12T13:11:14+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\"},\"wordCount\":1565,\"publisher\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/johannes-plenio-1vzlw-ihjam-unsplash-scaled-1.jpg\",\"keywords\":[\"IA &amp; Data\"],\"articleSection\":[\"Innovation\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\",\"name\":\"IA de confiance et explicabilit\u00e9 - Capgemini 
France\",\"isPartOf\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/johannes-plenio-1vzlw-ihjam-unsplash-scaled-1.jpg\",\"datePublished\":\"2023-04-20T12:29:00+00:00\",\"dateModified\":\"2025-05-12T13:11:14+00:00\",\"description\":\"Explicabilit\u00e9 en IA : comprendre et expliquer les pr\u00e9dictions des algorithmes pour plus de transparence et de fiabilit\u00e9.\",\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#primaryimage\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/johannes-plenio-1vzlw-ihjam-unsplash-scaled-1.jpg\",\"contentUrl\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/johannes-plenio-1vzlw-ihjam-unsplash-scaled-1.jpg\",\"width\":4000,\"height\":2667},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#website\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/\",\"name\":\"Capgemini France\",\"description\":\"Just another www.capgemini.com 
site\",\"publisher\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.capgemini.com\/fr-fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#organization\",\"name\":\"Capgemini France\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2022\/08\/Logo-Capgemini.png\",\"contentUrl\":\"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2022\/08\/Logo-Capgemini.png\",\"width\":202,\"height\":60,\"caption\":\"Capgemini France\"},\"image\":{\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.capgemini.com\/fr-fr\/#\/schema\/person\/286d7d6ac3401df4d6874916bd234aaa\",\"name\":\"Tarek Edde Gomez\",\"url\":\"https:\/\/www.capgemini.com\/fr-fr\/author\/tarekgomez\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"IA de confiance et explicabilit\u00e9 - Capgemini France","description":"Explicabilit\u00e9 en IA : comprendre et expliquer les pr\u00e9dictions des algorithmes pour plus de transparence et de fiabilit\u00e9.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/","og_locale":"fr_FR","og_type":"article","og_title":"Intelligence artificielle de confiance et explicabilit\u00e9\u00a0: ouvrir la bo\u00eete noire","og_description":"Explicabilit\u00e9 en IA : comprendre et expliquer les pr\u00e9dictions des algorithmes pour plus de transparence et de fiabilit\u00e9.","og_url":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/","og_site_name":"Capgemini France","article_published_time":"2023-04-20T12:29:00+00:00","article_modified_time":"2025-05-12T13:11:14+00:00","og_image":[{"width":640,"height":360,"url":"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/Wepreview-Quantmetry-6.png","type":"image\/png"}],"author":"Capgemini","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Tarek Edde Gomez","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#article","isPartOf":{"@id":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/"},"author":{"name":"Tarek Edde Gomez","@id":"https:\/\/www.capgemini.com\/fr-fr\/#\/schema\/person\/286d7d6ac3401df4d6874916bd234aaa"},"headline":"Intelligence artificielle de confiance et explicabilit\u00e9\u00a0: ouvrir la bo\u00eete noire","datePublished":"2023-04-20T12:29:00+00:00","dateModified":"2025-05-12T13:11:14+00:00","mainEntityOfPage":{"@id":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/"},"wordCount":1565,"publisher":{"@id":"https:\/\/www.capgemini.com\/fr-fr\/#organization"},"image":{"@id":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/#primaryimage"},"thumbnailUrl":"https:\/\/www.capgemini.com\/fr-fr\/wp-content\/uploads\/sites\/6\/2025\/05\/johannes-plenio-1vzlw-ihjam-unsplash-scaled-1.jpg","keywords":["IA &amp; Data"],"articleSection":["Innovation"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/","url":"https:\/\/www.capgemini.com\/fr-fr\/perspectives\/blog\/intelligence-artificielle-de-confiance-et-explicabilite-ouvrir-la-boite-noire\/","name":"IA de confiance et explicabilit\u00e9 - Capgemini 
Author: Tarek Edde Gomez. Published 20 April 2023; last updated 12 May 2025.