
How can telcos drive sustainability for their clients?

Capgemini
1 Apr 2022
capgemini-invent

Telcos are leveraging their tools to help their business clients meet their sustainability goals, and the results are game-changing.

Our new blog talks about some real-world examples and recommendations.

In 2021, Vodafone teamed up with a Romanian transport company on a vehicle-tracking project. Together they implemented an IoT solution that’s on target to save as much as 16 million tons of CO2 emissions per year. It’s the start of a new trend. Read on to learn how telco technologies are helping businesses boost their sustainability strategies.

The enablement effect

Global CO2 emissions have reached a total of 53 gigatons per year. Industries like manufacturing, power, energy and utilities, transport and logistics, and buildings contribute 80% of that number.

The telco sector alone accounts for 2–3% of total global CO2 emissions, and most major players are already implementing environmentally friendly solutions for themselves. Where it gets especially interesting is the way they’re helping their clients. The Internet of Things (IoT), 5G, cellular connectivity, and many other telco-related technologies have massive potential when it comes to tracking and measuring – key elements of achieving sustainability.

This phenomenon of companies helping clients reduce their carbon footprint is known as the “enablement effect.” GSMA describes it as “mobile connectivity, associated digital infrastructure and AI improv[ing] productivity in other industries by an order of magnitude more than that of the telecoms sector directly.” In this way telcos can take a leading role in changing today’s sustainability landscape, and industries from agriculture to automotive have a chance to reinvent their carbon footprint reduction strategies.

A digital foundation

If telcos want to develop solutions that help industries achieve sustainability benefits, a strong digital foundation is a must. This is especially critical in light of the 5G revolution, which will necessarily increase energy use to handle the growing volume of data (more than can be offset by greater efficiency). IoT, as we’ve seen in the Vodafone example above, can help with tracking and optimization. It can also troubleshoot and diagnose CO2 emissions across the overall supply chain. Cellular connectivity and AR/VR solutions are game changers when it comes to the decarbonization process.

Now, let’s take a look at some real-life examples.

A new approach to sustainability

SHV Energy, an off-grid energy distributor, enlisted Orange’s help to deploy new telemetry solutions on its gas tanks in Europe and the US. Thanks to IoT devices it was possible to optimize gas delivery routes, cut carbon emissions, and become much greener in the process.

Orange has also partnered with Dacom, an agricultural yield management system provider from the Netherlands. One of the main objectives was to improve sustainable arable crop production. The telco used its solutions to create a flexible and scalable M2M communications infrastructure with a dedicated SIM card management portal. This supported sustainability targets while also achieving other goals, such as increased production yields and greater profits for farmers.

We are also collaborating with telcos. Our Applied Innovation Exchange (AIE) Collaboration Zone (CoZone) has created Project FARM, which helps small-scale farmers find patterns in their data to support yield optimization. Access via cell phone makes it convenient for farmers on the job. “With the profits I can educate my children,” reports one Kenyan farmer in the Exchange. “I can buy a cow and build a house.”

Benefits for industries

These examples demonstrate only a small portion of what telcos can do to help their clients achieve sustainability benefits. When it comes to smart buildings, for example, IoT sensors can regulate energy consumption for savings of 3–5%. For a skyscraper such as London’s famous “Gherkin” building, that’s 80 tons of CO2 saved yearly – more than five city buses’ worth of carbon kept out of the atmosphere.

In the energy and power sector, IoT can greatly improve energy distribution and prevent wastage. Meanwhile, high-speed connectivity (e.g., 5G, LTE) can track operational data, reduce repair times, and lead to more efficient and sustainable turbines.

The manufacturing industry also stands to benefit from sustainable technologies implemented by telcos. For example, IoT in manufacturing sites allows the capture of environmental data and a reduction in energy costs. GSMA estimates savings of 10–20% in energy consumption per year compared to industrial settings without connected technology.

No matter which industry you look at, there is always an opportunity to make it more sustainable and efficient with the help of digital technologies. Many organizations share a goal of reaching net zero by 2030. According to a report by GSMA, mobile and tech contributions can take companies a long way down that road: an average of 40% of the way for the top four sectors. Mobile and tech innovations can take manufacturing 16% of the way to net zero, power and energy 46% of the way, and buildings 53% of the way. And they can take transportation a massive 65% of the way to reaching its net zero goal.

Be the enabler

And how do telcos benefit from all of the above? First, they can sell new sets of offerings such as Sustainability as a Service, mobile-as-a-service platforms, and other similar solutions, and gain new revenues through increased network usage and data monetization. Second, collaboration on decarbonization provides a stable base for future upselling. Telcos have an opportunity to drive decarbonization in multiple industries, and the demand is only growing. Enablement gives telcos a chance to demonstrate their capabilities on a massive scale.

At Capgemini, we have an ambition to become carbon neutral for our operations by 2025, and are committed to becoming a net zero company throughout our value chain. We are also committed to helping our clients save 10 million tons of CO2 emissions by 2030. Are you interested in learning more about the ways your CSP could be driving sustainability for your clients? Contact us below.

TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

Substance or style: predicting the future of automotive innovation

Jean-Marie Lapeyre
30 Mar 2022

First from the blog series – Driving the innovation journey together.

This blog discusses how innovation in the automotive industry is being driven by customer expectations of the role a car will play in the future.

Let’s start with a basic question: what is a car? Ask that a few years ago and you’d get a basic answer: the means to get from point A to point B. An opinion that still holds sway in many automotive Original Equipment Manufacturers (OEMs) today.

But increasingly there is a rival viewpoint, driven by evolving customer expectations and the frenetic pace of technological innovation. Where the answer emphasises the role of the vehicle as a virtual companion, a platform for entertainment, connectivity, and an extension of a person’s digital life – with the transport aspect considered the most basic of features. At least when it comes to the purchasing decision, as the concept of mobility itself faces a radical redefinition. Understanding what this means for product development is a sizeable task for any OEM, as they enter a new reality where the substance of a vehicle – its mechanical performance – is overshadowed by the style that comes with it.

A more interactive relationship

Not that any of this should come as a surprise. The industry’s shift in focus from product to service was always going to have inevitable consequences. Customers have had their eyes opened to new features and possibilities. Priorities have moved on, and leading OEMs are doing their best to keep up while anticipating future demand.

The challenge however is significant, and begins with the understanding that cars can no longer be built and sold as part of an isolated, one-off process. Instead, the emphasis is increasingly being placed on delivering connected services that evolve and adapt over the vehicle’s full lifetime of operation.

Powered by software, these are services that take advantage of increased cloud connectivity to enable a more interactive relationship between customer and car maker. Technology that’s also transforming the overall mobility experience. Where OEMs are being challenged to redirect their innovation resources toward delivering a constant stream of new features and functions, the proverbial ‘style’, or risk getting left behind.

The cutting edge of innovation

The scale of change being discussed here is analogous to the development of the smartphone. A product that over recent years has progressed from a very specific function (making calls and sending SMS texts), to being a platform for software development – and a seemingly endless array of applications. Each one enabling users to personalise their experience in terms of entertainment and value-adding features, supported by constant updates and enhancements from the manufacturer.

Now compare the smartphone to the modern smart car. Two products that have seen the traditional focus on utility and cost give way to the demands of interactive mobility and entertainment. And for automotive OEMs, the implications of this shift are already being keenly felt. Where the delivery of value-adding services to complement the mobility experience is fast becoming a key battleground in the war for consumer mindshare – and therefore the cutting edge of innovation:

  • Where AI tools are now being deployed to recognise when a driver is tired, and make suggestions for a suitable rest stop
  • Where vehicle sensors can detect the driver’s mood and provide appropriate lighting and music for the journey
  • Where navigation aids offer proactive updates in real-time, factoring in data such as congestion and EV charging needs

In other words, technical ingenuity is helping open a Pandora’s box of future capability. Being first to imagine new offerings will in turn become critical to OEMs, and tax their R&D teams to the maximum. Every aspect of the driver and passenger experience will be carefully assessed, and limits pushed to their logical end points. For example, can more immersive interactivity be delivered when cars are in cruise control on motorways? And what opportunities will enhanced voice control present? Answering these questions demands a clear idea of how consumers want to consume content – from social media to work presentations – in transit, and consistent innovation to meet these needs.

A two-phased approach to progress

There is however a logical ‘before and after’ scenario that OEMs face in the development of new capabilities, due to the future introduction of fully autonomous vehicles:

  • Before: where obvious limitations exist for the driver to remain free from distractions while in control of the vehicle
  • After: where drivers become ‘just another passenger’ able to direct their full attention to the complete suite of digital services available

What we can say though with a degree of confidence is that the ‘after’ stage is still a few years away. Hence current activity is largely centred on enhancing the ‘before’. And it’s here that we’re seeing tools like Software-Driven Transformation, Artificial Intelligence, and data connectivity being used with creative freedom to revolutionise the ‘style’ on offer. To imagine new ways for delighting customers, and for bringing an unmistakable wow factor to new models. All done with a view to a vehicle’s entire lifecycle, and particularly the latter stages where arguably the most untapped potential for revenue generation is to be found.

Summing up

So, is style now beginning to trump substance in automotive design thinking? The answer is not a certain yes, but the trend is heading in that direction. It has to, as consumers recognise what’s possible when their car becomes a virtual companion, and feeding this expectation will undoubtedly become a fixation for OEMs.

Progress will be based on new business models that help introduce a more pragmatic, flexible approach to problem solving and innovation delivery. Speed to outcome will be everything. All supported by a technology infrastructure with the scope and scale needed to reboot traditional production processes. This is an inevitable development as OEMs make the disruptive move from engineering/hardware operations to technology-led businesses.

Now is the time for automotive OEMs to reinforce their development capabilities. To ensure the agility is in place to respond dynamically to any new opportunity, while allowing innovation to flow seamlessly across the business. This is our goal at Capgemini, helping automotive OEMs become ‘fluid like water’. To find out more about how we do this, including a more in-depth analysis of challenge and opportunity, read our latest episode of Technovision 2022 – which you can download here.


Improved identification through AI-driven document control

Kilian Toelge
30 Mar 2022

How can public security and safety organizations make use of Artificial Intelligence (AI) when working with privacy-sensitive documents?

Highlights

Public security and safety organizations can use AI-driven solutions to tackle the growing problem of identity fraud.

• Public security and safety authorities must deal with the complexity of identity documents.
• A lack of data requires dividing the solution into generic AI components.
• A hybrid form of people, data, and AI ensures a future-proof application.

The struggle between order and crime is a continuing phenomenon in society, where innovation plays a crucial role for both sides. Europol research in 2020[i] shows that criminals have been using AI to fraudulently get their hands on money or obtain other benefits for some time.

Identity documents are the most important documents that people possess, with some countries insisting that everyone over a certain age has at least one. Identity documents contain an individual’s basic information and are important during official activities or events, such as traveling, opening a bank account, taking out insurance, a police check, etc.

This makes it particularly worrying that the development and availability of advanced image-editing technologies and printing techniques is causing an increase in identity fraud. In the past six years, this form of crime has increased by more than 500 percent within the Netherlands alone, and further afield some 47 percent of Americans experienced financial identity theft in 2020[ii]. It’s a big problem elsewhere too, with a report published in 2021 stating that France had seen an explosion in identity and biometric document fraud, at rates four times higher than the rest of Europe.[iii]

Figure 1: Identity theft and fraud complaints in the USA, 2016-2020 (US Federal Trade Commission, Consumer Sentinel Network[iv])

Identity theft complaints (as shown in the bottom layer of the graphic above) increased by roughly 250% in just 5 years, according to the US Federal Trade Commission.

This does not mean that identity documents are insecure. Countries regularly introduce new and more complex security features for identity documents precisely to make it as difficult as possible for fraudsters.

The problem lies with controlling authorities, such as government agencies or institutions like banks, insurers, airports, embassies, etc. They have to carry out increasingly complex and specific controls to validate the authenticity of an identity document. So it is important that the means of verification evolve with time and incorporate these new controls.

Challenges in the field of document control

Three major challenges with respect to document control must be taken into account when developing such an application.

The complexity of identity documents

One of the biggest challenges is the complexity of the documents themselves. There are around two hundred countries in the world, each of which has its own identity documents. Per country, there are often more than ten different valid types and models. Examples include ordinary passports, service passports, ID cards, and residence permits. Every document has between fifty and a hundred security features. These can be categorized into standardized agreements with regard to the structure of a document, and country-specific and/or model-specific security features on the document, such as the check digit in the MRZ (Machine-Readable Zone) at the bottom of a passport or a country-specific hologram.
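To make the MRZ check digit concrete, here is a minimal sketch of the standard ICAO 9303 algorithm: each character is mapped to a value (digits as themselves, letters A–Z to 10–35, the ‘<’ filler to 0), multiplied by the repeating weights 7, 3, 1, and summed modulo 10.

```python
# Minimal sketch of the ICAO 9303 MRZ check-digit algorithm.
def mrz_check_digit(field: str) -> int:
    def value(c: str) -> int:
        if c.isdigit():
            return int(c)                          # '0'-'9' -> 0-9
        if c.isalpha():
            return ord(c.upper()) - ord("A") + 10  # 'A'-'Z' -> 10-35
        return 0                                   # '<' filler counts as 0

    weights = (7, 3, 1)  # repeating weight pattern
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# Widely cited specimen document number from the ICAO 9303 examples:
assert mrz_check_digit("L898902C3") == 6
```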

The complexity of documents is such that forensic document experts and specialized equipment are needed in order to investigate each aspect. For many people and companies, such knowledge and resources are scarce, if available at all. In addition, there are unknown security features, or ones that may not be disclosed to everyone. Finally, there are processes where there is simply not enough time to carry out in-depth checks.

The complexity of the form and structure of identity documents is clearly a challenge for automated processes. In addition to the form, the content of the documents also poses problems.

Privacy-sensitive data

A second challenge in developing an application to validate identity documents is the lack of data. Identity documents contain personal and sensitive data about their holder, which means that the scans must not be stored and retained for the development of an AI model. However, a complete AI solution would require a large number of documents. As such, it would not be enough to have only authentic documents in the dataset for an AI solution to be effective. In fact, it is particularly important to include dozens of examples of (authentically) falsified security features for each document. However, this is not feasible, because for every security feature, there are only so many known forgeries out there.

In addition, documents are issued by countries that do not completely adhere to the standards, or where production errors have arisen. This means that examples of these must be fed to the AI model as well, so it learns that deviations can exist and that these are not counterfeits.

The sensitivity of the data and human error in the production process mean that it is impossible to apply a complete AI solution to the verification of identity documents. Both the documents themselves and the availability of the data cause limitations in the application.

Technical limits of scanning equipment

As a third challenge, there are the technical limits of current scanners and control processes. Despite the fact that new scanners are regularly introduced, with higher resolutions and additional functionalities allowing for the recognition of very small print (so-called microprints), there will always be security features that cannot be checked via scans. Countries are deliberately developing security features that can only be checked on the physical identity document using specialist forensic equipment. Thus, common document scanners cannot see certain security features. In addition, some control bodies, such as the police, do not always have the option of using these advanced scanners, because they cannot take them out on the street.

As a result, the application used for automatic checks is constrained by the technical limits and the availability of the scanners.

Hybrid AI solutions as a future-proof application

How can existing technologies best be used to meet these challenges?

The added value of document templates

In the initial stage, it is important to collect and store information about the various security features and documents in a smart way. There are multiple publicly available collections of this information, such as PRADO and Edison. This information can be used to create separate document templates for each type and model of an identity document, which will serve as the basis for the application.

It is advisable to take into account repeating elements or structural properties, such as the ICAO standards, to minimize the required storage in the database and the amount of work for those entering the data into the system. In this way, a database can be created that contains templates for every known identity document with the corresponding security features, variations, and checks. These document templates will make the complexity of the identity documents manageable and enable the application to apply country-specific and model-specific checks in addition to the standard ICAO checks.
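For illustration only – the field names below are hypothetical assumptions, not an actual PRADO or Edison schema – such a template database might be shaped along these lines:

```python
# Illustrative shape of a document-template record; all field names
# are hypothetical, not a real PRADO/Edison schema.
from dataclasses import dataclass, field

@dataclass
class SecurityFeature:
    name: str          # e.g. "UV fluorescent fibers", "hologram"
    check_method: str  # "automatic" (scanner) or "manual" (inspector)
    instructions: str  # guidance shown during a manual inspection

@dataclass
class DocumentTemplate:
    country: str                   # e.g. "NLD"
    doc_type: str                  # "passport", "id_card", ...
    model: str                     # model/series identifier
    icao_standard: bool            # reuse the generic ICAO checks if True
    features: list[SecurityFeature] = field(default_factory=list)
```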

The challenge with regard to the form of the documents can thus be solved by the use of document templates, but what about the challenge with regard to the sensitivity of the data?

Applied generic AI

The lack of data for a complete AI solution does not mean that the power of AI cannot be used in the validation of identity documents. It is possible to break up the validation process into steps that are generic enough to make it possible to develop specialized AI components, which can perform the required tasks without access to large amounts of privacy-sensitive data.

Figure 2: Validation process broken up into generic steps with specialized AI components.

For example, a Deep Learning model capable of comparing newly scanned documents to a database consisting of only one sample document (specimen) per document template can recognize the correct document template (classification of the documents). This step is important in retrieving the country-specific and model-specific information for a scanned document from the database.
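A hedged sketch of this classification step, assuming each scan has already been mapped to a feature vector by some pretrained embedding model (an assumption, not a named product): the scan is matched to the nearest specimen by cosine similarity.

```python
# Sketch of specimen-based template classification; the embeddings are
# assumed to come from a pretrained image-embedding model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(scan_vec: np.ndarray,
             specimens: dict[str, np.ndarray]) -> str:
    """Return the template whose single specimen embedding is closest."""
    return max(specimens,
               key=lambda t: cosine_similarity(scan_vec, specimens[t]))

# specimens maps template IDs to one embedded specimen each, e.g.
# {"NLD-passport-2014": vec_a, "DEU-idcard-2021": vec_b, ...}
```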

In addition, object recognition can be used to crop the document from the scan. The AI for this can be trained on all types of documents and scans, allowing sensitive data to be bypassed. Facial recognition, like that used on mobile phones, makes it possible to compare the photo on the document to a live recording of the holder. The text of the document can be read with the latest OCR (Optical Character Recognition) techniques to further verify it in a later step of the process.
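For the OCR step, a small sketch using the open-source pytesseract wrapper around the Tesseract engine (one common choice, not necessarily the one used here; the file name is illustrative):

```python
# Reading the text of a cropped document scan with Tesseract OCR.
from PIL import Image
import pytesseract

scan = Image.open("cropped_document.png")  # illustrative file name
text = pytesseract.image_to_string(scan, config="--psm 6")
print(text)  # fed into later verification steps, e.g. MRZ checks
```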

None of these specialized AI components require sensitive identity documents and data for their development. So, instead of a full AI solution, it is possible to solve the data problem by making smart use of multiple and more specific AI components. In addition, there is control of what steps are carried out and how these are carried out, preventing a ‘black box’ phenomenon. This approach will deliver more reliable and more manageable results while leveraging the latest advances in AI.

Dealing with technical limits

The problem of the scanners’ technical limits and those of the scanning process cannot be solved by an application. In order to perform a complete document inspection, it is and will continue to be necessary to inspect the physical identity document manually. A manual inspection supported by technology is recommended to solve the previously identified complexity problem, and to minimize human errors.

In the first stage, the classification from the application can be (re)used to show the correct document template for the manual inspection. This saves time. In addition, the overview of the document template can be arranged in such a way that the inspector is guided through the manual inspection associated with this specific document step by step. This ensures all the security features are checked in the correct manner. Further, an overview also offers the possibility of showing the inspector additional information. For example, an alert for known forgeries may appear for the document shown, ensuring that the inspector pays extra attention to this.

This approach can also be used by the police to remedy the lack of document scanners on the street. In addition to the automatic checks via the camera of their mobile phone, they could also use a mobile application to access the manual inspection and carry out a more thorough inspection.

The addition of a step-by-step manual inspection of some security features by humans circumvents the scanners’ technical limits.

Working together to combat identity fraud

Identity fraud is an ever-increasing problem. Countries will start to incorporate more and more complex security features into their identity documents to make life as difficult as possible for fraudsters. This means that government agencies and other institutions will need smart, scalable, and future-proof tools to continue to inspect these characteristics.

The sensitivity of the personal data is holding back implementation of the complete AI solutions already used in other areas. This means that control bodies must switch to specialized AI components to keep up with technological progress and be able to use the power of AI in the future.

Find out more

This article has been adapted from a chapter in the Trends in Safety 2021-2022 report giving European leaders insight into the safety and security trends affecting citizens in the Netherlands.

  • The full report in Dutch can be found here.
  • An executive summary in English can be found here.

For information on Capgemini’s Public Security and Safety solutions, visit our website here

_______

[i] Europol (2020), Malicious Uses and Abuses of Artificial Intelligence – malicious_uses_and_abuses_of_artificial_intelligence_europol.pdf

[ii] U.S. Identity Theft: The Stark Reality – https://www.giact.com/aite-report-us-identity-theft-the-stark-reality/

[iii] https://www.biometricupdate.com/202107/how-digital-identity-authentication-can-help-frances-document-fraud-problem-fourthline

[iv] https://www.iii.org/fact-statistic/facts-statistics-identity-theft-and-cybercrime

Author

Kilian Toelge

Data Scientist & Software Architect
Kilian specializes in creating tailor-made AI solutions. In recent years, he has been designing and developing an application for the validation and verification of identity documents through AI in the public sector. Email: kilian.toelge@capgemini.com

    Digital therapeutics – taking a break in today’s fast-paced world

    Scott Manghillis
    28 Mar 2022

    Digital therapeutics approaches are being used to help children with anti-social tendencies understand and regulate their emotions.

“Time Out!” For most kids of the 1980s, that meant that your parents needed a break from the chaos, or it was used as a way of stopping bad behavior. This was usually followed by: “Go to your room!”

    However, today, a company has created an environment where your child plays games to practice calming down or “taking a pause.” But what does this look like, and what’s the value?

    The value of digital therapeutics

During these games, kids wear a heart rate monitor while they play – enabling them to see their emotions and connect with them directly as they experience them. As their heart rate goes up, the game gets harder to play, so they practice taking a pause to bring their heart rate down and earn rewards in their games.

Granted, this technology wasn’t created for your average unruly child; it was created for kids with anti-social behavior. The goal here is to build “better emotional regulation in children.” Another game is aimed at those with ADHD to help improve attention function by activating the prefrontal cortex of the brain.

    The next step in connected health

These digital therapeutics have seen extensive growth in the healthcare sector since the outset of COVID-19. Today, they aren’t just targeted at children; there are also games for those with Alzheimer’s and chronic conditions such as diabetes and cardiovascular disease. This is all part of the healthcare sector’s drive towards Connected Health in the digital age.

The Capgemini Research Institute (CRI) recently published a report titled “Unlocking the Value in Connected Health,” which explores some of the key themes in digital therapeutics. This includes companies’ connected health strategies and governance structures, capabilities, product development and launch processes, benefits and use cases, and key challenges in building a connected health portfolio.

    Read the full CRI report to learn how digital therapeutics can drive value in your organization, and discover how Capgemini’s Intelligent Customer Operations for Healthcare solution drives frictionless patient and member experiences.

    How can business respond best to the IPCC’s latest report?

    Benjamin Alleau
    28 Mar 2022

With more than 3.6 billion people already living in zones highly vulnerable to climate change and many ecosystems at the point of no return, IPCC scientists have determined that the impacts are irreversible, and UN Secretary General António Guterres declared on February 28, 2022, that unchecked carbon pollution was “forcing the world’s most vulnerable on a frog march to destruction”.

Forceful, emotive words. And few would disagree with the Secretary General. But the recent assessment report from IPCC Working Group II – written by 270 science researchers from 67 countries and approved by 195 governments – argues that it is now time to adapt or die. This might be overstating things, but the report must be heard as a call to action for people and countries globally.

Guterres went on to say: “Now is the time to turn rage into action” and urged “every country to honor the Glasgow pledge to strengthen national climate plans every year until they are aligned with 1.5°C.”

    Too small, too slow

The rage that Mr. Guterres spoke about is understandable. The response to climate change is simply not extensive or fast enough. If the IPCC’s previous Assessment Report (AR5) published in 2014 was alarming, this latest one is even more so. Now, the AR6 Working Group II (WG2) report points out: “Across sectors and regions the most vulnerable people and systems are observed to be disproportionately affected. The rise in weather and climate extremes has led to some irreversible impacts as natural and human systems are pushed beyond their ability to adapt.”

    The challenge for business

What can – and should – business do? Our own group CEO Aiman Ezzat was a keynote speaker at the 2021 UN Climate Change Conference in Glasgow (COP26). After the summit, he called for “nothing short of a revolution,” adding, “the private sector must spearhead the carbon revolution! Solutions to decarbonate will mostly come from businesses.”

So, putting politics aside, let’s first consider why businesses are currently struggling to address the core causes of climate change, such as the operations and supply chains that are the highest emitters of greenhouse gases (GHG) and waste across industries today. Or IT landscapes, where user devices, data centers, and the networks that power business are big CO2 emitters. Or products and services that are still not designed with a circular mindset or from a planet-centric perspective.

    These are clearly barriers to an accelerated response to climate change – but barriers can be torn down. At present, many businesses are struggling because they lack innovative tools and strategies to tackle the challenges, yet we believe these tools are available and must be brought into play, or we risk failing both our planet and humankind. For example, in its report FIT FOR NET ZERO: 55 Tech Quests to accelerate Europe’s recovery and pave the way to climate neutrality, Capgemini Invent offered in-depth analysis of some extraordinary tools (existing and future technologies) that have the potential to transform the global response to climate change. The report focuses on five core economic domains: buildings, energy, food and land use, industry, and transport. And it offers hope on several levels, not least the potential to reduce CO2 by 871 megatons by 2050.

    Another reason to hope

Further, despite the many hurdles, the AR6 WG2 report also offers hope in the form of nature itself. It provides new insights into nature’s potential both to reduce climate risks and to improve people’s lives. For example, as the IPCC WG2 co-chair Hans-Otto Pörtner pointed out: “By restoring degraded ecosystems and effectively and equitably conserving 30% to 50% of Earth’s land, freshwater, and ocean habitats, society can benefit from nature’s capacity to absorb and store carbon, and we can accelerate progress toward sustainable development, but adequate finance and political support are essential.” Co-Chair Debra Roberts added that “governments, the private sector, and civil society have to work together to prioritize risk reduction, as well as equity and justice in decision-making and investment.”

    Making tech part of the solution

As a global leader in consulting, technology services, and digital transformation, Capgemini puts technology at the heart of our own ambitious sustainability journey and those of our clients. We are following a 1.5°C science-based carbon-reduction pathway and envision a world where sustainability features in every value proposition on the market, whatever the industry and business – just like digital does today.

    Capgemini CEO Aiman Ezzat cites technology as one of the three core requirements for reducing carbon emissions, which he describes as “first, technology, which is key in many sectors to reshape business models toward sustainable models; second, data to measure footprints and to monitor progress; and third, concrete action plans that deliver measurable change.”

This reflects our approach to working with clients, with whom we are driving sustainable transformation and the journey to net zero. The use of the word ‘journey’ here is not arbitrary. Organizations must consider their response with long-term planning combined with a strategy that commits them to rapid action. And if you can’t measure what you are doing, how do you know whether you are on target to achieve your goals? That’s why monitoring the solutions implemented is so important. It enables you to compare where you are now, challenge those parts of your business that are not moving fast enough, and measure the impact of your carbon reduction efforts.

    Commit, Act, Monitor

    These three pillars form the basis of our approach to sustainability as we steer our clients’ journeys from initial commitment to sustainable developments. While many organizations have signed up to the ideal of sustainability, the IPCC report clearly shows that they now need to accelerate their transformations. So, let’s consider how these three pillars will help with speeding up this new transformation:

    • Commit: It is important to define your climate vision and engage your stakeholders to ensure the success of your low carbon transition. Clarify the purpose and trajectory path of your transformation so that you commit to getting there as fast as possible. This commit phase also requires you to identify the new organizational structures you might need and to consider whether you have the right talent to support your low carbon transformation. Discover how Capgemini client GASAG Group has embarked on a carbon neutral path with a CO2 savings roadmap that highlights and prioritizes concrete measures to reduce emissions.
    • Act: We have defined three core areas on which to focus to accelerate the transition:
      • Sustainable products and services – delivering a green consumer experience with planet-centric design and low carbon products, as well as circular products and services
      • Sustainable operations – implementing sustainable procurement strategies across manufacturing and supply chains, decarbonizing factories and the supply chain, and implementing circular supply concepts
      • Sustainable IT – assessing and reducing the environmental impact of IT across devices, applications, and infrastructure, and utilizing green equipment, apps, and infrastructure, such as by migrating to the cloud. But also leveraging IT and technologies to imagine new solutions for organizations to become more sustainable. Find out how Capgemini helped the UK tax authority HMRC reduce its overnight energy use by 90%.
    • Monitor and Report: Data and technology sit at the heart of how you measure the effectiveness of your response. For example, with data and artificial intelligence solutions, it becomes possible to power climate action innovation roadmaps. A data platform will support the modeling of environmental impacts (carbon footprint, product lifecycle, etc.) and provide access to ESG data to improve performance and reduce climate risk. Discover how we worked with Red Eléctrica de España to introduce a tool enabling the business to measure the impact of its circular economy roadmap initiatives.

    Act – right now

The IPCC Sixth Assessment Report (Working Group II) pulls no punches in terms of the impacts and risks of climate change. At the same time, it points to “progress in adaptation planning and implementation across all sectors and regions, generating multiple benefits”. The AR6 WG2 report adds that “climate resilient development is facilitated by international cooperation…”

    At Capgemini, we believe that businesses across all sectors must be part of that cooperation, utilizing technology and data to accelerate change.

    To find out more, get in touch

    Author

    Benjamin Alleau

    Group Sustainability Acceleration Services Lead

    Cloud economics – different types of cost optimization techniques for SAP

    Devendra Goyal
    11 Mar 2022

    Cloud economics plays a vital role in managing the costs, benefits, and the economic principles of cloud computing.



The etymology of the word economics can be traced back to the Greek ‘oikos nomos’, meaning ‘household manager’. In the context of cloud computing, how do you run a metaphorical cloud “household”? Cloud economics plays a vital role in managing the costs, benefits, and economic principles of cloud computing. The question is, what are the best ways to optimize the costs of cloud computing and obtain the greatest value for your organization?

With the onset of the pandemic, social distancing pushed employees and customers online, generating a demand for cloud solutions. Unfortunately, cloud economics is still a grey area, making it challenging for businesses to consume cloud services cost-effectively while consistently driving value. According to an IDG study, well-managed cloud environments reduce operating budgets and drive new revenue, cutting IT operations overhead by 26% and delivering strong ROI. Bad cloud economics decisions spring from a few blind spots around operational costs, application performance, talent, and reskilling. Luckily, there is a variety of techniques that businesses can employ to better manage their cloud ‘household’.

There are two foundational principles that need to be addressed as a first step towards improving the cost efficiency of cloud computing. Economies of scale help cloud providers lower long-term expenditures (CAPEX) and enable a pay-as-you-go pricing model. Large entities have a competitive advantage over smaller ones, since their sheer size allows them to buy in bulk, increase managerial specialization, and obtain lower interest charges when borrowing from banks. All these factors optimize costs. Global reach also results in substantial savings: with servers located in and accessible from anywhere in the world, companies can easily reduce labor costs.

Another way of optimizing costs is the practice of right sizing. Right sizing involves several smaller processes, such as determining hardware requirements, physical memory, CPU power, and I/O capacity. Right sizing SAP is simple with the help of Quick Sizer, a web-based tool that makes the sizing of SAP applications easier and faster. Different sizing methods in SAP include greenfield, brownfield, and expert sizing.

Reserved instances (a discount billing concept) are offered by market giants such as Amazon’s AWS and Microsoft’s Azure to optimize costs for consistent workloads. Shutting down unused resources, auto-scaling, setting budgets, allocating costs, and choosing the right compute services are complementary cost-control actions, as sketched below.
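As a hedged illustration of “shutting down unused resources,” here is a sketch using boto3 (the AWS SDK for Python) that stops running EC2 instances carrying an assumed AutoStop tag; the tag name is an illustrative convention, not an AWS default.

```python
# Sketch: stop running EC2 instances tagged for auto-stop (e.g., overnight).
# The "AutoStop" tag is an assumed naming convention, not an AWS default.
import boto3

ec2 = boto3.client("ec2")

def stop_tagged_instances() -> None:
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [inst["InstanceId"]
           for page in pages
           for res in page["Reservations"]
           for inst in res["Instances"]]
    if ids:
        # Stopped instances incur storage costs only, not compute.
        ec2.stop_instances(InstanceIds=ids)
```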

Choosing the correct region is another significant factor in bringing down the cost of operations. Different regions affect latency, pricing, machine availability, and carbon footprint, so a strategic choice of region must be made to amplify savings. Furthermore, ramping up your business agility goes hand in hand with everything else – it is a major economic benefit of the cloud. Deploying computing resources and applications at a faster rate, and ramping storage and computing power up and down on demand, allows businesses to respond faster to market changes, leading to faster revenue growth. Data transfer also plays a main role in cloud economics. Generally there is no charge on ingress data, but cloud vendors do charge for egress and inter-region transfers. If data transfers can be measured and controlled using monitoring mechanisms, public cloud bills can be reduced.

The migration of SAP from an on-prem data center to the public cloud reduces the total cost of ownership (TCO). Hardware is also a crucial element. Although often overlooked, it is a key component of an SAP system, comprising servers, disk storage systems, and network gear (routers, security firewalls), which together create the base layer of the SAP system. Investing in new hardware is a good idea, as it is cheaper and more efficient, ultimately resulting in a good ROI. An SAP landscape also needs a disaster recovery strategy with a recovery point objective (RPO) that defines how much data you can afford to lose in the case of a disaster, which is useful when assessing your company’s finances and possible risks.

Through automation, SAP2Cloud transformation can optimize the costs of your target cloud environment by quickly identifying the optimal SAP-certified virtual machines (VMs), increasing the speed and efficiency of migration without disruption. Capgemini’s cloud migration assistant, an advanced prebuilt accelerator, analyzes existing CPU and memory utilization and cloud parameters for any public cloud platform, enabling us to identify and propose the optimal VM for your SAP2Cloud transition. We can then recommend a target architecture for rightsizing, complete with migration costs and timeframes.

All these cost-optimization techniques act as a gateway to your perfect cloud “household”. More and more data will be stored, managed, and analyzed in the cloud, and organizations with a solid understanding of cloud economics will be in a better position to optimize costs. A well-designed cloud strategy is the optimal path to cost reduction.

    Author


    Devendra Goyal
    Head – Global SAP2Cloud Offer & SAP2Cloud Transformation

    Arm SystemReady gives developers an out-of-the-box software experience that “just works”

    Capgemini
    25 Mar 2022
    capgemini-engineering

    As an official Arm SystemReady IR certification partner, Capgemini is helping chip, platform, and product companies streamline embedded development with software that “just works.”

The value chain for developing today’s intelligent embedded products is a case study in complex ecosystem partnerships. It involves many moving parts and many vendors – developing IP and chips; low-level software; operating systems (OSs) and real-time operating systems (RTOSs); many layers of embedded software, middleware, frameworks, and protocol stacks; boards and system hardware; and application software – along with system integrators, testers, certification specialists, and more, all adding their unique technology and business value along the way. With this kind of complexity and diversity, it’s crucial that each company in the chain be able to rely on the features, functionality, and performance of the previous building blocks on which their solutions are built. For example, application developers must rely on communications and connectivity stacks to work as specified; stacks must rely on the OS and lower-level software to correctly enable chip and other hardware features, and so on.

For firmware and other low-level software, however, this ‘trust’ is not always guaranteed. Low-level software, such as firmware, drivers, board support packages, and SDKs, is developed for and tested at a specific OS/kernel rev. But by the time device development begins, a new OS/kernel has often been released, or the OS or hardware chosen by embedded developers is not the same as when this low-level software was released. These differences between what was released by chip or chip-software vendors and what is required by embedded developers (who typically need to be working with the latest OS rev), if left unaligned, can lead to bugs discovered later in product development in some not-so-straightforward ways. These could include issues such as transient feature discrepancies, sub-par performance, elusive security glitches, and even device crashes, to name just a few – issues that can be challenging to detect and tie back to the cause. The consequences can be as straightforward as increased cost or delayed schedules or, if not uncovered and debugged during rigorous testing, even product failure after deployment.

    Arm SystemReady enables software to ‘just work’

It’s key that embedded developers – anywhere in the value chain – be able to rely on precursor building blocks when they design, develop, and test their platforms and products. This was precisely the problem Arm set out to fix with its SystemReady compliance certification program for chip, platform, and product developers building embedded components based on its core IP. SystemReady certification ensures that Arm-based chips and their low-level software are tested and certified to work as expected. Or as Arm puts it, they want Arm developers to have confidence that generic off-the-shelf chips, low-level software, the OS, and subsequent layers of software in their devices will ‘just work’… right out of the box. Certification assures chip and firmware developers that their chip is ready for development; and it assures embedded developers they can port the chip and low-level software to any OS or hardware platform they choose and the chip will behave as expected. This can bring dramatic cost and time savings compared to finding out later that this was not the case.

    Who benefits?

    1) Silicon providers, both fabless and IDMs – get assurance their products (chips, and low-level software) will perform as expected for their platform and product developer customers.

2) OS & RTOS vendors – Regardless of the age of the low-level software, certification includes testing at the current OS rev, giving OS vendors the assurance that low-level software will ‘just work’ at any OS rev up to the rev used in the certification. This gives their platform developer customers a wider choice of OS.

    3) Platform and component OEMs & ODMs – are assured that their application developers can just start using chips, software and hardware platforms with confidence that they can use standard firmware interfaces to build, deploy and maintain their products.

    Capgemini is now an official certification testing lab for SystemReady IR

Arm SystemReady certification is based on a set of hardware and firmware standards and a selection of market-specific supplements and is performed by an official certification testing lab. Four SystemReady bands are designed to support different device classes: SystemReady SR, SystemReady ES, SystemReady IR, and SystemReady LS. More on SystemReady bands here.

Recently, Capgemini became an official certification testing lab for the SystemReady IR band in the Arm® SystemReady program. The SystemReady IR band provides system certification for devices in the IoT edge sector built around system-on-chip (SoC) ICs based on the Arm A-profile architecture. More about the SystemReady IR band here.

Certification is typically thought of as the ‘last step’ in device development, a ‘gate’ of sorts before devices – whether chips, platforms, or products – are made available for the next phase of product development. But it is common for certification testing to uncover issues with device software both before and during the testing process. To help developers prepare for certification, perform the certification testing itself, and debug issues encountered during certification, Capgemini offers three options for companies seeking SystemReady IR certification.

• SystemReady IR certification testing includes complete testing of a qualified chip, platform, or product according to the Arm SystemReady IR test & certification plan. On completion of the test suites, Capgemini submits the results to Arm for review, and certified products are issued a formal certificate by Arm, including promotion into the Arm SystemReady Catalog.
    • Pre-certification firmware upgrade services are available for device firmware or other low-level software that may require an upgrade prior to certification in order to meet baseline requirements for certification.
    • Custom engineering services assist developers in handling issues uncovered during the certification process such as: diagnosing and fixing problems encountered during certification (and re-run testing until certification is successful), custom firmware migration, multiple OS distribution verifications for the latest releases, device driver and BSP development or modification, and other services as needed.


For more information on SystemReady certification services, to download the Capgemini Engineering Arm SystemReady IR Certification brochure, or to kick off a SystemReady IR certification project, visit the Capgemini SystemReady IR certification portal.

    Author: Nitya Verma, Senior Director DSP, Capgemini Engineering

    How can we build AIs with user data while respecting personal privacy?

    Warrick Cooke
    25 Mar 2022

    In a world where customer data is a major source of value, privacy concerns could limit innovation.

    Most companies want to be data-driven. From healthcare to automotive to consumer products, they all want to collect data on the people using their products so they can launch personalized apps and gain customer insights. Whether it’s data on patient lifestyles, driving habits, or skincare regimes, the information is valuable.

    However, the user’s data belongs to the user. They are not obliged to part with it. Sometimes they will do so in return for a clear benefit or freebie. But users increasingly opt-out when they can, especially when it concerns things like health metrics that they may not want on a company server. They will also hold companies to account if they don’t take proper care of their data.

    There is another challenge emerging. Data is valuable because it can be used to build artificial intelligence (AI) models at the heart of personalized customer apps. However, these models can be reverse engineered to identify the private data used to train them, even if the data is anonymous. In one well-known example, Netflix made an anonymous data set available to a data science competition. However, some clever data scientists showed how they could identify private records by combining them with public data from IMDb, an online database of information on films, television series, home videos, video games, and streaming online content.[1]

    Creating AI tools that respect user privacy

Imagine a wearable device that monitors your health metrics and gives you personalized health advice. Such a device would collect data about your state of health (e.g., heart rate, steps taken, etc.) and other parameters that help maintain optimal health, such as temperature, humidity, weather, etc. This data would feed a model trained to spot markers of health concerns and recommend solutions.

    One way to maintain privacy is to store and process all the data on the device. But, of course, this requires massive computing power. However, edge computing can run sophisticated models at the wearable-device level. In addition, processing at the edge means the company doesn’t receive the user data, so there are no privacy concerns for the user.

    Still, some data, such as weather information, needs to be requested from an external source. This could disclose personally identifiable data to the source since a weather request shares the location.

A disclosure like this highlights how hard it is for the user to avoid sharing personal data. One solution is to make lots of requests via a proxy server. The device’s internal model knows the real location, so it discards the wrong responses, but the receiver has no idea which request is the right one, who is requesting it, or why it has been requested.
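Here is a minimal sketch of that dummy-request idea; fetch_weather and the proxy transport behind it are placeholders, not a specific API.

```python
# Sketch of the dummy-request pattern: k queries, only one of them real.
# `fetch_weather` and its transport (the proxy) are assumptions.
import random

def private_weather(real_loc: tuple[float, float], fetch_weather, k: int = 10):
    decoys = [(random.uniform(-90, 90), random.uniform(-180, 180))
              for _ in range(k - 1)]
    batch = decoys + [real_loc]
    random.shuffle(batch)              # hide the real query in the batch
    answers = {loc: fetch_weather(loc) for loc in batch}
    return answers[real_loc]           # the device knows which one is real
```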

    How can we model data without compromising privacy?

    The proxy server idea above is a good solution if your primary goal is to provide users with a useful AI tool. But what if you want to collect their data?

    Say you are studying arthritis. You want to dig into your wearables’ health data to pull out records of users with arthritis so you can review the link between lifestyle and change in health metrics over time. Or, more prosaically, you may want to know how the device is used so users can maximize its value.

    If you take the data off the device and upload it to your company’s cloud to be processed, you get into privacy issues.

    When stored or transmitted, the private data is encrypted, which doesn’t cause too many worries. However, it needs to be decrypted to train models. This step creates the possibility that the user’s identity is revealed to people working on the model, creating a window of opportunity for your data to be stolen while being decrypted. Reverse-engineering the model makes it possible to identify the user.

    Decryption requires user consent, which may not be forthcoming. People worry about their data being hacked, and not everyone likes the idea of strangers looking at their data, even those beyond reproach, such as data scientists.

    The solution is to adopt techniques that allow anonymous data to be combined into larger models without anyone seeing anything that could be used to identify an individual.

    Three techniques that deliver personal data privacy

    One relatively simple solution to reverse engineering is to insert fake records. Here, the model can be designed to compensate for the noise, so anyone attacking the model would be unable to identify real users.

There are more sophisticated techniques that provide greater end-to-end data privacy. One is differential privacy, which performs random changes to data at the point of collection (i.e., on the device) before transmitting the data. So the model – or anyone who steals the data – has no idea whether any individual data record is accurate. But because we know the level of randomness and the probability that a piece of data is wrong, we can reconstruct an accurate group-level picture that is reliably predictive of user behavior.
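A minimal sketch of one such point-of-collection mechanism, randomized response (the 75% truth probability is purely illustrative): each device perturbs its own yes/no answer, and the collector inverts the known noise to recover the group-level rate.

```python
# Randomized response: noisy individual records, accurate group estimate.
import random

P_TRUTH = 0.75  # illustrative: a device reports truthfully 75% of the time

def randomize(truth: bool) -> bool:
    """With probability P_TRUTH report the truth, otherwise a coin flip."""
    return truth if random.random() < P_TRUTH else random.random() < 0.5

def estimate_rate(reports: list[bool]) -> float:
    observed = sum(reports) / len(reports)
    # observed = P_TRUTH * true_rate + (1 - P_TRUTH) * 0.5  ->  solve for true_rate
    return (observed - (1 - P_TRUTH) * 0.5) / P_TRUTH

# 10,000 devices, 30% of users truly have the condition:
reports = [randomize(random.random() < 0.3) for _ in range(10_000)]
print(estimate_rate(reports))  # ~0.30, despite per-record noise
```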

Homomorphic encryption is another option that is starting to be used. This complex modeling technique allows datasets to be processed while still encrypted, so no one handling them ever sees a personal record. For example, it makes it possible to find data on people with arthritis in the wearables data set, run calculations on it, and create a useful model based on group-level insights without decrypting any personal records.

The math of homomorphic encryption dates back to the 1970s, but computing power has only recently allowed us to use it in practical applications. For now, applications are limited to well-funded organizations that can throw significant computing power at the problem. However, it is gaining interest and is likely to become an important tool for building complex AIs without compromising privacy.
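To give a flavor of what this looks like in code, here is a sketch with the open-source python-paillier library (phe), which implements an additively homomorphic scheme – a simplified stand-in for fully homomorphic encryption: the aggregator sums encrypted values without decrypting any individual record.

```python
# Additively homomorphic aggregation with python-paillier ("phe").
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Each device encrypts its own metric (e.g., a weekly symptom score).
ciphertexts = [public_key.encrypt(x) for x in [52, 78, 64]]

# The server adds ciphertexts without seeing any plaintext value.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

# Only the key holder can decrypt the group-level aggregate.
print(private_key.decrypt(encrypted_total))  # -> 194
```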

    Building a privacy-preserving app

For makers of privacy-preserving applications and devices, the options are best considered at the design stage, because it is hard to layer stringent privacy requirements on top of a fully formed app.

    The early design stage should encompass exploring the available data, gaining insights from it, and identifying additional data that would be beneficial to acquire, such as location or weather data. If computation is being partially or wholly handled on the device, the hardware’s technical capabilities and constraints must be considered. In addition, it is crucial to explore data privacy techniques that ensure the user cannot be identified. Privacy needs serious consideration, grounded in a complete understanding of the data being processed.

    As models become more complex and hackers become more sophisticated, privacy needs to be built into AIs from the very start.

    Capgemini Engineering helps customers design and monetize intelligent services from connected products while ensuring personal data remains private and secure. To discuss your intelligent product and data privacy challenges, contact: engineering@capgemini.com

    Author: Warrick COOKE, Consultant, Hybrid Intelligence, Capgemini Engineering

    Warrick is an experienced strategic technology consultant who helps clients apply advanced and emerging technologies to solve their business problems. He has worked with scientific and R&D companies across multiple domains, including the pharmaceutical and energy sectors.

    We enable digital workplace transformation in the aerospace industry

    Capgemini
    24 Mar 2022

    Aerospace companies are evaluating new ways of working to meet today’s challenges, such as digitization, financial constraints, and operations management.

    Air travel is witnessing a slight uptick, encouraged by less stringent travel restrictions and growing vaccination rates. However, the aerospace industry is still reeling from the pandemic, as the drastic reduction in air travel has lowered demand for aircraft. According to the International Air Transport Association (IATA), industry-wide revenue passenger kilometers (RPKs) were down 53% compared with pre-crisis 2019. The aerospace industry has hit a brick wall, as travel demand is not likely to return to pre-COVID levels until 2024.

    To meet these challenges, aerospace companies are evaluating new ways of working. Capgemini has worked with 9 of the top 10 aerospace and defense companies, providing a range of IT solutions to solve their business problems. We have delivered innovative engineering, manufacturing, and supply chain solutions that optimize production, operations, collaboration, and budgets to secure their futures.

    Our interactions with aerospace companies reveal that they have effectively managed the shift to remote work, particularly in engineering and design. Although remote working has resulted in productivity gains, employee wellbeing is a top priority for many enterprises. For a better employee experience, we recommend building a flexible hybrid workplace, where employees can choose to work anytime, anywhere, and from any device. According to Gartner, organizations with high levels of flexibility are almost three times more likely to see high employee performance.

    Capgemini’s Employee Experience provides a new level of choice and flexibility in employee interactions, engagement, collaboration, and support. We deploy the right tools and technologies to create a modern hybrid workplace that promotes productivity, agility, and employee satisfaction. We help you create a truly amazing employee experience that translates into great business results. Our recent client wins in the aerospace industry are a testament to the transformative work we have been doing in this space.

    • We have signed a five-year contract with Airbus to support the redesign of its global collaborative workplace, in terms of both working methods and tools. Based largely on Google Workspace technology, the leapfrog innovation we provide will benefit all Airbus businesses, enabling them to work better together.
    • For Heathrow, our longstanding customer of 12 years, we will deliver service desk and end-user services as part of our new five-year contract. Capgemini will provide flexible and efficient services to support thousands of Heathrow colleagues and their devices. Together with IT Service Management tooling, we will provide an integrated solution to create better traveler and employee experiences.

    Equally important, our focus on the employee experience is winning kudos from independent research firms. Recently, we were recognized as a Leader in the Avasant RadarView™ report for Digital Workplace Services, as well as in the NelsonHall NEAT vendor evaluation of Advanced Digital Workplace Services, for the third time in a row.

    Air transport, one of the industries hit hardest by the pandemic, is on the road to recovery. Many of our clients are looking for new ways to work and collaborate to emerge stronger after the crisis. If you are looking to transform employee experiences that drive business results, please contact me. I’ll be happy to help!

    Author

    Alan Connolly
    Global Head of Digital Workplace Services, Cloud Infrastructure Services

    Using AI augmentation to empower database administrators

    Capgemini
    23 Mar 2022

    As data platforms rapidly evolve and become more powerful, DBAs are the important link between data scientists and business users.

    This article first appeared on Capgemini’s Data-powered Innovation Review | Wave 3.

    Written by:

    Arvind Rao
    Partner Architect Advisor
    Google Cloud

    Most enterprises already have the talent in-house to start using AI to unlock the full potential of their data. They are the database administrators. They know the data, they know the organization, and they are trusted advisors – they just need a little help from data-platform vendors.

    The world’s largest organizations generally understand that to continue to succeed in today’s competitive environment, they need to become data-powered enterprises. They acknowledge that it’s imperative to modernize their data and harness the full power of tools such as AI to derive actionable insights. However, many of these companies have also learned that human resources are a major challenge in making this transformation.

    In short, there are not enough data scientists – those who create the solutions that leverage state-of-the-art technologies such as AI. Based on my experience in data analytics over the past couple of decades, in an ideal world data scientists would account for 10 to 15 percent of the staff at a data-powered organization. Yet the majority of organizations – including most successful technology enterprises – have not achieved that ideal.

    “ORGANIZATIONS UNDERSTAND IT’S IMPERATIVE TO MODERNIZE THEIR DATA AND HARNESS THE FULL POWER OF AI WHILE HUMAN RESOURCES POSE A MAJOR CHALLENGE.”

    DBAs to the rescue

    The good news is most enterprises already have the talent to successfully make this transformation. Database administrators (DBAs) – those who manage a company’s data warehouses and similar data platforms – are the backbone of most IT operations. These professionals understand the data an enterprise has collected, where it’s stored, and how to use it. They ensure authorized people have access to the data they need. And since data is sensitive and valuable, they control who has access to it to keep it safe from misuse or theft.

    Knowledge and trust

    As a result, database administrators know more about their company’s data than anyone else in their organization. They certainly know more than the data scientists who work for the technology vendors that develop the data platforms upon which modern enterprises rely.

    At the same time, database administrators are trusted advisors within their enterprise. They’re the go-to source for help when a business user needs to derive insights – whether that’s a salesperson looking to improve lead generation, a service manager trying to spot potential customer satisfaction issues, or an executive seeking market predictions for the coming year.

    It therefore makes sense to ensure database administrators can leverage the insights and capabilities of AI-augmented data platforms.

    The Lake House

    The majority of data-platform vendors have been working towards the concept of a Lake House – a convergence of databases, data warehouses, and data lakes – that makes the platform usable and accessible to everyone, everywhere. With data scientists increasingly focused on creating these new platforms, vendors have fewer resources to dedicate to building, managing, and maintaining the – often highly customized – tools required by business users. That’s why it’s important that data-platform vendors augment their solutions with AI. It’s also why these AI augmentations must be easy to use in the DBA’s day-to-day role: DBAs should not have to invest huge amounts of time learning data science to take advantage of these tools. Enterprises are increasingly demanding this simplicity of their suppliers – whether they are vendors of databases, data platforms, analytics, or cloud-based solutions.

    At Google, we’ve developed a number of solutions that help bridge the gap and create data warehouses infused with AI/ML that work for all users – not just data scientists.

    • Vertex AI brings together Google Cloud services for building machine learning in a unified user interface and API. With Vertex AI, a database administrator can easily train and compare models using AutoML or custom code training. All models are stored in one central repository and can be deployed in ways that allow DBAs and other non-data scientists to start using AI/ML in their day-to-day work, with very little training.
    • Dataplex is an intelligent data fabric that breaks apart silos. It provides a single pane of glass that allows database administrators to centrally manage, monitor, and govern an organization’s data – including ingestion, storage, analytics, AI/ML, and reporting. It does this across any type of platform – including data lakes, data warehouses, and data marts – with consistent controls that provide access to trusted data and power analytics at scale.
    • BigQuery is a serverless, cost-effective, multi-cloud data warehouse designed for business agility. BigQuery democratizes insights with a secure and scalable platform to perform functions such as anomaly detection, customer segmentation, product recommendation, and predictive forecasting. It features built-in machine learning to derive business insights using a flexible, multi-cloud analytics solution and adapts to data at any scale, from bytes to petabytes, with zero operational overhead. Most importantly, database administrators can learn BigQuery and easily incorporate it into their tasks, as the sketch after this list illustrates.
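
    To give a flavor of how approachable this is for a DBA, here is a minimal sketch that trains a BigQuery ML forecasting model from Python. The google-cloud-bigquery client, the ARIMA_PLUS model type, and ML.FORECAST are real BigQuery features; the project, dataset, table, and column names are invented placeholders:

        # pip install google-cloud-bigquery
        from google.cloud import bigquery

        client = bigquery.Client(project="my-project")  # placeholder project ID

        # Train a time-series forecasting model in plain SQL - no custom
        # ML code. Dataset, table, and column names are illustrative.
        client.query("""
            CREATE OR REPLACE MODEL sales_ds.daily_sales_model
            OPTIONS (
              model_type = 'ARIMA_PLUS',
              time_series_timestamp_col = 'order_date',
              time_series_data_col = 'revenue'
            ) AS
            SELECT order_date, revenue
            FROM sales_ds.daily_sales
        """).result()  # blocks until training completes

        # Forecast the next 30 days from the trained model.
        rows = client.query("""
            SELECT forecast_timestamp, forecast_value
            FROM ML.FORECAST(MODEL sales_ds.daily_sales_model,
                             STRUCT(30 AS horizon))
        """).result()

        for row in rows:
            print(row.forecast_timestamp, round(row.forecast_value, 2))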

    The smart data-warehouse platform

    Looking ahead, I envision a future in which most successful organizations deploy a smart data-warehouse platform that provides a number of important benefits. These include:

    • Easy access to the organization’s data, public data, and other business data – without worrying about what kind it is or where it’s stored
    • Serverless tools to access data in real time and to mine and infuse AI/ML capabilities – scalable on demand, cost-effective, and a strong foundation for building AI models
    • Reporting tools that showcase analytics in real time – in a safe, secure, and scalable way
    • Modern data-warehouse capabilities that equip all users with the tools and resources they need to do their jobs efficiently and effectively, and give CXOs what they need to keep their staff motivated

    As enterprises work to achieve this goal, leveraging AI to empower database administrators in their day-to-day work is something they can do now, and do cost-effectively. They just need the right tools from their vendors.

    Giving DBAs easy-to-learn AI-powered tools will enhance the value they already provide to the enterprise. It can also help keep these knowledgeable team members – the organization’s trusted advisors on all matters IT-related – relevant as the enterprise embraces a new, more powerful, and innovative data-powered future.

    INNOVATION TAKEAWAYS

    A VALUABLE RESOURCE

    Database administrators know the company’s data and are trusted by its people. They have important roles to play in an organization’s transformation into a data-powered enterprise.

    SHARE THE LOAD

    Database administrators bridge the gap between the data scientists who are creating the next generation of AI-powered analytics tools and the business users who will benefit from the insights such tools provide.

    VENDORS MUST HELP

    Data-platform vendors must incorporate easy-to-learn AI tools into their products so database administrators can take full advantage of these state-of-the-art solutions.

    Interesting read?

    Data-powered Innovation Review | Wave 3 features 15 such articles crafted by leading Capgemini and partner experts in data, sharing their lifelong experience and vision of innovation. In addition, several articles were written in collaboration with key technology partners such as Google Cloud, Snowflake, Informatica, Altair, AI21 Labs, and Zelros to reimagine what’s possible.