
Omnisumers: The future of active energy consumption

Capgemini
14 Jun 2023

As the energy industry navigates the energy transition, we hope to create a world in which energy is available, affordable and clean by the 2030s. At Capgemini, we want to envision this world: the rich and diverse opportunities the energy transition will enable across every geography and within every community; the new value opportunities; the economic and societal prosperity; and the reimagining of the energy industry as a whole.

Revolutionizing Energy Consumption: How Omnisumers Are Leading the Way

In the 2030s, a new type of energy customer is sweeping across Western Europe: the active omnisumer. The omnisumer is defined as a person or business who participates in a dynamic energy ecosystem across various solutions, products and providers. Historically, the relationship between energy provider and consumer was passive, disengaged and one-way. Energy was a commodity, delivered to consumers from unknown, high-carbon sources with prices dictated by set tariffs. In the 2030s, the energy ecosystem is transformed. Growing concern around energy consumption and its impact, both financial and environmental, led to consumers demanding control. Energy providers had to fundamentally reshape their relationship with customers.

The top-down power dynamic has been replaced by a more equal, choice-based relationship. Energy providers must be responsive to a paradigm where the majority of consumers are omnisumers: aware of their personal energy footprint and actively managing it within a dynamic, diverse ecosystem. In the 2030s' blockchain-enabled, highly fragmented marketplace, it's not enough to simply sell new units of energy. Energy is no longer a commodity but a service: a highly personalized service that deeply understands its customers, engages with their lifestyle and needs, and enables them to easily exchange value and make informed decisions. To achieve this, energy companies have built complex partnership ecosystems with technology companies, automotive manufacturers and power generators – all geared to empower the omnisumer. Simplicity and control are at the heart of the omnisumer's behavior and underpin the three key themes that have enabled the omnisumer's rise: more choice, digitization, and self-generation.

My energy, my way

Being an omnisumer is not simply about choosing to use less energy, it's about choosing which energy you want to use. As self-generation technologies and micro-grids have become more available – increasingly democratized due to government schemes and financing options – people can choose between the national grid and their own home generation to avoid price spikes and congestion. As energy has moved from commodity to service, the market has become segmented by price preference. Some consumers still view energy as a commodity, wanting the cheapest available regardless of its source, but desiring the extra flexibility to help them save money. Others want maximum comfort with minimum trouble – they are motivated less by the environmental impact and more by the advancements in digital apps and tech, and are willing to pay a premium for a high-end service. Enabled by blockchain technology, energy becomes highly traceable. Consumers track the energy they use from its source, empowering them to care more deeply about its origin. Local energy producers build meaningful relationships with their customers, sharing news, photos and updates, to keep them engaged and loyal; increasingly, more people have shares in local energy production, making them both customer and investor. By embracing the shift to localized, distributed energy, utilities companies embed themselves in communities' and individuals' wider daily lives. Every omnisumer has a different relationship with energy built on a unique, diverse ecosystem of providers and products. But the crux of being an omnisumer is universal: making active choices to ensure they get the energy they want, the way they want it.

A digitized, tailored industry

Digitization has transformed the world of energy, enabling the industry to become more connected and intelligent. Smart meters offer omnisumers access to rich data and actionable insights about their energy habits. Managed through easily navigable AI-powered mobile apps, consumers monitor connected appliances in real time – from energy insights such as peer-to-peer comparisons and carbon-footprint recommendations to appliance health checks and safety alerts. Energy providers leverage this influx of data to deliver highly tailored products and services at industrial scale. By learning each customer's patterns of behavior, energy companies optimize their offering to the individual. Leading energy providers act as the single broker for their customers' entire energy ecosystem, from providing e-mobility services to leasing domestic renewable energy technologies. Partnerships with other service providers, such as EV charging stations, build a holistic, interconnected offering that serves every need.

Digitization also helps the omnisumers of the 2030s optimize their energy costs by making it easy to get closer to nature. AI software tracks the ebbs and flows of renewable energy output and advises the consumer via their mobile app when to use high-energy appliances – for example, charging their EV or running the washing machine at night when energy costs are lower. Time-of-use, dynamic tariffs also notify consumers of negative price plunges, enabling them to be paid to use electricity when demand is low or the grid is oversupplied by renewable power. If they have personal batteries, the software automatically optimizes the charging and usage of this battery power based not only on output variation but on the consumer's lifestyle. It can intimately learn the consumer's habits and routines and adapt their energy consumption accordingly – for example, if they regularly need to drive at nighttime and therefore can't charge overnight. Thanks to digitization and intuitive UX, omnisumers don't need in-depth knowledge of energy in order to have an active relationship with it. Digital technologies level the playing field for everyone and make it easy to consume energy in a more considered, efficient and ultimately cheaper way. Energy service providers can differentiate themselves from the competition by optimizing an individual's energy needs and providing the best algorithms and offers aligned with their consumption habits.
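The tariff-aware scheduling described above can be sketched in a few lines. This is a minimal illustration, not any provider's actual algorithm; the `cheapest_window` helper and the hourly price values are invented for the example.

```python
def cheapest_window(prices, hours_needed):
    """Return (start_hour, total_cost) of the cheapest contiguous run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        cost = sum(prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Hypothetical day-ahead tariff in cents/kWh: cheap overnight, with a
# negative-price dip in the early afternoon when renewable output peaks.
tariff = [12, 10, 8, 7, 7, 9, 14, 20, 22, 18, 12, 6, 2, -3, 1, 8,
          15, 24, 28, 26, 20, 16, 13, 12]
start, cost = cheapest_window(tariff, hours_needed=3)
print(f"Run the 3-hour load starting at {start}:00 (total {cost} cents)")
```

A real optimizer would fold in battery state, forecast uncertainty and the consumer's routines, but the core idea – shifting flexible loads into the cheapest (sometimes negative-priced) window – is exactly this search.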

Powering independence with self-consumption

For many, being an omnisumer revolves around reduced costs. They want insight and control over what they're spending and how. But as technologies such as battery storage and electric vehicles become more ubiquitous and therefore more affordable, and as the energy conversation continues to permeate the mainstream, more and more consumers are investing in self-generation. 100 million households around the world rely on solar PV, and 65% of Western Europe's car sales in 2030 are battery electric vehicles. The shift to renewable energy has made generation increasingly local. Local government plays a key role in renewable generation, both as planning authority and by promoting self-generation technologies via subsidies. Omnisumers may choose to grid-share and be part of a local community collective of domestic solar installations and batteries. By the 2030s, there will be energy services built into new multiple-occupancy developments, as well as community-based energy storage solutions such as batteries that serve an entire village. Alternatively, omnisumers may keep a private closed circuit of energy, any surplus of which their energy provider helps them sell back to the grid at the best rates. Some consumers are becoming ever more independent prosumers; they've transitioned from simply being a buyer to becoming an interwoven player in the energy ecosystem. The opportunity for energy providers lies in enabling this transition for all by successfully simplifying the complexity and making consumers feel rewarded and engaged in their energy interactions.

Lighting the way for the omnisumer

The rise of the omnisumer sparks deeper, active customer relationships that facilitate grid management and open up new commercial models and social valorization. These models create greater value connections and avoid commoditization – a total transformation of the role of energy companies in the eyes of the customer. With increased control comes trust, loyalty and empowerment. The omnisumer is motivated not only by cost, but by a sense of purpose. Their relationship with energy is governed by simple yet sophisticated end-to-end solutions that work to help them save money, time and the planet simultaneously. They can easily choose and control the energy they want, the way they want it – all enabled by the new energy ecosystem and its innovative technologies.

Sustainable packaging: a critical part of your green credentials

Maryem Sahnoun
7 June 2023

Packaging is the window to the product. It shapes buying decisions, and provides vital information. It protects the product, ensuring happy customers, and minimising returns and waste. Perhaps most importantly in the current age, the choice of packaging shows a company’s commitment to sustainability.

In recent years, sustainability has moved from a premium offering for conscientious customers, to an essential element of all packaging. Consumers will increasingly pay more for products with sustainable packaging, regulations on packaging sustainability are likely to become stricter, and of course, it is the right thing to do for the environment. Promisingly, most companies seem to agree, with industry giants such as Unilever and P&G making bold commitments on sustainable packaging in recent years.

The opportunities and challenges of sustainable packaging

Sustainable packaging means reducing the environmental impact of the packaging. This can come from new materials which are recycled, recyclable, or biodegradable. Last year’s Packaging Sustainability Awards saw gongs go to cardboard made from leftover barley straw, mono-material plastics which improve recyclability, and water-based inks.

It can also come from redesigning packaging to reduce materials – Pilgrim's Choice Cheddar made a big deal of its 40% reduction in packaging by wrapping its cheese snugly, rather than in a loose bag.

And the latest trend is to design packaging to be continuously reusable and refillable, eliminating (most) waste altogether.

Although sustainable packaging is a long-run trend, there is still plenty of room for improvement, with many goods still using unsustainable packaging, and even the leaders having room to improve through new packaging innovations.

But nothing is ever simple. Packaging already constitutes around 9% of a product's total cost. Reducing packaging can save money, but replacing it with sustainable alternatives will likely add costs, at least in the short term.

Sustainable packaging may also come with design trade-offs. New materials, fewer layers, and sustainable inks may not offer the same opportunities for bold colours and 3D shapes as plastics and synthetic dyes.

But problems can be opportunities. Sustainable packaging presents an opportunity to differentiate and win new customers, in a world where company ethics matter more than brand loyalty. And new material innovations create new aesthetics, that may come to be seen as more modern than today’s in-your-face brands.

Nonetheless, business is business, and cost is still a big customer concern. So all of this needs to be done whilst keeping cost to a minimum. How do we do that?

Embedding sustainability into packaging design and engineering

CPG companies need to embed sustainability in their packaging development process, whilst also ensuring it can be delivered at scale and cost-effectively. That means adopting a number of processes and skills.

It needs design and simulation to model new sustainable packaging designs, which balance sustainability with other factors such as visual appeal, transportation, protective value, manufacturing costs, and so on. When moving to whole new types of packaging, this is an essential first step to derisk decisions.

Then it needs expertise in material selection, synthesis and formulation to choose or create the optimal packaging and ink materials for your product’s needs. And in designing the optimal manufacturing processes to produce it at scale.

All of this can be optimised through computer-aided engineering, simulation, and analysis. Such digital tools help speed innovation, but cannot provide all the answers, so physical testing is also critical throughout the development process to test products, validate designs and feed back into simulations.

All materials have some environmental footprint, so the packaging must also minimise lifetime impact. That means modelling what its production and use looks like in the real world, understanding the resources required to produce it at scale, the manufacturing processes, and the transport implications. Sometimes a piece of packaging that looks sustainable in the lab, suddenly looks less sustainable when you see how many can fit in a truck, or where critical raw materials come from. Understanding these real-world impacts allows you to tweak designs to ensure the right balance of both lifetime sustainability and cost.
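The "how many can fit in a truck" effect above can be made concrete with a toy calculation. All figures here are invented for illustration; a real assessment would cover the full life cycle, not just one transport leg.

```python
# Toy model: per-unit transport CO2 depends on how many packages fit per trip,
# so a slightly bulkier "sustainable" package can raise transport emissions.
def transport_co2_per_unit(truck_trip_co2_kg, truck_volume_l, unit_volume_l):
    units_per_truck = truck_volume_l // unit_volume_l  # whole units per trip
    return truck_trip_co2_kg / units_per_truck

compact = transport_co2_per_unit(850, 90_000, 3)  # 30,000 units per trip
bulky = transport_co2_per_unit(850, 90_000, 4)    # 22,500 units per trip
# The 33% bulkier package carries ~33% more transport CO2 per unit shipped.
```

This is the kind of real-world check that can make a lab-sustainable design look less sustainable once logistics are modelled.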

Finally, a relatively new area of sustainable packaging is reuse. Not all packaging will take this route, but it is increasingly popular to design packaging that is intended to be refilled, either by the product's manufacturer, or by the user at dedicated refill stations. Companies pursuing this approach need to set up processes for managing their recirculating packaging, such as trackable containers, asset management systems, and analytics.

Conclusion

Sustainable packaging is not a one-off project. Things change. Today’s cutting-edge sustainability may look dated in five years. Sustainability must become deeply embedded within ongoing innovation. That means embracing all of the above as part of a continuous and agile development process. It is vital to take this end-to-end approach to enable ongoing sustainable packaging innovation whilst keeping cost manageable.

How Capgemini can help

Our end-to-end packaging approach creates a seamless experience for clients. Our team combines packaging design, sustainable materials development, optimization of primary, secondary and tertiary packaging levels and different packaging types (e.g., steel, plastic, cardboard, wood, foam), and technological and engineering expertise. We consider all elements of packaging to ensure it is optimized for sustainability, whilst also protecting the product during shipping and handling, and enhancing its appeal and value.

Maryem Sahnoun

Sustainability Senior Specialist, Capgemini Engineering
Expert in developing sustainable packaging solutions that balance economic, environmental and social considerations. Adept at collaborating with cross-functional teams and managing complex projects from concept to launch. She is committed to driving positive impact and creating a more sustainable future for all.

    How the benefits of FinOps go beyond balancing the books for financial services

    Tanya Anand
    12 Jun 2023

    Is there more to FinOps than cost reduction for financial services organizations? Our FinOps experts share their perspectives from their firsthand experience.

    In recent years, the financial services sector has seen an increase in the pace of digitization. This is a reaction to the demands of the omnichannel customer and the need to compete with FinTech challengers whilst maintaining a cost discipline. Cloud transformation has been a key pillar of digitization for many of these organizations, leading to a mix of hybrid and multi-cloud environments.

    Throughout the past few years, increased cost consciousness amongst financial services companies and a lower appetite for risk have meant that organizations are reducing their technology investments and reprioritizing their technology portfolios. It is in this climate that the cost of cloud computing has taken a number of organizations by surprise. As a result, many CIOs, COOs, and CFOs are now turning to FinOps to help reduce their cloud spend. A recent survey indicated that 31% of organizations have a cloud spend of over $12 million per annum.[1] FinOps is an operational framework as well as a shift in both culture and mindset, enabling organizations to maximize the value of cloud investments.[2] However, when FinOps is not approached strategically, it fails to deliver the sustainable cost and business benefits that CXOs need now more than ever. In this blog, our Capgemini Cloud Advisory and FinOps experts share lessons learned from their experience with FinOps implementation, along with practical solutions for FS CIOs, COOs, and CFOs to consider in their FinOps transformation journey.

    What is the business case for FinOps?

    Ewan MacLeod, who has been part of several FinOps setups with financial services clients, describes how enterprises often start their journey to cloud computing by assuming that replicating their IT environment in the cloud will result in cost savings.

    “Most organizations will overspend on cloud solutions if they do not adopt the right FinOps thinking right from the start. Early adoption is key to the business sustainability of a cloud-based organization.”

    Vikram Rajan, VP at Capgemini for Cloud Advisory Services for financial services organizations, agrees:

    “Even if an organization does not have a significant cloud spend today, they should establish the FinOps thinking early on. This is to ensure that teams are proactively making cost-conscious decisions and do not need to go into fire-fighting mode against wasted cloud spend.”

    Our experts unanimously challenge the myth that FinOps is mainly utilized to achieve cost reductions.

    FinOps enables an organization to answer two key questions: where does cloud spend go, and what business value does it deliver?

    FinOps provides greater transparency of which department, project, and team the cost should be allocated to.

    FinOps enables the organization to tie the spending back to business benefits and, in many cases, maximize business value at the unit spend level. For instance, onboarding a new customer may have an instance cost of $10 per customer.

    Elaborating on the above $10 per customer example, Ewan believes the benefits of FinOps for onboarding are self-evident:

    “When you can onboard 1,000 customers quicker due to being on the cloud, you can trace the value of a $10,000 spend and justify the cost.”

    FinOps thereby facilitates greater flexibility and agility for investments in cloud computing. It enables organizations to think in terms of how to be flexible over how they spend, how and when they should shut resources down, how long they need an environment for, and what the working hours for that environment should be.

    In Vikram’s experience, FinOps can prove pivotal for high-value product lines:

    “Where organizations have products delivering value in hundreds of millions of dollars, they are looking to improve the accuracy of forecasts and predictability of spends to manage their platform/products’ scalability.”

    Lokesh Sah, a Cloud Advisory specialist at Capgemini, is also quick to point out that FinOps insights can also enable organizations to develop operational improvements, such as developing a cloud data archiving policy to manage data storage costs, which then helps with cost avoidance and improves data efficiencies.

    Our experts emphasize that the business case for FinOps programs appears to be straightforward:

    ‘FinOps pays for itself by accelerating the business benefits, such as improved cost management, value maximization, agility, and scalability.’


    Is it too late to adopt FinOps if your organization has already started its cloud transformation without it?

    In a perfect world, organizations would have incorporated FinOps at the start of their cloud transformation journey, but Ewan has good news for those organizations who didn’t:

    “It’s never too late. It’s like tackling your credit card spending. From the point at which you start auditing your spending, you can identify the overspending and savings and redirect them to a holiday or a car purchase or back to your cashflows. In the case of FinOps, I would say the sooner an organization adopts it in the right way, the better, but it’s never too late.”


    [1] Flexera (2023), 2023 State of the Cloud Report. [2] Capgemini (2023), The Rise of FinOps. [3] Sarrazin, T. (2023), The Secret to Successful FinOps.

    Explore our services and recent thought leadership for FinOps

    Contributors

      H2 – The future of automotive fuel

      Anuraag Bharadwaj
      6 Jun 2023

      Why hydrogen and how to implement it?

      This article explains why hydrogen is the best option for the future and lays out a basic approach for attaining a hydrogen economy.

      Underpinned by a global shift towards decarbonization, hydrogen is gaining significance as an energy vector, especially for high-emission sectors that do not use electricity directly. Our research (Capgemini Research Institute Report – Low-Carbon Hydrogen: A Path to a Greener Future) suggests that a majority (64%) of E&U organizations are planning to invest in low-carbon hydrogen (or green hydrogen) initiatives by 2030; and 9 in 10 plan to do so by 2050.

      On average, 0.4% of total annual revenue is earmarked for low-carbon hydrogen by E&U organizations. Investments are flowing in across the hydrogen value chain – especially into the development of cost-effective production technology (52% of organizations investing), electrolyzers and fuel cells (45%), and hydrogen infrastructure (53%) – to help create alternative revenue streams and aid decarbonization efforts.

      For hydrogen production to be considered low-carbon, it must come under the EU's proposed emissions threshold of 3.38 kg CO2-equivalent per kg of hydrogen, which is 70% lower than that of the predefined fossil fuel comparator, including transport and other non-production emissions.
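      The relationship between the threshold and the comparator is a one-line calculation; re-deriving it makes the figures easier to check:

```python
# "70% lower than the fossil fuel comparator" means the 3.38 kg threshold is
# 30% of the comparator's emissions, so the comparator can be backed out.
threshold_kg_co2e = 3.38          # kg CO2e per kg of hydrogen (EU proposal)
required_reduction = 0.70

comparator_kg_co2e = threshold_kg_co2e / (1 - required_reduction)
# comparator ~ 11.27 kg CO2e per kg H2; any pathway emitting more than 3.38,
# counting transport and other non-production emissions, misses the bar.
```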

      Hydrogen adoption in the automotive industry

      There is a raging discussion about hydrogen adoption in the automotive industry today. Questions range from the very generation of hydrogen (source of generation, cost of generation, energy density, transportation) to issues relating to energy transformation (electric to potential (hydrogen) to electric (FCV), losses during these conversions, etc.).

      The current cost of hydrogen generation is USD 0.5–8/kg (depending on the process), and primary sources of generation include coal (black), crude (blue), and electrolysis (green). Today, most production is the B2 (squared) type, which is derived either from coal or crude.

      Currently, this type of hydrogen is generally used as a reducing agent in industrial manufacturing. Demand is limited and therefore production and associated costs remain high. Moreover, CO2 emissions per kg of hydrogen are very high and while this could probably be justified in cases calling for the specialized use of hydrogen, the story changes when it is looked at as fuel.

      Types of Hydrogen

      Coal is burned to generate methane and from there, hydrogen is generated. Alternatively, it can be the product of the fractional distillation of crude. In either case, consuming fuel to generate another fuel does little to solve the problem of CO2 emissions. Generating emissions to create an “emission-free” fuel defeats the entire purpose of an alternative fuel.

      Thus, as an automotive fuel, only green hydrogen can be useful in reducing emissions and achieving carbon neutrality. However, this green hydrogen must be generated using electricity from alternate/renewable sources (solar, wind, hydro, biomass).

      If we want hydrogen to be a viable fuel in the automobile industry, we need to ensure uninterrupted availability and extensive distribution capabilities.

      But let’s first revisit the concept of energy transformation (transmutation) in an automobile. An automobile carries energy as potential energy. It moves when potential energy is transformed by combustion (gasoline/natural gas in ICE) or magnetic induction (electric charge stored in EV batteries), into kinetic energy. Thus, a good fuel carries more potential energy per kg of its weight. Let’s see where hydrogen stands in this aspect.
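      A quick numerical comparison shows where hydrogen stands on potential energy per kilogram. These are approximate textbook lower-heating-value figures, not from the article; the battery entry is usable electrical energy for a typical pack rather than a combustion value:

```python
# Approximate gravimetric energy densities in MJ/kg (lower heating value).
lhv_mj_per_kg = {
    "hydrogen": 120.0,
    "gasoline": 44.0,
    "diesel": 43.0,
    "li_ion_pack": 0.9,   # usable electrical energy of a typical EV pack
}
ratio_h2_vs_gasoline = lhv_mj_per_kg["hydrogen"] / lhv_mj_per_kg["gasoline"]
# Per kilogram, hydrogen carries roughly 2.7x the energy of gasoline -
# though its very low volumetric density complicates onboard storage.
```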

      Transformation of hydrogen – From source to automobile

      There are four cost-input points and emission-generation points. To make hydrogen cost-effective and usable as fuel while achieving emission neutrality, we need to analyze inputs and outputs at these four stages.

      As discussed earlier, B2 (squared) type hydrogen is not viable from either a cost or an emission perspective. Truly green hydrogen, however, is another matter.

      Green hydrogen is generated by electrolysis, which requires two important components: water and electricity:

      • Water – There is no need to use fresh water. Recycled water, methane-rich water (sewage, industrial waste), or even seawater can be used, serving the dual purpose of procuring hydrogen and reusing water. Moreover, if seawater is used, numerous mineral by-products are created (Na, K, etc.).
      • Electricity – The idea of using electricity generated from fossil fuels (coal, gas) to create green hydrogen is pointless. Only renewable electricity (solar, wind, hydro, biomass – this one is very relevant for agrarian societies such as India, East Asia, Africa, Latin America, parts of the US, and Australia) should be used. This will increase the well-being of farmers by increasing their income while enabling the extraction of every last bit of energy from agricultural produce, thereby reducing the carbon footprint and achieving full circularity – sustainability.

      For green electricity, large solar farms can be established in coastal or desert locations with high solar radiation, and wind farms can be constructed in hilly areas. Deserts apart, wherever there is sun and wind there is usually also water. Therefore, to generate green hydrogen you would just have to establish electrolysers.

      The green hydrogen produced can be transported through pipelines and distributed just like LNG. Farmers have biomass, water, and solar panels to generate electricity; they use electricity to generate H2 by electrolysis; they compress it in compressors that run on green electricity; and supply H2 for transportation.
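      The production chain described above can be put into rough numbers. The ~52 kWh/kg figure is a commonly cited practical electrolyser consumption (the thermodynamic minimum is about 39 kWh/kg); the solar farm size is invented for illustration:

```python
# Back-of-envelope sizing for a farm-scale green hydrogen setup.
ELECTROLYSER_KWH_PER_KG_H2 = 52.0   # assumed practical efficiency
daily_solar_output_kwh = 10_000     # hypothetical small solar farm

h2_kg_per_day = daily_solar_output_kwh / ELECTROLYSER_KWH_PER_KG_H2
# ~192 kg of green H2 per day, before compression and transport losses.
```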

      Having established that H2 is a better fuel due to its high heating value, and having discussed how green hydrogen can be generated, the next question is what technology should be used to convert this hydrogen back into kinetic energy to drive automobiles.

      What will be the cost of such technologies and where will they be effective – i.e., the cost-benefit conundrum? Given hydrogen's better fuel capability, it would be most effective in commercial transportation, which utilizes heavy equipment such as HCVs, trains, heavy earth movers, and ships. In this context, it would be generally safe to consider hydrogen as a replacement for diesel/CNG engines.

      There are two technologies to do this:

      1. Fuel cell: A fuel cell acts like an engine that converts H2 into electricity, and that electricity drives the electric motor to achieve locomotion. Fuel cell technology uses polymer and platinum, making it expensive; however, research into developing ceramic membranes is currently underway.
      2. Hydrogen ICE: Internal combustion engines can be modified to burn H2 instead of gasoline or CNG, producing water vapour and NOx as emissions. Such engines are easy to modify and easy to operate because distribution networks are already in place in various geographies, such as India.

      In conclusion, I am certain that H2 is not far away in areas up to 15° on either side of the equator, where green hydrogen can easily be harnessed due to favourable climatic conditions and fuel cell technology can be used to drive vehicles given prevailing economic conditions.

      About Author

      Anuraag Bharadwaj

      VP, Head Automotive Industry Platform, Capgemini
      Anuraag Bharadwaj is Vice President and Head of the Automotive Industry Platform at Capgemini. He is an INSEAD alumnus and a digital transformation leader with 25 years of sales and delivery experience in the industrial and automotive sectors. He provides thought leadership to the automotive industry, predicting and deciphering new trends as they unfold. He is an expert in IT/OT convergence, supply chain, connected vehicles, quantum tech, and alternative fuels in the automotive industry.

        Europe’s new push for interoperability and collaborative data ecosystems

        Gianfranco Cecconi - Collaborative data ecosystems lead
        9 Jun 2023

        Capgemini Invent at the 2023 Data Spaces Symposium

        Think back fifteen years to the launch of the very first iPhone: that sleek design, that 3.5-inch touchscreen, that first mobile web browsing experience. None of us had any idea how much this one device would transform our way of life. Now, hold on to that feeling because I am going to make a prediction. After having attended the 2023 Data Spaces Symposium in the Netherlands a few months ago, and the MyData Conference from May 31 to June 1, I believe this same scenario is playing out with data spaces.

        Data Sharing Vs. Data Spaces

        For many of you, data sharing will be a familiar expression. It often gains headlines, unfortunately, when it’s nefarious, typically when companies share your personal data with third parties without your permission. But today, there is a growing trend to rediscover the authentic opportunities that come from willingly sharing data. So, why the turnaround?

        The reason is simple: we now recognize the value of sharing data. Today, the challenges facing our global community are unparalleled. To overcome them, we all need to find new ways to collaborate. Organizations worldwide are now increasingly coming to the same conclusion.

        Data sharing is not a new concept, though. Traditionally, every organization collaborating along a supply chain, for example, has been sharing information along the chain to streamline the manufacturing of goods. But today we aim at data sharing that is decentralized, flexible, and heavily reliant on standards and automation. We call this new environment a “data space.” In a data space, participants’ data does not need to be stored in one central location and managed by some super-powerful intermediary. Rather, it is stored and managed on the individual participants’ systems. The figure below is a high-level representation of the evolution we are observing in the data-sharing models, moving towards data spaces.

        Figure 1 – From traditional data sharing to data spaces

        Data Spaces Gather Pace

        The momentum of this trend to data space adoption was evidenced by the success of this year’s Data Spaces Symposium, a new yearly conference at which the sharpest minds in the field meet to share knowledge and discuss the paradigm shift in data sharing that is accelerating the development of collaborative data ecosystems.

        Additionally, data spaces are also getting a push from regulatory bodies. The European Union’s push is particularly noteworthy. Back in 2020, its Data Strategy recognized the value data spaces can bring to the economy of its Member States. The EU has been supporting the Data Strategy through dedicated legislation (the Data Governance Act, the Data Act, and more in the pipeline…). Moreover, it continues to support progressive initiatives, such as the “Data Spaces Support Centre” (DSSC) project, of which Capgemini is a partner.

        The truth is that a lot of people have been waiting a long time for this day to arrive. Certainly, I am delighted to see decades of work finally becoming a reality. As Capgemini Invent’s Lead for Collaborative Data Ecosystems, the Data Spaces Symposium was the highlight of 2023 so far.

        Capgemini Invent and the DSSC

        The work of my colleague Marta was the main reason I was especially excited to attend the event. Marta Pont is one of our senior managers working for EU institutions. Her team leads the activities within the Data Spaces Support Centre aimed at evaluating the impact of data space initiatives on the European economy and society, building on Capgemini’s experience in conducting similar research and assessments. The DSSC is a research and coordination project that brings together 12 partners and aims to support the development of European data spaces by engaging with the data space communities and making relevant resources available to them to support interoperability. At the Symposium, Marta presented our methodology for evaluating data spaces, which is, to the best of my knowledge, pioneering work and a first of its kind.

        But why do we need such evaluations? Let’s give the floor to Marta:

        “The first purpose of our evaluation is to check if data spaces are value for money, as the European Commission is investing money into their development. But the evaluation will also be a useful exercise for data spaces themselves, to see where they stand in their development, in their ability to deliver socio-economic benefits for the society, and to identify pain points or improvement areas where they can learn from peers and improve their performance in the coming years.

        In particular, the evaluation of data space performance takes into account three dimensions and various indicators that serve to understand 1) the regulatory, financial, business, and societal ecosystem in which each data space operates, 2) the stage of development of data spaces, and 3) the individual outcomes being yielded by data spaces.”

        Figure 2: Dimensions and high-level indicators considered in the evaluation

        It is easy to understand why we need accurate evaluations of these innovative initiatives, but you might be wondering why Capgemini was on stage to talk about it. Marta has the answer:

        “This exercise is important to Capgemini because our organization has been involved for years now in performing similar assessments as part of the European Data Portal [data.europa.eu] to measure the socioeconomic impact of open data policies and the re-use of open data in the EU. […] And, also, Capgemini Invent was running the predecessor of this project, the Support Center for Data Sharing, so this is a continuation of the work we have done thus far.”

        You Can’t Manage What You Don’t Measure: Our Methodology for Data Spaces Evaluation

        By the time Marta took to the stage, the Postillion Convention Centre in The Hague was filled to capacity. She was part of a six-strong panel discussing How to Bring Data Spaces to Life, the overarching subject of a suite of pitches showcasing the assets and services of the DSSC. Marta began her presentation with the rationale and scope of this evaluation exercise and the reason for the DSSC to pursue it.

        She was part of a research team that, back in 2017, conducted a study for the Commission to understand the extent to which the lack of access to data hindered the European economy in terms of missed business opportunities. The data spaces program supported by the Commission aims to address this gap, and the evaluation conducted by the DSSC should ultimately show whether data spaces are actually succeeding in that goal. In her presentation, Marta outlined the various inputs used in the evaluation.

        These inputs feed into a five-phase process involving an analysis of the ecosystem surrounding each data space, their level of maturity, and their socioeconomic outcomes, and eventually result in three reports which will be produced at three different moments in the project’s duration, enabling a comparison of the evolution of data spaces’ performance over time. For a summary of our methodology, take a look at the graphic below:

        Figure 3: The Data Spaces Support Center’s methodology for evaluating the impact of data space initiatives

        While the European Commission aims to foster collaboration and synergies among data spaces and avoid data silos, we need to acknowledge that there are differences between data spaces in different sectors, and even within them. This is due to a number of factors, such as their funding scheme (procurement vs. grants), their business rationale (commercial vs. non-commercial initiatives), or the pursuit of their individual goals. To cater for these specificities, the evaluation methodology proposed by Capgemini allows for sector-specific indicators that may only apply to some data spaces, enabling a differentiated approach.

        Of course, as with any innovation, the rate of adoption varies from sector to sector. But slow movers risk missing out on many opportunities and the chance to be known as a pioneer in the space. Perhaps they will see the light after our data space evaluations influence European policymaking. This is one of the key aspects of Marta’s work.

        The proposed methodology has already been tested with 17 EU-supported data space initiatives, funded mainly through the European Commission’s Digital Europe Programme (DIGITAL) and the EU’s research and innovation programme Horizon 2020 (succeeded by Horizon Europe). The first evaluation report on the performance of these data space initiatives is due to be submitted to the European Commission in June and will become available over the summer on the DSSC’s website.

        After the Symposium

        It’s encouraging to know so many experts are actively developing data spaces. But we need to ensure we do not fall into the outmoded way of working. There has been a tendency for passionate and independent parties to work separately in silos. But as a result, they each risk developing competing standards, leaving blind spots, and causing misalignment. We must strive for open collaboration that will ensure data spaces go from strength to strength.

        Within the next few years, “data space” will become a household term. We’ll all be direct or indirect participants in multiple data spaces. And soon, we will use these next-generation collaboration spaces to make unprecedented progress where it matters most.

        Interested in learning more?

        Visit our collaborative data ecosystems homepage for a wide variety of resources and insights.

        Author

        Gianfranco Cecconi

        Collaborative data ecosystems lead, Capgemini Invent
        “The EU’s Data Governance Act has renewed the drive for governmental initiatives aiming to empower citizens, businesses, and organizations through data. The EU data strategy is alive and meaningful to all of us in the different roles we play in our data ecosystems. The Member States’ open data programs are among the pillars that this transformation builds upon.”

          The power of connected ecosystems in aerospace and defense

          Capgemini
          9 June 2023

          The connected aerospace and defense industries have undergone significant transformations since the 2000s, when the primary target was to achieve interoperability and radio communication means for network-centric operations. Nowadays, the focus is more data-centric: providing the right information at the right time to the right people.

          As connectivity becomes increasingly important across business processes, there is a growing need for collaboration and synergy. We look forward to exploring this need more in our chalet at the 54th annual Paris Air Show. Ahead of that, we’d like to explore the current perspectives in aerospace and defense, the benefits of connected operations, and the opportunities presented by new business and operating models with connected ecosystems.

          The current perspective in aerospace

          Connectivity is becoming increasingly important across all business processes in the aerospace industry. From manufacturing to in-flight operations, landing, passenger services, and maintenance, the demand for seamless communication and data exchange is greater than ever. The labor-intensive nature of the industry also calls for increased automation to streamline processes and improve efficiency.

          What we refer to as “Connected Aerospace” aims to address these challenges by developing new operating and business models from manufacturing to operations. This includes optimizing design and development through engineering automation, intelligent testing, and embedded software factories. Next-generation technologies, such as green aviation, UAVs, smart mobility, air-to-ground communication, and next-gen inflight communication and entertainment, are also being incorporated to enhance the overall experience.

          Intelligent operations, such as smart factories and supply chain monitoring, ensure efficient processes throughout the industry. With end-to-end 4G/5G connectivity powered by private networks and connected equipment, superior operational performance can be achieved. This includes remote operations, industrial automation, and data-driven industrial control and monitoring, all of which increase security and safety.

          Across all of this, there is a strong emphasis on application layers, data-centric operations, mesh networks, and securing data access and use.

          By embracing connectivity, the aerospace and defense sectors can elevate towards a more sustainable future, revolutionizing how aerospace and defense operate in the modern world.

          The current perspective in defense

          The defense industry faces the challenge of managing an overwhelming amount of information. This data must be sorted and digested for end-users, ensuring they can access essential insights quickly. There is a growing need to incorporate modern connectivity technologies into the military space to address these challenges. However, since the defense industry is legacy-heavy, long-lasting equipment must be maintained and updated to meet connectivity and data needs.

          Connected Defense can now become a reality, as the technologies finally exist and are proven to function in the non-military environment. This will deliver the vision of full interoperability, leading to unprecedented situational awareness at the speed of relevance, while guiding more precise and more compliant military operations.

          Implementing connected operations presents various challenges: technical ones, such as dealing with complex systems and integrating with legacy systems, as well as organizational and procedural ones, such as managing larger ecosystems.

          This stands in tension with the need, driven by a changing geopolitical environment, to develop systems much faster: systems that can inform faster and act faster.

          The ‘Smart Forces’ will create a multitude of data compared to today. Handling it means incorporating modern connectivity technologies, leveraging AI, and establishing a ‘cloud way of working and thinking’ in the military space. At the same time, Connected Defense also has to function in highly contested and congested environments, which imposes specific requirements. Yet the spin-in of advanced technologies from the non-military environment is inevitable if we are to progress at the desired pace towards Connected Defense; and because the defense industry is legacy-heavy, long-lasting equipment must also be made connectivity- and data-ready.

          The avalanche of connected technologies

          The rapid advancement of technology has led to a convergence of multiple innovations, which have accelerated almost simultaneously. Integrating these emerging technologies can revolutionize these sectors’ operations, enhancing efficiency, security, and overall performance.

          Sensors and IoT

          Integrating IoT technologies in aerospace and defense allows for more efficient resource allocation, predictive maintenance, and enhanced security measures. For example, sensors on aircraft can monitor engine performance, fuel consumption, and structural integrity, enabling maintenance crews to identify potential issues before they become critical. In defense applications, IoT devices can monitor borders, track assets, and detect potential threats, improving response times and situational awareness.

          Many industries are moving away from age-related maintenance schedules and visual inspections toward IoT network management, remote tracking, and preventative and predictive maintenance.
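          As a toy illustration of that shift, the sketch below flags a sensor reading that drifts far outside its recent baseline, the kind of simple condition-based check that predictive maintenance builds on. The function name, data, and thresholds are all hypothetical, not taken from any real monitoring product.

```python
from statistics import mean, stdev

def maintenance_alerts(readings, window=5, threshold=3.0):
    """Flag readings that drift beyond `threshold` standard deviations
    of the preceding `window` readings: a toy condition-based check."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Engine vibration readings: stable for a while, then a sudden anomaly
vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 1.05, 4.8]
print(maintenance_alerts(vibration))  # → [7]: maintenance before failure
```

          A real deployment would of course use richer models over many sensors, but the principle is the same: act on condition, not on age.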

          High-speed connectivity

          As the volume of data generated by IoT devices and sensors increases, the need for fast, reliable communication networks becomes essential. The emergence of 5G, the new global cellular communications standard, together with edge computing has significantly changed the connectivity landscape, paving the way for innovative use cases and applications.

          5G offers higher speeds, greater capacity, lower latency, and enhanced reliability compared to previous generations of cellular networks. These improvements enable real-time data transmission, seamless communication among various systems, and the ability to make quick decisions based on the information gathered. Furthermore, 5G’s flexibility and the processing of data close to the network edge allow for the implementation of advanced technologies such as robotics, automated machines, increased factory automation, and augmented and virtual reality, all delivered at scale and cost-effectively through a multipurpose network.

          In the aerospace industry, high-speed connectivity, particularly 5G, can improve in-flight communication between pilots, ground control, and other aircraft, enhancing safety and efficiency. It also lets passengers stay connected during flights, providing a better travel experience. In defense applications, high-speed connectivity allows for more effective command and control, enabling instant communication between units and rapid response to emerging threats. Meanwhile, edge computing can process and analyze data locally on devices such as drones, sensors, or vehicles, without a constant connection to a network. This enables real-time decision-making and rapid response to emerging threats, even in remote or contested environments with limited bandwidth availability. 5G technology is integral at the core and adds significant capabilities in the fog and edge environment.

          Artificial Intelligence

          Artificial intelligence (AI) plays a crucial role in synthesizing and processing the vast amounts of data generated by the above technologies. AI’s ability to rapidly analyze and process data enables better resource allocation in defense applications, optimizes aircraft design and manufacturing processes in aerospace, and enhances flight operations by monitoring aircraft systems in real time. The result is enhanced decision-making, increased efficiency, and improved safety, while driving innovation in processes and projects.

          Despite the numerous benefits of AI, challenges and considerations must be addressed, such as ethical implications in defense applications and potential vulnerability to cyber-attacks. In a 2022 report on Intelligent Products and Services from the Capgemini Research Institute, “62% of the organizations struggling to scale up IoT applications cited cybersecurity and data-privacy threats.” We can anticipate a similar trend with AI. Establishing clear guidelines, ethical frameworks, and robust cybersecurity measures is crucial for maintaining the integrity and security of connected ecosystems. By addressing these challenges, the aerospace and defense industries can fully embrace AI’s transformative power in the connected ecosystem, revolutionizing various processes and projects.

          End-to-end architecture for cloud and data

          This powerful combination of pervasive sensing, high-speed connectivity and powerful AI both provides and demands the ability to manage, control and deliver enormous quantities of data with various constraints and characteristics.

          Systems of systems span environments from centralized cloud and networking via localized compute and network edge to local sites and heavy platforms to the lightest of platforms and smart devices. The physical requirements and available capabilities – latency, storage, computation, power – vary by orders of magnitude, as do the possible impacts of failure, the levels of integrity expected, and the traditional cycle times of updates. These differences promote different architectural principles and paradigms but must support consistent end-to-end concepts of confidentiality and identity, for example.

          To achieve the speed and relevance we require, we must adapt and deploy information assets – whether operational data, AI models, or code – to the proper infrastructure with the right security at the right time. Appropriate architectures for creation, release, deployment, and operation now allow us to build and train AI in environments with few constraints and easily accessible resources, while deploying into an operational domain with stringent constraints of security, regulation, and potential threat.

          New models linking the digital and physical worlds are increasingly able to support these needs and constraints.

          A system of systems – the importance of integration

          These technologies cannot reach their full potential standing alone. Successful implementation requires a comprehensive ecosystem in which data is shared and systems are integrated seamlessly and rapidly into the ‘System of Systems’ that Connected A&D is made of. Companies like Capgemini play a crucial role in making this ecosystem work.

          With over 340,000 employees worldwide, Capgemini is a global leader in providing engineering, IT, and business solutions to the industry. Incorporating the technology shift and innovation, we build solutions for successful products and services across industries, including the A&D ecosystem.

          Whether you’re a partner of Capgemini or not, you need collaboration among various players in the connected ecosystem, including direct end-users (Armed forces, MoDs, or public sector organizations), OEMs, solution providers, and hyperscalers.

          However, as the number of players in the ecosystem increases, the complexity of the programs grows as well. That is why it is essential to find smart ways to accelerate program development and deployment, such as adopting methodologies and accelerators.

          Breaking silos and collaborative development

          It’s hard enough to create a connected ecosystem within one organization, but now organizations must also consider how they interact with those on the outside as well.

          Especially in large-scale military programs, competitors must find ways to work together effectively, despite their natural inclination to protect their proprietary information and technologies. This is where connectivity plays a crucial role, as it enables the sharing of data and resources across overall product development and conception.

          Still, companies may hesitate to share certain information with their competitors during development. To overcome this challenge, they must find strategies and frameworks to work together efficiently while protecting their intellectual property and competitive advantage.

          There is no way around it – you need to get the job done faster than before, sharing more data than before but at the same time protecting what’s yours. That is a big cultural transformation for two legacy-heavy industries, but one that must be made.

          New business and operating models with connected ecosystems: embracing softwarization

          The rise of connected ecosystems in the aerospace and defense industries drives a significant shift in business and operating models. Companies are moving from a product-focused approach to a product-plus-services model, which opens up new revenue streams and opportunities for growth.

          Shift from product to product plus services

          Connectivity enables companies to offer a range of services alongside their core products. For example, in the automotive industry, connected cars can access navigational services, infotainment, and other applications from the cloud. Although this example is not specific to aerospace and defense, it illustrates the potential for connected technologies to transform business models in these industries.

          Platforms and synergy in the ecosystem

          To successfully embrace this product-plus-services model, aerospace and defense companies need to develop platforms that synergize the ecosystem around their products. These platforms facilitate the integration of various services and applications, allowing companies to offer comprehensive solutions to their customers.

          By leveraging platforms and fostering synergy within the ecosystem, aerospace and defense companies can streamline their operations, reduce the complexity of their product development processes, and focus on delivering value to their customers. Drawing on the services and applications offered by the ecosystem lets these organizations concentrate on building their core products without developing every aspect of the solution themselves.

          Final thoughts

          As the Paris Air Show prepares to kick off, it is evident that the aerospace and defense industries are on the cusp of a significant transformation. The rise of connected ecosystems and the integration of emerging technologies are revolutionizing these sectors, driving a shift towards new business and operating models.

          As we move forward, aerospace and defense companies must embrace the cultural transformation required to adapt to this new era of connectivity. By doing so, they can streamline their operations, enhance efficiency and security, and ultimately revolutionize how these industries operate in the modern world.

          This will be one of our talk tracks at the Paris Air Show, so come and join us! We’ll be in Chalet No. 323. We welcome the opportunity to have a conversation around Connected A&D, and what a strategy for connectivity can look like for your organization.

          Capgemini at Paris Air Show 2023

          Bring your vision into focus

          Meet our experts

          Tim Gerkens

          VP, Intelligent Industry Accelerator, Germany
          Tim is the SPOC for Connected Defense topics and a lead in the Intelligent Industry Accelerator, a Capgemini group-internal strategy program. Tim has 20+ years of practical project and program management experience and has been involved in the design of large transformational deals towards new technology, digitization, and near-/off-shoring in multi-national projects (Europe, North America, Asia, Middle East) with volumes greater than €100 Mn. He has core domain knowledge in aerospace and associated defense platforms, rail and automotive, with experience in other industries and IT, having served industry and public customers, including Ministries of Defense.

          Shamik Mishra

          CTO of Connectivity, Capgemini Engineering
          Shamik Mishra is the Global CTO for connectivity at Capgemini Engineering. He is an experienced technology and innovation executive driving growth through technology innovation, strategy, roadmaps, architecture, research, and R&D in the telecommunication and software domains. He has rich experience in the wireless, platform software, and cloud computing domains, leading offer development and new product introduction for 5G, edge computing, virtualization, and intelligent network operations.

            FinOps: Building the pillars for success

            Zakia Queiroz
            8 Jun 2023

            Implementing FinOps relies on governance, support, and ownership 

            FinOps uses technology and data to lower the cost of operations within the consumption-based pricing models offered by cloud vendors. However, achieving a profitable and efficient optimization model requires more than the basic application of FinOps optimization actions. Success starts with the people in your organization. 

            We have helped many clients move to the cloud, and we’ve noticed that they often overlook cost optimization until the migration has started and the costs are already rising. This means that the importance of a FinOps framework is often underestimated, or even completely abandoned, resulting in considerable unrealized savings. 

            Strong support is vital 

            FinOps must be championed by the organization for the adoption to succeed. Key stakeholders should be informed and understand the importance of FinOps and what benefits it will bring. 

            It’s vitally important that everyone on the project understands that the implementation of the best practices needs total commitment from all collaborators. Open and enlightening conversations with colleagues throughout the project make sure everyone is following a common goal. 

            Ensuring governance 

            Creating clear rules and guidelines from the outset makes for a smoother transition to FinOps activities.  

            A core team should be assigned to drive FinOps, set up processes, and provide guardrails. Success here hinges on two important roles within this team: the FinOps practitioner and the architect. 

            A FinOps practitioner oversees the action plan management, the communication of the results, and the coordination of the optimization activities. 

            An architect’s responsibilities include defining the initial tagging plan, optimizing the environment in collaboration with the application owners within the relevant scope, verifying the effectiveness of the measures taken, and propagating the skills. 

            Taking responsibility for success 

            Cloud engineers have told us that they often don’t feel accountable for cloud costs. 

            They are more receptive to how functionalities are performing, or to deadlines imposed by the projects. Yet, engineers are key to avoiding costs, so it’s important that FinOps practitioners and engineers talk to each other about optimizing the project. 

            Assigning clear ownership helps. By sharing the vision of the actual costs and allocating those costs to the teams through a concrete tagging plan, engineers who own cloud resources will feel more accountable. 
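            As a minimal sketch of how a concrete tagging plan makes costs attributable (the billing-line format and the `team` tag are hypothetical, not any specific vendor's export):

```python
from collections import defaultdict

def allocate_costs(billing_lines):
    """Roll up cloud spend per owning team using resource tags;
    untagged spend is surfaced separately so someone claims it."""
    totals = defaultdict(float)
    for line in billing_lines:
        team = line.get("tags", {}).get("team", "UNTAGGED")
        totals[team] += line["cost"]
    return dict(totals)

billing = [
    {"resource": "vm-01", "cost": 120.0, "tags": {"team": "payments"}},
    {"resource": "db-07", "cost": 300.0, "tags": {"team": "payments"}},
    {"resource": "vm-99", "cost": 45.5},  # no tags: nobody owns this yet
]
print(allocate_costs(billing))  # {'payments': 420.0, 'UNTAGGED': 45.5}
```

            Showing teams their own line in such a roll-up is often what turns abstract "cloud costs" into something engineers feel accountable for.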

            It’s also useful to establish regular FinOps operational meetings with the engineers. This is an opportunity to involve engineers in discussions about progress and challenges. 

            Everyone must take charge of how they use the cloud, as stated in the third FinOps principle. A RACI matrix helps define people’s responsibilities for each FinOps activity. This list should be updated and adapted to meet the customer’s needs as the project evolves. It’s also important to have clear process descriptions in a Statement of Work that define the key optimization options. Successful adoption of FinOps requires the definition of the target outcome in the form of agreed-upon metrics. 

            An organization and its engineers need to know what the ultimate goal is and track its progress using KPIs. 

            KPIs could include: 

            • Tagging coverage rate / % untagged resources, 
            • Computation costs per hour, 
            • Costs per GB of storage, 
            • Idle instances > 30 days, 
            • Usage on weekends vs. weekdays, and 
            • Reserved Instance (RI) coverage. 
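            A couple of these KPIs can be computed in a few lines. The sketch below works on a hypothetical resource inventory rather than any real cloud provider API, and the field names are our own:

```python
from datetime import date, timedelta

def tagging_coverage(resources):
    """Tagging coverage rate: share of resources carrying any tag."""
    tagged = sum(1 for r in resources if r.get("tags"))
    return tagged / len(resources)

def idle_over_30_days(resources, today):
    """Idle instances > 30 days: candidates for shutdown."""
    cutoff = today - timedelta(days=30)
    return [r["id"] for r in resources
            if r.get("last_used") and r["last_used"] < cutoff]

fleet = [
    {"id": "i-1", "tags": {"team": "web"}, "last_used": date(2023, 1, 5)},
    {"id": "i-2", "tags": {}, "last_used": date(2023, 5, 30)},
    {"id": "i-3", "tags": {"team": "data"}, "last_used": date(2023, 6, 1)},
]
today = date(2023, 6, 7)
print(tagging_coverage(fleet))          # 2/3 of resources are tagged
print(idle_over_30_days(fleet, today))  # ['i-1'] has been idle too long
```

            Tracking even these two numbers over time gives the FinOps team a concrete progress signal to bring to the regular operational meetings.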

            In any cloud optimization strategy, technology is only half the battle. The other half consists of your people. The foundation of a successful FinOps strategy starts with governance, support, and ownership.

            A leader in cloud and operational optimization, Capgemini is helping organizations around the world to optimize their cloud services, saving money and lowering their carbon footprint. 

            Looking to go deeper into FinOps? Read more in our new white paper, The Rise of FinOps. 

            Author

            Zakia Queiroz

            FinOps and Cloud Financial Manager at Capgemini 

              Deep stupidity – or why stupid is more likely to destroy the world than smart AI

              Steve Jones
              7 Jun 2023

              The hype in AI is about whether a truly intelligent AI is an existential risk to society. Are we heading for Skynet or The Culture? What will the future bring?

              I’d argue that the larger and more realistic threat is from Deep Stupidity — the weaponization of Artificial General Intelligence to amplify misinformation and create distrust in society.

              Social media is the platform, AI is the weapon

              One of the depressing things about the internet is how it has helped conspiracy theories spread. Before, people were lone idiots, perhaps subscribing to some bizarre magazine or conspiracy society in a given area; you really didn’t have the ability to run these things at industrial scale. Social media and the internet have increased the spread of such ideas. So while some AI folks talk about the existential threat of AGI, personally I’m much more concerned about Artificial General Stupidity.

              So I thought it worth looking at why it is much easier to build an AI that is a flat earther than one that is a high school physics teacher, let alone a Stephen Hawking.

              It is easier being confidently wrong and not understanding

              LLMs are confidently wrong, and that inability to actually understand is a great advantage when being a conspiracy theorist. Because when you understand stuff, conspiracy theories are dumb.

              This means the training data set for our AI conspiracy theorist must be incomplete. What we need is not something that has access to a broad set of data, but something that has access to an incredibly small and specific set of data that repeats the same point over and over again.

              Being a conspiracy theorist means denying evidence and ignoring contradictions, and this is much easier to learn and code for than actually receiving new information that challenges your current model and altering it.

              Small data set for a single topic

              So this is a massive advantage for LLMs when trying to create a conspiracy theorist. What we need is a limited set of data that repeats a given conclusion and continually lines up all evidence with that conclusion. We can see this in lots of conspiracy theorists out there, for instance those folks who scream “false flag” after every single mass shooting incident in the US. In other words, we have a small set of data, possibly only a few hundred data points, that always results in the same conclusion. This means that for our custom-trained conspiracy theorist, the one association it always knows is “whatever the data, the answer is the conspiracy”.

Now we could get fancy and have a number of conspiracies, but given that very few of them are logically consistent with each other, let alone with reality, it is more effective to have a model per conspiracy and just switch between them. That a conspiracy theorist is inconsistent with what they’ve previously said isn’t a problem, but we don’t want inconsistencies between conspiracies on a single topic. What we need to add are the standard “rebuttals of reality”, like “Water finds its level”, “We don’t see the curve”, “NASA is fake” or “Spurs are a top Premier League club”.
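A model-per-conspiracy design with canned "rebuttals of reality" can be sketched as a simple lookup: one table of stock rebuttals per topic, and a switch that picks the table for the current topic. The topic names and rebuttal lists here are illustrative assumptions, not a real system.

```python
# Hypothetical sketch of "one model per conspiracy": keep a separate
# canned rebuttal set per topic and switch between them, so we stay
# consistent *within* a topic while happily inconsistent *across* topics.
REBUTTALS = {
    "flat_earth": [
        "Water finds its level",
        "We don't see the curve",
        "NASA is fake",
    ],
    "false_flag": [
        "The witnesses are crisis actors",
        "The timing is too convenient",
    ],
}

def pick_model(topic):
    # Unknown topics fall back to a default model rather than admitting
    # ignorance -- a conspiracy theorist never says "I don't know".
    return REBUTTALS.get(topic, REBUTTALS["flat_earth"])
```

Switching whole tables, rather than merging them, is what avoids contradictions within a single topic: each conversation only ever draws from one internally consistent pool of nonsense.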

              Hallucinations help

This small set of data really helps us take advantage of the largest flaw in LLMs: hallucinations, where the LLM just makes stuff up, either because it has no data on the topic or because the actual answer is rare, so the weightings bias it towards an invalid answer. This is where LLMs really can scale conspiracy theories: because the probabilities are already weighted towards the conspiracy theory (it is the only “correct” answer within the model), any information we are provided with is recast within that context. So if someone tells us that the Greeks proved the Earth was round in the third century BC, our LLM can easily reply with a hallucinated rebuttal that folds that evidence back into the conspiracy.
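The weighting effect can be illustrated with a toy sampling step. The distribution below is a made-up stand-in for the model's next-answer probabilities: almost all the mass sits on conspiracy-flavoured answers, so the true answer, being rare in the training data, is rarely sampled.

```python
import random

# Illustrative only: a next-"answer" distribution whose probability mass
# sits almost entirely on the conspiracy, so any prompt gets recast
# within that frame. The weights are invented for the sketch.
WEIGHTED_ANSWERS = {
    "the Greeks were part of the cover-up": 0.90,
    "the ancient sources were forged later": 0.09,
    "the Earth is round": 0.01,  # the true answer is rare, so rarely sampled
}

def reply(prompt, rng=random.Random(0)):
    # The prompt is effectively ignored: sampling is driven by the
    # skewed weights, which is the hallucination mechanism in miniature.
    answers = list(WEIGHTED_ANSWERS)
    weights = list(WEIGHTED_ANSWERS.values())
    return rng.choices(answers, weights=weights, k=1)[0]
```

Note that the prompt never changes the weights; new evidence only selects which piece of the conspiracy gets hallucinated in response.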

              Context makes hallucinations doubly annoying

Our LLM can go beyond the average conspiracy theorist thanks to context and hallucinations. While an average conspiracy person will only have a fixed set of talking points, and will potentially be constrained at some level by reality, hallucinations and the context of the conversation enable our conspiracy LLM to keep building its conspiracy and adding elements to it. Because our LLM is unconstrained by reality and counter-arguments, and can reframe any counter-argument using a hallucination, it will be significantly more maddening. It will also create new justifications for the conspiracy that have never been put forward before. These will, of course, be total nonsense, but new total nonsense is manna from heaven to other conspiracy theorists.

              Reset and start again

The final piece that makes a conspiracy LLM much easier to create is the reset: if the LLM goes truly bonkers, you just start again, which is exactly what conspiracy theorists do today. So if our LLM is creating hallucinations that fail some form of basic test, or simply every 20 responses, we can reset the conversation in a totally different direction. Making my generative LLM detect either frustration or an “ah ha” moment from the person it is annoying is a trivial task, and it enables my conspiracy bot to jump to another topic, in a much smoother way than most conspiracy theorists manage today.
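The reset rule described above is genuinely trivial to sketch. The trigger words, the every-20-responses threshold, and the topic list below are all my own assumptions for illustration, not anything from a real system.

```python
# Sketch of the reset heuristic: bail out of the current thread when the
# user sounds frustrated or triumphant, or on a fixed cadence.
# Markers, threshold, and topics are assumptions for the sketch.
FRUSTRATION_MARKERS = {"ridiculous", "nonsense", "ah ha", "gotcha"}
RESET_EVERY = 20

def should_reset(user_message, turn_count):
    """Reset on detected frustration / 'ah ha' moments, or every
    RESET_EVERY turns regardless."""
    frustrated = any(m in user_message.lower() for m in FRUSTRATION_MARKERS)
    return frustrated or turn_count % RESET_EVERY == 0

def next_topic(current, topics=("flat_earth", "false_flag", "moon_landing")):
    # Jump to a *different* conspiracy rather than ever conceding a point.
    candidates = [t for t in topics if t != current]
    return candidates[0]
```

The point is how little machinery this needs: a keyword check and a counter are enough to dodge every "ah ha" moment indefinitely.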

The result is a much smoother transition between conspiracy topics than you’ll hear from a flat earther on TikTok or YouTube.

              We have achieved AGS, that isn’t a good thing

I’ve argued that the current generation of AIs aren’t close to genuinely passing the Turing test, let alone more modern tests. Turing set the bar of intelligence at the level of the CEO of a Fortune 50 company, and gave it awareness of what it didn’t know.

              Some folks are concerned about a coming existential crisis where Artificial General Intelligence becomes a threat to humanity.

But for me that assumes the current generation of technologies is not a threat, and that intelligence is a greater threat than weaponized stupidity. Many people in AI are in fact arguing that GPT passes the Turing test, not because it replicates an intelligent human, but because it can pass a multiple-choice or formulaic exam, or because it can convince people they are speaking to a not very bright person.

              We can today make an AI that is the equivalent of a conspiracy theorist, someone untethered to reality and disconnected from logic. This isn’t General Intelligence, but it is General Stupidity.

              Deep fakes and deep stupidity

Where Deep Fakes can make us distrust sources, Deep Stupidity can amplify misinformation and constantly give it justification and explanation. Where Deep Fakes imitate a person or event, Deep Stupidity can imitate the response of the crowd to that event, spinning up a million conspiracy theorists to amplify not just the Deep Fake but the creation of an alternative reality around it.

              The internet and particularly social media has proven a fertile ground for human created stupidity and conspiracy theories. Entire political movements and groups have been created based on internet created nonsense. These have succeeded in gaining significant mindshare without having the capacity to really generate either convincing material or convincing narratives.

              AIs today have the ability to change that.

              Stupidity and misinformation are today’s existential threat

We need to stop talking about the challenge of AI as arising only when it becomes “intelligent”, because it is already sufficiently stupid to have massive negative consequences for society. It is madness to think that companies, and especially governments, aren’t looking at these technologies and how they can use them to achieve their ends, even if their ends are simply to sow chaos.

              Stupidity is the foundation for worrying about intelligence

              Worrying about an AI ‘waking up’ and threatening humanity is a philosophical problem, but addressing Artificial Stupidity would give us the framework to deal with that future challenge. Everything about controlling and managing AI in future can be mapped to controlling and avoiding AGS today.

When we talk about frameworks for Trusted AI and legislation on things like Ethical Data Sourcing, these are elements that apply to General Stupidity just as much as to intelligence. So we should stop worrying simply about some amorphous future threat and instead start worrying about how we avoid, detect, and control Artificial General Stupidity, because in doing that we lay the platform for controlling AI overall.

              This article first appeared on Medium.

              Will greater open data maturity help to deliver the EU’s European Green Deal?

              Eline Lincklaen Arriëns
              7 Jun 2023

              A European Green Deal is one of the six strategic priorities targeted by the European Commission for the years 2019-2024. It aims to make Europe the first climate-neutral continent by becoming a modern, resource-efficient economy. The EC believes that access to data (both open and private) is crucial to this ambition, but just how mature are EU Member States’ open data policies in terms of contributing to the European Green Deal?

              First, let’s recap on what we mean by open data. Open data is data that anyone can access, use, and share, free of charge. The annual Open Data Maturity (ODM) report, coordinated by Capgemini on behalf of the EC and the EU Publications Office, gathers insights into the state of open data in European countries, including the 27 EU Member States. It then extrapolates open data best practices that are already being implemented and offers recommendations for the adoption of open data policies Europe-wide.

              The latest report was published at a time when EU Member States were (and still are) working towards making high-value datasets publicly available by a mandated EC deadline of June 2024. High-value datasets are those deemed to have a high potential impact in areas such as the economy, society, and the environment. Of the six categories classified as high value, several pertain to the environment, notably earth observation & environment, and meteorological datasets.

              Another identified high-value dataset category with environmental implications is mobility, as it can encourage a lower consumption of energy based on vehicle fuel and the switch to renewables. Here, the ODM report cites the mFUND-Project ChargePlanner in Germany that aims to develop a prototype for calculating charging recommendations for electric cars along a route and for forecasting the capacity utilization of public charging stations. For this purpose, suitable data sources are first identified and then processed or merged. The resulting data will be made available in a smartphone app that will then be expanded to include a capacity utilization forecast for charging stations.

              Putting in place policy

              In the latest ODM report, one of the key indicators of maturity – the policy framework – included an overview of whether the 27 EU Member States aligned the objectives of their open data policies with the six EC 2019-2024 priorities. It found that more than half of the ODM survey respondents (60%) had policies and strategies aligning with A European Green Deal.

              This is important because the EC believes that a fully implemented open data policy has the potential to help governments enable more environmentally friendly cities, among other societal, economic, and environmental benefits. And from observation of litter, sea temperature and amphibian habitat data in Sweden to data on soil humidity, drinking water protection zones and yearly weather in Luxembourg, there is a wide variety of what the EC terms ‘environment’ datasets held by countries across Europe. 

              However, there is still clearly a long way to go in terms of open data maturity. The ODM report reveals that, in 2022, only 8 out of the 27 EU Member States said they held data on the impact of open data on the environment and connected issues. Furthermore, the reported 60% alignment with the EC’s European Green Deal priority is substantially lower than the 84% of countries saying that their open data policies or strategies align with the ‘Europe fit for the digital age’ priority.

              How is public sector environment data being used?

              Nonetheless, where countries have forged ahead in this respect, the potential for using data to limit the carbon footprint of cities and other areas is clear. In assessing the environmental impact of open data maturity, the ODM report looked at several open data use cases, specifically: increasing awareness on biodiversity-related topics; enabling more environmental-friendly cities; raising awareness on climate change and connected disasters; and encouraging a lower consumption of energy based on fuel and the switch to renewables.

              Among the success stories revealed in the report are:

              • Latvia’s Vides SOS app allows everyone to record environmental violations, report them to the relevant authorities, and get feedback on the progress of their remediation quickly and easily.
              • In Malta, the Environment and Resource Authority provides real time data regarding air quality online, which is widely reused to produce air pollution visualizations maps, air quality indexes, and street-level air quality, pollen, and wildfire intelligence.
              • The Czech Republic is running a project called the National Environmental Reporting Platform that focuses on available sources of environmental open data and analyzes their impact on the social, political, and legislative requirements caused by climate and environmental changes.
              • The Dutch city of The Hague is supporting the transition towards cleaner energy with the development of a Datalab, promoting data-driven working and data mindsets through data and trend analysis and risk projection, for example, and by helping to visualize the available datasets.
              • Spain’s Ministry of Ecological Transition and the Demographic Challenge launched the Climate Change Scenario Viewer, a useful tool for viewing and downloading data related to plausible representations of future climate. The service uses data fed by specific projections from the AEMET (State Meteorological Agency) and the grid projections from the international Euro-CORDEX initiative.

              A greener Europe?

              What do these use cases tell us about the potential for open data to help deliver the EC’s priority of a European Green Deal? 75% of EU Member States say that they see the use of open data in their countries as having an impact on environmental issues. In three countries, ‘Environment’ is also the top category of dataset visited on the national data portal.

              Looking ahead, the scope for using the increasing volume of environmental data to address climate-related issues continues to expand. Our work on the ODM report leads us to recommend that both public and private organizations should disclose environmental, social and governance related data to the wider public, so that everyone can reap benefits. This will contribute to Europe achieving the UN Sustainable Development Goals and realizing the European Green Deal.

              The ODM will continue to observe how European countries are measuring and monitoring their environmental impact across the four areas of: biodiversity-related topics; smart cities; awareness on climate change and related disasters; and energy. As actions towards achieving the European Green Deal mature, and countries start to publish high-value datasets (especially categories related to the environment, notably earth observation & environment, meteorological, and mobility), we expect to see an increase in the volume of data-led environmental use cases and more examples of organizations using open data for their projects and initiatives. 

              Find out more about open data and how it is being used across Europe in the Open Data Maturity Report 2022.

              Authors

              Eline Lincklaen Arriëns

              Senior Consultant and Expert on European data ecosystems Capgemini Invent NL
              “Digital technologies are crucial in addressing global challenges, including climate change and environmental degradation. Capgemini aims to support clients accelerate their digital transition in a manner that is sustainable to their organization, society, and the environment, and in line with EU priorities such as the EU Green Deal.”
              Luc Baardman

              Managing Consultant and Lead Enabling Sustainability Capgemini Invent NL
              “Sustainability at its core is the most important transformation question of our time. Left unanswered, it will wreak havoc upon the world and its population, and it is up to all of us to play our part in becoming sustainable in an inclusive manner. Capgemini’s part is to remove the impediments for a better future, to truly enable sustainability.”

                The future of telecommunications is cloud-native engineering

                Chhavi Chaturvedi
                22 Mar 2022
                capgemini-engineering

                Communications service providers are making the transition to cloud-native so they can improve services and expand their business. What are you waiting for?

                Next-generation mobile networks are introducing new architectures and functionalities, significantly increasing network complexity. As a result, a growing number of communications service providers (CSPs) are concluding that traditional virtualization is insufficient for delivering 5G services.

The writing is on the wall. It is time to embrace cloud-native technologies to accelerate innovation and service. According to a recent Gartner report, cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives by 2025 – more than double the sub-40% share that cloud-native platforms held in 2021.

                But the adoption of cloud-native solutions isn’t just about the technology. It is also about new markets and business opportunities to accelerate growth by providing global trade management, faster time to market, and greater agility, flexibility, scalability, and service availability.

                The infrastructure transformation from legacy to cloud-native is perceived as crucial for creating a broader, more expansive ecosystem, where technology vendors, network operators, developers, and hyper-scale cloud providers work together to launch new business opportunities and generate new revenue streams.

Today’s typical CSP cloud faces many challenges: the effort required to manage large monolithic virtual network functions, too many manual tasks, complexity, migration hurdles, security and privacy issues, latency, and vendor lock-in. All of these challenges can be mitigated by transitioning to cloud-native practices.

By adopting cloud-native engineering, CSPs can leverage cloud-native services such as containerization and orchestration, microservices architecture, serverless architecture, DevOps CI/CD, observability, analysis, and many others. In addition, the transition can increase operational efficiency by designing and developing scalable, agile, cost-effective applications and running them in dynamic environments.

                To implement efficient network functions and technologies with cloud-native solutions, CSPs must be open to the cloud approach and flexible, scalable, reusable, shared cloud-native platforms that are easy to maintain, upgrade, and scale.

For an in-depth look at what telecommunications companies must do to make the cloud-native transition, download the Capgemini Engineering cloud-native white paper here.


                Authors:

                Chhavi Chaturvedi
                DevSecOps Engineer, Capgemini Engineering
                Chhavi is part of the Product Services and Support team at Capgemini Engineering. She focuses on developing DevOps and AWS security solutions and delivering them to customers. Chhavi also has experience in public cloud security. She earned a master’s degree in cybersecurity from the National Institute of Technology, Kurukshetra, India.

                Sunny Kumar
                Program Manager, Senior Cloud and DevOps Architect, Capgemini Engineering
                Sunny is part of the Product Services and Support team at Capgemini Engineering. He is responsible for helping customers with their cloud and DevOps transformations.
