
Riding in a Waymo autonomous car in San Francisco – a safe and impressive journey

Andreas Sjöström
Nov 2, 2023

Self-driving cars have been commonplace in the San Francisco and Bay Area traffic for some time.

Waymo, Cruise, Nuro… billion-dollar projects are piloted here. At the moment, Cruise has its license suspended due to a recent accident under investigation.

The waitlists to be enrolled as a customer are long, and I’m still waiting. So, I was all in when Josh Baillon on my team offered to let me skip the Uber and join him in a Waymo. Check out the video below for some highlights of the trip.

  • The driving style felt safe: smooth, careful, and defensive.
  • In each situation that occurred, I kept thinking about what I would have done myself, and I felt I was watching the driving style of an experienced, professional driving instructor.
  • From a safety perspective, I liked the experience more than I do, on average, with a human driver.

That said… the whole “chatting with the taxi/Uber driver” topic is a different chapter. Some love it; some don’t. Perhaps the future of autonomous taxis includes the option to add a human or AI “driver” as a passenger just for social conversation.

Thanks, Josh and Waymo, for the ride!

Meet the author

Andreas Sjöström

CTO & VP at Applied Innovation Exchange
Leading the Capgemini Applied Innovation Exchange in San Francisco, Capgemini’s flagship innovation space. International experience as CTO of Capgemini Scandinavia, member of Sweden and Scandinavia country boards. Digital transformation and innovation advisor for key accounts in the US, Netherlands, France, and the Nordics.

    Extending returns on smart-meter investment

    Capgemini
    Nov 03, 2023

    Time to re-examine the use-case roadmap for new applications

    AMI projects are driven by business use cases, and utilities over the years have acted on many of these while discarding others. As the next generation of AMI emerges, some of those unrealized use cases need to be re-evaluated, because energy transition, changing technology, and increasing consumer expectations mean there is now potential that did not previously exist.

    Deriving the most value requires understanding the current technology landscape and the level of sophistication in deployed AMI 1.0 technology. Creating additional value depends on the capabilities of those meters.

    Some AMI 1.0 meters have advanced features which support a broader range of use cases, even with some inherent limitations and less data. Other utilities, with AMI 1.5 or 2.0 systems, have newer technologies that generate more information and are able to support more high-value use cases.

    In either case, the goal should be to leverage upgrade capabilities and as much data as possible to support the AMI investment. Data can impact new areas of the organization, including operations, asset management, vegetation management, reliability and safety, customer experience, customer service, and billing. For some utilities, this will mean re-evaluating their internal operating units to ensure more cross-collaboration across all internal teams.

    New opportunities

    AMI also has the potential to support bigger opportunities, such as creating new rate structures for programs like preferred times for charging electric vehicles. Careful evaluation of the upgrade capabilities of existing AMI meters can create a path forward and expand use-case value through technology upgrades to existing smart meters.

    The data produced by the new meters holds the potential to solve distribution system issues caused by more climate-friendly initiatives. For example, rooftop solar panels generate clean energy but can create power quality issues on the grid. Measuring and detecting potential issues and addressing them proactively, or even reactively, is important to ensure the reliability and stability of the grid as more renewable sources are added.

    A new customer experience

    This kind of change will take time. Most utilities are built on a retail business model that has not changed in more than 100 years. Utilities will continue to be retailers, but they have traditionally offered few services.

    Utilities today need to change to stay relevant. Technologies are being introduced into the market and adopted by customers without the involvement of their utilities. Utilities need to increase engagement with customers and address potential issues with new technologies to keep serving their customers well – especially as new challenges arise. For example, if community-based microgrids can generate their own power and provide essential energy services on their own, the role of the utility is diminished. And EV owners do not have to inform the utility company of their new car purchase, making infrastructure planning and upgrades more complex. This makes it critical, and inevitable, that utilities change their business models and engage in a meaningful way with customers to make their lives better.

    Finding more value

    These are interesting conversations for utilities, which need to determine how to develop programs, services, and operating procedures with more of a customer mindset. Capgemini is already discussing potential programs with clients, such as remote disconnection capabilities, EV charging/discharging management, and the overarching customer journey. Connecting and disconnecting a customer’s power is a significant cost, even considering only the truck roll and qualified technicians. The next generation of meters means utilities could disconnect or reconnect with a click of a mouse, with the right procedures and governance in place. That is direct savings back to the consumer and an improved customer experience, because it can all be completed over the phone, as well as an opportunity to minimize safety concerns by putting fewer vehicles on the road.

    Utilities would also be alerted more quickly if there is a power quality issue. For example, tree branches interfering with lines would be evident in certain data patterns and could trigger the workforce management system to dispatch a vegetation-management crew to the area. This kind of proactive maintenance could prevent future outages, boost grid reliability, and lead to a more positive customer experience.
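
    To make the idea concrete, here is a minimal sketch of such a pattern-based trigger, assuming hypothetical field names and thresholds rather than any specific AMI or workforce-management product API:

    ```python
    # Hypothetical sketch: flag feeders where several meters report repeated
    # momentary voltage sags - a pattern that can indicate vegetation contact -
    # and generate a work order for the vegetation-management crew.
    from dataclasses import dataclass

    @dataclass
    class IntervalRead:
        meter_id: str
        feeder_id: str
        voltage: float           # measured volts for the interval
        nominal_voltage: float   # e.g. 240.0

    def meters_with_repeated_sags(reads, sag_ratio=0.9, min_events=5):
        """Group meters by feeder when they show repeated voltage sags."""
        events = {}
        for r in reads:
            if r.voltage < sag_ratio * r.nominal_voltage:
                key = (r.feeder_id, r.meter_id)
                events[key] = events.get(key, 0) + 1
        flagged = {}
        for (feeder, meter), count in events.items():
            if count >= min_events:
                flagged.setdefault(feeder, []).append(meter)
        return flagged

    def vegetation_work_orders(reads, min_meters=3):
        """If several meters on one feeder sag repeatedly, suspect line contact."""
        return [
            f"Dispatch vegetation-management crew to feeder {feeder}"
            for feeder, meters in meters_with_repeated_sags(reads).items()
            if len(meters) >= min_meters
        ]
    ```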

    Expanding opportunities

    Each generation of AMI will bring new benefits and utilities will find ways to use the data to be successful. But business silos will need to be dismantled to achieve the biggest gains. Operations have not really been involved in the first generation of AMI projects but could have benefited tremendously. Utilities need to extend conversations into different parts of the organization.

    There is also an opportunity for third parties, such as new technology and service providers not affiliated with a utility, to provide valuable offerings to the end customer by utilizing AMI data. So while the meter is recording data, the utility can facilitate the sharing of this information to create a new layer of personalized services and business opportunities.

    Capgemini offers clients AMI expertise, an extensive use-case library, and a holistic modernization approach to craft a roadmap that finds new opportunities and enhances ROI.

    Meet the author

    Mike Lang

    Utility Transformation Leader
    Mike is a senior leader and Principal in our US Resources and Energy Transition team, responsible for offering development, delivery, and go-to-market strategy. He believes data and smart metering are the foundational pillars for a broader utility transformation in smart grid, electrification, and energy transition.

    Bill Brooks

    US VP Smart Grid
    I lead our Smart Grid initiatives designed to assist grid operators across the United States with major business transformations towards truly data-driven digital organizations, enabling the transition to a reliable, safe, and renewable energy system. Examples of relevant business transformation areas include: smart meter, smart substation, distributed energy management, advanced asset management, control room of the future, data management, and digital twin.

      Graphite: The unsung hero of sustainable energy, and why we need more than a ‘Silver Bullet’

      Pascal Brier
      Oct 26, 2023

      I recently had a debate with a colleague on the critical materials that will be needed to support our #sustainability transformation.

      One of these materials is certainly #graphite, a form of carbon renowned for its excellent electrical conductivity, high-temperature stability, and chemical inertness.
      Although we most often mention lithium, graphite stands out as a game changer in energy storage technology, particularly in the production of lithium-ion batteries, which are central to the electric car industry. Few people know that each current electric-vehicle battery contains 60 to 90 kilogrammes of graphite.

      And yet close to 90% of the world’s graphite is currently refined in China, which recently decided to tighten its export restrictions on this highly strategic material. While I’ve seen many commentators call for Europe and the US to increase their graphite refinement capabilities, even going as far as to develop synthetic graphite, I would add that addressing the diverse energy needs of our evolving world will require a multitude of materials and technologies; it’s unlikely that a perfect “silver bullet” will overcome all challenges.

      This is why it is also important to keep investing in alternative forms of battery #technology, such as sodium-ion, solid-state, and silicon batteries, or even hydrogen fuel cells.

      Meet the author

      Pascal Brier

      Group Chief Innovation Officer, Member of the Group Executive Committee
      Pascal Brier was appointed Group Chief Innovation Officer and member of the Group Executive Committee on January 1st, 2021. In this position, Pascal oversees Technology, Innovation and Ventures for the Group. Pascal holds a Master’s degree from EDHEC and was voted “EDHEC of the Year” in 2017.

        How do you define and measure FinOps maturity?

        Jurjen Thie
        30 Oct 2023

        The first step to FinOps maturity is measurement.
         

        FinOps is based around continuous improvement, and that makes measuring progress crucial. Some of the questions FinOps teams must address include:

        • What measurements best reveal your progress in your FinOps journey?
        • What are the ingredients for success?
        • What are some practical examples of success?

        Before we can begin to answer these questions we must look at the concept of maturity in FinOps – simple in theory, but complex to carry out in practice.

        Introducing a cloud strategy framework

        There is no silver bullet to measuring FinOps maturity. By design, FinOps intersects with multiple departments, teams, factors, and actions, each one of which needs to be measured independently. The teams I work with use the framework below to define these “ingredients” of cloud & FinOps maturity.

        The eight topics in this model are all vital to FinOps. Of these, Organizational Change, Target Operating Model, and Financial Impact will be especially relevant for most FinOps practitioners.

        Connecting the pieces

        Defining and measuring your FinOps maturity begins with an assessment of your organization’s overall cloud adoption maturity, including each of the ingredients in our cloud strategy framework above. Look for gaps within each focus area, as well as in the interactions between them. FinOps maturity requires many pieces to come together, and the more gaps your assessment identifies, the greater your potential for gain.

        On the other side of the coin, your organization is also likely to have “bright spots” – areas (or individuals) that are especially successful. These bright spots can be even more valuable than gaps as they provide concrete information on what can be attained, and how.
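
        As a minimal illustration of what such an assessment can produce, here is a simple scorecard sketch; the focus areas and the 1-to-5 scale are illustrative assumptions, not a formal scoring model:

        ```python
        # Illustrative maturity scorecard: score each focus area 1-5, then surface
        # the largest gaps (improvement potential) and the bright spots (examples to copy).
        scores = {
            "Organizational change alignment": 2,
            "Skills and governance": 4,
            "Tools and insights": 3,
        }
        target = 5

        gaps = sorted(scores.items(), key=lambda kv: kv[1])            # weakest areas first
        bright_spots = [area for area, s in scores.items() if s >= 4]  # internal successes

        for area, score in gaps:
            print(f"{area}: {score}/{target} -> improvement potential {target - score}")
        print("Bright spots to learn from:", bright_spots)
        ```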

        There are three main areas where gaps and bright spots are likely to turn up:

        • Organizational change alignment
          1. How mature are the teams that consume cloud?
          2. What kind of reporting do you require to give proper feedback to the responsible teams so that decision-making becomes smoother?
          3. What is the role for application transformation and application life cycle management in achieving greater efficiency?
        • Skills and governance
          1. How do you incentivize your teams to go beyond the basic hygiene factors? (Can gamification play a role here?) What culture and mindset do you need to inculcate?
          2. Is there a role or group where technology and cloud economics come together in the organization? Who is responsible?
          3. What kind of expertise does the organization have connected with cloud and the cost model attached to it?
        • Tools and insights
          1. What tooling is available on the market to optimize cloud consumption?
          2. How can sustainable impact be improved?
          3. What are the cloud cost metrics, and how can we make them more actionable and easier to understand?

        What is the mature end-state of FinOps?

        FinOps has two major stages: the harvesting of low-hanging fruit, followed by steady continuous improvement. In the second stage, a larger goal is also possible. Organizations may choose to craft a plan that relates costs to business value and use FinOps to support strategic choices around their core goals. At this level, FinOps is deeply embedded in the organization, affecting and being affected by the greater company culture.

        Over time, FinOps should become increasingly aligned with company strategy. In good times, when cloud requirements are on the rise, FinOps will help make investments more predictable. In difficult economic times, a solid, data-driven plan to prevent wasted cloud consumption is essential. Mature FinOps never stops growing. A well-governed FinOps program is an ever-evolving catalyst for growth within an organization, constantly uncovering innovative ways to drive progress.

        A leader in cloud and operational optimization, Capgemini is helping organizations around the world optimize their cloud services, saving money and lowering their carbon footprint.  

          

        Looking to go deeper into FinOps? Check out our FinOps Page and the whitepaper – The rise of Finops. 

        Author

        Jurjen Thie

        Enterprise Architect – Cloud COE 

          Exploring probabilistic modeling and the future of math

          Robert H. P. Engels
          Oct 23, 2023

          Some days you get these chains of interesting events following up on each other.

          This morning I read the “GPT can solve math[.. ]” paper (https://lnkd.in/dzd7K3sx), and then I read some responses to it (among others from Gary Marcus, posts on X, etc.). During and after TED AI I had many interesting discussions on the topic of probabilistic modelling vs. models of math as we know (knew?) it, and this paper sparked some thoughts (so: mission accomplished).

          It occurs to me that we have a generation of PhD students building LLMs who have probably never really engaged with model thinking and mathematical proofs: the thinking behind Einstein’s relativity theory, the thinking behind Euler’s graph theory, the type of thinking that led (indeed) to a mathematical model that you can implement in a calculator (low footprint), that calculates correctly (100% trustworthy), and that, in addition, can calculate things 100% correctly on input never seen before.

          The question really condenses down to whether you believe in the abstraction capability of the current algorithms used for training today’s LLMs. Are attention layers able to build abstractions on their own at all (rather than regurgitating abstractions served up ready-made by humans)? Optimism in the Valley is big: just add more data and the problem will go away.

          But without changing the underlying attention-layer design, this seems to be a fallacy. Learning to abstract really means building meta-levels on top of your information, condensing signals and their relations. That is something different from predicting chains of tokens. Such an abstraction layer can be seen as building a 3D puzzle, whereas current attention mechanisms seem single-layered. With a single layer, the most you can build is a 2D puzzle.

          With that picture in mind, you can observe that the current solutions from LLM suppliers are to extend the 2D puzzle, making it larger (adding more data from all over) or giving it higher resolution for a specific task (like the math paper mentioned above). But no sincere attempts seem to have been made yet to build a 3D picture, which would mean rocking the foundation of the foundation models and rebuilding the attention-layer mechanism to cover for this deficit.

          Until then, let’s focus on getting functions to work reliably, off-load model-based tasks (math, engineering, logic, reasoning) to external capability agents, and stop pretending that 2D can become 3D without changing the foundation.

          Meet the author

          Robert Engels

          Vice President, CTIO Capgemini I&D North and Central Europe | Head of Generative AI Lab
          Robert is an innovation lead and a thought leader in several sectors and regions, and holds the position of Chief Technology Officer for Northern and Central Europe in our Insights & Data Global Business Line. Based in Norway, he is a known lecturer, public speaker, and panel moderator. Robert holds a PhD in artificial intelligence from the Technical University of Karlsruhe (KIT), Germany.

            How to securely monitor a 5G network

            Capgemini
            Aarthi Krishna & Kiran Gurudatt
            16 Oct 2023

            Every generation of wireless technology has required organizations to adapt their security practices to effectively monitor and protect their networks. But monitoring a 5G network presents a new level of complexity due to the different protocols and architecture involved.

            In our final blog of the 5G security series, it’s time to explore the complexities of monitoring a 5G network and how organizations can ensure that their infrastructure is watertight.

            The 5G step change

            Traditionally, security monitoring has focused on IT networks, such as MPLS or IP networks, where most security operations centers (SOCs) operate from. These SOCs primarily monitor enterprise systems like office, financial, and HR systems. However, with the proliferation of connectivity in operational environments, including manufacturing facilities, warehouses, and critical infrastructure, the monitoring scope has evolved to include operational technology (OT) networks too.

            OT networks differ from enterprise networks in terms of the protocols and tools required for monitoring. Devices and equipment in the OT environment are often governed by proprietary protocols, each requiring specific tools for monitoring.

            Unlike IP networks, 5G networks operate on cellular protocols and follow cellular standards developed over previous generations (e.g., 2G, 3G, 4G). The difference is that as organizations deploy their own private or hybrid 5G networks, the responsibility for monitoring these networks shifts from telco providers to the enterprises themselves.

            This is a completely new world for organizations, introducing unique complexities tied to cellular protocols and the division between the control and data plane (the former handles the initial handshake, authentication, encryption, and bandwidth allocation, while the latter facilitates the actual data transfer). Monitoring both planes and correlating the data is essential for effective 5G network security operations.

            24×7 log collection

            A fundamental aspect of 5G network security is continuous monitoring through 24×7 log collection. Logs are gathered from various components spanning from the user equipment (UE) to the core, providing crucial insights into potential security events.

            The extent of log collection depends on the deployment model adopted. In private deployment models, higher volumes of log collection are possible. However, where the 5G architecture is shared with mobile network operators (MNOs), the service provider must collaborate to ensure the necessary logs are collected.

            To achieve comprehensive monitoring, it is essential to collect logs from both the control plane and the data plane of the 5G architecture. Additionally, specialized toolsets are required as existing enterprise log collection tools may not fully comprehend the specific protocols, such as GTP, used in 5G networks. These tools not only collect data but also correlate them to identify ongoing attacks effectively.
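
            As an illustration of what such correlation can look like in its simplest form, the sketch below joins hypothetical control-plane session records with user-plane traffic counters by tunnel identifier and flags traffic that has no authenticated session behind it; the field names and values are assumptions, not any vendor’s log schema:

            ```python
            # Hypothetical correlation sketch: flag user-plane (GTP-U) traffic that has no
            # matching authenticated control-plane session. Field names are illustrative only.
            control_plane_sessions = [
                {"supi": "imsi-001010000000001", "authenticated": True,  "teid": "0x1a2b"},
                {"supi": "imsi-001010000000002", "authenticated": False, "teid": "0x3c4d"},
            ]
            user_plane_traffic = [
                {"teid": "0x1a2b", "bytes": 120_000},
                {"teid": "0x3c4d", "bytes": 950_000},   # session never authenticated
                {"teid": "0x9999", "bytes": 10_000},    # no control-plane session at all
            ]

            authenticated_teids = {s["teid"] for s in control_plane_sessions if s["authenticated"]}

            for t in user_plane_traffic:
                if t["teid"] not in authenticated_teids:
                    print(f"ALERT: traffic on unauthenticated/unknown session {t['teid']} ({t['bytes']} bytes)")
            ```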

            Indicators of compromise

            The next aspect of monitoring is indicators of compromise (IoCs), which play a vital role in detecting security attacks within the 5G environment. The best-in-class toolsets available today provide a range of IoCs that can be utilized by SOC analysts to identify potential security breaches. These IoCs can be categorized into device-related and traffic/performance-related indicators.

            Some examples of device-related IoCs include:

            • Detecting unknown devices in the network
            • Monitoring changes in device connection status
            • Identifying new device vendors
            • Detecting devices that have not been seen for a specific period
            • Identifying new device types
            • Monitoring abnormal device traffic usage
            • Tracking abnormal traffic usage by devices in specific locations
            • Identifying user equipment (UE) connection failures
            • Detecting consistent failures in UE IP allocation
            • Identifying conflicting IMEI numbers with SUPI and SUCI mapping
            • Detecting unknown UEs joining the network
            • Monitoring repeated UE authorization failures
            • Identifying devices with unknown locations
            • Identifying devices with vulnerabilities or performance issues

            Similarly, some traffic and performance IoCs include:

            • Identifying unauthorized traffic patterns
            • Monitoring compliance with quality of service (QoS) parameters
            • Detecting abnormal traffic for specific devices or applications
            • Monitoring the absence of traffic
            • Identifying abnormal protocol usage for user equipment (UE) and Internet of Things (IoT) devices
            • Detecting spikes in control traffic to UE, the radio access network (RAN), and the core
            • Monitoring spikes in user plane data, potentially indicating distributed denial of service (DDoS) attacks

            These IoC examples offer a glimpse into the extensive use cases built around them, and with the right tools, SOC analysts should feel empowered to detect and respond to security breaches effectively.
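
            To make this a little more tangible, here is a simplified sketch of how a few of the device-related IoCs above could be expressed as rules over device telemetry; the record fields, baselines, and thresholds are illustrative assumptions rather than any particular SOC tool’s schema:

            ```python
            # Illustrative IoC rules over a single device telemetry record.
            known_devices = {"ue-001", "ue-002"}
            baseline_daily_bytes = {"ue-001": 50_000_000, "ue-002": 20_000_000}

            def device_iocs(record):
                alerts = []
                if record["device_id"] not in known_devices:
                    alerts.append("Unknown device joined the network")
                if record.get("auth_failures", 0) >= 3:
                    alerts.append("Repeated UE authorization failures")
                baseline = baseline_daily_bytes.get(record["device_id"])
                if baseline and record["daily_bytes"] > 5 * baseline:
                    alerts.append("Abnormal device traffic usage")
                return alerts

            print(device_iocs({"device_id": "ue-003", "auth_failures": 4, "daily_bytes": 1_000}))
            # -> ['Unknown device joined the network', 'Repeated UE authorization failures']
            ```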

            Best practices for securely monitoring 5G networks

            Monitoring a cellular network can be complex, but when taken step by step it can also be manageable and efficient:

            • Develop expertise: Invest in training and familiarize the team with the unique aspects of 5G protocols and the control-data plane division.
            • Collaborate with telco providers: Engage with telco providers to understand their monitoring capabilities and coordinate efforts to ensure end-to-end security for private 5G networks.
            • Adopt specialized tools: Acquire monitoring tools designed specifically for 5G networks, capable of monitoring both the control plane and the data plane. These tools should provide comprehensive visibility and correlation capabilities.
            • Implement network slicing: Leverage network slicing capabilities to isolate and monitor different slices within the 5G network. This approach enhances security and allows focused monitoring for specific services or devices.
            • Continuous monitoring and analysis: Implement real-time monitoring and analysis to identify anomalies, detect potential threats, and respond promptly. Incorporate threat intelligence feeds to stay updated on emerging threats and vulnerabilities in 5G networks.

            As these different components come together in different deployment models, achieving end-to-end security in 5G can become challenging. This is why IT, OT, and cellular network security policies must all be well aligned and integrated to bring enterprise-grade security that is governed by zero-trust principles, protecting north–south and east–west traffic as well as data at rest and in transit.

            Overall, any monitoring of a 5G network requires organizations to adapt their security practices to the unique characteristics of cellular protocols and the control-data plane division. By investing in expertise, collaborating with telco providers, leveraging specialized tools, and adopting best practices, organizations can ensure the security of their 5G networks and start embracing the benefits of 5G technology.

            Contact Capgemini today to find out about 5G security.

            Author

            Aarthi Krishna

            Global Head, Intelligent Industry Security, Capgemini
            Aarthi Krishna is the Global Head of Intelligent Industry Security with the Cloud, Infrastructure and Security (CIS) business line at Capgemini. In her current role, she is responsible for the Intelligent Industry Security practice, with a portfolio focused on both emerging technologies (such as OT, IoT, 5G, and DevSecOps) and industry verticals (such as automotive, life sciences, and energy and utilities), to ensure our clients can benefit from a true end-to-end cyber offering.

            Kiran Gurudatt

            Director, Cybersecurity, Capgemini

              The future of learning is immersive

              Alexandre Embry
              Oct 18, 2023

              Here’s the latest point of view (PoV) from Capgemini, which I co-authored with my colleague Isabelle Lamothe, addressing how immersive and metaverse experiences can revolutionize the future of learning and training.

              Against a backdrop of transformation in our relationship with work, companies today must continually adapt to gain or maintain their competitive edge. Today, 77% of employers find it difficult to recruit the right people, while 60% of workers will require training before 2027.

              If we talk about employee experience, we cannot ignore the major pillar that we call the “future of learning.” And a lot of work is required to transform companies in the right direction.

              This objective can be accelerated by using technologies to enhance the ways we train and learn. And we are deeply convinced that immersive experiences, like the metaverse, are a main lever for reaching the next level of learning.

              Today, many organizations continue to innovate through the metaverse and immersive experiences, and curiosity about the metaverse remains high, especially when it comes to training, as shown by a study carried out by the Capgemini Research Institute: 61% of respondents believe that immersive experiences can have an impact on the training sector. We still have some time to go before the metaverse reaches full maturity, but it opens up a wide field of actionable possibilities right now.

              At Capgemini, we aim to be a strong partner for our clients, supporting them as they take their training ecosystems to the next level.

              Let’s open a discussion if you’re interested in this topic.


              Alexandre Embry

              Vice President, Head of the Capgemini AI Robotics and Experiences Lab
              Alexandre leads a global team of experts who explore emerging tech trends and devise at-scale solutioning across various horizons, sectors and geographies, with a focus on asset creation, IP, patents and go-to market strategies. Alexandre specializes in exploring and advising C-suite executives and their organizations on the transformative impact of emerging digital tech trends. He is passionate about improving the operational efficiency of organizations across all industries, as well as enhancing the customer and employee digital experience. He focuses on how the most advanced technologies, such as embodied AI, physical AI, AI robotics, polyfunctional robots & humanoids, digital twin, real time 3D, spatial computing, XR, IoT can drive business value, empower people, and contribute to sustainability by increasing autonomy and enhancing human-machine interaction.

                Transforming the data terrain through generative AI and synthetic data

                Aruna Pattam
                18th October 2023

                Welcome to the brave new world of data, a world that is not just evolving but also actively being reshaped by remarkable technologies. It is a realm where our traditional understanding of data is continuously being challenged and transformed, paving the way for revolutionary methodologies and innovative tools.

                Among these cutting-edge technologies, two stand out for their potential to dramatically redefine our data-driven future: generative AI and synthetic data.

                In this article, we will delve deeper into these fascinating concepts.

                We will explore what generative AI and synthetic data are, how they interact, and, most importantly, how they are changing the data landscape.

                So, strap in and get ready for a tour into the future of data!

                Understanding generative AI and synthetic data

                Generative AI refers to a subset of artificial intelligence, particularly machine learning, that uses algorithms like generative adversarial networks (GANs) to create new content. It’s “generative” because it can generate something new and unique from random noise or existing data inputs, whether that be an image, a piece of text, data, or even music.

                GANs are powerful algorithms that comprise two neural networks — the generator, which produces new data instances, and the discriminator, which evaluates them for authenticity. Over time, the generator learns to create more realistic outputs.
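
                For readers who want to see that interplay in code, here is a deliberately minimal PyTorch sketch that trains a GAN to imitate a one-dimensional Gaussian distribution; it is a toy illustration of the generator/discriminator loop, not a production synthetic-data generator:

                ```python
                # Toy GAN: the generator learns to imitate samples from N(4.0, 1.25), while the
                # discriminator learns to tell real samples from generated ones.
                import torch
                import torch.nn as nn

                generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
                discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

                loss_fn = nn.BCELoss()
                g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
                d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

                for step in range(2000):
                    real = torch.randn(64, 1) * 1.25 + 4.0       # "real" data
                    fake = generator(torch.randn(64, 8))         # generated data

                    # Discriminator step: push real towards 1, fake towards 0.
                    d_opt.zero_grad()
                    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
                    d_loss.backward()
                    d_opt.step()

                    # Generator step: try to make the discriminator believe fakes are real.
                    g_opt.zero_grad()
                    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
                    g_loss.backward()
                    g_opt.step()

                print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
                ```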

                Today, the capabilities of generative AI have evolved significantly, with models like OpenAI’s GPT-4 showcasing a staggering potential to create human-like text. The technology is being refined and optimized continuously, making the outputs increasingly indistinguishable from real-world data.

                Synthetic data refers to artificially created information that mimics the characteristics of real-world data but does not directly correspond to real-world events. It is generated via algorithms or simulations, effectively bypassing the need for traditional data collection methods.

                In our increasingly data-driven world, the demand for high-quality, diverse, and privacy-compliant data is soaring.

                Current challenges with real data

                Across industries, companies are grappling with data-related challenges that prevent them from unlocking the full potential of artificial intelligence (AI) solutions.

                These hurdles can be traced to various factors, including regulatory constraints, sensitivity of data, financial implications, and data scarcity.

                1. Regulations:

                Data regulations have placed strict rules on data usage, demanding transparency in data processing. These regulations are in place to protect the privacy of individuals, but they can significantly limit the types and quantities of data available for developing AI systems.

                2. Sensitive data:

                Moreover, many AI applications involve customer data, which is inherently sensitive. The use of production data poses significant privacy risks and requires careful anonymization, which can be a complex and costly process.

                3. Financial implications:

                Financial implications add another layer of complexity. Non-compliance with regulations can lead to severe penalties.

                4. Data availability:

                Furthermore, AI models typically require vast amounts of high-quality, historical data for training. However, such data is often hard to come by, posing a challenge in developing robust AI models.

                This is where synthetic data comes in.

                Synthetic data can be used to generate rich, diverse datasets that resemble real-world data but do not contain any personal information, thus mitigating any compliance risks. Additionally, synthetic data can be created on-demand, solving the problem of data scarcity and allowing for more robust AI model training.
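
                As a very simple illustration of that on-demand aspect (real synthetic-data generators typically use GANs or copulas to preserve correlations between columns, which this naive sketch does not), new privacy-free rows can be sampled from per-column statistics fitted on a real dataset:

                ```python
                # Naive synthetic-data sketch: fit per-column statistics on a (toy) "real" dataset
                # and sample new rows that mimic the marginals but correspond to no real customer.
                import numpy as np
                import pandas as pd

                rng = np.random.default_rng(42)

                real = pd.DataFrame({
                    "age": rng.integers(18, 80, size=1000),
                    "monthly_spend": rng.gamma(shape=2.0, scale=150.0, size=1000),
                    "segment": rng.choice(["retail", "smb", "enterprise"], size=1000, p=[0.7, 0.2, 0.1]),
                })

                def sample_synthetic(df, n):
                    out = {}
                    for col in df.columns:
                        if df[col].dtype.kind in "if":       # numeric: resample from a fitted normal
                            out[col] = rng.normal(df[col].mean(), df[col].std(), size=n)
                        else:                                # categorical: resample observed frequencies
                            freqs = df[col].value_counts(normalize=True)
                            out[col] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.to_numpy())
                    return pd.DataFrame(out)

                synthetic = sample_synthetic(real, 500)
                print(synthetic.head())
                ```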

                By leveraging synthetic data, companies can navigate the data-related challenges and unlock the full potential of AI.

                What is synthetic data?

                Synthetic data refers to data that’s artificially generated rather than collected from real-world events. It’s a product of advanced deep learning models, which can create a wide range of data types, from images and text to complex tabular data.

                Synthetic data aims to mimic the characteristics and relationships inherent in real data, but without any direct linkage to actual events or individuals.

                A synthetic data generating solution can be a game-changer for complex AI models, which typically require massive volumes of data for training. These models can be “fed” with synthetically generated data, thereby accelerating their development process and enhancing their performance.

                One of the key features of synthetic data is its inherent anonymization.

                Because it’s not derived from real individuals or events, it doesn’t contain any personally identifiable information (PII). This makes it a powerful tool for data-related tasks where privacy and confidentiality are paramount.

                As such, it can help companies navigate stringent data protection regulations, such as GDPR, by providing a rich, diverse, and compliant data source for various purposes.

                In essence, synthetic data can be seen as a powerful catalyst for advanced AI model development, offering a privacy-friendly, versatile, and abundant alternative to traditional data.

                Its generation and use have the potential to redefine the data landscape across industries.

                Synthetic data use cases:

                Synthetic data finds significant utility across various industries due to its ability to replicate real-world data characteristics while maintaining privacy.

                Here are a few key use cases.

                Testing and development:

                In testing and development, synthetic data can generate production-like data for testing purposes. This enables developers to validate applications under conditions that closely mimic real-world operations.

                Furthermore, synthetic data can be used to create testing datasets for machine learning models, accelerating the quality assurance process by providing diverse and scalable data without any privacy concerns.

                Healthcare:

                The health sector also reaps benefits from synthetic data. For instance, synthetic medical records or claims can be generated for research purposes, boosting AI capabilities without violating patient confidentiality.

                Similarly, synthetic CT/MRI scans can be created to train and refine machine learning models, ultimately improving diagnostic accuracy.

                Financial services:

                Financial services can utilize synthetic data to anonymize sensitive client data, allowing for secure development and testing.

                Moreover, synthetic data can be used to enhance scarce fraud detection datasets, improving the performance of detection algorithms.

                Insurance:

                In insurance, synthetic data can be used to generate artificial claims data. This can help in modeling various risk scenarios and aid in creating more accurate and fair policies, while keeping the actual claimant’s data private.

                These use cases are just the tip of the iceberg, demonstrating the transformative potential of synthetic data across industries.

                Conclusion

                In conclusion, the dynamic duo of generative AI and synthetic data is set to transform the data landscape as we know it.

                As we’ve seen, these technologies address critical issues, ranging from data scarcity and privacy concerns to regulatory compliance, thereby unlocking new potential for AI development.

                The future of synthetic data is promising, with an ever-expanding range of applications across industries. Its ability to provide an abundant, diverse, and privacy-compliant data source could be the key to unlocking revolutionary AI solutions and propelling us toward a more data-driven future.

                As we continue to explore the depths of these transformative technologies, we encourage you to delve deeper and stay informed about the latest advancements.

                Remember, understanding and embracing these changes today will equip us for the data-driven challenges and opportunities of tomorrow.

                Aruna Pattam

                Head of AI Analytics & Data Science, Insights & Data, APAC
                Aruna is a seasoned data science leader with a successful track record of developing and implementing data and analytics and data science solutions through cutting-edge technologies, agile development, continuous delivery, and DevOps. With over 22 years of experience, Aruna is a Microsoft-certified data scientist and AI engineer. She is a member of the Responsible AI Think Tank at CSIRO NAIC, which focuses on the responsible and ethical use of #AI in businesses in Australia, and a known public voice in Australia for Women in AI.

                  Cyber angel – Marieke van de Putte

                  Capgemini
                  12 Oct 2023

                  Let’s delve into Marieke van de Putte’s career trajectory at Capgemini, spotlighting her achievements in automating security and compliance for SAP, her extension into the security realm, and her proactive initiatives to advance gender and cultural diversity within the cybersecurity sphere. Acting as a mentor and harnessing Capgemini’s digital transformation prowess, she aims to steer clients through the intricacies of cybersecurity, all while playing a part in building a sustainable future for future generations.

                  Tell us about your role. What does a day in your life at Capgemini look like?

                  When I joined Capgemini, I defined a clear business case and roadmap for automating risk and compliance in SAP. Over the past few years, I’ve built a steady revenue base on this topic and extended it to security. What excites me is that there are no standard or dull days at Capgemini. I might be involved in setting up and implementing a new control framework, conducting due diligence for large deals, or driving a new proposition for continuous compliance with hyperscalers, including trips to the US. I thrive on this diversity!

                  What makes you proud to work at Capgemini?

                  About four years ago, I made a deliberate choice to dive deep into IT. I take pride in being a part of the digital transformation, leveraging Capgemini’s capabilities in AI, data analytics, cloud, and more to empower our clients with control over their IT landscape. Working at Capgemini enables me to collaborate with large clients facing significant security and compliance challenges.

                  How are you working towards the future you want?

                  Crafting the future I want involves several elements. Firstly, I aim to simplify security and compliance for my clients through automation. Secondly, I strive to create a supportive and challenging work environment for my team, guiding them in their career journeys, reminiscent of the positive experiences I had early in my career. Lastly, I aspire to be a catalyst for sustainability, ensuring a bright future for my children and future generations.

                  What difference does it make to have diversity in cyber leadership?

                  Cybersecurity historically lacked gender diversity. To rectify this, I’ve actively recruited women with backgrounds in compliance, IT audit, and criminology. I am also a huge fan of creating a mix of cultural diversity in my teams. I cherish the diverse perspectives and energy that our team brings to our clients.

                  What advice would you give to someone joining Capgemini Cybersecurity?

                  No one is a cybersecurity expert in all areas. We need individuals who possess a business-focused mindset and can provide smart guidance to clients as they navigate the intricacies of this field.

                  If you are looking for a role in Cybersecurity at Capgemini, please visit our career page.

                  Marieke Van De Putte

                  Global Domain Lead Cyber Compliance | SAP & Cyber | NL Service Line Lead Security & Compliance 
                  Specialized in developing practical approaches to security, risk and compliance, and applying automation possibilities. Contributing our team’s expertise to digital transformation projects, like IT outsourcing and cloud migration.


                    Observing the future
                    The importance of observability in FinOps

                    Mangesh Patil
                    13 Oct 2023

                    FinOps is critical to effective cloud computing – and observability is the core of FinOps.

                    Cloud practitioners face three main challenges to optimizing cloud consumption and cost:

                    • Understanding where all costs are coming from
                    • Identifying the cost for specific applications or teams
                    • Predicting costs for the future and budgeting purposes

                    The answer to each one comes down to the concept of observability. Observability is the ability to gain insight into the behavior and performance of systems, applications, and services by collecting, analyzing, and visualizing data from various sources. This visibility is the starting point for optimizing cloud environments and reducing waste, saving money, and improving efficiency.

                    Observability vs. monitoring

                    Observability is an evolution of traditional monitoring. Monitoring relies on building dashboards and alerts to escalate known problem scenarios. However, monitoring alone can’t always detect previously unknown problems, especially during times of high load. Observability, on the other hand, emphasizes visibility into the state of the digital service by exploring high cardinality data outputs from the application. This allows for a more comprehensive understanding of the system and informed decision-making about its design and operation.

                    The first steps to gaining observability in FinOps

                    To maximize the benefits of observability in FinOps, organizations should:

                    • Define what data to collect: system performance data, application logs, and user activity data, for example
                    • Set up monitoring and alerting: leverage a variety of tools to collect data and set up alerts for issues
                    • Analyze data: identify patterns and optimize performance
                    • Take action: use the information gathered to resolve issues and improve performance, moving from FinOps observability to FinOps orchestration

                    Once these basic structural elements are in place, it’s time to optimize. That’s where distributed tracing comes in.

                    Distributed tracing for better visibility

                    When it comes to monitoring resources, backend components have all the low-level information we need, such as CPU and memory usage. But they often miss the high-level context of who made the request and why. Distributed tracing enables us to pass that information down to the backend and infrastructure, making it easy to see how resources are being used by different business units, products, or tenants. This information can be a game-changer for capacity planning today and can help us prepare for future growth.
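
                    As an illustration using the OpenTelemetry Python SDK (one common way to implement distributed tracing), a request handler can attach that business context to its trace span so backend resource usage can later be attributed to the right consumer; the attribute names tenant.id and business.unit are our own illustrative choices, not a standard:

                    ```python
                    # Attach business context to a span so downstream resource usage is attributable.
                    from opentelemetry import trace
                    from opentelemetry.sdk.trace import TracerProvider
                    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

                    provider = TracerProvider()
                    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
                    trace.set_tracer_provider(provider)
                    tracer = trace.get_tracer("finops-observability-demo")

                    def handle_report_request(tenant_id: str, business_unit: str):
                        with tracer.start_as_current_span("generate-report") as span:
                            # Illustrative attribute names; pick one convention and apply it everywhere.
                            span.set_attribute("tenant.id", tenant_id)
                            span.set_attribute("business.unit", business_unit)
                            # ...expensive backend work happens here, now attributable downstream...

                    handle_report_request("tenant-042", "retail-analytics")
                    ```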

                    From infrastructure to practice

                    A successful FinOps strategy brings traceability from the level of architecture into everyday use through the practice of observability. Observability is a foundational component of FinOps, providing the data and insights organizations need to manage their cloud environments effectively. It requires thorough planning and execution, but the rewards are well worth the effort.


                    A leader in cloud and operational optimization, Capgemini is helping organizations around the world to optimize their cloud services, saving money and lowering their carbon footprint. 

                    Looking to go deeper into FinOps? Check out our FinOps Page and the whitepaper – The rise of Finops. 

                    Author

                    Mangesh Patil 

                    Cloud Architect, infrastructure and security