
Trust in transparency – Looking into bridging the gap between business and IT

Jaap van Arragon
February 11, 2021

Most of today’s businesses are inherently complex – and as organizations become larger and more diversified, there’s an increased need to standardize and organize tasks in order to keep operations on track. But how do you do this? How do you effectively measure the adoption of a culture of continuous transformation across your business? How can you ensure that the operational activities and priorities of your teams reflect and contribute positively towards your broader business strategy and goals? Simply put, it all boils down to leveraging operational transparency.

Increased transparency = improved trust, teamwork, and business decisions

Transparency is key to acquiring deeper insight into the quality of delivery necessary to achieve KPIs – and it’s one of the most important elements in determining performance quality. Increased transparency acts as a well-founded decision aid, especially for reasoning and problem solving, and can positively influence customer attitudes, including acceptance and trust. In addition, your teams’ efforts will play a huge role in achieving greater maturity around automation as a whole. So essentially, transparency brings trust – the foundation of great teamwork – which ensures that business and IT can align faster and more closely for better decision making.

Capgemini helps bring transparency to clients by creating near real-time dashboards and smart insights to help them gain more insight into:

  • Delivery metrics – are we good enough or is there opportunity to improve?
  • Ticket status – historical trends, upcoming ticket breaches, and predictions
  • Security concerns – ensuring business and customers are always up to date with the latest information on possible vulnerabilities.

This means that both Capgemini and the client have the same information – and that’s how we build trust between business, IT, and external partners.

Transparency can have a huge impact on your business. In a previous blog, “Why application logging helps in achieving great things,” I discussed how utilizing log data can help you achieve greater operational efficiency. Now, let’s dig a bit deeper.

Transparency can really help in optimizing business and IT. For instance, we can be transparent about how we use log data and how we monitor customers’ use of our web applications. These customer insights are extremely valuable to us, as they help us optimize our own processes and web applications to enhance the end-user experience.

We’ve already established that transparency can help bridge the gap between business and IT. One example is leveraging data around end-customer services: we can extract insights into how end customers use our web applications and optimize how users engage with the business. More specifically, we can offer customers more relevant content and improved experiences, and even generate more leads through our digital channels. Transparency on the data involved brings valuable insights from the business perspective, which gets the innovation engine going on the IT side.

What does a lack of transparency look like?

Of course, people are often reluctant to be truly transparent about things that are going wrong – for instance, mistakes that impact the end-user experience, the misuse of developed applications, or issues with the quality of developed code. By creating transparency on both the business and IT sides, we can better relate to each other’s challenges in this already fast-paced world. Loss of valuable customer data and the resulting negative press can harm an organization’s reputation. And from an end-user perspective, a lack of transparency can also have a major impact on client satisfaction, trust, and the relationship as a whole.

Automation – the clear choice for unity

Automation is a prime candidate for effectively addressing the above potential visibility issues. The right automation partner can help bring heightened transparency between your business and IT function. And in my next post I’ll show all the possibilities automation holds for the effectiveness of your operations, the delight of your customers, and your future vision as a whole.

In the meantime, connect with me here so we can get started on your automation strategy immediately!

Advanced Utilities Operations leveraging Advanced Metering Infrastructure (AMI)

Capgemini
February 10, 2021

Today, the utilities industry faces real cost challenges in replacing aging infrastructure, maintaining customer satisfaction, meeting a growing demand for power, improving reliability, and addressing regulatory and environmental concerns. This has increased the importance of, and focus on, cost-effective and efficient utilization of physical assets. Traditional approaches to maintaining electrical grid infrastructure are based on “preventive” or, in some cases, “reactive” methods. The preventive maintenance approach involves a time-based, periodic maintenance program for all assets, with a higher priority placed on critical, higher-cost assets. For example, certain assets are inspected every five years, while others are inspected every year, and so on. Under a reactive maintenance approach, assets are inspected and repaired only after they fail. Neither approach fully mitigates the risk that critical assets will fail while in service, in some cases with catastrophic consequences. Predictive, planned maintenance helps utilities accurately predict events that cause outages and run their assets at peak performance.

This involves integrating equipment nameplate operating ratings with actual equipment operating data and applying predictive analytics to boost uptime, performance, and productivity while lowering maintenance costs and the risk of revenue loss. AMI information for asset management includes power quality and load profile data, which provides detailed insight into how electrical components are performing and which components are at risk. Leveraging predictive analytics on this data, along with other information such as weather data, helps planning engineers identify over-utilized assets, respond quickly to prevent outages, and improve asset life.
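As a rough illustration of the idea, the core check can be sketched as comparing observed peak load against a temperature-derated nameplate rating. This is a minimal sketch with hypothetical field names, derating factors, and thresholds – not any real AMI analytics pipeline:

```python
# Illustrative sketch: flag transformers whose observed peak load,
# adjusted for ambient temperature, approaches the nameplate rating.
# All names and numbers here are hypothetical assumptions.

def utilization_ratio(peak_load_kva, nameplate_kva, ambient_temp_c,
                      derate_per_deg=0.005, reference_temp_c=30.0):
    """Peak load as a fraction of the temperature-derated nameplate rating."""
    # Derate effective capacity for ambient temperature above the reference point.
    excess = max(0.0, ambient_temp_c - reference_temp_c)
    effective_capacity = nameplate_kva * (1.0 - derate_per_deg * excess)
    return peak_load_kva / effective_capacity

def flag_overloaded(assets, threshold=0.9):
    """Return IDs of assets whose utilization exceeds the risk threshold."""
    return [a["id"] for a in assets
            if utilization_ratio(a["peak_kva"], a["nameplate_kva"], a["ambient_c"]) > threshold]

assets = [
    {"id": "TX-101", "peak_kva": 92.0, "nameplate_kva": 100.0, "ambient_c": 38.0},  # hot day, near limit
    {"id": "TX-102", "peak_kva": 60.0, "nameplate_kva": 100.0, "ambient_c": 25.0},  # comfortable margin
]
print(flag_overloaded(assets))  # → ['TX-101']
```

In practice this scoring would be driven by interval load-profile data from the AMI head-end and enriched with weather forecasts, but the risk-ranking principle is the same.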

Predictive analytics can help utilities improve the cost-effectiveness of planning and scheduling their asset maintenance programs through a risk-based rather than a time-based approach, prioritize field crew activities on the most critical maintenance components, and improve the overall reliability of service.

This capability provides several potential benefits for utilities, including:

  • Reducing unplanned outages by predicting failures and replacing overloaded transformers before the onset of the peak-loading season.
  • Informed and sound decision making to determine the load growth for any given transformer to decide whether to move a given customer from one transformer to another.
  • Improving customer service and avoiding costly emergency/overtime work by replacing a large percentage of unplanned outage work effort with planned maintenance work activity.
  • Accurately optimizing capacitor bank size and location based on actual Volt/VAR readings.
  • Discovering suspect transformers proactively before they lead to customer voltage complaints.
  • Better asset utilization by relocating transformers or rebalancing the customer load on under-utilized and overloaded transformers.
  • Choosing the appropriate transformer size when replacing failed transformers.
  • Troubleshooting voltage problems by knowing the voltage at each customer access point on the feeder.
  • Scheduling maintenance and adjusting load based on accurate indications of conductor, sectionalizer and regulator overload rather than estimates and periodic scheduling.

Real-time Operations leveraging AMI technology

Outage Management Systems (OMS) are required to integrate and link varied operational systems – such as Advanced Metering Infrastructure (AMI), geographical information systems (GIS), customer information systems (CIS), IVR systems, and mobile data systems – to provide near real-time, dynamic data from the field. The AMI network extends to the edges of a utility’s network, at the customer premises. As a result, it can provide remote monitoring of the end nodes of the delivery points. This capability can be used not only to pinpoint outages but also to verify power restoration, enabling utilities to proactively identify customers whose power has yet to be restored. Outages reported by other systems such as SCADA (Supervisory Control and Data Acquisition) and DMS (Distribution Management System), or by the customer directly, can then be explored to verify the outage and determine its extent. Because AMI systems improve the processes for accurately identifying the location of outages, the effective dispatch of appropriate personnel and equipment can reduce labor and truck-roll costs.

Integration of OT/IT solutions

As Operational Technology (OT) and Information Technology (IT) continue to converge and improve information sharing, utilities can expedite new business processes, applications, and data management to drive the transformation. Integrating complex event processing and analytics engines into the utility’s enterprise architecture landscape – comprising IT systems like the Customer Information System (CIS), Enterprise Asset Management (EAM), Head-End System (HES), and Meter Data Management System (MDMS), and OT systems like OMS, DMS, SCADA, and WFM – will enable effective outage management, asset management, and workforce management.

Finally, looking forward, the AMI infrastructure will allow for many other future distribution operation management capabilities and improvements. The enablement of AMI and additional data elements allows utilities to deploy additional real-time monitoring, control, and management solutions, including distribution management applications such as:

  • Distribution – Automated feeder restoration
  • Distribution Power Analysis – Real-time unbalanced load flow
  • Volt/Var Optimization – Multi-objective optimization system

Over the last decade, utilities have invested in AMI technologies that provide data to support processes for meter-to-cash, conservation, theft detection, outage management, and asset management. By leveraging AMI information and the AMI network to authenticate and prioritize potential and real outage events, utilities can realize significant benefits and return on investment through reduced operating costs, reduced capital costs, and improved reliability.

Key Benefits of integrating AMI with OMS

Key benefits achieved by leveraging AMI include but are not limited to the following use cases:

  • Improved device prediction accuracy by using meters to verify outages in a timely manner. Ideally, the OMS will identify and validate an outage before the first customer calls to report it. The IVR should notify the customer that the utility is aware of the outage and responding. This leads to improved customer satisfaction.
  • Improved crew management and utilization by reducing the crew effort required to return, repair and restore nested outages by pinging meters to validate power restoration of all customers affected.
  • Improved outage detection and management process where outage can be verified even without customer intervention.
  • AMI information and technology can improve customer outage call-handling processes, resulting in reduced labor costs for CSRs. When the OMS is made aware of an outage reported by smart meters, the IVR at the customer contact center should inform the customer that the utility is already aware of the outage and provide an automated estimated time of restoration; in most cases the customer will hang up unless they want to provide damage or causal information. This reduces the need for a CSR to answer calls that do not add value to the outage management process. Also, if the CSR can ping a meter to confirm power is on at the utility’s side of the meter base, they are better informed when dealing with customers who have been disconnected for payment arrears.
  • Detection of outages at distribution transformers or other common points of failure can improve response times and reduce restoration costs. This is especially valuable in remote areas where the crew would normally have to spend a significant amount of time patrolling the grid to find the exact fault location.
  • Improved accuracy of distribution network reliability statistics by detecting outages in a timely manner.
  • Validation of liability claims. Detection and recording of outages allow utilities to know which claims attributed to outages correlate to an outage and which do not.

Connect with me on LinkedIn for more information.

IAM’s role within your enterprise cyber framework

Capgemini
February 10, 2021

Identity and Access Management (IAM) is a core functional area of cybersecurity, as it involves identifying the people, accounts, and objects connecting to your network and accessing your data, applications, and other resources. These capabilities are critically important to the protection of the modern enterprise, where billions of computers communicate over the internet every day, serving billions of people conducting their personal and business lives. The old joke, “on the internet, nobody knows you’re a dog,” rings as true as ever.

Failures in IAM can result in inadvertent breaches of data, intruder access to online systems, and loss of control of enterprise IT. Many of the most devastating cyberattacks of the past two decades have involved some measure of IAM breach, compromise, hijacking, or failure.

As a security practice, IAM usually involves eight major areas of capabilities, processes, and technology:

  1. Identity governance: The process of managing the lifecycle of electronic identities, including identity provisioning, de-provisioning, and revision upon changes to relationships, roles, permissions, and personnel
  2. Enterprise directories: Infrastructure used to keep track of who the users are and what they can access, and to make that information available to enterprise IT applications
  3. Access management: The process of identifying who can access what and who can do what within the enterprise, its data, and its applications; the major access management models include role-based access control (RBAC) and attribute-based access control (ABAC)
  4. Credential management: Capabilities related to managing user credentials including password policies, password management, password reset, account unlock, and emergency access; also includes management of biometric identity validation, multi-factor authentication (MFA), and cryptographic keys used for online identities
  5. Single sign-on (SSO): Capabilities related to enabling enterprise users to access multiple applications without having to log in separately to each application; this greatly simplifies the user experience and productivity
  6. Identity Federation: Capabilities related to allowing the organization to conduct identity collaboration, or federation, with external parties; with federation, the organization can allow third parties to validate their users’ credentials when those individuals access the organization’s IT systems, and can similarly allow its users to access third-party applications, without having to directly share credentials
  7. Privileged account management (PAM): Capabilities related to managing highly privileged accounts such as system administrator accounts, application administrator accounts, system and service accounts, and “break-glass” emergency accounts or system backdoors; usually coordinated with network protections, bastion hosts, and MFA capabilities to provide robust protection for system administration channels
  8. Audit and compliance: Capabilities related to tracking user logins, permissions, and activity in order to detect cyber incidents, investigate them, and audit the cyber controls related to IAM.
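To make the contrast between the two access management models concrete, here is a minimal sketch of an RBAC check versus an ABAC policy. The roles, attributes, and policy rules are hypothetical examples, not taken from any specific IAM product:

```python
# RBAC: permission depends only on the user's assigned role.
ROLE_PERMISSIONS = {
    "analyst": {"read_report"},
    "admin": {"read_report", "delete_report"},
}

def rbac_allows(role, action):
    """Grant an action if the role's permission set contains it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# ABAC: permission is a policy evaluated over attributes of the
# user, the resource, and the request context.
def abac_allows(user, resource, action, context):
    """Example policy: owners may delete their own reports during business hours;
    anyone in the resource's department may read it."""
    if action == "delete_report":
        return user["id"] == resource["owner"] and 9 <= context["hour"] < 17
    return action == "read_report" and user["department"] == resource["department"]

# An admin role can delete any report under RBAC...
print(rbac_allows("admin", "delete_report"))   # → True
# ...while ABAC grants deletion only to the owner, in context.
print(abac_allows({"id": "u1", "department": "finance"},
                  {"owner": "u1", "department": "finance"},
                  "delete_report", {"hour": 10}))  # → True
```

The practical difference: RBAC answers "what can this role do?", while ABAC answers "is this request allowed, given who is asking, about what, and under which circumstances?"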

Over the past decade, IAM has dramatically increased in importance for most enterprises. Several factors drive this increase:

  • First, breaches targeting user accounts and credentials have caused the enterprise to pay increasing attention to their users, their users’ accesses, and the protection of their accounts.
  • Second, the transition to cloud computing has broken down the traditional network perimeter and replaced it with an enterprise IT ecosystem, protected not by firewalls but by user credentials.
  • Third, the increased use of third-party contractors, third-party services, and external providers has made it increasingly difficult to keep track of who has access to enterprise data, when, and why.
  • Fourth, increased security regulation, scrutiny, and liability have resulted in an increasing number of regulators, auditors, and insurers taking interest in the organization’s online identities and their accesses.

Over the past several years, we have seen these drivers cause many of our clients and partners to invest significantly in deploying, maintaining, expanding, and improving their enterprise’s IAM capabilities. A strong IAM infrastructure can help the organization effectively apply its policies and standards to reduce cyber risk across the enterprise and supply chain, and ensure the ongoing compliance of its cyber program.

Learn more about Capgemini Identity & Access Management.

Cybersecurity in 2021: Four predictions

Geert van der Linden
February 10, 2021

The main lesson I am taking from 2020 is that change is truly the only constant. While the new year looks hopeful in terms of a vaccine for COVID-19, that doesn’t mean we should all be going back to old ways – and that’s no different for enterprises.

The pandemic has reinforced the importance of fostering enterprise agility and, most importantly, resilience. Being a successful organization today is less about careful planning, and more about being able to handle whatever comes your way.

This is especially true for the dynamic world of cybersecurity. Data is the new currency and threats are constantly evolving. As we look to 2021, organizations must continue developing and transforming their cybersecurity processes so they can handle whatever comes next.

But what can we expect in 2021? While I don’t have a crystal ball, I have seen some trends start to emerge that will continue to develop over the next 12 months.

A market evolution

The cybersecurity landscape used to be a jumble of specialized vendors who were good at one thing – be it cloud security, data security, or user authentication. Now, we are seeing the globalization of cybersecurity services.

In an era of increasingly diverse attack vectors, clients want integration and end-to-end protection; they want specialists with both the sector-specific and technical expertise to create an aligned security strategy. Rather than purchasing many different pieces of software, they want to use their money in a smarter way, with solutions that complement and work with each other. More and more, clients will look for global players who offer end-to-end protection across regions – which does not mean the systems integrator (SI) will do everything itself.

The new face of the CISO

The CISO was traditionally viewed as the “department of no” – cautious and a blocker to change. But cybersecurity has begun to move away from being a backroom function. This evolution was quickened by COVID-19, which highlighted just how essential cybersecurity is to a successful business. Now, rather than being seen as a roadblock to innovation, the cybersecurity department is viewed as an enabler. For the CISO, this means a new, boardroom-focused role, responsible for shaping the business as much as other C-level executives are.

At Capgemini, our suite of services helps CISOs to connect cybersecurity to wider business objectives. We connect these objectives with cybersecurity risks so that CISOs can make informed decisions that enhance innovation while also ensuring security.

The consumerism of security

It’s estimated that there will be three internet of things (IoT) devices in existence for every person next year. At the same time, social commerce continues to rise, with more brands focusing on direct-to-consumer selling and relationships. Both these levers offer an expansive attack surface in the form of connected devices, digital storefronts, and engagement tools.

For consumer-focused organizations, this means a higher risk of data breaches and loss if the right protocols and technologies are not in place. As a result, we expect to see product and platform security come to the forefront next year, particularly as organizations realize the value that consumers place on trust, privacy, and security.

Intelligent, real-time threat detection and response

Breach detection and response times are becoming near-instantaneous – which will itself become the norm in 2021. With more IoT devices than ever before, organizations do not have the luxury of time in responding to breaches. Take a self-driving car: if an attacker were to hack it while on the road, the impact could be detrimental to human safety. Speed in both detection and remediation is essential. At the center of this are automation and artificial intelligence (AI).

While AI is commonly used for detecting threats, it is at a relatively nascent stage when it comes to actually responding. We know that less than 18% of organizations today make significant use of AI for cyber threat response. However, AI can reduce the time taken to create a virtual patch for a detected threat or to develop new protection mechanisms for evolving technologies.

Next year, more organizations should be using AI in the form of security orchestration, automation, and response (SOAR) technologies, which enable the collection of security data and alerts from different sources. SOAR allows incident analysis and triage to be performed, leveraging a combination of human and machine power. This helps define, prioritize, and drive standardized incident response activities according to a standard workflow through connections to data sources and platforms.
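The triage-and-prioritize step described above can be sketched in a few lines. This is a deliberately simplified illustration of the SOAR pattern – normalize alerts from multiple sources, score them, and split automated responses from the analyst queue. The field names, weights, and threshold are assumptions, not any vendor's schema:

```python
# Hypothetical sketch of a SOAR-style triage step.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def triage(alerts, auto_respond_threshold=8):
    """Score alerts and split them into an automated-playbook queue
    and a human-analyst queue, highest priority first."""
    for a in alerts:
        # Real enrichment would add threat intel, asset criticality
        # from a CMDB, user risk scores, etc.
        a["score"] = SEVERITY_WEIGHT[a["severity"]] + (4 if a["asset_critical"] else 0)
    ranked = sorted(alerts, key=lambda a: a["score"], reverse=True)
    automated = [a for a in ranked if a["score"] >= auto_respond_threshold]
    human = [a for a in ranked if a["score"] < auto_respond_threshold]
    return automated, human

alerts = [
    {"id": "A1", "severity": "high", "asset_critical": True},    # scores 9: automated playbook
    {"id": "A2", "severity": "medium", "asset_critical": False}, # scores 3: analyst queue
]
auto_q, human_q = triage(alerts)
```

The design point is the split itself: routine, high-confidence incidents flow straight into a standardized playbook, while ambiguous ones keep a human in the loop – the "combination of human and machine power" described above.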

For cybersecurity professionals, the task for next year is one of evolution. COVID-19 has heightened the importance of cybersecurity as a business enabler, giving cybersecurity leaders an opportunity to become more involved in business strategy and innovation. With the right technologies and roadmaps in place for security, organizations can move forward with confidence into the new year – armed with the knowledge that they have fostered the resilience and agility needed for success.

Visit the Capgemini website to learn more about Capgemini’s Cybersecurity Services.

Follow me on Twitter and LinkedIn

The changing face of operational risk

Capgemini
February 8, 2021

The risks facing financial services players are multiplying and evolving. Creating a dynamic and proactive risk culture is essential to prevent serious losses.

From the ever-present threat of cyber-attack, to the unexpected and sudden impact of a global pandemic, operational risk is a fact of life in the financial industry. And while operational risk management is critical, the practice is still in its infancy.

Despite this immaturity, its relevance is highlighted by the continuous revisions and reviews published by the Basel Committee on Banking Supervision (the Committee). The most recent, both published in 2020, are a consultative paper with proposed updates to the Principles for the Sound Management of Operational Risk (PSMOR) and the newly minted Principles for Operational Resilience (POR).

Both documents are at the forefront of current affairs in this industry and offer a glimpse of the regulatory challenges financial institutions will face in the future. In this article, we offer an overview of the updates and new principles, and consider the impact on Finance, Risk and Compliance (FRC) functions.

In short

Additions and changes to the Principles for the Sound Management of Operational Risk include:

  • More details to each specification in each principle
  • Fleshed out roles and responsibilities of the board of directors and senior management
  • A fully new principle on Information and Communication Technology (ICT)

The Principles for Operational Resilience aim to:

  • Improve banks’ ability to deliver critical operations through disruptions
  • Strengthen banks’ ability to absorb operational risk-related events

The PSMOR: and then there were twelve

Since the adoption of the PSMOR in 2011, the operational risks faced by financial institutions have increased and evolved. The current consultative paper addresses this changed landscape in the following twelve principles:

  1. Risk culture
  2. Operational Risk Management Framework (ORMF)
  3. Board of directors: implementation ORMF
  4. Board of directors: risk appetite
  5. Senior management
  6. Identification and assessment of operational risks
  7. Change management
  8. Monitoring and reporting
  9. Control and mitigation
  10. ICT
  11. Business continuity
  12. Disclosure

The following additions are proposed:

  • Expanded requirements on risk culture, code of conduct, and ethical behavior
  • Explicit delineations of the roles and responsibilities of the board, senior management and the Three Lines of Defense
  • A comprehensive but non-exhaustive list of tools to identify and assess operational risks, such as operational risk event data, self-assessments, and scenario analyses
  • A new principle (Principle 10) addressing the implementation of sound ICT: its aims, its maintenance, and the roles and responsibilities related to them

The BCBS has published a paper on cyber security.

The following changes were proposed:

  • A request for the inclusion of a standardized and fully developed ORMF
  • A call for clear-cut definitions of the processes and controls for reviewing and approving new products, processes, and systems, monitored by a dedicated change manager
  • Demands for the analysis of severe-but-plausible disruption scenarios and the corresponding business continuity planning (e.g., thresholds, business impact analysis, discovery and recovery procedures)

The POR: brace for impact

The Principles for Operational Resilience were developed and proposed by the Committee to mitigate operational risks and to strengthen operational resilience in this industry. The latest updates aim to enable banks to deliver critical operations through disruption. Their objectives are as follows:

  • Promote a principles-based approach to improving operational resilience – the ability of a bank to deliver critical operations through disruption.
  • Reflect any initial lessons learned from the impact of the Covid-19 pandemic.
  • Ensure that existing risk management frameworks, business continuity plans, and third-party dependency management are implemented consistently within the organization.

The seven newly designed POR address many critical incidents faced by financial institutions, amongst them the Covid-19 pandemic and a rise in cyber-attacks. The scope lies primarily within:

  1. Governance
  2. Operational risk management
  3. Business continuity planning and testing
  4. Mapping interconnections and interdependencies
  5. Third-party dependency management
  6. Incident management
  7. ICT including cyber security

With respect to ICT, the Committee sets requirements for the physical and logical design of banks’ information technology and communication systems, covering the individual hardware and software components, relevant data, and the operating environment. Additionally, banks are expected to maintain a documented ICT policy that addresses the growing issue of cyber security.

When formulating these principles, the Committee considered third-party activities whose failure would disrupt vital services. This applies especially to major institutions with a high market share and globally interconnected operations, where the consequences of failure could seriously endanger the functioning of the real economy and financial stability.

Moreover, the POR require that banks reflect on any initial lessons learned from the impact of Covid-19 in order to improve the pain points in their operations. Simultaneously, banks should ensure that their existing risk management frameworks, business continuity plans, and third-party dependency-management are implemented consistently within the organization.

How will these changes affect the FRC function?

There are three distinct challenges: risk culture, roles and responsibilities and risk assessment.

Risk culture includes setting standards and incentives for professional behavior. Roles and responsibilities refer to explicitly delineating the roles and responsibilities of the board and senior management, as well as the Three Lines of Defense, by which we refer to a widely used model for managing risk. Risk assessment comprises choosing and setting up the tools to identify and assess operational risks (e.g. event data, self-assessments, and scenario analyses). Responding to these challenges can require fundamental changes both operationally and institutionally.

At Capgemini Invent, we have many years of expertise in helping financial institutions ensure regulatory compliance across all corporate functions on a global level. We have drawn on this experience to develop enhanced risk management solutions for tackling the three key challenges:

Risk CultureRisk Culture: The concept of a risk culture should be a core part of a company’s strategy. Firms need to establish a mature preemptive risk culture to better manage their risks and reduce risks of failure, even when they are dealing with extreme unexpected events. The building of a risk culture is a dynamic and ongoing process, which enables organizations to resiliently thrive within an uncertain and constantly changing environment. Getting this right can create a competitive advantage by providing the agility to quickly and efficiently navigate through unfavorable market conditions, whether external or internal to the financial industry. Find more details about our preemptive risk culture concept in our Risk Culture Blog. 
Roles and responsibilitiesRoles and responsibilities: Understanding both current and future roles and responsibilities in an organization is the first step in a business optimization process. Organizations need to be clear on their degree of compliance with the recently introduced Basel Committee on Banking Supervision (BCBS) requirements. To support our clients with this, we have developed an extensive governmental and organizational assessment providing guidance on ensuring a compliant corporate structure. The Capgemini Invent Governmental and Organizational Assessment uses customized questions to examine any compliance gap and helps to prioritize remedial actions with the key stakeholders. 
Risk assessmentRisk assessment: The BCBS formulated specific risk management measures as part of its ICT policy, including access controls, critical information asset protection and identity management, to ensure that appropriate risk mitigation strategies are in place. ICT, and cyber security in particular, is embedded in an evolving threat landscape. A recent study highlights the extent of the average losses for different types of incidents across different economic sectors, as visualized in the diagram below:
Cyber incidents and their total losses

An intelligent response

At Capgemini Invent, we have created and use various empirical and analytical tools with enhanced visualization, such as our Incident Management Tool. This intelligent tool supports the identification, capture, and analysis of risks, as well as the elaboration of next actions. It enables our clients to proactively address potential vulnerabilities, promote a faster response to risks, and prevent further incidents. Furthermore, this solid Incident Management Tool provides a dashboard with customizable outputs to track and report incidents. It is compatible with the latest technologies, such as natural language processing, optical character recognition, machine learning, etc. You can find more details about our Incident Management Tool and best practices in our Incident Management Blog.

Inventive Finance, Risk & Compliance from Capgemini Invent helps Finance, Risk and Compliance teams in the financial sector address critical challenges. This article focuses on operational risk.

Stay tuned for further updates on the PSMOR and POR by Capgemini Invent.

This blog is authored by Erekle Tolordava, Dr. Rita Motzigkeit and Kerem Cigerli.

Next-generation healthcare – Software as a Medical Device (SaMD) in the Intelligent Industry

Vivek Jaykrishnan
February 5, 2021

Medical devices and healthcare companies are embracing digital technologies and are moving headfirst into an era where innovation is powered by new technologies that catalyze change. In the last 20 years, technological innovation – both within and around medical devices – has accelerated thanks to the IoT, wireless connectivity, cloud computing, AI and analytics, and more. These advancements are shifting the treatment-centric approach to patient-centric, collaborative care.

These apps – or Software as a Medical Device (SaMD), which are medical devices in their own right – are fast becoming an inherent part of users’ lives in terms of both diagnosis and the monitoring and treatment process.

SaMD brings new opportunities and challenges for both medical device companies and regulators to enable innovation while ensuring patient safety and clinical effectiveness.

What is Software as a Medical Device (SaMD)?

As per The International Medical Device Regulators Forum (IMDRF) document (IMDRF/SaMD WG/N10 FINAL:2013), Software as a Medical Device (SaMD) is defined as software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.

Risk categorization for SaMD

Seeing the growing inclusion of SaMD in everyday healthcare, the IMDRF has described the concept and SaMD risk categories in detail for the medical app development industry to follow. (IMDRF/SaMD WG/N12 FINAL:2014)

The categorization also considers the significance of the information provided by the SaMD – whether it is used to treat or diagnose, to drive clinical management, or to inform clinical management – together with the criticality of the healthcare situation. Getting this right is vital to avoid death, long-term disability, or other serious deterioration of health, and to mitigate public health risks.
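The IMDRF framework is often summarized as a matrix crossing the state of the healthcare situation against the significance of the information the software provides. The sketch below is one illustrative reading of that matrix – it is a simplification for explanation, not regulatory guidance:

```python
# Illustrative sketch of the IMDRF SaMD risk categorization matrix
# (IMDRF/SaMD WG/N12). Category IV is the highest impact, I the lowest.
# A simplified reading of the framework, not regulatory guidance.

CATEGORY = {
    # (healthcare situation, significance of information) -> category
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_management"): "III",
    ("critical", "inform_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_management"): "II",
    ("serious", "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"): "I",
    ("non_serious", "inform_management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF category for a given SaMD use context."""
    return CATEGORY[(situation, significance)]

# Software that treats or diagnoses in a critical situation sits in
# the highest category, so it warrants the most rigorous controls.
print(samd_category("critical", "treat_or_diagnose"))
```

The point of the lookup is that the same software can land in different categories depending on the clinical context it is deployed in, which is why manufacturers must define the intended use precisely.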

SaMD lifecycle

In general, the makers of SaMD products intend to gather specific information/health parameters of users, analyze the data, and deliver it along with software that has been designed for safety and effectiveness. The software should be fully documented to identify its role and its place within the clinical environment.

Failure in software functionality can have fatal consequences or cause serious injury to patients. Therefore, the software development and testing process is of vital importance, and regulating this across medical devices is a fundamental core element of medical device manufacturing. The verification and validation (V&V) activities should be targeted towards the criticality and impact on patient safety of the SaMD.

In our next blog we will explain the critical aspects of verification and validation of SaMD.

Hyperscalers and managed services: what’s the future?

Capgemini
February 5, 2021

Several factors contributed to digital transformation’s recent domination. Trends such as an increased dependence on cloud computing, IoT, the explosion of data across the organization – and the wider need for firms to develop digital platforms – are all driving transformation, the use of the cloud, and hyperscalers forward.

Couple this with the accelerated need for these technologies brought on by COVID’s impact on business and the wider requirement for better technology, and the result is that many organizations are transitioning to the cloud – and its plethora of applications and possibilities – at pace.

What is a hyperscaler?

At an enterprise level, enabling the cloud is critical. The majority of organizations’ applications need to be hosted in some sort of cloud and so for many, it means working with a hyperscaler.

Typically, at its most basic level, a hyperscaler provides cloud, networking, and internet services at scale by offering organizations access to infrastructure via an IaaS model. Examples of today’s hyperscalers include Google, Microsoft, Facebook, Alibaba, and Amazon Web Services (AWS). These large companies dominate cloud services and continue to grow.

Many of today’s largest enterprises are already customers of all of the hyperscalers, allowing them to pick and choose services that best fit their business and, at the same time, avoid vendor lock-in.

The benefits of using hyperscaler services are multifold. Firstly, it’s important to recognize the power behind hyperscale data centers. To accommodate fluctuating and high demand, their infrastructure is built on thousands of physical servers and millions of virtual machines. The result is data center resources that are easily accessible, cost effective, reliable, and scalable.

What does hyperscale mean for my business?

For businesses, hyperscalers’ architecture often overshadows that of the traditional data center, offering them next-level performance without the complexity of managing a corporate data center. Furthermore, using a hyperscaler offers a level of reassurance in terms of the future. Hyperscalers constantly have an eye on what’s next. Microsoft, for example, recently revealed that it invested nearly $20bn to build the infrastructure necessary to support its Azure cloud. These organizations are constantly innovating and developing their infrastructure for the future – and businesses that use these services stand to grow and develop in tandem.

Taking advantage of hyperscale

For the next few years, enterprises will want to operate across multiple clouds and multiple data centers. The transformation challenge lies in reconciling a complex ecosystem of physical and virtual platforms that includes the existing enterprise data centers and a multi-cloud strategy.

This must address opening up legacy infrastructure to work with the cloud by answering how it can be transformed and operated to perform as a cloud-like platform. It’s often too difficult to refactor legacy applications for the cloud, so why not instead transform the underlying infrastructure to deliver many of the same benefits promised by the cloud?

On this journey towards adopting hyperscaler computing, it will be important for organizations to work with proven consultants and systems integrators that have the knowledge and expertise to enable the benefits of hyperscale computing while controlling the risks and uncertainties.

Within this, the key to achieving these goals is to adopt an end-to-end approach that allows the organization to maximize its cloud value. At Capgemini, we bring our substantial experience of migrations and delivery to enable a rapid, predictable migration to cloud.

Wherever you are on your journey to the cloud and hyperscale computing, Capgemini helps you take control, move forward with confidence, and reach the right destination sooner.

The Future of Retail

Capgemini
February 4, 2021

Retail is in the midst of a renaissance. To survive and thrive in this new age, retailers, now more than ever, need to consider a new set of services, experiences, and business models that will enable new capabilities within their businesses. This means retailers will need to redefine propositions, set up ways to test and measure their success, and create tactics aligned to what their business could look like in the future – all while ensuring that they are able to adapt to changing customer needs and market conditions.

Adapting to this new age means that the right approach to deliver these new ways of working needs to be considered, whereby businesses can focus on unlocking value today while also setting the strategic pace for what tomorrow will become. Fundamental to this is establishing the foundations to shape what those future needs will look like. This can be built in a modular fashion, unlocking quick benefits while ensuring long term agility.

Capgemini Invent works with numerous retailers from varying verticals and with this insight we’ve identified and collated 6 key trends we expect will come to shape the future of retail and the way it will operate. These trends vary from operational models to physical spaces and even the products retailers choose to provide to their customers.

  1. Experiential Spaces: The role of the physical store is changing, shifting from shopping destination to experiential space. Retailers are using customer data to tailor both online and physical stores to customers’ needs and preferences. Not only will this tactic enhance customer experiences, it will also build customer loyalty as customers receive direct communications and experiences tailored especially for them.
  2. New Retail Channels: Omnichannel customers are forcing the traditional store and online experience to converge. Through AR, VR, and new digital spaces, new opportunities and additional touchpoints are becoming more readily available – businesses can build deeper insights into their customers while delivering messages in new ways to garner higher engagement and interaction with their most loyal customers.
  3. Choice Paradox: Customers are burdened by limitless choice and expect retailers to sell them solutions, not products. Bundled offers enable customers to find an all-in-one solution that is fully curated to their needs and provided by retailers based on the insight they have collated. Delivering the products and services that customers need makes for a smoother, improved customer experience.
  4. Data as currency: Now more than ever, customers understand the value of their data and are driving retailers to create new value exchanges in return for sharing it. Having shared far greater amounts of data, customers expect more than bog-standard personalised marketing and product recommendations; they will look for products, services, and packages tailored specifically to them based on the personal data profile a retailer holds.
  5. Purposeful Consumption: In a world where products are heavily commoditised, consumers are now choosing retailers based on their values, not their range. This in turn is forcing retailers to review their supply chain operations and source responsible products. Customers’ value perception of products will become a much more emotional investment, and they will actively seek products and services that are both good for the planet and for the wallet.
  6. Untapped Ecosystems: The retail value chain is fragmenting, with consumers, suppliers, and new entrants alike playing increasing roles in the end-to-end value chain of shopping. Collaborating with partners and suppliers will become essential to keep delivering the products and services customers have come to expect. Deeper collaborations will see more customer needs being met through one retailer as opposed to many, giving these all-round retailers the edge over their competitors.

Capgemini, SharpEnd and The Drum have joined forces to create a live retail store – CornerShop. It is a conceptual representation to illustrate the store of the future. Customers and clients alike can expect real life products, the latest technology and personalised experiences based on the insights gathered during real life transactions that take place in store. Find out more here.

Author


Steve Hewett

Towards a Digital Renaissance

Capgemini
February 4, 2021

The exponential growth of new edge technology will lead us to a completely new digital world. Though there are several definitions of Edge computing, for the purposes of this article, we will consider it as any computational power outside data centers and/or clouds.

Edge computing, as an enabling technology for increasingly distributed needs, is reshaping the emerging technological landscape and opening up new development opportunities for companies and service providers.

Accelerated by the ongoing pandemic emergency, the extensive expansion of edge computing technology is paving the way towards a new digital world, one in which cybersecurity issues take on even greater importance.

According to Gartner, only 10% of enterprise-generated data is currently created and processed outside a traditional data center or cloud – a share Gartner expects to reach 75% over the next five years as companies adopt the smart edge model.

This trend will strengthen the decentralization of IT resources related to the digital workplace, the extension of campus networks, cellular networks, data center networks, cloud computing resources, and the data itself. However, this very extension of the IT architecture’s perimeter requires a correspondingly high level of security and data protection.

What impact does such an expansion have on the cybersecurity approach? The configuration of a constantly growing edge – which includes cloud providers, smart cities, augmented reality (AR), and the widespread use of artificial intelligence (AI) in Industry 4.0 – is challenging both service providers and telecommunication companies. At the same time, there is a huge possibility of expanding business through new applications.

The evolution of edge computing entails important infrastructural challenges: it exposes the problems, and solutions, related to managing a huge flow of data – both in download and upload – that has grown exponentially with the recent widespread adoption of teleworking and agile work. It also demands a greater diffusion of cybersecurity, with particular attention to data protection methods and to the transparency of data archiving and backup services.

We are already able to provide a wide range of different use cases that will redefine the business model of any specific industry sector:

  • Automotive – Connected cars, which interact with the driver but also directly with other cars (Bluetooth or LiFi) to obtain road conditions, for example by sharing shock-absorber data. Drivers will have a personal device, e.g., a black box, that works as a smart key and shows the car’s status to help prevent accidents.
  • Smart Home – Local sensors and sensor correlation, with cloud support to analyze and share home data. The actuators that will handle any environment-related service will be in our homes tomorrow. The smart home will also monitor the physical safety and health of the people living there.
  • Industry and robotics – Automate and manage dangerous and tedious processes with minimal human support. IoT is the most common scenario associated with edge computing where the barrier between IT and OT is completely breaking down, allowing control from the cloud using virtual digital twins of physical devices and enhancing predictive maintenance processes.
  • Smart devices – Smartwatches, wearable devices, and health devices allow us to monitor our location and our activities and collect health data that can be processed and shared with smartphones, tablets, PCs, and cloud providers.
  • Smart payments – Payments will increasingly be made using a smartwatch, smartphone, or virtual badge, leveraging digital banking features enabled by regulation such as PSD2.
  • Telco and media – Software-defined networking (SDN) and 5G will ensure better bandwidth, flexibility, elasticity, and low latency in network performance. We will be able to use cloud services to the limit by leveraging the containerization capacity and portability of the cloud workload. This will provide a potentially infinite range of services that we can bring to our homes, our vehicles, our offices by connecting things, people, and digital services and creating a smart digital service mesh around us.
  • Energy and distribution – Edge computing can help manage energy across enterprises. Sensors and IoT devices connected to an edge platform in factories, plants, and offices are being used to monitor energy use and analyze energy levels in real time.
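The common pattern behind these use cases is processing data locally and sending only compact aggregates upstream. As a minimal illustration of that edge-side pattern – the class, window size, and readings here are hypothetical, not from any Capgemini platform – consider this sketch:

```python
from statistics import mean

# Hypothetical edge-side aggregator: instead of streaming every raw
# sensor reading to the cloud, the edge node keeps a window of recent
# readings and ships only a compact summary upstream.

class EdgeAggregator:
    def __init__(self, window_size: int = 10):
        self.window_size = window_size
        self.readings: list[float] = []

    def add_reading(self, value: float) -> None:
        self.readings.append(value)

    def summary(self) -> dict:
        # Summarize only the most recent window of readings.
        window = self.readings[-self.window_size:]
        return {"count": len(window), "mean": mean(window),
                "min": min(window), "max": max(window)}

# Example: energy readings (kWh) from a plant sensor.
agg = EdgeAggregator(window_size=3)
for kwh in [4.2, 4.8, 5.1, 9.7]:
    agg.add_reading(kwh)
print(agg.summary())  # only this summary would be sent to the cloud
```

Shipping the summary instead of every reading is exactly the shift Gartner’s prediction describes: the bulk of processing moves out of the central data center and onto the edge node.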

The opportunities related to edge computing extend far beyond these examples, and to take advantage of them we cannot forget the role that security, privacy, and compliance (for example, the EU GDPR) will play.

The increase in available digital services means there are more security risks and threats to manage. Many security challenges accompany the rise of edge computing:

  • End-user Identity – Technologies such as MFA will help to ensure that only the owner of a device will be able to control it. Mutual authentication of edge devices will help guarantee that the flow of information will be controlled and managed only between trusted devices.
  • Data – Encryption in transit and at rest must be guaranteed, protecting the key used and using an open standard for encryption.
  • Device OS and software – As new vulnerabilities are discovered, patching the device OS and software is a complex activity to perform across a huge number of devices.
  • Physical tampering – Guaranteeing that data cannot be extracted and that the device cannot be tampered with is crucial while data and computation sit on the edge.
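To make the mutual-authentication point concrete, here is a minimal challenge-response sketch using a pre-shared key and HMAC. The key handling and protocol framing are simplified assumptions for illustration, not a production design:

```python
import hashlib
import hmac
import secrets

# Minimal challenge-response sketch of device authentication with a
# pre-shared key. Real deployments would use per-device keys,
# certificates, or a full TLS handshake; this only shows the idea.

PRE_SHARED_KEY = b"example-edge-device-key"  # hypothetical key

def respond(challenge: bytes, key: bytes = PRE_SHARED_KEY) -> bytes:
    """Prove knowledge of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes,
           key: bytes = PRE_SHARED_KEY) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # Constant-time comparison resists timing attacks.
    return hmac.compare_digest(expected, response)

# The gateway challenges the device with a random nonce...
challenge = secrets.token_bytes(16)
response = respond(challenge)
assert verify(challenge, response)          # trusted device passes
assert not verify(challenge, b"\x00" * 32)  # forged response fails
```

For *mutual* authentication, each side would challenge the other in turn, so both the device and the gateway prove they hold the shared secret before any data flows.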

These challenges are the new security edge that we will face henceforth, and we must balance the accessibility of digital services with the related risk trying to find the right mix between digital transformation and preservation of our privacy.

The security journey to the edge is like the shift from the Middle Ages to the Renaissance. Medieval cities resembled fortified castles that protected their citizens, resources, and gold. Similarly, perimeter firewalls, intrusion detection, and web application firewalls protect the boundaries of legacy IT.

The digital renaissance of cloud and edge computing is destroying the barrier between the datacenter and the external world. The Golden Treasures of 2020 are our data – both our personal and our business data that we need to protect.

We need to understand that:

  • We must expect that perimeter defenses will be overcome.
  • Current garrison-based controls inhibit missions that require collaboration and omnichannel communication.
  • The diffusion of an organization’s perimeter requires the implementation of smart controls centered on data, applications, and services.
  • A renewed emphasis is needed on detecting inappropriate propagation and derivation of data, using smart, proactive cybersecurity detection systems, differentiating between admission and access, and securing applications and services in addition to the infrastructure.

The security of our data must be guaranteed not only inside the cloud but also on the edge using innovative technologies such as machine learning, predictive security, IoT and endpoint protection, secure API gateways, etc. The new world of the edge will provide an infinite mesh of secure services that will allow us to satisfy all our needs, be they personal or business-related, passing from the bastions of the late medieval city to the open city of the digital renaissance.

This article takes inspiration from the discussions I had during the Italian Conference round table.

Visit the Capgemini website to learn more about  Capgemini’s Cybersecurity Services.

Follow me on Twitter and LinkedIn

Field services and maintenance

Capgemini
February 3, 2021

Quality of work and productivity of the field technician at the client site influence customer satisfaction and profitability – the two tenets of a great business.

Service can be defined as “all actions that have the objective of retaining or restoring an item to a state in which it can perform its required function.” In today’s competitive world, operational efficiency, asset ROI, and safety considerations demand focus from manufacturers to ensure that their product is functional and utilized to its potential.

The aftersales services division ensures operational efficiency of the assets by performing preventive, routine, and break-fix maintenance activities. For complex products, maintenance is as challenging as product development because both require high precision. In most cases, instructions are available as text-based procedures or manuals, which in turn demand skilled technicians who understand the sequence of activities. These leave much room for improvement, especially in the case of complex products.

Challenges with the traditional way of organizing Field Services

  • Text-based, complex maintenance procedures are difficult to interpret for inexperienced technicians.
  • Translations of maintenance procedures are often misinterpreted.
  • Novices train hands-on with real equipment leading to longer downtime.
  • With a short product lifecycle come more frequent design changes. This demands more frequent updates to technical publications. A programming-based development approach would be financially unviable and time-consuming.
  • The product design is completely disconnected from the service view.

Technology as an enabler

With ubiquitous connectivity and millennials joining the field force, organizations are exploring viable use cases for adopting new-age technologies for their field service business. Many have adopted mobile technology to assist field technicians and move towards paperless operations. They access work order- and equipment-related information through mobile devices. Few organizations have pushed beyond the obvious digital choices to explore more complex implementations using AR and VR.

Based on Capgemini’s research, one aerospace company explored new technologies to adapt its existing virtual reality program to meet the needs of its operations engineers, who were seeking more from the validation activities in aircraft maintenance. For many years, the OEM has offered a full-scale, immersive experience based on the aircraft’s digital mock-up, created using cameras and sensors set up on the aircraft body. With VR technology, the OEM has created a portable kit that includes a virtual reality mask, touchpads, and two infrared cameras, allowing users to work in a similar immersive environment without leaving their desks.

Similarly, technicians at an automotive organization are using AR glasses to project step-by-step bulletins and schematic drawings across the line of vision while allowing remote experts to see what the technician sees and provide feedback.

What comes next?

At Capgemini, we help our clients explore integrated solutions for their maintenance operations. Our NextGen Maintenance Platform digitizes the maintenance, repair and operations activities using cutting-edge technologies for smart authoring, advanced planning and simulation, AR/VR, AI, and model-based V&V.

To learn more about this solution or see it live in action, contact Roshan Batheri, Seema Karve, or Rahul Pandhare.