
The next big thing to boost your innovation – the venture client model

Capgemini
December 22, 2020

Key takeaways

  • External and hybrid sources of innovation will become increasingly important – by 2025, startups will rank among the top innovation sources for companies and hold great potential for sustainability.
  • Venture client units are a breakthrough corporate venturing vehicle that, if operated with a specialized venture client process and dedicated resources, enables corporations to benefit strategically from top startups.
  • For corporates, the main advantage of a venture client unit is that it enables the entire organization to gain measurable competitive advantage from more and better startups – at lower risk and without the capital requirements of traditional corporate venture capital programs.
  • For startups, the main advantages are quickly gaining high-profile reference clients, whose expert user feedback is critical for iterating on and improving their products, and an increased valuation resulting from greater traction and revenues – all without additional dilution.

Working towards a sustainable future

In today’s fast-moving world, the imperative to stay relevant forces companies to continuously reinvent themselves. New competitors transform from promising startups into unicorns at lightning speed, while rapidly shifting customer expectations, political regulations, and global trends determine the market of tomorrow. Moreover, the ongoing global COVID-19 pandemic is changing our world massively, requiring companies to make the right bets today in order to secure a sustainable future.

Startups will become chief sources of innovation by 2025 – particularly focused on sustainability.

According to a 2020 joint study by MIT and Capgemini, external sources of innovation are becoming more and more important for companies. Over the next five years, startups will rank among the top innovation sources for companies. Moreover, hybrid forms such as innovation labs are especially important as an interface between internal and external innovation. This is critical for companies that do not have adequate innovation capability internally and are therefore forced to source externally. As a result of this development, traditional R&D and internal business-unit employees will become less relevant as innovation sources within the next five years (see Figure 1).

Sustainability tops the agenda of many customers and therefore of corporates. Corporates often rely on startups, which are particularly good at accelerating (digital) innovation in the area of sustainability. According to a Europe-wide survey by TechFounders, sustainability is a priority for around 90% of startups. Moreover, a new study found that, even though climate-tech startup investments by VCs and corporates are still low compared to overall investments, their growth rate is five times higher. For corporates, startups’ innovation potential presents a great opportunity to support their sustainability initiatives and targets. However, with startups gaining importance and corporates heavily focusing on sustainability, this still leaves the question of how: How can corporations gain strategic benefits from the world’s best startups, quickly and at measurable risk?

A new model of corporate venturing has emerged.

The concept of the venture client model is new, yet simple. Instead of acquiring a non-controlling equity stake, the company buys the startup’s product (see Figure 2). The corporate hence becomes a venture client rather than a venture investor, with the objective of harnessing a strategic benefit. The strategic benefit emerges from applying the startup product to improve an existing – or create a new – product/service, process, business model, or even entire business. The startup solution is applied in a real business environment immediately, without being incubated or accelerated.

Obviously, identifying top startups and buying and transferring their solutions so that they generate a measurable positive impact is highly complex. Hence, just being a venture client is not enough. The corporate needs to implement a sophisticated model, i.e., organizational venture client capabilities. Such a venture client model was first established at BMW by Gregor Gimmy (see Harvard Business Review) in 2015 with the BMW Startup Garage. Over the years, many global corporations have followed suit, including Bosch, BSH Home Appliances, LafargeHolcim, and German insurer Signal Iduna. Beyond Germany, other corporations have adopted the venture client approach, such as the Italian energy company Enel and Spain’s Telefónica.

The venture client proactively solves problems

Here is an example of how a good venture client unit operates: A business unit or functional unit raises a request for support from the venture client unit to resolve a complex challenge via a startup, as it cannot solve the problem with internal or incumbent external solution providers. These challenges can be manifold (e.g., growth, efficiency) and may emerge anywhere in the company (e.g., R&D, IT, manufacturing, or logistics). The problem is strategic, as it impacts the competitiveness of the company, such as a sustainability challenge to measure CO2 emissions in real time caused by intra-factory logistics.

The venture client unit is engaged by the venture client (the factory logistics manager) to enable the whole process from identifying to adopting the best startups, like an HR department enables recruiting.

The process starts with an in-depth analysis of the challenge, to make sure that it is strategic and that startups may indeed have suitable solutions. If this is the case, the venture client unit sources startups with relevant solutions. In our example, the venture client unit would look for startups with sensor technology – hardware and software – that detects CO2 levels inside a factory. If the problem has relevance in the startup ecosystem, as is probably the case in sustainability, the team will likely identify over 100 startups.

The venture client unit team then filters the best startups from that list. This results in five to 20 startups that are then analyzed in depth. This assessment, led by the venture client unit, involves the venture client, for example an R&D engineer or logistics manager. Once the best startup has been selected by the venture client, the company buys a small sample of the startup product. In our example, the company would buy a sample of sensors large enough to monitor CO2 levels in a small part of the factory.

The purchased products are applied in a real setting over the next two to four months. This is essential to generate real data to validate whether the startup technology delivers the expected results. The data then serves to confirm whether the startup solution meets the KPIs required for the last step: adoption, which is realized via partnership or M&A. The results from piloting the startup product in a real use case are key to enabling the adoption decision.

In most cases, adoption will take the form of a partnership (e.g., licensing the startup technology). However, the venture client may also choose to acquire the whole startup if control is a condition for generating and defending the intended competitive advantage. In addition to the active solving of known problems just described, the venture client unit also constantly looks out for strategically relevant startups to anticipate problems the corporation is unaware of. If a top startup is detected, the venture client unit proactively contacts one or more business units that could potentially benefit from its solution. Once the relevance has been confirmed, the process described above repeats itself.

The venture client model brings significant benefits for both corporates and startups (Figure 3).

Our view on innovation and the venture client model.

At Capgemini Invent, we believe that startups are a key, but not the only, solution. A state-of-the-art venture client unit with high-quality specialized processes and dedicated resources will make a big difference in enabling companies to benefit strategically from the best startups. To ultimately succeed, however, most companies will have to continuously scrutinize, transform, and adapt their corporate innovation systems as such. As shown in Figure 4, our innovation operating model involves three tiers: purpose, approach, and tactics. Each tier’s components serve as gears in the innovation machine, and every one of them is an important piece of the puzzle. The venture client model has the potential to significantly influence several of these components and serve as a catalyst for the whole innovation system. Our innovation experts at Capgemini Invent are more than happy to support you in establishing your venture client unit and skills. Feel free to get in touch.

Thanks to the co-authors Manuel Wiener and Phillip Schneider.

Authors

Jens Hofmeister Head of Central Region
Fahrenheit 212
Part of Capgemini Invent
Olivier Herve Vice President – Innovation and Strategy
Capgemini Invent
Kevin Loeffelbein Director – Smart Mobility & Business Model Innovation
Capgemini Invent

Would vaccination passports guarantee data privacy?

Capgemini
December 22, 2020

One of my friends recently drew my attention to an article in Time magazine, in which International Olympic Committee (IOC) President Thomas Bach said that COVID-19 vaccinations could be required for athletes and fans to attend the postponed Tokyo Olympics. This is set against a backdrop where vaccines to inoculate against COVID-19 are being developed and (at the time of this writing) set to be given to the public. To limit the spread of the disease at an event that vast numbers of people are expected to attend, drastic measures are being considered so as not to risk another massive increase in cases worldwide.

Given that there are several global events planned for 2021 and assuming that vaccination passports provide a solution, how could they be implemented appropriately?  What regulations should be complied with to protect personal information and reduce the likelihood of the infringement of human rights?

There are many questions to be answered, some of which focus on the governance of personal data.

Reasoning

IATA recently announced that it was creating a digital platform to facilitate the sharing of vaccination information called the IATA Travel Pass. The reasoning for this is: “to re-open borders without quarantine and restart aviation governments need to be confident that they are effectively mitigating the risk of importing COVID-19. This means having accurate information on passengers’ COVID-19 health status.”

It seems prudent that a collective definition of why the data is being gathered across the world should be adopted. If the reason is simply to present proof of having had a vaccination, that in itself is quite different from requiring presentable proof of immunity. Such a requirement should, at the minimum, include a follow-up test to prove that the individual has produced the required protective antibodies.

Compliance

The concept of data sovereignty means that personal information (including health data) is usually governed by regulations that afford some protection to the citizens of the region where the data is stored. Examples of this include:

  • HIPAA (USA)
  • PIPEDA (Canada)
  • GDPR (EU)
  • Data Privacy Act (Philippines)

However, how do you apply the principles of health data governance internationally? What standards should be used to protect the data? How should it be stored, and what should happen to it when it is no longer needed? The standard requirements of asset management and data governance must be observed when processing personal data, even in a global context.

Integrity

In order to have a trusted worldwide system that can prove that an individual has had a vaccination, it would seem logical that such a system should have traceability built in. This would imply that an assertion that an individual has had a vaccination can be traced back to a point in time where the injection was administered (and, potentially, which type of vaccination it was – especially given that different vaccines have different efficacy rates).
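To illustrate what such traceability might look like in data terms, here is a minimal, hypothetical sketch of a tamper-evident vaccination assertion: the record captures who was vaccinated, with which vaccine, and when, and a signature lets a later verifier detect alteration. This is not any specific passport scheme; the identifiers, the symmetric key, and the field names are illustrative assumptions only (a real system would use asymmetric keys managed by the issuing health authority).

```python
# Hypothetical tamper-evident vaccination assertion; all names and keys are illustrative.
import hashlib
import hmac
import json
from datetime import datetime, timezone

ISSUER_SECRET = b"example-issuer-secret"  # stand-in for a properly managed issuer key

def issue_record(person_id: str, vaccine_type: str, administered_at: datetime) -> dict:
    """Create an assertion traceable to the administration event (who, which vaccine, when)."""
    payload = {
        "person_id": person_id,
        "vaccine_type": vaccine_type,
        "administered_at": administered_at.isoformat(),
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, serialized, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_record(record: dict) -> bool:
    """Recompute the signature to confirm the assertion has not been altered."""
    serialized = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = issue_record("traveler-001", "vaccine-A", datetime(2021, 1, 15, tzinfo=timezone.utc))
print(verify_record(record))  # True unless the record has been tampered with
```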

Administration

How should such a system be administered? Should it be on a country-by-country basis, given that each nation could claim ownership of said data and how it should be used? If the aviation industry (IATA) is setting up its own system, should this be a process that is extended to travel across land borders? How would such a system be applied consistently in different countries, with varying levels of social and technical infrastructure, so that travelers around the world have equal access to transport?

Conclusion

In the next 12 months, the world expects (or hopes) to return to business as usual, including international travel. That includes the following sporting events postponed from 2020:

  • Football European Championships (Europe)
  • Copa América (Argentina and Colombia)
  • Ryder Cup (USA)
  • Olympic Games (Japan)

If we are going to reduce the likelihood of a return to the levels of infection seen throughout 2020, a number of measures will have to be implemented. Ideally, these should enable equal access to travel, irrespective of the economic background of one’s country. A vaccination passport may well be one of these measures, but to keep the pandemic in check it will require a truly collaborative approach to the governance of data – matching the collaboration shown by the global medical community – to make a real difference.

To learn about Capgemini Data Protection and GDPR Services, visit: https://www.capgemini.com/service/digital-services/gdpr-readiness/data-protection-gdpr/

References

http://www.bbc.com/travel/story/20200831-coronavirus-will-you-need-an-immunity-passport-to-travel

https://time.com/5912335/tokyo-olympics-vaccine/

Contact lite Healthcare

Capgemini
December 22, 2020

Over the last few months, we have seen many sectors respond to the cross-infection risk of COVID-19. What can we learn from them? How can our new focus on emerging technologies such as vocal interfaces, facial recognition, and mobile-based applications support improved healthcare in a pandemic era?

Infection prevention and control is a vital aspect of healthcare. The use of good handwashing and appropriate personal protective equipment has always been paramount. But the focus was previously on preventing cross infection between patients or between carer and patient. We have never before had the same intense focus on transfer between carers.

In other sectors, the importance of the hierarchy of hazard controls is widely recognized – elimination, substitution, engineering, administrative controls, PPE. This was developed in the chemicals industry where it was a one-way flow, protecting the worker from a hazardous substance. In healthcare, it is more complex. The hazardous substance, for instance a virus, still exists but we need to protect individuals from the virus that is carried by people as well as physical objects.

In shops, we regularly see physical barriers; in restaurants, there are often screens between tables; in hospitals, there are often screens between patients – but I rarely see them between staff at workstations. Every time a person makes contact with a surface, there is the potential to deposit or pick up an infectious agent. A key concept in retail has been “contactless” – to me, one of the biggest lifestyle changes is no longer carrying cash. A single patient will come into contact with many staff and many objects touched by staff – can this be reduced? Devices can warn you of your proximity to others and can analyze location data to assist the redesign of one-way flow systems, which are used in manufacturing plants but not in hospitals.

Touchless delivery of supplies from storerooms to the patient could reduce human contact; autonomous trollies could carry drugs from pharmacy, linen for the laundry, food from the kitchens, directly to the patient’s room. Ordering these supplies can mirror online ordering with which we are so familiar, with minimal human contact.

Can we deliver touchless care? Since the time of Osler, medical assessment has depended on taking a history, examining, and then undertaking tests. Taking the history can be contactless (video consults and phone consults are now commonplace). But can the examination be replaced? Contactless vital-sign observations are now possible, and Bluetooth stethoscopes that can be held by the patient already exist, but can we replace palpation? Many tests could be remote or robotic. Robots are already used for remote surgery, but how long before we have bedside robots that can put up a drip, change your catheter, or give you a wash?

Staff who have been with a patient then contact equipment that will be used by other staff. Can we copy the example of banking with contactless payment systems? For example, instead of a dirty keyboard each individual uses a personal device, or drug cupboard keys are replaced with smartphone proximity access.

Even the process of washing your hands can be made safer. Far more hotels and airports have contactless taps, toilets, and hand dryers than I have seen in hospitals. I suspect short-term cost takes precedence over safety.

But there is one big problem. Healthcare is more personal than shopping. You feel better if you have human company; you recover better with personal conversations about your care;  you want to have conversations and physical contact with your family. A simple hand on the shoulder or holding a hand is a powerful clinical treatment. Could we not be smarter than just refusing people access to their relatives?

We know that the public support a move to touchless technologies. How do we best use technology to reduce cross infection without isolating the individual physically and mentally? This is one of the big challenges of healthcare design in the era of the pandemic.

I would love to hear from you if you are thinking about how you can shift to contact lite healthcare and think we might be able to help you develop your ideas.

Matthew Cooke is Chief Clinical Officer at Capgemini. He spent most of his career working in the NHS as an emergency physician and was the National Clinical Director for Urgent and Emergency Care.

Data is a value driver, not a cost driver

Capgemini
December 22, 2020

Introduction & observations

In an ever-changing market, the cost/income ratio of financial institutions is under heavy scrutiny, linked with the ever-increasing push to support end customers in the right manner, at the right moment, and through the right channel. The key is data, which is too often seen as a cost driver. Proper attention to data is often only paid after regulatory pressure or fines. Let’s see data as a value driver, not a cost driver.

Value?

Data needs to be valued as an enterprise asset. The challenge is that there are not many generally accepted ways to put value on data, which leads to some questions:

  • What would it cost to rebuild your full data collection?
  • What would it cost to buy your data on the market (if available at all)?
  • How much more revenue could you generate if you had additional (quality) data?
  • How much cost could you save if you had perfect information at the right time?

The incremental-revenue question is the most interesting one to ask. It will help you think much more along the lines of value rather than cost.

Key examples of data as a value driver for the Financial Services industry are:

Risk management

At all financial institutions, data challenges are ever present: KYC, CDD, GDPR, Basel, and other key topics on the executive board agenda are driven or led by data. To address these challenges and comply with these regulations, data needs to be brought in order so it can underpin value with data-based, trustworthy evidence.

In this process, financial institutions are already creating a lot of value: the right information is showcased, double work is eliminated, single sources of truth finally start to exist, and a single view of the customer is slowly being created – resulting in value from data.

Cost reduction

Poor data quality can lead to additional cost, more waste, rework in business processes and undetected risks. Poor data quality in many cases is also an inhibitor for unleashing the potential of Artificial Intelligence.

In many situations, the consequences of a risk/cost-based approach lead to additional collateral damage. For example, the cost of resolving technical debt in data ecosystems becomes increasingly high as risk avoidance keeps the system unaltered, resulting in a deteriorating competitive position. Moving to a data-value-centric approach can help turn this around.

Customer focus

Moving away from risk-driven, fine-based approaches to programs like GDPR towards a holistic overview creates the opportunity to bring a better personalized and improved offer to individual customers. That makes it a completely different business case: given that early movers in digital transformation are and will be focusing on hyper-personalization, GDPR programs could become a value driver with a positive focus.

This also applies to current KYC/CDD programs. In a defensive mode, they focus on solving the issues: showcasing that you are in control of who is on your books, who your customer is, and that your customers are carrying out legal and compliant financial transactions. Yet this is also a next step on the path to serving your customer even better.

And market agility

Market developments are increasing in speed and impact, and disruptors are around the corner – not least from the companies that are most data savvy: techfins. Although they might not have in-depth financial services knowledge, they do have an abundance of data and the passion to drive value from it.

With a more value-based approach to data, financial services companies can increase the speed and quality of operations, supporting reduced time to market for new offerings and placing themselves at the forefront of (re)inventing financial services.

With innovations thriving on data

Below are a number of topics that can only be addressed if you let go of the risk/cost-based approach to data and approach it from a (business) value perspective – foundational for survival in the future:

  • Hyper personalization: shifting from (generic) customer journey thinking to hyper personalized thinking, be at the heart/mind of your client.
  • Embedded Banking/Insurance: financial services as a seamless part of other services, API based business models
  • Open X: expanding business models, shifting from single products to full services/experiences
  • License to Operate: regulatory demand for data (level of detail, frequency, coverage) will increase – do you want to respond ad hoc every time, or choose a flexible data ecosystem approach that can deal with any future demand swiftly and without friction?

Now what?

Change your mindset about data: it harbours huge business value, but harvesting this value is often blocked by working in a cost/risk-based approach. Data is often referred to as the new oil or water; look at it differently:

  • If data is the new oil, why not do additional refinement, creating much more valuable products for your clients from the raw material?
  • If data is the new water, why not make sure that it is treated as the primary source of your organization’s life?

Choose a data valuation approach that fits your organizational culture, and make the value of data tangible and widely acknowledged. Define initiatives to improve the value of data and use the incremental business value to reinvest in the next steps of the journey.

Additionally, ensure your enterprise data “liquifies” – a state in which it flows freely through the enterprise, without friction, to the point where it is needed for decision making – helping to achieve ultimate stakeholder value.

Finally, expand your horizons by considering data as a value driver. The platform economy continues to be a big opportunity for financial services companies that embrace it: sharing data with platform peers and aggregating client value by offering a tailored family of adjacent services.

So

No matter the angle, data is a value driver – it is time to acknowledge this, manage it accordingly, and harvest its yields!

Authors

Erwin Vorwerk Vice President – Insights & Data
Capgemini
Vincent Fokke Chief Technology Officer (CTO)
Capgemini FS Benelux

Agency sales model Part 4

Capgemini
December 21, 2020

In our first blog post we highlighted the opportunities and challenges of introducing the agency sales model and presented the Capgemini Invent Agency Sales Model Framework that we have developed. Our second article focused on the importance of retail, and our third article showed how to successfully scale the agency sales model.

This time, we will show which factors lead to sales increases and cost reductions in the long term, and when break-even can be expected. This discussion is based on the Step up dimension of our framework.

Figure 1: The Capgemini Invent Agency Sales Framework

New drive systems, decarbonization, autonomous driving, new forms of mobility, and new competitors: The challenges of the coming years will be manifold. At the same time, automobile manufacturers are in the middle of the digital transformation. The investments they are having to make in their own economic sustainability are immense. But can the current business model keep up with the challenges?

It is a fact that established car manufacturers today still maintain a very cost-intensive three-step sales and distribution model in which approximately 25-30% of the costs of buying a new car are attributable to sales. This is a major competitive disadvantage compared to new market entrants such as Tesla, Byton, Genesis, and Nio, or even spin-offs of established manufacturers such as Polestar or Cupra. From the very beginning, these companies have relied on new sales models such as agency sales and are realizing significantly lower sales expenses with their greenfield approach. But it is not only in terms of costs that these new concepts differ from the traditional sales approach. The direct access to customer data in the agency sales model also enables new earnings potential to be realized on the revenue side.

Of course, there are two sides to the coin here as well: To transform an established sales organization sustainably, considerable investments in organization, processes, and IT systems are required. In addition, there are usually organically grown, heterogeneous structures in the individual national sales entities. The introduction of the agency sales model can be seen in this context as an opportunity to establish harmonized processes and systems. However, this requires additional effort. Decision-makers should examine both sides and evaluate the long-term cost reduction and sales potential in addition to the one-time investment costs.

The agency sales model as a lever to increase sales

We estimate that the introduction of the agency sales model will lead to a long-term increase in sales of 1-4%.

Increase in transaction prices

Transaction prices, i.e. the actual sales prices achieved, are one of the biggest profit levers for car manufacturers. An increase in transaction prices has a 1:1 effect on the profit of the sales organization. In the classic three-step sales model, the dealer acts as an independent vendor and ultimately determines the transaction price. The maximum level for the lower price limit is determined by a fixed retailer margin plus situational sales promotion measures for a specific model or customer group.

Every year, OEMs and importers (markets) invest hundreds of millions of euros in sales promotion measures and thus have a negative influence on transaction prices in the markets affected. Due to a lack of data, however, it is not possible to systematically monitor success and optimize the measures. As a result, sales promotion measures are usually used reactively to achieve short-term sales stimulation.

In the agency sales model, the importer determines the transaction price in a market. In addition, centralized sales systems and data management across all sales levels make it possible to strategically plan transaction prices and sales promotion measures dynamically in order to achieve the highest possible transaction price. In addition to the optimization of sales promotion measures, uniform prices in a market prevent intra-brand competition. In the long run, both effects increase transaction prices.

Higher sales volumes

The wealth of data provided by the agency sales model makes it possible for the importer to move away from short-term measures aimed at selling certain vehicle models, towards holistic customer lifetime value management. The new data can be used to evaluate, in a targeted manner, how customers behave, how high their willingness to pay is, or how loyal they are to the brand, in order to generate the decisive motivation to buy at the right moment. As a result, new customers can be acquired, the turnover rate can be increased, and the churn rate reduced. All this adds to customer lifetime value and increases sales volumes in the long term.

Upselling potential

In addition to increasing transaction prices and sales volumes, upselling potential can be realized, and new business models can be implemented more efficiently. Whereas vehicle sales and digital services were separated in the traditional sales model, they are offered centrally from one source in the agency sales model. This can create a closed ecosystem in which customers can be retained throughout the entire customer life cycle. This ecosystem can also make it much easier for importers to offer customers additional products and services, digital services, and new mobility formats.

The agency sales model as a lever for cost reduction

As already mentioned, almost a quarter of the costs of car sales can be associated with distribution. The costs here are split across all three steps of the value chain. Altogether, the agency sales model can save approximately 4-6% of costs across all stages.

Cost reduction through centralization

In the traditional sales model, many functions have so far been organized in a decentralized manner, with each dealer having its own dedicated resources for marketing and customer service. Cost-intensive online sales solutions are also operated in a decentralized manner by the individual dealers. While the importer’s focus has so far been more on the administrative management of the dealer network, in the agency sales model the importer takes on an operational role requiring more diverse competencies. This includes, for example, the operation of central online stores, central lead generation, and the establishment of central customer service and marketing departments. These shared service centers reduce redundant functions in the sales organization and realize economies of scale. In addition, quality can be increased by bundling competencies and setting standards.

Cost reduction through lean and digital processes

In addition to the centralization of competencies, the leaner sales processes in the agency sales model and the harmonization of the IT system landscape also contribute to cost reduction in the sales organization. In our experience, a digital and lean agency sales process can eliminate approximately 40% of administrative tasks at dealers and importers. This is because time-consuming price negotiations and complex manual approval processes and system changes are no longer necessary. The expected increase in online sales and the provision of self-service functionalities will also reduce the burden on retailers.

Cost reduction through resource relief

Transparent and uniform prices also indirectly contribute to a reduction of effort in the retail trade. As our study revealed, an average of 2.5 retailers are visited for price negotiations before a purchase is concluded. This results in resource expenses for the trade organization, which are avoided by the provision of uniform prices in the agency sales model. In a medium-sized market, a potential saving of €20m per year can be realized through this alone.

Break-even relevance

Figure 2: Long-term cost and revenue overview after introduction of the agency model

In order to make a sound investment decision, the initial implementation costs and the potential long-term costs and sales must be compared. In our experience, the investment for a medium-sized market pays off after approximately four to five years.
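To illustrate the mechanics of such a payback calculation, here is a minimal sketch; all figures are illustrative assumptions, not estimates from our investment cases, and simply show how one-time implementation costs are weighed against the recurring cost and revenue effects described above.

```python
# Illustrative payback sketch for introducing the agency sales model in a
# medium-sized market. All figures are example assumptions, not actual estimates.
one_time_investment = 60_000_000    # EUR: organization, processes, and IT systems
annual_cost_savings = 9_000_000     # EUR: centralization, leaner processes, resource relief
annual_revenue_uplift = 5_000_000   # EUR: higher transaction prices, volumes, upselling

annual_net_benefit = annual_cost_savings + annual_revenue_uplift
payback_years = one_time_investment / annual_net_benefit
print(f"Break-even after roughly {payback_years:.1f} years")  # ~4.3 years with these assumptions
```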

Fair distribution of the expected profits is crucial for the success of the investment. To this end, decision-makers should develop a transparent remuneration and bonus model together with retailers when designing agency sales and setting up the investment case. Only with an equal partnership can the agency model successfully contribute to mastering the challenges of the coming years.

What’s next?

In addition to significant investment costs, the introduction of the agency sales model requires, above all, a deep commitment from all partners across all sales channels. A long planning horizon is essential for a sustainable and successful introduction of the agency sales model. The main drivers of the investment case are summarized below:

  • Increase in transaction prices through standardized pricing across sales levels and dynamic sales promotion measures
  • Increase in volumes through data-based and customer-specific approaches throughout the entire customer lifecycle
  • Utilization of upselling potential through the customer data acquired and the offer of vehicles and services from a single source
  • Reduction of costs by centralizing functions and minimizing redundancies
  • Use of leaner sales processes to relieve the burden on the sales system and resources

This blog was co-authored by Fabian Piechottka, Oliver Straub, and Nepomuk Kessler. Please get in touch if you have questions or need further information. We look forward to exchanging ideas on this particularly current topic.

For more insights, please also read our recently published Agency Sales Model Point of View.

What’s so spatial about asset systems for network operators?

Capgemini
December 18, 2020

What is the most appropriate information technology to maintain and store the master asset system model? We have to acknowledge the need for a network operator to access different variants of such a model. To improve consistency, a clear master-slave model is recommended, and master-slave information flows should as much as possible follow the natural lifecycle flows of a network modification.

In an earlier paper, the GIS-centric Enterprise, we argued that geographic information system (GIS) software is a key component of the enterprise ICT infrastructure for network operators, specifically for its ability to manage topological relationships via a graphical interface. Let’s rephrase the arguments in the perspective of an asset system model lifecycle.

  1. Strategic network planning requires insight into actual and projected asset system performance (power quality measurements, faults, incidents, outages) in relation to the (projected) location of capacity demand or production sites and the asset system model itself. The underlying physical asset performance (condition measurements, systematic faults, repairs) may affect overall performance of the asset system and is part of the analytical model.
  2. Asset system design conceives network extensions and enhancements based on a well-documented as-built network model and its internal or external constraints. Apart from the electrical or hydraulic characteristics, the system design is strongly geographically determined (rights of way, environmental and safety regulations, soil types, slopes, etc.). High level cost estimates can be derived from an initial asset system breakdown and balanced against initiative value on portfolio level. The investigation of alternative routes is a specific geographic analysis step included in many investment planning studies.
  3. The validated asset system designs are engineered in detail as the construction projects are being defined and prepared for construction (bills of materials, compatible units, technology choices, detailed placement, etc.). Here, the projected network model must be translated to a precise topographic linear placement design for authorities to approve and contractors to execute.
  4. Projects and construction actors plan their work based on the detailed engineering specifications and there is frequent exchange of geographical information between parties (engineering companies, civil contractors, government bodies). Co-ordination of construction work (often imposed by government) includes the exchange of information (work polygons; construction site location and timing of work) with other utilities operating on the public domain. Further optimization (in timing and location) is sought in shared trench work for multi-utility projects.
  5. As construction proceeds, as built network records are documented to enable traceability (welding information on gas mains, equipment installed or replaced, configuration settings, initial pressure or voltage measurements) and linked to the functional segment of the asset system or to the right equipment. Commissioned network modifications are promoted from the “as-it-will-be-built” status (terminology taken from Network Model Manager Technical Market Requirements: The Transmission Perspective) and integrated in the operational network.
  6. Network operations use an abstracted (schematic or geo-schematic) asset system view of the same network to take operational decisions (switching, planned outages, flushing).
  7. Outage and incident management processes need insight on network connectivity to identify the origin of a problem as well as its impact (customer minutes lost). The field workers receive detailed location information to perform an intervention in a timely and safe way.
  8. Results of patrolling and surveying activities have to be reported back and associated with the network, including their locations and the equipment they relate to. Correctly located observations and measurements are essential to the performance monitoring of the system.
  9. Customer service agents evaluating the feasibility of an access demand look at network characteristics in the vicinity of the premises to be connected and maintain the vital customer-network link.

By counting the number of occurrences of the terms location, placement, geographic, route, vicinity, etc. in this lifecycle overview, it is obvious that asset systems – at least for network operating companies – are and must be spatial.

Spatial solutions provide essential functionalities to manage networks:

  • Mapping and visualization services because the map is a natural entry point to geographical information, and through a geographical representation, it can provide insight into complex information.
  • Graphical tools as available in GIS support intelligent editing of a (versioned) network model, an interconnected structure where network facilities, equipment and customer connections are linked through cables, pipes, switches, and valves. Intelligent connectivity rules support the management of network structures according to business guidelines.
  • And finally, there is the power of geoprocessing tools to relate and combine phenomena that have no specific relationship except that they are located close to one another. Spatial analysis and decision making based on “proximity” and inference is essential for utilities in critical processes such as strategic planning or risk management.

Alternative models derived from the master will remain necessary but in a clear master-slave model as depicted here below.

The transformation to “smart” and “digital” will therefore require fundamental renewals of enterprise GIS systems to provide a solution to the linear problem statement in a smart world. Large corporate initiatives will be required, far beyond the inevitable technology upgrades to systems that are in many cases 20+ years old.

“Wearables, Whereabouts, and Roundabouts” is the title of our next article that will add the event-dimension to asset management and asset system management. In other words: how do we locate event information on the network model in order to use it in network performance and risk management activities?

This blog is part of a series of 6 under the theme “Asset Systems for the Smart Grid”:

  1. What is an asset system for a network company?
  2. Smart grid or silly tube maps?
  3. What’s so special about asset systems?
  4. What’s so spatial about asset systems?
  5. Wearables, whereabouts and roundabouts
  6. What happens on the asset system, stays on the asset system

I have more than 20 years’ experience in GIS (Geographical Information Systems) and asset management projects in utilities (water, electric, gas), telecom, and the public sector (transportation, cadastral services). You can connect with me here.

Role of service virtualization in DevOps

Capgemini
December 18, 2020

As per a DevOps survey, most delays occur during the testing phase of the SDLC. Some of the reasons are:

1. System constraints: 80% of teams experience delays because of unavailable dependent systems
2. Parallel development and testing: 56% of the critical dependencies are unavailable when the development and the testing teams need to work on them in parallel
3. Third party: 79% of teams face restrictions, time limits, or access fees on third-party services.

Service virtualization helps companies achieve DevOps goals by eliminating barriers that impede access and responses from key systems during testing.

A Gartner survey of 500 companies indicates:

  • A dramatic increase in test rates using service virtualization
  • 33% of companies reduced their test cycle times by at least 50%
  • Nearly half of respondents saw a reduction of total defects of more than 40%

Service virtualization: What is it?

Service virtualization is the concept of simulating the behavior, data, and performance of dependent systems and then creating a virtual service for each of them. The virtual services thus created behave very much like the live systems and can be used in their place. Developers, testers, and performance teams can then work in parallel, leading to faster delivery, lower costs, and higher quality of applications. Behavior simulation is carried out using virtual services, which are pieces of software that mimic application behavior in some way.

Service virtualization: How does it work?

Service virtualization has three steps:

  1. Capture: In this step, the SV listener is deployed between the application under test (AUT) and the dependent systems. The listener then does one or more of the following between the AUT and the downstream dependent systems:
    1. Records traffic between existing systems, or
    2. Creates virtual services from engineering specifications, or
    3. Draws data from sources such as log files, sample data, packet captures, etc.
  2. Process: In the process step, the service virtualization solution performs these steps:
    1. It evaluates the data and the protocol details that were captured in the capture step.
    2. It correlates them into virtual services, which are live-like models.
    3. The virtual services are the conversations of request and response pairs, which can then be used for development and testing.
  3. Model: The virtual services created in the process step are deployed in this step for the teams to use instead of the dependent systems. The model step enables:
    1. Living, breathing, “live” model
    2. Contextual and sophisticated behavior
    3. Automatic handling of dynamic behavior.
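
To make the model step tangible, here is a minimal, tool-agnostic sketch of a virtual service built with only the Python standard library: it plays back canned request/response pairs in place of an unavailable dependent system. Real service virtualization tools add recording, multi-protocol support, and dynamic behavior; the endpoint paths and payloads below are illustrative assumptions.

```python
# Minimal "virtual service" stub: plays back canned request/response pairs so the
# application under test does not need the live dependent system. Paths and
# payloads are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Conversations captured (or authored from specifications) in the capture/process steps.
CANNED_RESPONSES = {
    "/accounts/123": {"accountId": "123", "status": "ACTIVE", "balance": 2500.0},
    "/accounts/999": {"error": "ACCOUNT_NOT_FOUND"},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body and "error" not in body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "NO_STUB_DEFINED"}).encode())

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the live system.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```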

Context diagrams – before and after service virtualization

The figure below shows the context diagram before and after service virtualization.

Service virtualization benefits

Reduction in release cycle time

Virtualizing the frequently failing third-party services improves the overall test-and-release cycle time by reducing testing wait time.

Cost reduction

  • Virtualizing the transactional services reduces the cost involved in using them during the testing phase
  • Eliminates environment sharing and hence drastically reduces cost.

Quality improvement

  • Virtualizing third-party services early in the delivery cycle enables shift left of integration and NFR testing to improve quality and reduce risk
  • Allows negative test coverage
  • Provides stable test environment.

Productivity savings

  • Potential savings up to 50–60% on total productivity by applying service virtualization
  • Reproduces production defects early in the lifecycle
  • Enables parallel development and testing.

Service virtualization implementation approach

The Apps NA DAQE practice has the required expertise in implementing service virtualization and has developed the approach below for conducting an initial assessment and pilot in the customer landscape, which can help customers evaluate the return on investment from leveraging service virtualization.

For further queries and discussion, please reach out to Deepa Talwaria (deepa.talwaria@capgemini.com) and Anil Kumar (anil.j.kumar@capgemini.com) from Apps NA DevOps COE team.

Schrems II – an overview on how to proceed

Capgemini
December 18, 2020

Recently, the European Data Protection Board (EDPB) published two recommendations explaining how organizations should act on the CJEU’s Schrems II ruling. In short, this ruling invalidated the EU-US Privacy Shield and clarified multiple requirements that must be met before processing personal data outside the European Economic Area (EEA) (also referred to as cross-border data transfers).

In this blog, we have summarized the view of the EDPB and provided you with an easy-to-use handout that you can use in discussions within your organization.

Requirements for cross border data transfers

With Schrems II, the court introduced – among others – the following high-level requirements for data controllers when considering cross-border data transfers:

  • Controllers must know where personal data is processed and what (legal) mechanisms are relied on to ensure adequate protection of personal data.
  • Controllers must have a good understanding of the (legal) risks in third countries, by assessing the level of protection offered by the laws and regulations of that country and knowing whether this undermines the level of protection offered by the mechanisms on which they rely.
  • Where laws and regulations in third countries do have a negative impact on the protection of personal data and the fundamental rights of data subjects, the controller should either implement additional controls that limit these risks to an acceptable level, or suspend, end, or refrain from transferring the data to third countries.

Finally, organizations must be able to demonstrate compliance with these requirements to supervisory authorities or data subjects. This requires organizations to ensure they document all decisions, assessment results, and other relevant information that justifies their decisions.

EDPB recommendations

In “Recommendations on the European Essential Guarantees for surveillance measures,” the EDPB sets out four elements they consider to be essential guarantees that must be present in the third country when assessing the interference with rights to privacy and data protection, in the light of entailed surveillance measures.[2]

In “Recommendations on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data,” the EDPB introduces a six-step approach that may assist organizations in taking appropriate actions to maintain compliance with data protection regulations. In addition, the EDPB provides multiple use cases and examples demonstrating the EDPB’s interpretation. As these steps are the foundation for an organization’s approach, we highlighted them below.[3]

The six-step approach

The six-step approach defined by the EDPB enables organizations to tackle the challenges they were confronted with after the Schrems II ruling. The EDPB defines the following steps:

Step 1: Map your transfers

You need a good view of the scope. As such, you should start by identifying the geographic locations of the processing activities. This enables you to draft a list of third countries and the categories of data processed. There are a few challenges you should be aware of during this step:

  1. Do not forget onward data transfers. This is especially relevant for suppliers that have multiple subsidiaries in third countries, or vendors using sub-processors in third countries.
  2. Be aware that information in your data record or data processing agreements may not be up to date anymore.

NB: Going through data processing agreements? Make sure you also collect the information in step 2. Prevent double work.
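As a purely illustrative aid for this mapping step, the sketch below shows one way a transfer register entry could be structured in code; the field names and example entry are assumptions, not an EDPB-prescribed format, and a spreadsheet or records-of-processing tool would serve equally well.

```python
# Minimal sketch of a transfer-register entry for step 1 (map your transfers).
# Field names and the example entry are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataTransfer:
    recipient: str                                   # vendor or group entity receiving the data
    destination_country: str                         # third country where processing takes place
    data_categories: List[str]                       # categories of personal data transferred
    onward_transfers: List[str] = field(default_factory=list)  # sub-processors or subsidiaries
    transfer_mechanism: str = "TBD"                  # filled in during step 2 (e.g., SCC, BCR, adequacy)

transfers = [
    DataTransfer("ExampleCloud Inc.", "United States", ["HR data", "email metadata"],
                 onward_transfers=["ExampleCloud Support Ltd. (India)"]),
]

# Destinations still needing a transfer impact analysis in step 3:
print({t.destination_country for t in transfers if t.transfer_mechanism != "adequacy"})
```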

Step 2: Identify data transfer mechanisms

You should identify the mechanisms you rely on for each of your data transfers. The most common mechanisms are:

  • Adequacy Decision
  • Standard Contract Clauses (“SCC”)
  • Binding Corporate Rules (“BCR”).

Regarding the derogations in art. 49 GDPR, the EDPB notes that these have an exceptional nature, and must therefore be used restrictively.

At this moment, Adequacy Decisions are generally considered to be the most reliable. As such, the EDPB notes, for data transfers to countries for which an adequacy decision has been adopted, no further action is needed.

NB: If you choose to collect the geographic locations under step 1 primarily from data processing agreements, we suggest directly collecting information on the mechanisms as well.

Step 3: Carry out the TIA

This step requires you to carry out a transfer impact analysis (TIA) to verify the effectiveness of the means used to protect the personal data in the third country. For example, by checking whether the provisions in an SCC protect data subjects not only on paper but also in practice. The level of protection provided must be practically equivalent to the protections guaranteed in the European Economic Area (EEA).

In practice, this means you must assess whether the law, or practice in the third country may impinge on the effectiveness of the mechanism you rely on under art. 46 GDPR. To carry out such an assessment, the following information can be useful:

  • General information on processing: categories of data, purposes, data flows, file type, etc.
  • Mechanism organizations rely on for the respective data transfer.
  • Recommendations on the European Essential Guarantees for surveillance measures.
  • Information on the legal system in the third country.
  • Applicable rules and regulations.
  • Information made available by international organizations and NGOs, such as the UN.
  • Information shared by stakeholders, such as receiving party in the third country.
  • Investigations published by supervisory authorities.

Carrying out a TIA may require the involvement of many internal and external stakeholders, including Legal, IT, the CISO, and the recipient within the third country.

Step 4: Implement supplementary measures

The outcome of the assessment may demonstrate that the level of protection for individuals in the third country is lower than in the EEA. If so, you need to implement additional measures to increase the level of protection, or suspend, end, or prevent the transfer of the data. The EDPB distinguishes three types of measures: (i) contractual, (ii) technical, and (iii) organizational.

Step 5: Procedural steps if you have identified effective supplementary measures

This step details the procedural steps you should follow if you have identified effective supplementary measures. So far, the information is primarily limited to SCCs. The EDPB makes clear in this step that stakeholders should ensure that the implementation of these supplementary measures does not, directly or indirectly, contradict or undermine the level of protection offered.

Step 6: Re-evaluate at appropriate intervals

In the final step, the EDPB addresses the importance of implementing controls that ensure ongoing compliance. In practice, it is important to keep a sharp eye out and continuously monitor the effectiveness of the mechanisms. If laws and regulations in a third country change, you need to (i) be made aware of this, (ii) reassess whether the supplementary measures are still effective, and (iii) be able to respond to the matter appropriately.

Conclusion

With the publication of these two recommendations, the EDPB provides us with a first view on how to interpret the consequences of Schrems II. The recommendations clearly increase the documentation burden for data controllers. Carrying out TIAs and defining effective measures is also very challenging and requires a good understanding of all the laws and regulations in scope. Moreover, the recommendations show that many discussions are still ongoing, including the adoption of new SCCs.

What’s your organization’s strategy? Have you already agreed upon an approach internally? Use the guide below when discussing this internally or when developing your approach.

Naturally, we are also happy to assist you in developing a strategy together and finding a custom solution for your organization.

Find out more about Capgemini’s Data Protection and GDPR services by visiting: https://www.capgemini.com/service/digital-services/gdpr-readiness/data-protection-gdpr/

[1] Third countries are countries outside the EEA.

[2] Website EDPB, Recommendations 02/2020 on the European Essential Guarantees for surveillance measures, <https://edpb.europa.eu/our-work-tools/our-documents/recommendations/edpb-recommendations-022020-european-essential_en>

[3] Website EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, <https://edpb.europa.eu/our-work-tools/public-consultations-art-704/2020/recommendations-012020-measures-supplement-transfer_en>

Author


Joost Christians

Joost is a Senior Consultant in Data Protection and Cybersecurity. He operates at the intersection of Legal and Cybersecurity, advising clients on how to comply with legal requirements such as the GDPR and trade and export legislation. His knowledge and experience in privacy, data protection, and cybersecurity make him an excellent partner in developing and implementing strategies for complying with these complex regulatory requirements.

Eliminating friction drives O2C transformation

Caroline Schneider
Business Outcomes Office, Capgemini's Business Services
December 18, 2020

Over the last year, we have seen a renewed focus on removing roadblocks slowing down business functions, frustrating customers, or negatively impacting sales and costs. Finance departments often play a crucial role in contributing to, leading, and designing these initiatives to improve business outcomes.

The global health crisis has accelerated transformation plans, focusing on speedier cash collections or increased working capital. This shift pushes back-office teams into a more strategic position, especially when delivering significant outcomes within order-to-cash (O2C) functions. However, many finance teams struggle when it comes to identifying how to start the journey.

Understanding how frictions happen

Before starting a transformation journey, you need to understand how friction happens. Think about how many people in your finance department touch a single invoice or order. The process may include the pricing team, the order fulfillment team, collections, or dispute teams, along with sales or finance teams. By the time all is said and done, more than 25 people could have interacted with a single order or invoice. With each issue, each validation, query, and approval needed, business slows down and runs the risk of being incomplete or full of errors that require rework. This results in customer frustration, order delays, lower sales, and higher costs.

Unfortunately, one thing that is often overlooked is how much friction costs in the long run. Friction points, workarounds for exceptions, and process issues are accepted because this is how things have worked for years. With every point of friction in your O2C operating model, your organization loses money; every exception is wasted effort and pushes your customer closer to your competition. You may have designed processes and teams around friction without devising a plan to eliminate it for teams, customers, and partners.

Identifying where to start your transformation journey

However, by asking a few key questions and comparing your key metrics with industry benchmarks, you can identify where to begin your transformation process. Are you achieving a 95% or greater cash automatch rate? Are you at 100% invoice accuracy? Is your billing 100% automated? Are your accounts receivable less than 10% past due? Are your days sales outstanding (DSO) within a few days of your average terms? The more of these questions you answer no to, the more critical it is to review where you may have process friction. Process friction is anything that restricts process flow. The best place to start is to figure out where process friction negatively impacts results or customer experience.
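To make this self-assessment concrete, the questions above can be expressed as simple threshold checks. The sketch below (illustrative only, not a Capgemini tool) encodes each benchmark question as a rule; the field names and the dso_tolerance_days parameter are assumptions chosen to mirror the wording of the questions, not standard definitions.

# Minimal, illustrative O2C friction self-assessment (assumed names and thresholds).
from dataclasses import dataclass

@dataclass
class O2CMetrics:
    cash_automatch_rate: float   # share of cash receipts matched automatically, 0-1
    invoice_accuracy: float      # share of invoices issued without errors, 0-1
    billing_automation: float    # share of invoices generated without manual touch, 0-1
    receivables_past_due: float  # share of accounts receivable past due, 0-1
    dso_days: float              # days sales outstanding
    average_terms_days: float    # average contractual payment terms

def friction_flags(m: O2CMetrics, dso_tolerance_days: float = 5.0) -> list[str]:
    """Return the benchmark questions answered 'no' -- each is a hint of process friction."""
    flags = []
    if m.cash_automatch_rate < 0.95:
        flags.append("cash automatch rate below 95%")
    if m.invoice_accuracy < 1.0:
        flags.append("invoice accuracy below 100%")
    if m.billing_automation < 1.0:
        flags.append("billing not 100% automated")
    if m.receivables_past_due >= 0.10:
        flags.append("more than 10% of receivables past due")
    if m.dso_days > m.average_terms_days + dso_tolerance_days:
        flags.append("DSO more than a few days above average terms")
    return flags

if __name__ == "__main__":
    metrics = O2CMetrics(0.91, 0.97, 0.85, 0.14, 52.0, 30.0)
    for flag in friction_flags(metrics):
        print("Review:", flag)

Each flag returned is simply a pointer to where a closer look at process friction is likely to pay off first.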

With a successful O2C transformation initiative, you will start to see a faster flow of information with fewer exceptions, while enabling your teams to focus on higher-value rather than repetitive tasks. Within O2C, the speed at which you can collect cash changes dramatically depending on your level of process friction. The best initiatives start with a clear vision and the understanding that less friction means better flow, higher sales, and faster cash conversion. Stay tuned for more on how to achieve frictionless O2C!

To learn more about how Capgemini’s Finance Powered by Intelligent Automation can help you start your frictionless finance journey towards enhanced O2C processes and improved customer satisfaction, contact: caroline.schneider@capgemini.com

Caroline Schneider has been delivering and designing O2C solutions for clients for over 18 years. She is passionate about delivering solutions to clients to maximize their working capital through technology, automation, and industrialized process design.

The Distribution Transformation Voyage: Leveraging the Open Insurer Architecture

Capgemini
December 17, 2020

The insurance industry is at an inflection point: clients’ sales and service experience expectations are rapidly rising, while new technologies—if leveraged properly—enable new levels of service quality, operational efficiency, and consumer/agent/employee experience. In addition, the industry is being challenged by digitally native InsurTechs that have entered both B2C and B2B markets and compete with compelling products. The trust equation is increasingly driven by digital rather than by face-to-face interactions. The world is fundamentally changing around the insurance industry in a rapid and radical manner, creating a tremendous opportunity for innovative companies to emerge and improve the way insurance is produced, bought, and experienced. Underlying forces – transparency, customer-centricity, connectivity, and artificial intelligence – are driving strategies and solutions powered by digital, data, and cloud in global insurance markets.

Consumer expectations of institutional knowledge (of products, customer data, and more) and consistently high service levels that have been set by market leaders in other industries (e.g., Amazon, Apple, SoFi) have long spilled over into the insurance marketplace. Disrupters such as Lemonade, Hippo, Ethos, and Ladder Life have emerged and are setting the innovation agenda across the insurance value chain. A “digital, cloud- and data-driven insurer” mindset has the potential to deliver step-change improvements in operating performance through better risk selection, pricing, effective claims management, and experience. The “Inventive Insurer” mindset can serve as a “North Star” for a transformation journey from a traditional insurer to a digital insurer (the “Digirati”). It places a strong emphasis on improving business and operational performance by using technology as an enabler and catalyst for change.

Based on “Leading Digital” (a joint study conducted by Capgemini Invent and MIT Sloan), the “Digirati” have the right balance of digital mindset, skillset, toolset, and dataset to deliver superior operational performance and results. The following graphic shows how emerging technology is starting to empower the digital experience across the ecosystem:

Figure 5. Emerging technology

The digital journey is not just about IT or operations. It is a collective voyage, with aspects of both technology and operations. As businesses look to transform into a “blue ocean” or “greenfield” model, IT partners should consider an architecture such as the Open Digital Insurance Platform, which is an API layer that allows insurers to quickly add new capabilities from insurtechs, big tech, and other third-party or internal systems. While no solution is truly plug and play, open APIs make integration faster, cheaper, and easier. This ease of integration is critical for new distribution channels, data sources, modernized systems, and much more. The architecture gives insurers two critical advantages: agility and an interconnected ecosystem. By creating an open, digital insurance platform, insurers can move much faster, leveraging interconnected systems to bring new products to market and change existing products. They can respond to customer needs faster than ever before by putting information at the CSR’s fingertips, or better still, the customer’s fingertips. And through an interconnected ecosystem, insurers can enable faster digital adoption while also giving new life to legacy systems without necessarily requiring a full transformation or replacement.
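The article does not prescribe an implementation, but the adapter idea behind such an API layer can be sketched in a few lines. In the illustration below, every name (QuoteProvider, OpenInsurancePlatform, and so on) is hypothetical; the point is only that new capabilities from insurtechs, big tech, or internal systems plug in behind one common interface, so swapping providers requires no change to the core.

# Schematic illustration (an assumption, not the actual Open Digital Insurance Platform)
# of an API layer that routes requests to interchangeable capability providers.
from typing import Protocol

class QuoteProvider(Protocol):
    """Any capability that can price a risk exposes the same call signature."""
    def quote(self, risk_profile: dict) -> dict: ...

class LegacyRatingEngine:
    """Existing in-house system kept alive behind the same interface."""
    def quote(self, risk_profile: dict) -> dict:
        return {"source": "legacy", "premium": 120.0}

class InsurtechPricingAPI:
    """Hypothetical external partner wrapped in an adapter."""
    def quote(self, risk_profile: dict) -> dict:
        return {"source": "insurtech-partner", "premium": 104.5}

class OpenInsurancePlatform:
    """Thin API layer that routes each request to whichever capability is registered."""
    def __init__(self) -> None:
        self._providers: dict[str, QuoteProvider] = {}

    def register(self, name: str, provider: QuoteProvider) -> None:
        self._providers[name] = provider

    def get_quote(self, provider_name: str, risk_profile: dict) -> dict:
        return self._providers[provider_name].quote(risk_profile)

platform = OpenInsurancePlatform()
platform.register("legacy", LegacyRatingEngine())
platform.register("partner", InsurtechPricingAPI())  # adding a provider touches no core code
print(platform.get_quote("partner", {"age": 42, "product": "term-life"}))

The design choice illustrated here is the source of the agility described above: because every provider satisfies the same contract, the insurer can trial, swap, or retire capabilities channel by channel without re-platforming the core.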

Life insurers are beginning to adopt new, adaptive technologies, especially with regard to distribution. Through extensive use of data, carriers can increase the volume of automation via straight-through processing, enabling the business to underwrite and price better. Through data enrichment, carriers can gather additional information relating to lifestyle, general health, wellness, critical illnesses, auto, or other personal risks, capturing or even prefilling data on the customer’s behalf. This helps the insurer create a more personalized experience and a better-matched product for the customer despite less human interaction.
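As a minimal sketch of the prefill and straight-through idea, under invented assumptions (the enrichment source, field names, and toy underwriting rule below are illustrative, not an actual carrier’s rules):

# Illustrative prefill + straight-through-processing sketch (all assumptions).
def prefill_application(applicant: dict, enrichment_sources: list) -> dict:
    """Merge externally sourced data into the application so the customer types less."""
    application = dict(applicant)
    for source in enrichment_sources:
        for field, value in source.items():
            application.setdefault(field, value)  # never overwrite what the customer provided
    return application

def straight_through_eligible(application: dict) -> bool:
    """Toy rule: route simple, complete cases past manual underwriting review."""
    required = {"age", "smoker", "coverage_amount"}
    return (required.issubset(application)
            and application["coverage_amount"] <= 500_000
            and not application["smoker"])

applicant = {"age": 34, "coverage_amount": 250_000}
wellness_feed = {"smoker": False, "bmi": 23.4}  # e.g. a lifestyle/wellness data source
print(straight_through_eligible(prefill_application(applicant, [wellness_feed])))

The more fields enrichment can fill reliably, the more applications clear the straight-through rule, which is exactly the automation gain described above.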

As insurers look at their overall strategy for digital transformation, they will need to examine key components of the buying process, such as content, experience, and platforms. Customers and agents have a broad spectrum of expectations; the content and products provide insight into the insurer’s portfolio and its values with regard to customer centricity. The platforms enable an insurer to provide a more seamless experience across the customer lifecycle, including areas such as the following (a short illustrative sketch of this unified view appears after the list):

  • Policy, billing, and claims information
  • Insight into data across the customers’ value chain
  • Product information with necessary updates.
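Purely as an illustration (not a Capgemini or vendor data model), the unified view such a platform needs to expose might group policy, billing, and claims information under one customer record, so that every channel works from the same picture:

# Hypothetical data-model sketch of a unified customer view across the lifecycle.
from dataclasses import dataclass, field

@dataclass
class PolicyInfo:
    policy_id: str
    product: str
    status: str  # e.g. "in force", "lapsed"

@dataclass
class BillingInfo:
    policy_id: str
    next_due_date: str
    amount_due: float

@dataclass
class ClaimInfo:
    claim_id: str
    policy_id: str
    status: str  # e.g. "open", "paid"

@dataclass
class CustomerView:
    customer_id: str
    policies: list[PolicyInfo] = field(default_factory=list)
    billing: list[BillingInfo] = field(default_factory=list)
    claims: list[ClaimInfo] = field(default_factory=list)

view = CustomerView("C-1001",
                    policies=[PolicyInfo("P-1", "term-life", "in force")],
                    billing=[BillingInfo("P-1", "2021-01-15", 58.40)],
                    claims=[])
print(len(view.policies), "policy on file for", view.customer_id)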

The brand experience is vital to providing a customer journey that is differentiated and uplifting for the brand, despite potentially being 100% digital. It is essential to manage the journeys for each distribution channel strategically in order to define the journey and processes required for each unique channel. Distinguishing how and when customers should be engaging – and how they are actually engaging – is essential to ensuring that the process drives customers from education through quoting and illustration to sales and issuance, while minimizing not-taken and not-in-good-order (NIGO) policies.

Finally, when deciding whether to re-platform onto new digital solutions, companies will need to select a packaged solution (home-grown solutions rarely make sense with so many packages now available in the market) and determine the level of configuration required for the solution to provide the essential functions aligned to business objectives (i.e., a minimum viable product, or MVP). Selecting the right solution will be vital to achieving the digital roadmap and enhancing your company’s digital strategy for the next several decades.

This blog was co-authored with Lawrence Krasner. To continue this conversation, connect with Lawrence or me on social media. You can also write to insurance@capgemini.com