
10 tips to learn effectively in the digital economy

Capgemini
December 30, 2020

In my secondary role as the Learning and Development lead for Capgemini Insights and Data I’ve been thinking a lot about how to help our team get the most out of the many resources available to keep pace with the rapidly changing landscape and to have the right knowledge at the right time. I have some ideas on what the potential challenges could be and some tips and techniques that I use to maximize the return on my learning hours investment.

The best way to gain knowledge is through experience. However, gaining experience is a slow process. Luckily, we can get the next best thing by reading about the experiences of others. Supposedly, the most successful people read a lot. Statistics are often quoted that CEOs read 3–4 books a month. Warren Buffett, one of the most successful investors of all time and the fourth-richest person in the world, is known for reading 500 pages a day. Bill Gates is another voracious reader and gets through 50 books a year. However, even at this amazing pace, there are so many books and articles out there, and the number is growing. According to Wikipedia, around 500,000 new books are published every year in the UK and US alone. We have no hope of keeping up. So what do we do?

As Mark Twain purportedly said, “History does not repeat itself, but it does rhyme.” In his book Principles, Ray Dalio credits frequently asking “Is this another one of those?” as one of the keys to his success. If you can identify the previous pattern, you can learn which mistakes to avoid and which approaches succeed.

Without an understanding of history, a library of mental models, and a grasp of the patterns that have played out before in similar areas, it is impossible to identify whether we can learn from someone else’s experience and take a shortcut. This is especially true in IT, where trends, markets, tools, and languages evolve very quickly.

So, we often face the following potential challenges:

  • There is effectively a near-infinite amount of knowledge out there, and even Bill Gates or Warren Buffett would take 10,000 years to read the books published this year alone (see the quick arithmetic after this list).
  • We have shorter attention spans thanks to social media, which is getting us used to instant gratification. According to the BBC, our attention span reduced from 12 seconds in 2000 to eight seconds in 2017.
  • Most of the time when we read something or watch a video, we do it in a passive way and forget most of what we learned.
  • We forget to apply what we learned.
  • Even if we do remember what we read, there is a great difference between knowledge and wisdom and knowing how to apply the learnings to new situations.
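
A quick sanity check of the 10,000-year figure in the first point, using the numbers quoted above:

    \frac{500{,}000 \text{ books published per year}}{50 \text{ books read per year}} = 10{,}000 \text{ years of reading}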

Here are my top-ten solutions that I have found to work well and have refined over the years. I hope you find them useful along with the other lessons and patterns I have to share:

  1. Choose carefully what to read. Spending several hours on a book or half an hour reading an article is a serious investment in time, so spend a bit of time upfront on selection and preparation. If you want to learn about a subject then find the best books on this area. Read reviews of the books, see if you can work out the central theme or big idea of the book. Read the chapter list. Is this the best book of the genre about this subject? Is the author respected? Use Twitter and LinkedIn to follow other like-minded people and get recommendations on what to read and learn.
  2. I favor books over articles and newspapers as more care and time has been spent in curating and editing the arguments and content, and they are usually more thought through. Think of the difference between a delicious, home-cooked meal vs a quick takeaway that leaves you feeling hungry.
  3. Focus on getting deep knowledge about an area when you need it and don’t just read for the sake of it. It is easier to focus if you need the knowledge. It is even easier to forget stuff if you don’t use it.
  4. Work out why you are reading. If you are reading for pleasure, then of course, savor every word and take your time. If you are reading for knowledge, don’t be afraid to skip pages. Take notes. Look for themes. Try to summarize. Try and think of counter arguments.
  5. Don’t obsess about reading new content. Classics are called classics for a reason. The best book on management I’ve ever read, High Output Management by former Intel Chairman and CEO, Andy Grove, is from 1995.
  6. Focus on reading as a skill and work on increasing your reading speed.
  7. Look for alternative views. A great example of this is an article I found that counters my view that CEOs read a lot. Its basic ideas are: CEOs read only slightly more than the average person. Correlation does not equal causation – rich people drive more Ferraris than poor people, so to get rich do I need to drive a Ferrari? Likewise, do successful people read more, or do people who read more become successful? Books are important as a tool for social mobility; they open doors and are medicine for the brain.
  8. Try to summarize the article or book in 3–4 bullet points. When reading the above article, I first read it passively, then I tried picking out the key ideas I needed to focus on and read it more actively. Most books have a core idea.
  9. Use technology to maximize the time you can learn. Listen to audiobooks and podcasts at double speed. A few months ago, I signed up for Blinkist, which provides short summaries, or “blinks,” of books. You can read a summary in around 15–20 minutes and get a pretty good overview of the key ideas in a book. Use a Kindle or similar e-reader and highlight the key sections. Look at other people’s highlights, think about why they were highlighted, and go back over the highlights.
  10. Lastly, if you think you’ve got the message and have worked out what the book is telling you OR if it is falling short, don’t feel guilty about skipping a chapter or stopping reading.

For more info, reach out to the author, Giles Cuthbert, Head of Microsoft Data and Visualization Practice, Insights and Data UK.

Edge Computing: Leading the new wave of disruptions

Capgemini
December 29, 2020

Cloud-infused scalability into business

In the last two decades, rapid inventions in the telecommunications industry have resulted in increased speed, bandwidth, and penetration. This has led to the internet becoming a bridge between customers and businesses, disrupting traditional models and increasing the efficiency of the survivors. Digital Inclusion is all about connecting billions of people globally through the internet. Software intelligence and telecom inventions led to a profound effect called the ‘Asset-light Model’. Since the 1990s, organizations have been planning to exploit globalization and achieve top-line growth, but in scaling up, firms face serious bottlenecks in buying and managing assets.

Spotting efficiency issues in the cloud model

From there on, organizations started leasing or outsourcing assets, so the entire capex burden was transformed into periodic variable costs. This solved the problem of scaling up and upgrading assets. In the technology industry, ‘Cloud computing’ is a classic case of the ‘Asset-light Model’. I expect IT spend on cloud infrastructure to increase from $419m in 2019 to $909m in 2023, a CAGR of 21%. Cloud became possible only with wider telecom bandwidth, as huge amounts of data need to traverse multiple communication nodes globally. However, there is an inherent problem with cloud – companies need to accept the additional bandwidth costs. And though network speed and coverage have increased drastically, some latency is built into the system owing to centralized cloud storage.
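
As a quick check on that forecast, the implied compound annual growth rate over the four years from 2019 to 2023 follows from the standard CAGR formula:

    \mathrm{CAGR} = \left(\frac{909}{419}\right)^{1/4} - 1 \approx 0.21 = 21\%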

Introducing cutting ‘edge’ computing technology

To overcome the operational difficulties of cloud computing, processing or storage capability needs to move from a centralized to a decentralized topology, leading to the rise of the Internet of Things (IoT). As a few million more devices get added to the internet ecosystem, I expect data processing at the point of origination to become the new normal. This distributed computing and storage at the edges of the network is called ‘Edge computing’: data collection and computation happen close to the place and time of origination, in a decentralized manner. Tesla deployed edge computing in its cars – machine learning algorithms run in the central cloud and pass outcomes to the entire fleet, while cars equipped with processing capability make decisions on the live data feed.
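
As a minimal illustration of that split (a sketch only, not Tesla’s actual implementation), the Python snippet below assumes a hypothetical edge device that runs a locally deployed model on live sensor readings and reports only a compact outcome back to the central cloud; all function names and values are illustrative.

    # Minimal sketch of the edge pattern: inference happens on the device,
    # and only a compact outcome is reported back to the central cloud.
    import json
    import time

    def local_model(reading):
        # Stand-in for a model trained centrally and deployed to the edge device.
        return "brake" if reading["obstacle_distance_m"] < 5 else "cruise"

    def read_sensors():
        # Stand-in for the device's live sensor feed.
        return {"obstacle_distance_m": 4.2, "speed_kmh": 38}

    def send_to_cloud(payload):
        # Stand-in for an HTTPS/MQTT upload; only the outcome travels, not the raw feed.
        print("upload:", json.dumps(payload))

    reading = read_sensors()
    decision = local_model(reading)                      # decided at the edge, no round trip
    send_to_cloud({"ts": time.time(), "decision": decision})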

Edge computing offers a two-pronged cost advantage: central processing capability can be trimmed, and because data flows in multiple directions, bandwidth needs decrease. The latency inherent in the cloud when responding to customers is reduced, and distributed storage reduces the risk of concentrated cyberthreats at scale.

What do marketers gain from ‘edge computing’?

Edge computing extends the usability of data to unimaginable dimensions. By fragmenting the analytics process, the central cloud holds the core algorithms and processes information that has already been pre-processed once by the edge networks. The first level of processing happens at the consumer device, or at an aggregation of devices. Rather than pipelining crude data collected from customers, the network passes usable information on to the central cloud. Without decentralized edge processing, the cloud would pick up only fragmented customer data, losing key insights from real-time data. For instance, when a banking customer initiates a payment to a biller and the payment process breaks down, the customer can log into the application again and continue the payment from a quick link that retrieves the stored data. This highlights two important outcomes (a minimal sketch of the pattern follows the list below):

  1. Customer engagement is initiated right at data generation
  2. Personalized content curation is managed locally
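
Here is that sketch, assuming a hypothetical edge node that aggregates raw interaction events locally and forwards only usable, summarized information to the central cloud; all names and fields are illustrative.

    # Hypothetical edge node: aggregate raw interaction events locally and forward
    # only a usable summary to the central cloud.
    from collections import defaultdict

    raw_events = [
        {"customer": "C1", "step": "payment_started", "biller": "B42"},
        {"customer": "C1", "step": "payment_failed",  "biller": "B42"},
        {"customer": "C2", "step": "payment_started", "biller": "B7"},
    ]

    def summarize_at_edge(events):
        # Keep enough local state to resume an interrupted journey,
        # and pass only the distilled outcome upstream.
        state = defaultdict(dict)
        for e in events:
            state[e["customer"]]["last_step"] = e["step"]
            state[e["customer"]]["biller"] = e["biller"]
        return dict(state)

    summary = summarize_at_edge(raw_events)
    print(summary)   # the cloud sees that C1 stopped at "payment_failed" for biller B42,
                     # so the app can offer C1 a quick link to resume that payment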

From an operational perspective, tracking the customer journey across multiple channels is tedious. By locating edges across the network, journey details can be stored and retrieved from the edges in time, without losing any information. For customer experience (CX), the ultimate goal is to keep customers engaged live in the brand’s digital ecosystem. With edge computing, data processing moves closer to the customers, resulting in faster, more engaging CX. If a wealth management client searches for private assets in the mobile app, an edge processes this information and passes it to another edge, which can connect the client to the right wealth advisor in seconds, along with sharing the client’s profile with the advisor. Marketing teams will be relieved of the operational aspects of data management. For marketers, edge computing helps in discovering the true sense of real-time data analytics. Businesses will identify data-based outcomes that resonate well with customers.

Going forward…

From a marketing perspective, edge computing creates value for businesses through personalization, contextualization, and timeliness, thereby scoring well on CX and reducing the customer attrition rate. Edge computing is a coordinated effort. For instance, AT&T has partnered with cloud enterprises to help businesses adopt edge computing along with 5G networks. The benefits are maximized when device clusters, co-located close to the data origination, start sharing information to optimize decision making. Edge computing is a potential disruptor and will gain momentum as IoT devices proliferate at higher velocity.

How legacy systems are holding businesses back

Capgemini
December 29, 2020

What is the situation with legacy systems and practices? Can this hold back businesses when it comes to making the most of their data?

The most common practice we see holding organizations back is that the business has not embarked on building a modern data platform. A modern data platform must be business-value driven and provide trusted data, by design, from event to effective action. It must be repeatable and extendable – and therefore scalable. This is really hard to do, and some of the main challenges include dealing with multiple legacy systems covering ERP (enterprise resource planning), such as Oracle, SAP, and PeopleSoft, which help people manage their assets, procurement processes, projects, HR, etc. Legacy systems can also be CRM (customer relationship management), such as Siebel or Salesforce, which help organizations manage and profile their customers so they can analyze and drive up loyalty, purchasing, and retention.

These systems, which typically cost FTSE 250 businesses or public sector bodies millions to implement and support, are essential, but in many cases they are so complex, and so difficult to get data into and out of, that it is very challenging for businesses to establish cause and effect between an action and a reaction. They usually come with their own reporting system, which provides best-practice dashboards but only shows what is in that single system. Most large organizations have multiple ERP and CRM systems, which have come about through merger and acquisition activity or organically.

The difficulty of joining up data between systems, and of having common definitions and meanings for that data, is holding back progress. The unique identifier of a customer in the CRM system rarely matches the unique identifier of the same customer in the billing system or credit control system. Therefore, organizations end up being limited to the questions their accessible data can answer, instead of being able to ask the questions that could drive meaningful change. For example, without a linked set of data, how can you find out the net lifetime value of a customer once you include marketing spend and cost of acquisition? Which customers are profitable and which are loss-making? When do you break even?
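
To make the identifier problem concrete, here is a minimal sketch assuming two hypothetical extracts – a CRM table and a billing table with different customer keys – plus a curated mapping between them, from which net lifetime value can be computed per customer. Table and column names are illustrative, not taken from any particular system.

    import pandas as pd

    # Hypothetical extracts: the CRM and billing systems use different customer keys.
    crm = pd.DataFrame({"crm_id": ["CRM-1", "CRM-2"],
                        "acquisition_cost": [120.0, 300.0],
                        "marketing_spend": [80.0, 150.0]})
    billing = pd.DataFrame({"billing_id": ["B-9", "B-9", "B-7"],
                            "invoice_amount": [500.0, 250.0, 90.0]})
    # A curated mapping (master data) is what actually makes the join possible.
    id_map = pd.DataFrame({"crm_id": ["CRM-1", "CRM-2"],
                           "billing_id": ["B-9", "B-7"]})

    revenue = billing.groupby("billing_id", as_index=False)["invoice_amount"].sum()
    joined = crm.merge(id_map, on="crm_id").merge(revenue, on="billing_id")
    joined["net_lifetime_value"] = (joined["invoice_amount"]
                                    - joined["acquisition_cost"]
                                    - joined["marketing_spend"])
    print(joined[["crm_id", "net_lifetime_value"]])
    # CRM-1: 750 - 120 - 80 = 550 (profitable); CRM-2: 90 - 300 - 150 = -360 (loss-making)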

The recent drive to Software-as-a-Service solutions has actually made this problem even harder. So-called legacy systems typically ran on servers owned by the organization, so IT teams could access all the data in the ERP and CRM systems and build data warehouses that joined the data and supported analytics. The recent trend for cloud-based SaaS services such as Oracle Fusion, Salesforce, and Microsoft Dynamics ERP and CRM means the system is run and managed by the vendor for you, but it also means you do not have access under the hood to all the data. Data needs to be extracted through APIs, which can be costly and slow.
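
As an illustration of that extraction pattern (not any particular vendor’s API), the sketch below assumes a hypothetical paginated REST endpoint and shows why pulling data this way can be slow: records arrive one page at a time, subject to rate limits.

    import time
    import requests

    BASE_URL = "https://saas.example.com/api/v1/customers"   # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}

    def extract_all(page_size=200):
        # Pull records page by page; a large object can take many round trips.
        records, page = [], 1
        while True:
            resp = requests.get(BASE_URL, headers=HEADERS,
                                params={"page": page, "per_page": page_size})
            resp.raise_for_status()
            batch = resp.json()
            if not batch:
                break
            records.extend(batch)
            page += 1
            time.sleep(0.5)   # stay within the vendor's rate limit
        return records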

How aware are businesses of this issue? Do they accept there is a problem?

“Businesses are highly aware of how difficult it is to join up data, get common definitions, analyze cause and effect, and predict the future.

“Managing data is not optional, and GDPR means all businesses have to have good data management practices in place to avoid fines. The GDPR (General Data Protection Regulation) sets a maximum fine of €20 million (about £17.8 million) or 4% of annual global turnover – whichever is greater – for infringements.

“This change is here to stay, though, and the increasing drive to the cloud, the pay-as-you-go model, and opex over capex is also here to stay. The key is what to do about it, and whether the investment is made to build the tools and drive the organizational change.”
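
To make the fine cap quoted above concrete, the rule can be written as

    \text{maximum fine} = \max\left(\text{€}20\text{m},\ 0.04 \times \text{annual global turnover}\right)

so, for an illustrative company with €1bn of annual global turnover, the cap is max(€20m, €40m) = €40m.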

How can businesses go about updating these? What skills and investment would they require to build platforms and processes that can help them make better use of data?

Most organizations we see are migrating from the expensive, on-premises, monolithic systems to more agile, cloud-based, Software-as-a-Service packages to manage their processes. Our most frequent conversations with customers are about how to migrate from the old to the new cloud-based technology.

Enabling organizations to make better use of data requires four things:

  1. Build a modern data platform that enables agile delivery and supports self-service, reporting, and analytics. When a global soft-drink manufacturer wanted to increase its insights and analytics capabilities and make AI viable at scale, we helped establish a single data lake for organization-wide use. Now, data is always available to business users within a 30-minute timeframe. The solution’s capabilities are being extended to a range of business units and to external partners, such as bottlers. As a result, the client is now much better positioned to realize business value through collaboration around data.
  2. Adopt a data-driven culture and mindset. Traditional methods are too slow to deliver value. Collaboration across the key roles involved in delivering data pipelines and analytics must be driven up, streamlining the flow from requirements definition through development to tracking value.
  3. Invest in data trust services. Data management means enabling a strong data catalogue so people know what is available; embedding data quality practices into the platform; automating data lifecycle management so only the most useful data is used; having a strong reference and master data management solution that allows data to be joined between systems; and building in data privacy and security.
  4. Focus on automation. Data management teams need modern, interoperable tools to acquire, organize, prepare, and analyze/visualize data. With large organizations having hundreds and sometimes thousands of frequently changing data sources, tools that can unify data using machine learning, apply fuzzy-matching algorithms to identify patterns, and apply data quality rules are becoming more and more important and are maturing (a minimal fuzzy-matching sketch follows this list).
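
Here is that sketch, using only the Python standard library; the names and threshold are illustrative, and production tools combine several such signals, often with machine learning.

    from difflib import SequenceMatcher

    def similarity(a, b):
        # Ratio in [0, 1]; 1.0 means the strings are identical.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    crm_names = ["Acme Holdings Ltd", "Bright & Co"]
    billing_names = ["ACME Holdings Limited", "Brite and Co", "Unrelated GmbH"]

    THRESHOLD = 0.75   # illustrative cut-off
    for name in crm_names:
        best = max(billing_names, key=lambda b: similarity(name, b))
        score = similarity(name, best)
        if score >= THRESHOLD:
            print(f"probable match: {name!r} <-> {best!r} ({score:.2f})")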

Underpinning all of this is artificial intelligence: the patterns in the data are often far too complex for a human to identify, so use AI to drive insights.

What can organizations do to develop consistency in their working practices while being flexible in adapting to different environments, and technological or economic changes?

“The key to the success of such an initiative is the creation of the correct operating model and organization structure to define and embed best practices into the different parts of the company.

“In 2001, the analyst firm Gartner started recommending that organizations create BICCs (BI competency centers). A BICC would coordinate the activities and resources for an organization. It has responsibility for the governance structure for BI and analytical programs, projects, practices, software, and architecture.

“In recent years, this has transformed into the analytics competency center (ACC). An ACC follows a more strategic objective: to transform the company into a data-driven organization, build analytics expertise, formulate a data strategy, identify use cases for data mining, establish and manage a platform, and drive the general adoption of analytics across the organization. The focus of an ACC is the adoption of self-service and empowering the business. The combination of an ACC and DataOps provides the way for modern organizations to ingest, unify, and analyze their data.”

How is this likely to shape up in the future? How important is it that companies adapt and evolve, and what are the risks for those that can’t?

Not being data driven is quite simply not an option in the digital economy. We have all seen how industries can be completely transformed by new digitally native entrants – look at Uber and Airbnb. We have also seen how these digitally native businesses have transformed customer expectations: Amazon Prime’s next-day or same-day delivery has made people used to that level of service, and we now expect the same from legacy businesses.

“COVID-19 has accelerated the decline of the high street and the consumer shift to online. Fifty-nine percent of consumers worldwide said they had high levels of interaction with physical stores before COVID-19, but today less than a quarter (24%) see themselves in that high-interaction category.”

Managing the supply chain, driving down costs, managing human capital and assets are all critical to the survival of legacy businesses. Data must be treated as an asset and managed accordingly. Businesses that fail to meet this challenge will have a higher cost base and be less competitive than businesses that do.

For more info, reach out to the author, Giles Cuthbert, Head of Microsoft Data and Visualisation Practice, Insights and Data UK.

Decision frameworks help make smart choices when migrating business applications to serverless services

Capgemini
December 23, 2020

From the pandemic to political uncertainties, today’s business environment is unprecedented, and companies are under increasing pressure to cut costs while increasing agility. This has made cloud adoption and serverless architectures more important than ever. The promise of the serverless paradigm includes high availability, easy scalability, no infrastructure to manage, and the ability for organizations to pay only for the services they require, only as they consume them. Taken together, these benefits mean organizations are better able to respond to changing business scenarios – whether that’s seizing new opportunities or mitigating risks.

There are many choices to be made when developing a serverless solution. Not only is there a range of functional and non-functional requirements to satisfy, but there are also many cloud services to choose between. For example, in addition to Lambda, AWS offers more than 50 serverless services, plus more than 175 other cloud services – and that’s just one vendor.

The benefits of a decision framework

When I speak with decision makers, I’m often asked how they should decide which serverless services to use to build the business solutions they need and when to use them. Our recommended approach is to create a decision framework. A properly crafted decision framework is meant to help organizations take the right approach to migrating applications to the cloud. When done well, it:

  • Recognizes that one size does not fit all. A decision framework will help identify which cloud-based solution is right for the needs at hand, helping organizations go beyond Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS) solutions and embrace the benefits of serverless. For example, while a single-page application will be a good fit for serverless if all actions and events are handled via APIs, a NodeJS-backed application may be a better fit on a PaaS platform (a simple illustrative rule set appears after this list).
  • Drives architecture maturity. Serverless is the best-in-class architecture for cloud-based solutions. A decision framework helps organizations uncover opportunities to embrace serverless that they may not have recognized, thereby driving organization-wide maturity.
  • Enables a forward-looking architecture. Serverless allows organizations to quickly build out new capabilities using solution building blocks.  Developers can focus on building out functional capabilities while the cloud provider handles everything else.
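
The first bullet above can be expressed as a simple rule set. The sketch below is purely illustrative of the kind of first-pass triage a decision framework might encode – it is not Capgemini’s actual framework, and the workload attributes are assumptions.

    # Illustrative first-pass triage of a workload; a real framework weighs many more
    # functional, non-functional, and technical-constraint criteria.
    def recommend_platform(workload):
        if workload.get("all_actions_via_api") and workload.get("event_driven"):
            return "serverless"   # e.g., a single-page app backed entirely by APIs and events
        if workload.get("long_running_backend"):
            return "PaaS"         # e.g., a NodeJS-backed application with persistent processes
        if workload.get("needs_os_level_control"):
            return "IaaS"         # technical constraints rule out higher abstractions
        return "review against non-functional requirements"

    print(recommend_platform({"all_actions_via_api": True, "event_driven": True}))  # -> serverless
    print(recommend_platform({"long_running_backend": True}))                       # -> PaaS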

Key considerations for a serverless decision framework

A decision framework helps an organization address key issues, including its:

  • Functional requirements, which dictate the structure and composition of the technology solution, including user experience applications, the business layer, and analytics.
  • Non-functional requirements, which help an organization drive differentiation in the market and inorganic growth. They include usability, availability, scalability, accessibility, maintainability, and fault tolerance.
  • Technical constraints for the technology used such as platform, processor, memory, and storage requirements.

A comprehensive decision framework should go beyond just the application layer of a solution – it should span everything from the application layer to the data analytics layer, as illustrated below.

A serverless decision framework in action

By taking the above requirements into consideration, a decision framework helps organizations determine when to use serverless. It also helps organizations determine the best approach when serverless requirements are not met. The below diagram illustrates how to determine whether or not serverless is the right fit as well as which alternatives to consider.

Capgemini has built an exhaustive decision framework covering all the layers of an IT solution to help organizations decide the right mix of AWS serverless cloud services and build a serverless solution that will allow them to succeed in today’s business environment and be ready for the future.

Please contact me for more information about serverless or learn more about our experience, case studies, and thought leadership in this area.

Author


Prafull Surana

Prafull is a technology leader, an architect, and a seasoned technologist with 20 years of IT experience.

The next big thing to boost your innovation – the venture client model

Capgemini
December 22, 2020

Key takeaways

  • External and hybrid sources of innovation will become increasingly important – by 2025, startups will rank as a top innovation source for companies, with great potential for sustainability.
  • Venture client units are a breakthrough corporate venturing vehicle, which if operated with a specialized venture client process and with dedicated resources, enables corporations to benefit strategically from top startups.
  • For corporates, the main advantage of a venture client unit is that it enables the entire organization to gain measurable competitive advantage from more and better startups – at lower risk, and without the capital requirements, of traditional corporate venture capital programs.
  • For startups, the main advantages are quickly gaining high-profile reference clients, whose expert user feedback is critical for iterating and improving their products, and an increased valuation resulting from greater traction and revenues – all without additional dilution.

Working towards a sustainable future

In today’s fast-moving world, the imperative to stay relevant forces companies to continuously reinvent themselves. New competitors transform from promising startups to unicorns at lightning speed, while rapidly shifting customer expectations, political regulations, and global trends determine the market of tomorrow. Moreover, the ongoing global COVID-19 pandemic is changing our world massively, requiring companies to make the right bets today in order to secure a sustainable future.

Startups will become chief sources of innovation by 2025 – particularly focused on sustainability.

According to a joint study from MIT and Capgemini from 2020, external sources of innovation are becoming more and more important for companies. Over the next five years, startups will rank as top innovation sources for companies. Moreover, hybrid forms such as innovation labs are especially important as an interface between internal and external innovation. This is critical for companies that do not have adequate innovation capability internally and are therefore forced to source externally. As a result of this development, traditional R&D and internal business units’ employees will become less relevant as innovation sources within the next five years (see Figure 1).

Sustainability tops the agenda of many customers and therefore of corporates. Corporates often rely on startups, which are particularly good at accelerating (digital) innovation in the area of sustainability. According to a Europe-wide survey by TechFounders, sustainability is a priority for around 90% of startups. Moreover, a new study found that, even though climate-tech startup investments by VCs and corporates are still low compared to overall investments, their growth rate is five times higher. For corporates, startups’ innovation potential is a great opportunity to support their sustainability initiatives and targets. However, with startups gaining importance and corporates heavily focusing on sustainability, this still leaves the question of how. How can corporations gain strategic benefits from the world’s best startups, quickly and at measurable risk?

A new model of corporate venturing has emerged.

The concept of the venture client model is new, yet simple. Instead of acquiring a non-controlling equity stake, the company buys the startup’s product (see Figure 2). The corporate hence becomes a venture client rather than a venture investor, with the objective of harnessing a strategic benefit. The strategic benefit emerges from applying the startup’s product to improve an existing product/service, process, business model, or even an entire business – or to create a new one. The startup solution is applied in a real business environment immediately, without being incubated or accelerated.

Obviously, identifying top startups and buying and transferring their solutions so that they generate a measurable positive impact is highly complex. Hence, just being a venture client is not enough. The corporate needs to implement a sophisticated model, i.e., organizational venture client capabilities. Such a venture client model was first established at BMW by Gregor Gimmy (see Harvard Business Review) in 2015 with the BMW Startup Garage. Over the years, many global corporations have followed suit, including Bosch, BSH Home Appliances, LafargeHolcim, and the German insurer Signal Iduna. Beyond Germany, other corporations have adopted the venture client approach, such as the Italian energy company Enel and Spain’s Telefónica.

The venture client proactively solves problems

Here is an example of how a good venture client unit operates: A business unit or functional unit raises a request for support from the venture client unit to resolve a complex challenge via a startup, as it cannot solve the problem with internal or incumbent external solution providers. These challenges can be manifold (e.g., growth, efficiency) and may emerge anywhere in the company (e.g., R&D, IT, manufacturing, or logistics). The problem is strategic, as it impacts the competitiveness of the company, such as a sustainability challenge to measure CO2 emissions in real time caused by intra-factory logistics.

The venture client unit is engaged by the venture client (the factory logistics manager) to enable the whole process from identifying to adopting the best startups, like an HR department enables recruiting.

The process starts with an in-depth analysis of the challenge, to make sure that it is strategic and that startups may indeed have suitable solutions. If this is the case, the venture client unit sources for startups with relevant solutions. In our example, the venture client unit would source for startups with sensor technology, hardware and software, that detects CO2 levels inside a factory. If the problem has relevance in the startup ecosystem, as is probably the case in sustainability, the team will likely identify over 100 startups.

The venture client unit team then filters the best startups from that list. This results in five to 20 startups that are then analyzed in depth. This assessment, led by the venture client unit, involves the venture client, for example an R&D engineer or logistics manager. Once the best startup has been selected by the venture client, the company buys a small sample of the startup product. In our example, the company would buy a sample of sensors large enough to monitor CO2 levels in a small part of the factory.

The purchased products are applied in a real setting over the next two to four months. This is essential to generate real data to validate whether the startup technology delivers the expected results. The data then serves to confirm whether the startup solution meets the KPIs required for the last step: adoption, which is realized via partnership or M&A. The results from piloting the startup product in a real use case are key to enabling the adoption decision.

In most cases, adoption will take the form of a partnership (e.g., licensing the startup technology). However, the venture client may also choose to acquire the whole startup, if control is a condition for generating and defending the intended competitive advantage. In addition to this active solving of known problems, the venture client unit also constantly looks out for strategically relevant startups in order to anticipate problems the corporation is unaware of. When a top startup is detected, the venture client unit proactively contacts one or more business units that could potentially benefit from its solution. Once the relevance has been confirmed, the process described above repeats itself.

The venture client model brings significant benefits for both corporates and startups (Figure 3).

Our view on innovation and the venture client model.

At Capgemini Invent, we believe that startups are a key solution, but not the only one. A state-of-the-art venture client unit with high-quality, specialized processes and dedicated resources will make a big difference in enabling companies to benefit strategically from the best startups. To ultimately succeed, however, most companies will have to continuously scrutinize, transform, and adapt their corporate innovation systems as a whole. As shown in Figure 4, our innovation operating model involves three tiers: purpose, approach, and tactics. Each tier’s components serve as gears in the innovation machine, and every one of them is an important piece of the puzzle. The venture client model has the potential to significantly influence several of these components and serve as a catalyst for the whole innovation system. Our innovation experts at Capgemini Invent are more than happy to support you in establishing your venture client unit and skills. Feel free to get in touch.

Thanks to the co-authors Manuel Wiener and Phillip Schneider.

Authors

Jens Hofmeister Head of Central Region
Fahrenheit 212
Part of Capgemini Invent
Olivier Herve Vice President – Innovation and Strategy
Capgemini Invent
Kevin Loeffelbein Director – Smart Mobility & Business Model Innovation
Capgemini Invent

Would vaccination passports guarantee data privacy?

Capgemini
December 22, 2020

One of my friends recently drew my attention to an article in Time magazine, in which International Olympic Committee (IOC) President Thomas Bach said that COVID-19 vaccinations could be required for athletes and fans to attend the postponed Tokyo Olympics. This is set against a backdrop where vaccines against COVID-19 are being developed and (at the time of writing) are set to be given to the public. To limit the spread of the disease at an event that vast numbers of people are expected to attend, drastic measures are being considered so as not to risk another massive increase in cases worldwide.

Given that there are several global events planned for 2021 and assuming that vaccination passports provide a solution, how could they be implemented appropriately?  What regulations should be complied with to protect personal information and reduce the likelihood of the infringement of human rights?

There are many questions to be answered, some of which focus on the governance of personal data.

Reasoning

IATA recently announced that it was creating a digital platform to facilitate the sharing of vaccination information called the IATA Travel Pass. The reasoning for this is: “to re-open borders without quarantine and restart aviation governments need to be confident that they are effectively mitigating the risk of importing COVID-19. This means having accurate information on passengers’ COVID-19 health status.”

It seems prudent that a collective definition of why the data is being gathered across the world should be adopted. If the reason is simply to present proof of having had a vaccination, that in itself is quite different from requiring presentable proof of immunity. Such a requirement should, at the minimum, include a follow-up test to prove that the individual has produced the required protective antibodies.

Compliance

The concept of data sovereignty means that personal information (including health data) is usually governed by regulations that afford some protection to the citizens of the region where the data is stored. Examples of this include:

  • HIPAA (USA)
  • PIPEDA (Canada)
  • GDPR (EU)
  • Data Privacy Act (Philippines)

However, how do you apply the principles of health data governance internationally? What standards should be used to protect the data? How should it be stored, and what should happen to it when it is no longer needed? The standard requirements of asset management and data governance must be observed when processing personal data, even in a global context.

Integrity

In order to have a trusted worldwide system that can prove that an individual has had a vaccination, it would seem logical that such a system should have traceability built in. This would imply that an assertion that an individual has had a vaccination can be traced back to a point in time where the injection was administered (and, potentially, which type of vaccination it was – especially given that different vaccines have different efficacy rates).
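
One possible way to build in such traceability (a sketch only, not how the IATA Travel Pass or any national scheme actually works) is for the administering clinic to sign a timestamped record, so that any later assertion can be verified against it. The example below uses a shared-secret HMAC purely for brevity; a real system would use public-key signatures.

    import hashlib
    import hmac
    import json

    CLINIC_KEY = b"clinic-secret-key"   # illustrative; a real scheme would use asymmetric keys

    record = {
        "subject_id": "ABC123",
        "vaccine": "vaccine-X",                    # which product was given matters, given differing efficacy
        "administered_at": "2021-03-01T10:15:00Z",
        "clinic": "Clinic 42",
    }

    def sign(rec, key):
        payload = json.dumps(rec, sort_keys=True).encode()
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

    def verify(rec, signature, key):
        return hmac.compare_digest(sign(rec, key), signature)

    sig = sign(record, CLINIC_KEY)
    print(verify(record, sig, CLINIC_KEY))    # True: the assertion traces back to the administration event
    record["vaccine"] = "something-else"
    print(verify(record, sig, CLINIC_KEY))    # False: tampering is detectable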

Administration

How should such a system be administered? Should it be on a country-by-country basis, given that each nation could claim ownership of said data and how it should be used? If the aviation industry (IATA) is setting up its own system, should this be a process that is extended to travel across land borders? How would such a system be applied consistently in different countries, with varying levels of social and technical infrastructure, so that travelers around the world have equal access to transport?

Conclusion

In the next 12 months, the world has an expectation (or hope) to return to business as usual, including international travel. That includes the following sporting events postponed from 2020:

  • Football European Championships (Europe)
  • Copa América (Argentina and Colombia)
  • Ryder Cup (USA)
  • Olympic Games (Japan)

If we are going to reduce the likelihood of a return to the levels of infection seen throughout 2020, a number of measures will have to be implemented. Ideally, these should enable equal access to travel, irrespective of the economic background of one’s country. A vaccination passport may well be one of these measures, but to keep the pandemic in check it will require a truly collaborative approach to the governance of data, matching that shown by the global medical community, to make a real difference.

To learn about Capgemini Data Protection and GDPR Services, visit: https://www.capgemini.com/service/digital-services/gdpr-readiness/data-protection-gdpr/

References

http://www.bbc.com/travel/story/20200831-coronavirus-will-you-need-an-immunity-passport-to-travel

https://time.com/5912335/tokyo-olympics-vaccine/

Contact lite Healthcare

Capgemini
December 22, 2020

Over the last few months, we have seen many sectors respond to the cross-infection risk of COVID-19. What can we learn from them? How can our new focus on emerging technologies such as vocal interfaces, facial recognition, and mobile-based applications support improved healthcare in a pandemic era?

Infection prevention and control is a vital aspect of healthcare. The use of good handwashing and appropriate personal protective equipment has always been paramount. But the focus was previously on preventing cross infection between patients or between carer and patient. We have never before had the same intense focus on transfer between carers.

In other sectors, the importance of the hierarchy of hazard controls is widely recognized – elimination, substitution, engineering, administrative controls, PPE. This was developed in the chemicals industry where it was a one-way flow, protecting the worker from a hazardous substance. In healthcare, it is more complex. The hazardous substance, for instance a virus, still exists but we need to protect individuals from the virus that is carried by people as well as physical objects.

In shops, we regularly see physical barriers; in restaurants, there are often screens between tables; in hospitals, there are often screens between patients – but I rarely see them between staff at workstations. Every time a person makes contact with a surface, there is the potential to deposit or pick up an infectious agent. A key concept in retail has been “contactless” – to me, one of the biggest lifestyle changes is no longer carrying cash. A single patient will come into contact with many staff, and with many objects touched by staff: can this be reduced? Devices can warn you of your proximity to others and can analyze location data to assist the redesign of one-way flow systems, which are used in manufacturing plants but not in hospitals.

Touchless delivery of supplies from storerooms to the patient could reduce human contact; autonomous trollies could carry drugs from pharmacy, linen for the laundry, food from the kitchens, directly to the patient’s room. Ordering these supplies can mirror online ordering with which we are so familiar, with minimal human contact.

Can we deliver touchless care? Since the time of Osler, medical assessment has depended on taking a history, examining, and then undertaking tests. Taking the history can be contactless (video consults and phone consults are now commonplace). But can the examination be replaced? Contactless vital-sign observations are now possible, and Bluetooth stethoscopes that can be held by the patient already exist, but can we replace palpation? Many tests could be remote or robotic. Robots are already used for remote surgery, but how long before we have bedside robots that can put up a drip, change your catheter, or give you a wash?

Staff who have been with a patient then touch equipment that will be used by other staff. Can we copy the example of banking with contactless payment systems? For example, instead of a shared, dirty keyboard, each individual uses a personal device, or drug-cupboard keys are replaced with smartphone proximity access.

Even the process of washing your hands can be made safer. Far more hotels and airports have contactless taps, toilets, and hand dryers than I have seen in hospitals. I suspect short-term cost takes precedence over safety.

But there is one big problem. Healthcare is more personal than shopping. You feel better if you have human company; you recover better with personal conversations about your care;  you want to have conversations and physical contact with your family. A simple hand on the shoulder or holding a hand is a powerful clinical treatment. Could we not be smarter than just refusing people access to their relatives?

We know that the public supports a move to touchless technologies. How do we best use technology to reduce cross infection without isolating the individual physically and mentally? This is one of the big challenges of healthcare design in the era of the pandemic.

I would love to hear from you if you are thinking about how you can shift to contact lite healthcare and think we might be able to help you develop your ideas.

Matthew Cooke is Chief Clinical Officer at Capgemini. He spent most of his career working in the NHS as an emergency physician and was the National Clinical Director for Urgent and Emergency Care.

Data is a value driver, not a cost driver

Capgemini
December 22, 2020

Introduction & observations

In an ever-changing market, the cost/income ratio of financial institutions is under heavy scrutiny, linked with the ever-increasing push to support end customers in the right manner, at the right moment, and through the right channel. The key is data, which is too often seen as a cost driver. Proper attention to data is often paid only after regulatory pressure or fines. Let’s see data as a value driver, not a cost driver.

Value?

Data needs to be valued as an enterprise asset. The challenge is that there are not many generally accepted ways to put a value on data, which leads to some questions:

  • What would it cost to rebuild your full data collection?
  • What would it cost to buy your data on the market (if available at all)?
  • How much more revenue could you generate if you had additional (quality) data?
  • How much cost could you save if you had perfect information at the right time?

The incremental-revenue question is the most interesting one to ask. It will help you think much more in terms of value rather than cost.

Key examples of data as a value driver for the Financial Services industry are:

Risk management

At all financial institutions, data challenges are ever-present: KYC, CDD, GDPR, Basel, and other key topics on the executive board agenda are driven or led by data. To address these challenges and comply with these regulations, data needs to be put in order and value underpinned with data-based, trustworthy evidence.

In this process, financial institutions are already creating a lot of value: the right information is showcased, double work is eliminated, single sources of truth finally start to exist, and a single view of the customer is slowly being created – resulting in value from data.

Cost reduction

Poor data quality can lead to additional cost, more waste, rework in business processes and undetected risks. Poor data quality in many cases is also an inhibitor for unleashing the potential of Artificial Intelligence.

In many situations, the consequences of a risk/cost-based approach lead to additional collateral damage. For example, the cost of resolving technical debt in data ecosystems becomes increasingly high as risk avoidance keeps the system unaltered, resulting in a deteriorating competitive position. Moving to a data-value-centric approach can help turn this around.

Customer focus

Move away from the risk-driven, fine-based approach to programs like GDPR towards a holistic overview and the opportunity to bring a better-personalized, improved offer to individual customers. This makes for a completely different business case: with early movers in digital transformation focusing now and in the future on hyper-personalization, GDPR programs can become a value driver with a positive focus.

The same applies to current KYC/CDD programs, which run in a defensive mode focused on solving issues and showcasing that you are in control of who is on your books, who your customer is, and whether your customers’ financial transactions are legal and compliant. Yet this is also a next step on the path to serving your customers even better.

And market agility

Market developments are increasing in speed and impact, and disruptors are around the corner – not least the companies that are most data savvy: techfins. Although they might not have in-depth financial services knowledge, they do have an abundance of data and the passion to drive value from it.

With a more value-based approach to data, financial services companies can increase the speed and quality of operations, reduce the time to market of new offerings, and be at the forefront of (re)inventing financial services.

With innovations thriving on data

Below are a number of topics that can only be addressed if you let go of the risk/cost-based approach to data and approach it from a (business) value perspective – foundational for survival in the future:

  • Hyper personalization: shift from (generic) customer-journey thinking to hyper-personalized thinking; be at the heart and mind of your client.
  • Embedded Banking/Insurance: financial services as a seamless part of other services; API-based business models.
  • Open X: expanding business models, shifting from single products to full services/experiences.
  • License to Operate: the regular demand for data (level of detail, frequency, coverage) will increase – do you want to handle it ad hoc every time, or choose a flexible data ecosystem approach that can deal with any future demand swiftly and without friction?

Now what?

Change your mindset about data: it harbours huge business value, but harvesting this value is often blocked by a cost/risk-based approach. Data is often referred to as the new oil or the new water – so look at it differently:

  • If data is the new oil, why not do additional refinement, creating much more valuable products for your clients from the raw material?
  • If data is the new water, why not make sure that it is treated as the primary source of your organization’s life?

Choose a data valuation approach that fits your organizational culture, and make the value of data tangible and widely acknowledged. Define initiatives to improve the value of data and use the incremental business value to reinvest in the next steps of the journey.

Additionally, ensure your enterprise data becomes liquid – a situation where it flows freely and without friction through the enterprise to the point where it is needed for decision making, helping to achieve ultimate stakeholder value.

Finally, expand your horizons by considering data as a value driver. The platform economy continues to be a big opportunity for financial services companies that embrace it: sharing data with platform peers and aggregating client value by offering a tailored family of adjacent services.

So

No matter the angle, data is a value driver – it is time to acknowledge this, manage data accordingly, and harvest its yields!

Authors

Erwin Vorwerk Vice President – Insights & Data
Capgemini
Vincent Fokke Chief Technology Officer (CTO)
Capgemini FS Benelux

Agency sales model Part 4

Capgemini
December 21, 2020

In our first blog post we highlighted the opportunities and challenges of introducing the agency sales model and presented the Capgemini Invent Agency Sales Model Framework that we have developed. Our second article focused on the importance of retail, and our third article showed how to successfully scale the agency sales model.

This time, we will show which factors lead to sales increases and cost reductions in the long term, and when break-even can be expected. This discussion is based on the Step up dimension of our framework.

Figure 1: The Capgemini Invent Agency Sales Framework

New drive systems, decarbonization, autonomous driving, new forms of mobility, and new competitors: The challenges of the coming years will be manifold. At the same time, automobile manufacturers are in the middle of the digital transformation. The investments they are having to make in their own economic sustainability are immense. But can the current business model keep up with the challenges?

It is a fact that established car manufacturers today still maintain a very cost-intensive three-step sales and distribution model in which approximately 25–30% of the costs of buying a new car are attributable to sales. This is a major competitive disadvantage compared to new market entrants such as Tesla, Byton, Genesis, and Nio, or even spin-offs of established manufacturers such as Polestar or Cupra. From the very beginning, these companies have relied on new sales models such as agency sales and are realizing significantly lower sales expenses with their greenfield approach. But it is not only in terms of costs that these new concepts differ from the traditional sales approach: direct access to customer data in the agency sales model also enables new earnings potential to be realized on the revenue side.

Of course, there are two sides to the coin here as well: To transform an established sales organization sustainably, considerable investments in organization, processes, and IT systems are required. In addition, there are usually organically grown, heterogeneous structures in the individual national sales entities. The introduction of the agency sales model can be seen in this context as an opportunity to establish harmonized processes and systems. However, this requires additional effort. Decision-makers should examine both sides and evaluate the long-term cost reduction and sales potential in addition to the one-time investment costs.

The agency sales model as a lever to increase sales

We estimate that the introduction of the agency sales model will lead to a long-term increase in sales of 1-4%.

Increase in transaction prices

Transaction prices, i.e., the actual sales prices achieved, are one of the biggest profit levers for car manufacturers. An increase in transaction prices has a 1:1 effect on the profit of the sales organization. In the classic three-step sales model, the dealer acts as an independent vendor and ultimately determines the transaction price. The maximum discount – and hence the lower price limit – is determined by the fixed retailer margin plus situational sales promotion measures for a specific model or customer group.

Every year, OEMs and importers (markets) invest hundreds of millions of euros in sales promotion measures and thus have a negative influence on the transaction prices in the markets affected. Due to a lack of data, however, it is not possible to systematically monitor success and optimize the measures. This leads to the fact that sales promotion measures are usually used reactively to achieve short-term sales stimulation.

In the agency sales model, the importer determines the transaction price in a market. In addition, centralized sales systems and data management across all sales levels make it possible to strategically plan transaction prices and sales promotion measures dynamically in order to achieve the highest possible transaction price. In addition to the optimization of sales promotion measures, uniform prices in a market prevent intra-brand competition. In the long run, both effects increase transaction prices.

Higher sales volumes

The wealth of data provided by the agency sales model makes it possible for the importer to move away from short-term measures aimed at selling certain vehicle models, towards holistic customer lifetime value management. The new data can be used to evaluate, in a targeted manner, how customers behave, how high their willingness to pay is, or how loyal they are to the brand, in order to generate the decisive motivation to buy at the right moment. As a result, new customers can be acquired, the turnover rate can be increased, and the churn rate reduced. All this adds to customer lifetime value and increases sales volumes in the long term.

Upselling potential

In addition to increasing transaction prices and sales volumes, upselling potential can be realized, and new business models can be implemented more efficiently. Whereas vehicle sales and digital services were separated in the traditional sales model, they are offered centrally from one source in the agency sales model. This can create a closed ecosystem in which customers can be retained throughout the entire customer life cycle. This ecosystem can also make it much easier for importers to offer customers additional products and services, digital services, and new mobility formats.

The agency sales model as a lever for cost reduction

As already mentioned, around a quarter of the costs of car sales can be associated with distribution. The costs here are split across all three steps of the value chain. Altogether, the agency sales model can save approximately 4-6% of costs across all stages.

Cost reduction through centralization

In the traditional sales model, many functions have so far been organized in a decentralized manner, with each dealer having its own dedicated resources for marketing and customer service. Cost-intensive online sales solutions are also operated decentrally by the individual dealers. While the importer’s focus has so far been more on the administrative management of the dealer network, in the agency sales model the importer takes on an operational role requiring more diverse competencies. This includes, for example, the operation of central online stores, central lead generation, and the establishment of central customer service and marketing departments. These shared service centers reduce redundant functions in the sales organization and realize economies of scale. In addition, quality can be increased by bundling competencies and setting standards.

Cost reduction through lean and digital processes

In addition to the centralization of competencies, the leaner sales processes in the agency sales model and the harmonization of the IT system landscape also contribute to cost reduction in the sales organization. In our experience, a digital and lean agency sales process can eliminate approximately 40% of administrative tasks at dealers and importers. This is due to the fact that time-consuming price negotiations and complex manual approval processes and system changes are no longer necessary. The expected increase in online sales and the provision of self-service functionalities will also reduce the burden on retailers.

Cost reduction through resource relief

Transparent and uniform prices also indirectly contribute to a reduction of effort in the retail trade. As our study revealed, an average of 2.5 retailers are visited due to price negotiations before the purchase is concluded. This results in resource expenses for the trade organization, which are avoided by the provision of uniform prices in the agency sales model. In a medium-sized market, a potential saving of €20m per year can be realized through this alone.

Break-even relevance

Figure 2: Long-term cost and revenue overview after introduction of the agency model

To make a sound investment decision, the initial implementation costs must be weighed against the potential long-term costs and revenues. In our experience, the investment for a medium-sized market pays off after approximately four to five years.
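
As a hedged illustration of how such a payback horizon can be sanity-checked, the sketch below accumulates net benefits year by year until the upfront investment is recovered. All monetary figures are hypothetical placeholders, not values from our investment cases.

```python
# Illustrative break-even check: cumulative net benefit vs. upfront investment.
# All monetary values are hypothetical placeholders.

def break_even_year(investment, annual_net_benefits):
    """Return the first year in which cumulative net benefits cover the investment."""
    cumulative = -investment
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= 0:
            return year
    return None  # not recovered within the planning horizon

# Hypothetical: €50m upfront, benefits ramping up as the agency model matures.
benefits = [5e6, 9e6, 13e6, 15e6, 15e6, 15e6]
print(break_even_year(50e6, benefits))  # -> 5
```

In a real case, the benefit series would combine the price, volume, upselling, and cost effects described above.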

Fair distribution of the expected profits is crucial for the success of the investment. To this end, decision-makers should develop a transparent remuneration and bonus model together with retailers when designing agency sales and setting up the investment case. Only with an equal partnership can the agency model successfully contribute to mastering the challenges of the coming years.

What’s next?

In addition to significant investment costs, the introduction of the agency sales model requires, above all, a deep commitment from all partners across all sales channels. A long planning horizon is essential for a sustainable and successful introduction. The main drivers of the investment case are summarized below:

  • Increase in transaction prices through standardized pricing across sales levels and dynamic sales promotion measures
  • Increase in volumes through data-based and customer-specific approaches throughout the entire customer lifecycle
  • Utilization of upselling potential through the customer data acquired and the offer of vehicles and services from a single source
  • Reduction of costs by centralizing functions and minimizing redundancies
  • Use of leaner sales processes to relieve the burden on the sales system and resources

This blog was co-authored by Fabian Piechottka, Oliver Straub, and Nepomuk Kessler. Please get in touch if you have questions or need further information. We look forward to exchanging ideas on this highly topical subject.

For more insights, please also read our recently published Agency Sales Model Point of View.

What’s so spatial about asset systems for network operators?

Capgemini
December 18, 2020

What is the most appropriate information technology for maintaining and storing the master asset system model? We have to acknowledge that a network operator needs access to different variants of such a model. To improve consistency, a clear master-slave model is recommended, and master-slave information flows should, as far as possible, follow the natural lifecycle flows of a network modification.
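
Purely as an illustrative sketch of such a one-way master-slave flow, the snippet below derives a simplified operational view from a master record set; only the derivation writes the slave view, never the other way around. The asset names, attributes, and statuses are invented, and a real derivation would run against the GIS or network model manager rather than Python dictionaries.

```python
# Illustrative one-way master-slave derivation; names and attributes are invented.
# The master asset model is the single source of truth; derived views
# (schematic, analytical, operational) are regenerated from it, never edited directly.

master_model = [
    {"id": "cable_001", "status": "as-built", "voltage_kv": 10, "length_m": 430},
    {"id": "cable_002", "status": "as-it-will-be-built", "voltage_kv": 10, "length_m": 120},
]

def derive_operational_view(master):
    """Slave view for operations: only commissioned equipment, with minimal attributes."""
    return [{"id": asset["id"], "voltage_kv": asset["voltage_kv"]}
            for asset in master if asset["status"] == "as-built"]

print(derive_operational_view(master_model))  # -> [{'id': 'cable_001', 'voltage_kv': 10}]
```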

In an earlier paper, the GIS-centric Enterprise, we argued that geographic information system (GIS) software is a key component of the enterprise ICT infrastructure for network operators, specifically for its ability to manage topological relationships via a graphical interface. Let’s rephrase those arguments from the perspective of the asset system model lifecycle.

  1. Strategic network planning requires insight into actual and projected asset system performance (power quality measurements, faults, incidents, outages) in relation to the (projected) location of capacity demand or production sites and the asset system model itself. The underlying physical asset performance (condition measurements, systematic faults, repairs) may affect the overall performance of the asset system and is part of the analytical model.
  2. Asset system design conceives network extensions and enhancements based on a well-documented as-built network model and its internal or external constraints. Apart from the electrical or hydraulic characteristics, the system design is strongly geographically determined (rights of way, environmental and safety regulations, soil types, slopes, etc.). High-level cost estimates can be derived from an initial asset system breakdown and balanced against initiative value at portfolio level. The investigation of alternative routes is a specific geographic analysis step included in many investment planning studies.
  3. The validated asset system designs are engineered in detail as the construction projects are defined and prepared for construction (bills of materials, compatible units, technology choices, detailed placement, etc.). Here, the projected network model must be translated into a precise topographic linear placement design for authorities to approve and contractors to execute.
  4. Projects and construction actors plan their work based on the detailed engineering specifications and there is frequent exchange of geographical information between parties (engineering companies, civil contractors, government bodies). Co-ordination of construction work (often imposed by government) includes the exchange of information (work polygons; construction site location and timing of work) with other utilities operating on the public domain. Further optimization (in timing and location) is sought in shared trench work for multi-utility projects.
  5. As construction proceeds, as-built network records are documented to enable traceability (welding information on gas mains, equipment installed or replaced, configuration settings, initial pressure or voltage measurements) and linked to the functional segment of the asset system or to the right equipment. Commissioned network modifications are promoted from the “as-it-will-be-built” status (terminology taken from Network Model Manager Technical Market Requirements: The Transmission Perspective) and integrated into the operational network.
  6. Network operations use an abstracted (schematic or geo-schematic) asset system view of the same network to take operational decisions (switching, planned outages, flushing).
  7. Outage and incident management processes need insight into network connectivity to identify the origin of a problem as well as its impact (customer minutes lost); a minimal connectivity-trace sketch follows this list. The field workers receive detailed location information to perform an intervention in a timely and safe way.
  8. Results of patrolling and surveying activities have to be reported back and associated with the network, including their locations and the equipment they relate to. Correctly located observations and measurements are essential to the performance monitoring of the system.
  9. Customer service agents evaluating the feasibility of an access demand look at network characteristics in the vicinity of the premises to be connected and maintain the vital customer-network link.
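
To make step 7 a little more concrete, here is a minimal connectivity-trace sketch: starting from a faulted element, it walks downstream through a toy supply graph to list the affected customers. The network elements and their names are invented for illustration; in practice the connectivity model would be exported from the GIS or network model manager.

```python
# Toy connectivity trace for outage analysis.
# The network below is invented for illustration only; in practice the
# connectivity model would come from the GIS / network model manager.
from collections import deque

# Directed edges in the direction of supply: source -> ... -> customer.
feeds = {
    "substation_A": ["feeder_1"],
    "feeder_1": ["switch_S1", "switch_S2"],
    "switch_S1": ["segment_X"],
    "switch_S2": ["segment_Y"],
    "segment_X": ["customer_1", "customer_2"],
    "segment_Y": ["customer_3"],
}

def affected_customers(fault_location):
    """Breadth-first trace downstream of a fault to list the impacted customers."""
    impacted, queue, seen = [], deque([fault_location]), {fault_location}
    while queue:
        node = queue.popleft()
        for child in feeds.get(node, []):
            if child not in seen:
                seen.add(child)
                if child.startswith("customer"):
                    impacted.append(child)
                queue.append(child)
    return impacted

print(affected_customers("switch_S1"))  # -> ['customer_1', 'customer_2']
```

The same kind of trace, run upstream instead of downstream, helps locate the likely origin of the fault.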

Counting the occurrences of terms such as location, placement, geographic, route, and vicinity in this lifecycle overview makes it obvious that asset systems, at least for network operating companies, are and must be spatial.

Spatial solutions provide essential functionalities to manage networks:

  • Mapping and visualization services, because the map is a natural entry point to geographical information, and a geographical representation can provide insight into complex information.
  • Graphical tools, as available in GIS, support intelligent editing of a (versioned) network model: an interconnected structure in which network facilities, equipment, and customer connections are linked through cables, pipes, switches, and valves. Intelligent connectivity rules support the management of network structures according to business guidelines.
  • And finally, there is the power of geoprocessing tools to relate and combine phenomena that have no specific relationship other than being located close to one another. Spatial analysis and decision making based on “proximity” and inference is essential for utilities in critical processes such as strategic planning or risk management; a minimal proximity sketch follows this list.
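
As a minimal sketch of such a proximity analysis, assuming the open-source Shapely library is available, the snippet below checks which (fictitious) cable segments fall within a buffer around a planned excavation. The geometries and the buffer distance are invented for the example.

```python
# Proximity analysis sketch: which network segments lie near a planned excavation?
# Geometries and the 5 m buffer distance are invented for illustration.
# Requires the open-source Shapely package (pip install shapely).
from shapely.geometry import LineString, Point

cables = {
    "LV_cable_17": LineString([(0, 0), (50, 0)]),
    "MV_cable_03": LineString([(0, 20), (50, 20)]),
}

# 5 m buffer around the planned dig location.
excavation_site = Point(25, 2).buffer(5)

at_risk = [name for name, geometry in cables.items() if geometry.intersects(excavation_site)]
print(at_risk)  # -> ['LV_cable_17']
```

The same pattern scales from a single excavation polygon to portfolio-wide risk screening, which is where dedicated geoprocessing engines come in.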

Alternative models derived from the master will remain necessary, but only within a clear master-slave relationship.

The transformation to “smart” and “digital” will therefore require a fundamental renewal of enterprise GIS systems to provide a solution to the linear problem statement in a smart world. Large corporate initiatives will be required, far beyond the inevitable technology upgrades to systems that are in many cases more than 20 years old.

“Wearables, Whereabouts, and Roundabouts” is the title of our next article, which will add the event dimension to asset management and asset system management. In other words: how do we locate event information on the network model in order to use it in network performance and risk management activities?

This blog is part of a series of 6 under the theme “Asset Systems for the Smart Grid”:

  1. What is an asset system for a network company?
  2. Smart grid or silly tube maps?
  3. What’s so special about asset systems?
  4. What’s so spatial about asset systems?
  5. Wearables, whereabouts and roundabouts
  6. What happens on the asset system, stays on the asset system

I have more than 20 years of experience in GIS (Geographical Information Systems) and asset management projects in utilities (water, electricity, gas), telecoms, and the public sector (transportation, cadastral services). You can connect with me here.