5 ways to keep the project train on the tracks

Capgemini
2020-01-13

Projects are like trains: getting the train to the station on time is the goal of a good development team. In my experience, there are five key practices for ensuring your passengers (users) and crew (dev team) have the best outcome possible.

Make time:

The easiest way to be on time is to give yourself enough time.

Typically, scrums should be limited to 15 minutes and should establish the status of a project. But what if the team updates uncover that a team member is blocked? Stopping the scrum because the 15 minutes are up may not be the best solution.

When scrum reveals the project is not going as planned, then and only then is the time to get things back on track. A lack of time because of competing responsibilities, such as other projects, will only compound the issue and delay the solution. Roadblocks are best addressed the minute they arise rather than deferred to another scrum or some later time. Postponing a discussion, and potentially a solution, will leave a team member completely stalled unless they can move on to another task. The lack of time and, as a result, lack of commitment to a project will cause the project to go off the rails and fall behind schedule.

Solution: Create more time to kill blockers! Stop the scrum the minute a blocker is presented and discuss the problem and the best path to its resolution. Pull the blocked team members aside for further discussion, then, with that smaller group, determine a solution to unblock the project and allow development to continue unobstructed.

Consistent Resources:

A lack of resources will, at some point, force a project off the rails, resulting in missed deadlines and/or deliverables. It is particularly important to identify when resources are lacking or are constantly being reshuffled on and off a project. Shuffling resources on a project can snowball into much larger issues, such as loss of familiarity with the project and a constant need for new team members to ramp up.

If other, more urgent projects pull your resources away, over time your project will derail. When people are reshuffled in the middle of a project, there is a noticeable loss of project knowledge: previous scrum discussions and side meetings hold the key to understanding project tasks comprehensively, and that context leaves with the people who held it.

Solution: Insist on keeping project resources together and committed to each other and to the project. The camaraderie that comes from working together and the shared knowledge of a project are extremely valuable to its success. It is understandable that under certain circumstances resources may need to be shuffled, but minimizing the shuffle is most often the better option.

Complete Requirements:

Missing or incomplete requirements are analogous to broken train tracks and can effectively cause a derailment and delay. Functional requirements established during the discovery phase of a software project form the agreement between your team and the customer on what the resulting application is supposed to do. Full and complete requirements play an important role in the success of your project. If missing requirements are uncovered during the scrum, they should immediately be escalated to the Project Manager and the project’s Software Architect. No time should be wasted in gathering the needed data, since gaps have the potential to affect the status, pace, and scope of the project.

Solution: Immediately schedule a meeting with the Project Manager and Software Architect to discuss missing or incomplete requirements. It may be necessary to bring the client in as well. It reflects poorly on a project when missed requirements surface after work has begun and time is of the essence.

Open, Comprehensive Testing:

Quality Assurance (QA) and User Acceptance Testing (UAT) are how you confirm the project’s destination. If the results of the tasks that make up a sprint don’t get you where you were expecting to go, you are going to be frustrated and extremely disappointed. If your team is not provided with a complete set of testing scenarios, or you don’t have a fully allocated test environment, then you cannot be assured that you have met the real-world scenarios laid out in the project specifications. Services that are not testable, or that return limited results, often cannot provide the complete set of test results needed to confirm all test cases uncovered during discovery. It is best to map out all stops along the route to confirm the destination before you step foot on the project train.

Solution: Confirm during discovery that testing environments are properly set up, testable, and able to return the results needed before you pass the project off to UAT. If you can’t test all use-case scenarios, document the limitation for those tasks so that the team is not held responsible for missed requirements.

Create a Dedicated Development Sandbox:

Testing environments are just as important as the QA and UAT testing performed in them. A lack of dedicated testing environments is like a train without a track. In order to fully test and merge code, the appropriate environments are critical. Development should be completed on local sandboxes so that any testing is locally contained and does not interfere with other development and testing tasks. After coding is completed and reviewed, the code should be merged onto a development sandbox. The development sandbox should be reserved for testing and limited to code additions and changes targeted for a given sprint cycle. A staging sandbox comes next, for UAT and regression testing, to confirm new code doesn’t jeopardize the live production environment. And lastly, the production environment is for fully tested and client-authorized code. Separating these functions is critical for a smooth and secure software development environment. When these environments are not provided, i.e., development is not separated out into specific and timely releases, the result can be like riding on a train without an engineer. No one is steering the train, and we all know how that can end.

Solution: Insist that your project has the correct and appropriate development sandboxes. Don’t set a project and its team members up for failure and a potential train wreck when experience shows that, with all components in place, the train, the track, the crew, and the passengers are all assured a safe and speedy arrival.

Internet of Things: Security

Capgemini
2020-01-12

Many people are familiar with the talkative Internet of Things (IoT) devices that let you ship virtually anything to yourself simply by speaking an order. B2B eCommerce already has devices that can self-order consumables for your printer. The auto industry is also in the IoT commerce space, with cars that can self-order replacement tires.

There are more than 20 billion IoT devices in existence today. That staggering number represents huge opportunities for eCommerce and the rest of the digital industry. IoT and headless commerce are the next step on the path to unified commerce. But introducing IoT into your headless commerce plans poorly could come at a cost.

IOT SECURITY: HOW IT WORKS

In the context of eCommerce, when prompted, IoT devices act independently to communicate with a server and API to make a secure purchase transaction. Really, this isn’t much different from clicking “checkout” when shopping online, right?

So, you may think that IoT is secure, given our decades of experience purchasing products online. This false sense of security is exactly what makes IoT so attractive to hackers.

IoT is often exploited at the device itself. You may remember the news story about the Jeep that was shut down on the highway. This sort of attack targets the device, not a centralized location. Hacking a consumer device is a topic that is constantly in the news, and those reports will grow as IoT grows.

While we have not seen a specific eCommerce attack, the risk exists.  Hackers could place various orders from multiple IoT devices to a website causing a unique attack that could cost a company millions of dollars in fraudulent orders.

Much like a DDoS attack uses many computers to attack a website, a similar style of attack could submit orders from many consumers’ devices. It could be days before anyone notices that product is shipping and customers are being charged.

It’s not just the cost of goods that is impacted. Businesses will spend time dealing with the issue, attempting to recover product, backing out orders, etc. Shipping costs alone could put smaller companies out of business. Other hacks could disable devices altogether, causing further impacts to revenue.

Businesses need to be cognizant of the issue and prepare upfront. Here are some things to consider when planning your IoT and headless commerce implementations.

  • How am I incorporating security during the design phase?
  • What is my strategy to update devices if a vulnerability is discovered?
  • What are my security practices overall, and how can I apply them to IoT devices?
  • What is my risk management strategy, and how might these security measures impact my business?
  • How do I make IoT transparent in my implementation?

Addressing security upfront won’t eliminate risk, but it will help mitigate it. You will also have an established process and procedure for when the inevitable happens. IoT and headless commerce are growing, and it’s critical that your security grows with them.

Manufacturers must produce more affordable cars. Technology will help them do that.

Capgemini
2020-01-09

Automakers need to strengthen their economic competitiveness. In recent months, news of layoffs at General Motors, Ford, Jaguar Land Rover, Nissan, and even Tesla has underscored that the industry needs to restructure the way it builds vehicles and the investments required to do so.

In an email to employees announcing the 7% cut in full-time employee headcount in January 2019, Tesla founder Elon Musk laid bare the stark reality facing his company – and why it needs to find cheaper ways to build cars. “ … we face an extremely difficult challenge: making our cars, batteries, and solar products cost-competitive with fossil fuels. While we have made great progress, our products are still too expensive for most people,” he warned. “Tesla has only been producing cars for about a decade and we’re up against massive, entrenched competitors. The net effect is that Tesla must work much harder than other manufacturers to survive while building affordable, sustainable products.”

Meanwhile, GM Chairman and CEO Mary Barra, in her announcement in late 2018 of job cuts to reduce salaried and salaried contract staff by 15% (including 25% fewer executives), emphasized that GM’s move was about making it more competitive in a changing environment. “The actions we are taking today continue our transformation to be highly agile, resilient, and profitable while giving us the flexibility to invest in the future,” she said. “We recognize the need to stay in front of changing market conditions and customer preferences to position our company for long-term success.”

Simply put, GM knows it must adapt quickly. In her announcement, she called out the retooling the company needs to double the resources it puts into electric and autonomous vehicle development.

She also pointedly committed to “expanding the use of virtual tools to lower development time and costs.” Those virtual tools will go a long way to helping take costs out of the auto industry supply chain. They include:

  • Digital twin – As we outlined in this story in 2018, this is a digital representation of a physical object. It includes the model of the physical object, data from the object, a unique one-to-one correspondence to the object, and the ability to monitor the object. Digital twins are at the core of interoperability, allowing machines and people to communicate with each other more effectively and fostering transparency by creating a virtual copy of the physical world through sensor data. In short, they provide a deep and comprehensive view of what is happening in a manufacturing facility.
  • IoT (internet of things) – IoT technologies can make a huge difference in manufacturing, using sensors, cameras, and other smart devices to provide timely intelligence on the effectiveness of all manufacturing processes and producing the data needed to fine-tune them to deliver the best results.
  • Machine learning and analytics – As these and other new digital technologies are applied to manufacturing, they provide huge amounts of data about the effectiveness of factory equipment that can be used by machine-learning platforms to enable predictive maintenance and optimize manufacturing processes.

The goal of all this work is to help the industry shift to faster cycle times and produce vehicles more efficiently and at a lower cost.

To learn more about Capgemini’s automotive practice, contact Mike Hessler, North America Automotive and Industrial Equipment Lead, at michael.hessler@capgemini.com.

Delight the people who matter to your business

Bibhakar Pandey
2020-01-09

Corporate success was long defined by building the best product and getting the best price for it. Achievement meant focusing on what your company can do. But that is no longer our world. Companies must now market to people who are more demanding, which means prioritizing lifetime value over any individual transaction.

That is one fundamental change. Another is realizing that, in addition to serving customers, you must also please the people inside your building: your employees. Let’s start there.

Optimize the employee experience – before they quit

Forbes magazine in December 2019 published an interesting article: 100 Stats on Digital Transformation and Customer Experience. There are all kinds of ideas about the state of customer experience (CX) but the most telling insight is what is absent: there is not one statement about the importance of the employee experience.

At issue is the gap between employees’ Sunday-night and Monday-morning experiences. On Sunday night, they are treated like customers. A business-development manager for a national fashion retailer visits the company’s online portal on a Sunday and the experience is fluid, with lots of graphics, good navigation, and significant functionality. That same person on Monday sits before an aging computer that accesses unattractive, clunky, and siloed systems.

Or imagine an account executive at an auto-parts company who can check stock on four winter tires with an iPad while relaxing on a couch but can’t get comprehensive inventory data sitting in an office chair.

It is extremely important, from a marketing perspective, to build a similar experience model for your employees as for your customers. Many employees – and especially up-and-coming millennials – will struggle to bridge that Sunday-to-Monday gap for only so long before they fix the problem by finding a new employer.

Don’t try to make everything digital

Digital systems are the future of business, but not everything should be all digital all the time.

Take Uber. It is a simple app: it knows your location and guides a car to you. It is a great experience, it’s simple, and it delivers a real-world benefit. This holistic, start-to-finish model can be applied to a retailer, a service company, a B2B manufacturer, etc.

Marketers often want to build elaborate new systems but, more often than not, a simple process that optimizes both the digital and the physical is the best approach.

The last mile matters

Nowhere is that more obvious than in the last mile of the customer experience. Customers will almost complete a purchase – almost enjoy the brand experience – and then can’t enter their credit-card information or discover that shipping a $15 item will cost $45.

I once watched a customer have her positive CX snatched away at the last moment. The customer chose self-checkout because the line was shorter. She swiped her card, grabbed her receipt – and then had to stand in the big main line for 20 minutes to get the security tag removed.

Keep this in mind: the part of the experience a customer is most likely to remember is the last mile. This store messed up and will never win back that customer.

All customers are local

Brands operate around the globe, and there is a tendency to think of customers as being the same everywhere. But what a customer wants in North America and in Asia Pacific may not be the same, so stop treating them the same.

The good news is that the answer to this challenge is simply to do some research to understand the experiences expected in each area.

The true heart of CX is looking at experiences holistically. The employee sitting down to work, the client waiting for a car, and the person completing a checkout – in any country. Make that person’s experience easy, quick, and pleasant from start to finish. If you don’t, those people will remember that you failed and soon you won’t ever see them again.

To learn more about Capgemini’s Digital Client Experience practice, contact Bibhakar Pandey, North America Digital Customer Experience Lead, at bibhakar.pandey@capgemini.com.

Capgemini partners with Junior Achievement to celebrate MLK Day of Service

Capgemini
2020-01-09

January 20, 2020 marks the 25th anniversary of the MLK Day of Service. This day of service helps empower individuals, strengthen communities, bridge barriers, address social problems, and move us closer to Dr. King’s vision of a “Beloved Community.”

Capgemini Employee Resource Groups (ERGs) A3 and CARES are coming together to honor Dr. Martin Luther King Jr. by giving back to our communities. In partnership with Junior Achievement (JA), Capgemini will welcome students to our offices for a day of learning and job shadowing. Eight Capgemini offices across the United States will participate.   

JA is the nation’s largest organization dedicated to giving young people the knowledge and skills they need to own their economic success, plan for their futures, and make smart academic and economic choices. JA’s mission aligns with Capgemini’s digital-inclusion mission to leverage our digital and technical skills to help bridge the divide between technology and society. 

The students visiting our offices will learn more about Capgemini and how we use technology to help our clients solve problems, hear directly from employees about their career journeys, and participate in an interactive digital-learning activity.

Without our ERGs, the coordination of these events would not be possible. It is our honor to make time to give back and engage with our community while recognizing the legacy of Dr. King.  

Please follow our MLK Day of Service journey via our Capgemini North America Twitter and Instagram, and with the hashtags #MLKDay, #DayON25, and #CapgeminiCares. 

New roles emerging as utilities become more data-driven

Randall Cozzens
2020-01-07

The amount of data being generated is growing exponentially. For utilities, digital innovation in operations is the future of the business, but core capabilities are shifting to adapt. As decarbonization, disintermediation, decentralization, and decreasing consumption are on the rise, digitization is the key to connecting them.

According to Capgemini’s World Energy Markets Observatory 2019, the path towards digital innovation and convergence and the move beyond a pure platform model requires that utilities enable huge transformation by connecting new products and services with key demand sectors such as transport, buildings, and industry. For many utilities, that means creating a role for a chief digital officer (CDO).

The trend toward technological convergence is being driven most strongly by developments in artificial intelligence, distributed ledgers (blockchain), and advanced control algorithms. To meet changing customer expectations and to increase operational efficiency, CDOs are focusing on data analytics while also deploying agile techniques.

Businesses know they need to turn trusted data into real business insights and value but are unsure how to move beyond proof of concept to deliver actual results. CDOs can work to adapt data sets and emerging analytics and apply them to traditional use cases to drive business value. Machine learning and AI offer powerful possibilities but need solid data to deliver real results.

CDOs can also leverage technology to meet changing journeys, preferences, and engagement channels such as social media, text messages, and apps to improve the customer experience. Utilities can no longer afford to just send a bill at the end of the month. Customers have more options and expect more from their suppliers.

Digital technologies are also driving a shift to agile workflows. Using agile management techniques, workflows, and processes will deliver positive ROI while dealing with volatile, uncertain, complex, and ambiguous generation, transmission, and distribution situations. Agile is no longer an option; it is a competitive edge. Leveraging agile means companies can move more rapidly, adapt to market conditions faster, and gain value.

CDOs can also lead the adoption of multispeed architectures or bimodal IT. These options enable CDOs to navigate predictable areas as well as make changes to legacy technology tools and processes to drive digital transformation. It allows them to constantly adjust to the ever-changing environment and regulatory conditions in the energy market.

There is no one-size-fits-all solution for managing data. Technology, data, and people will differ for every company, depending on the existing technology stack, appetite for intelligent decision making, innovative people culture, data governance and control of data, structured processes and workflows, data-driven approaches to business challenges, and team collaboration with educational community building.

To move forward with meaningful digital transformation, the foundation must be built on trusted data. The CDO takes on a central role in this, as the person dedicated to ensuring you are collecting good data and can access it to make data-driven decisions for your business.

Randy Cozzens is an Executive Vice President and North American Energy, Utilities, and Chemicals Market Unit Lead at Capgemini and he can be reached at randall.cozzens@capgemini.com.

AI is good, as long as we enact ethics controls

Capgemini
2020-01-07

The release of the movie War Games coincided with the start of my career in technology. The movie introduced many to the notion of artificial intelligence (AI) and the potential impacts it could have on our lives.

Fast-forward 36 years and we see intelligent algorithms playing prominent roles in everything from how we purchase products to how we defend our borders. Major advances in computing power and data storage, coupled with the increased digitization of formerly analog processes, have fueled unprecedented growth in computer intelligence solutions.

While most would argue these advances have greatly benefitted society, many are concerned over the ethical implications of machine-driven decision making. Just as we saw in War Games, machines will do what they are trained to do, even if that is detrimental to large segments of society.

Ensuring the safe and ethical operation of computer intelligence solutions is a significant concern for both corporations using these solutions as well as society in general. That means society must work on developing the necessary governance and control environment for AI solutions to ensure a safe and ethical state for all constituents.

As with any form of software development, outcomes of AI projects are impacted by the development ecosystem, the processes needed to migrate to a production state of operation, and the continuous audit of the end solution. However, ensuring the ethical state of an AI solution requires additional controls at various steps of the solution’s lifecycle.

Maintaining the proper development ecosystem for AI solutions begins with the development of what I call the AI Code of Ethical Conduct. This code outlines the steps all AI developers must follow to eliminate bias, promote transparency, and be socially responsible. The AI Code of Ethical Conduct should contain standards and practices to guide developers on such topics as auditability, accessibility, data management, delegation of rights, and ethical/moral responsibilities. The code will be reinforced with mandatory training for all developers to ensure they understand the organization’s ethical responsibility.

Also, organizations should focus on the recruitment and hiring of a diverse set of developers to help eliminate “group think” and to reinforce a culture of inclusion of thought in the development ecosystem. Finally, in cases where the outcomes of AI efforts have the potential to impact large segments of society, organizations should hire ethicists: specialists who educate and work with developers on ethical development practices.

With a proper development ecosystem in place, the next area of focus is the process of migrating AI solutions to production. In IT, the concept of a Quality Review Board (QRB) or Architecture Review Board (ARB) is commonplace. For AI solutions, a new governing body, the Ethical Review Board (ERB), is required. While establishing the governance framework to ensure ethical practices in the development and use of AI, the ERB also acts as the gatekeeper for new AI solutions moving to a production state. New solutions that do not pass ERB review are not allowed to move into production.

Once AI applications are in production, results must be continually audited to ensure compliance. These audits would review not only the algorithms but also the data feeding the algorithms. As AI algorithms learn through iteration, biases in the data would lead to biased “learning” by the algorithm.

While auditing and continuous testing to understand unexpected results are critical, they aren’t enough. In addition, users should be given feedback loops that operate outside the system’s AI controls. Feedback loops could be built into applications or accomplished using survey instruments.

In summary, establishing an operational AI ecosystem with the appropriate level of independence and transparency is mandatory for organizations building and operating intelligent solutions that have societal impacts.

AI ethics controls aren’t sexy or exciting and, let’s face it, had these controls been in place, War Games would have been a boring movie. But that’s what we want for society: nice safe outcomes.

Gary Coggins is an executive vice president at Capgemini and he can be reached at gary.coggins@capgemini.com.

Accessibility compliance in eCommerce websites

Capgemini
2020-01-02

When referencing accessibility, most people think about ramps, elevators, or other physical accommodations that give people with disabilities access to everyday places and items. It is just as important that accessibility be built into the way we develop websites. Not only is it the right thing to do, but there are legal requirements which, if not met, can expose our clients to lawsuits and penalties.

Not a small minority

The 2010 US Census showed that there are approximately 57 million people with a disability living in the United States. That number represents about 19% of the civilian noninstitutionalized population.1 Think about that: about one-fifth of the US population benefits from some type of assistance when navigating the eCommerce sites we build for our clients. Assistance comes in the form of alternate ways of navigating a site for users with mobility issues that limit their ability to use a mouse or keyboard, or with vision or hearing impairments that benefit from alternate tagging or screen-reading applications.

Legal Ramifications

According to a report released by Seyfarth Shaw,2 an international law firm specializing in employment and labor law, the number of website accessibility lawsuits in the US tripled in 2018 over the previous year.

Accessibility compliance

The Department of Justice (DOJ) has reaffirmed on several occasions that the Americans with Disabilities Act (ADA) applies to websites. Websites are considered a place of public accommodation and therefore fall under ADA protections. The ADA itself contains no technical standard, so the Web Content Accessibility Guidelines (WCAG), a set of standards developed by the World Wide Web Consortium (W3C), are referenced most frequently as the accessibility requirements for websites. The WCAG is constantly being refined; in testing, we strive to ensure our websites meet the WCAG 2.0 Level AA guidelines. For more information on the W3C, visit w3.org.

WCAG is built on four main principles for web accessibility, as defined below:4

  1. Perceivable – Information and user interface components must be presentable to users in ways they can perceive.
    • This means that users must be able to perceive the information being presented (it can’t be invisible to all of their senses)
  2. Operable – User interface components and navigation must be operable.
    • This means that users must be able to operate the interface (the interface cannot require interaction that a user cannot perform)
  3. Understandable – Information and the operation of user interface must be understandable.
    • This means that users must be able to understand the information as well as the operation of the user interface (the content or operation cannot be beyond their understanding)
  4. Robust – Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.
    • This means that users must be able to access the content as technologies advance (as technologies and user agents evolve, the content should remain accessible)

Tools

As accessibility standards have been developed and the importance of accessibility has become more prevalent, many tools have become available to help verify compliance. These tools are available as built-in apps on devices, such as mobile screen readers, or as browser options or plug-ins, such as Wave. Other apps, such as JAWS, are available for download.

When planning the testing of websites, there are a couple of tools I have become reliant on. Wave is an excellent tool as a plugin for Google Chrome and Mozilla Firefox browsers. It is a simple tool that will automatically search for issues and errors based on the compliance level you set on the tool.
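To get a feel for what these tools check, here is a minimal sketch (not part of any particular tool) of two of the most common failures automated checkers flag, runnable in a browser’s developer console:

```javascript
// Flag images with no alternative text (fails the "Perceivable" principle).
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.warn('Image missing alt attribute:', img.src);
});

// Flag form fields with no programmatic label, which screen readers
// cannot announce meaningfully.
document.querySelectorAll('input, select, textarea').forEach((field) => {
  const hasLabel =
    (field.labels && field.labels.length > 0) ||
    field.hasAttribute('aria-label') ||
    field.hasAttribute('aria-labelledby');
  if (!hasLabel) {
    console.warn('Form field missing accessible label:', field);
  }
});
```

Dedicated tools like Wave go much further, but even a quick check like this catches common issues early.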

On mobile devices, there are built-in screen readers, such as VoiceOver on Apple devices. I recommend trying to use the built-in screen reader to understand how difficult it can be to navigate websites on a mobile device.

Summary

Be an advocate for accessibility in the websites we build for our clients. Adopt standards early in the development process. Make it a requirement/user story in your project backlog. Not only will it protect our clients from potentially costly lawsuits, but it will ensure the inclusion of a large group of accessibility-dependent customers for our clients. About one in four adult Americans has some kind of disability, and their disposable income is estimated at over 645 billion dollars annually.3

1. https://www.census.gov/newsroom/releases/archives/facts_for_features_special_editions/cb12-ff16.html

2. https://www.adatitleiii.com/2019/01/number-of-federal-website-accessibility-lawsuits-nearly-triple-exceeding-2250-in-2018/

3. https://www.essentialaccessibility.com/blog/web-accessibility-lawsuits/

4. https://www.w3.org/TR/UNDERSTANDING-WCAG20/intro.html#introduction-fourprincs-head

Webpack in Salesforce Commerce Cloud: Solving the need for speed

Capgemini
2019-12-29

The Case for Performance

In digital, every second matters. In fact, Mobify found that only a half second variance in page load times can significantly impact conversion rates: an impact that can easily equate to hundreds of thousands of dollars in lost revenue. If a mobile user has to wait more than three seconds for a page to load, there’s a fifty-fifty chance they’ve lost patience and abandoned the effort to load your page entirely.

These findings aside, it’s self-evident that page load speed and responsiveness are absolutely critical to providing an experience that drives deeper engagement and leaves the end user coming back for more. However, it’s a difficult endeavor to design and develop eCommerce sites that provide a feature-packed and content-rich experience while also staying mindful of performance. How can we continue to implement complex features, integrate advanced front-end services, and provide high-quality site aesthetics without sacrificing speed and convenience?

Why Webpack Helps

The speed at which the browser is able to load, process, and render any page on the web today depends on the amount and size of the resources returned in response to its initial request.

In the past, it was acceptable to return a monolithic JavaScript and CSS file, i.e., main.js and global.css, containing the contents of the entire application for the browser to process on every page request. However, as complexity, and thus the amount of code within a web application, continues to increase, this approach severely impacts performance.

To launch robust, content-rich experiences that also render and load quickly, merchants need a build system that generates optimized code for the browsers’ processing capabilities, intelligently splits code across multiple files to take advantage of modern networking protocols, and effectively identifies and eliminates unused code. Webpack steps in to fulfill the requirements of this system and more.

Webpack defines itself as “a static module bundler for modern JavaScript applications”. In other words, it scans over every JavaScript file (modules) within a web application, optimizes the code, and generates new files (bundles) containing those JavaScript files in a way that promotes top-notch performance for browser consumption.

Webpack doesn’t just stop at JavaScript files though. The tool is highly configurable and can be extended to support many different assets such as stylesheets, fonts, and even images.

Let’s cover the basics of a Webpack configuration in detail and take a look at the advanced features that provide the boost to performance.

The Basics

There are four core configuration concepts behind Webpack: entry, output, loaders, and plugins. The first two are largely self-describing but still deserve a proper explanation.

Entry

The entry configuration tells Webpack on which file to begin its bundling process and provides the contextual root of the web application. The magic behind Webpack comes from its idea of building an internal dependency graph from this entry point. Webpack does so by recursively identifying all dependencies within the web application starting from this file with the purpose of producing optimized output bundles. Recall that these dependencies are not limited exclusively to JavaScript files but can be any type of supported assets like stylesheets or images.
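As a minimal sketch (the file path here is hypothetical), a single-entry webpack.config.js looks like this:

```javascript
// webpack.config.js – a minimal single-entry configuration.
module.exports = {
  // Webpack starts here and recursively follows every import
  // to build its internal dependency graph.
  entry: './src/main.js',
};
```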

Output

The output configuration simply tells Webpack where to emit, and how to name, its generated bundles. Note that a bundle is simply the naming convention Webpack applies to the JavaScript files it outputs. These bundles are creatively crafted to help the browser make light work when it comes time to parse them.
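A sketch of a typical output configuration (the directory and naming pattern are illustrative) might be:

```javascript
// webpack.config.js – naming and placing the emitted bundles.
const path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    // [name] is the entry name ("main" here); [contenthash] changes only
    // when the bundle's content does, enabling long-term browser caching.
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```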

Loaders

Adding loaders to the mix extends Webpack’s ability to interpret and process different types of assets. By default, Webpack only understands JavaScript and JSON files. Loaders provide transformation logic to convert a given file type into a valid module that can be added to the dependency graph. For example, cutting-edge ES6+ JavaScript can be transpiled by a loader to produce JavaScript that will be widely supported by all browsers.
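A configuration wiring up the transpilation described above might look like the following sketch, assuming babel-loader, @babel/preset-env, css-loader, and style-loader are installed:

```javascript
// webpack.config.js – loaders extend which file types Webpack understands.
module.exports = {
  module: {
    rules: [
      {
        // Transpile modern ES6+ syntax down to widely supported JavaScript.
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['@babel/preset-env'] },
        },
      },
      {
        // Allow stylesheets to be imported and join the dependency graph.
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },
};
```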

Plugins

Plugins are provided to the configuration to fill in any limitations of the loaders. They “plug into” Webpack’s compiler and can “listen” for the emission of certain events. Thus, they can be utilized to perform a specific task at a certain point in the build process, like fine-tuning bundle optimizations. Or, they can go to work over the entire build process to provide features like advanced logging and statistical analysis of bundle creation.
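As an illustration, two of Webpack’s built-in plugins cover both cases: DefinePlugin performs a targeted build-time task, while ProgressPlugin observes the whole compilation:

```javascript
// webpack.config.js – plugins hook into the compiler's lifecycle events.
const webpack = require('webpack');

module.exports = {
  plugins: [
    // Replaces the token at build time so development-only code
    // branches can be eliminated from production bundles.
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
    // Logs build progress – a plugin that listens across the
    // entire compilation rather than at a single step.
    new webpack.ProgressPlugin(),
  ],
};
```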

Code Splitting

With minimal changes to the out-of-the-box configuration, code splitting is a solution Webpack offers to further reduce the output bundles’ sizes beyond the default, built-in optimizations.

The key insight here is that the entry configuration of Webpack also accepts the passing of multiple entry points where each supplied entry point generates its own dependency graph and subsequent output bundles.

In a multi-page web application (the norm for large eCommerce sites), development efforts can manually divide the code into modules that are specific to a designated page. Each of these page-specific modules may then be passed as an entry point to Webpack to generate bundles unique to certain pages. This achieves smaller bundles and controls resource load prioritization per page, which, if used properly, can have a major positive impact on load time.
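A sketch of such a per-page setup (page names and file paths are illustrative for a typical storefront) might be:

```javascript
// webpack.config.js – one entry point per page of a multi-page site.
module.exports = {
  entry: {
    home: './src/pages/home.js',
    productDetail: './src/pages/product-detail.js',
    checkout: './src/pages/checkout.js',
  },
  output: {
    // Emits home.bundle.js, productDetail.bundle.js, checkout.bundle.js;
    // each page template references only its own bundle.
    filename: '[name].bundle.js',
  },
};
```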

Combining this technique with the usage of the built-in plugin, SplitChunksPlugin, will take this optimization a step further. Enabling SplitChunksPlugin instructs Webpack to compare each internal dependency graph of these page-specific modules and extract shared application code amongst them into a single bundle. This effectively removes any possible duplication of logic and ensures minimal bundle sizing.
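Building on the multi-entry sketch above, enabling that extraction of shared code can be as simple as:

```javascript
// webpack.config.js – extract code shared between page entries.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all', // consider both static and dynamic imports
      cacheGroups: {
        common: {
          // Any module imported by two or more entries is pulled into
          // a single "common" bundle instead of being duplicated.
          minChunks: 2,
          name: 'common',
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```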

Webpack also supports dynamic imports by splitting dynamically loaded modules (logic a developer declares unnecessary for the initial page load) into separate bundles to be lazy-loaded at a later time. This concept is known as dynamic code splitting.

For instance, the code required to support the opening of a modal, and the generation of dynamic content within it when clicking a button, can be delayed past the initial page load as it doesn’t contribute to the initial experience. Essentially, there is no reason for the page to try and load the data if it is an optional, later interaction.

Webpack can identify this code, separate it into a small, singular bundle, and only retrieve it when the button is clicked. If dynamic imports are implemented for all components on a page that do not contribute to the initial page experience, load times can be reduced significantly, thereby keeping users engaged as they move throughout the site.
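The modal scenario could be sketched like this (the element ID and the './modal' module are hypothetical):

```javascript
// modal-trigger.js – the modal's code ships in its own lazily loaded chunk.
document.querySelector('#open-modal').addEventListener('click', () => {
  // Webpack sees import() and splits './modal' into a separate bundle,
  // fetched over the network only the first time the button is clicked.
  import(/* webpackChunkName: "modal" */ './modal')
    .then(({ openModal }) => openModal())
    .catch((err) => console.error('Failed to load modal code', err));
});
```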

Tree Shaking

Through the usage of ES6 modules, Webpack’s tree shaking abilities are enabled. Webpack has a nice little metaphor in its online documentation for this functionality:

“You can imagine your application as a tree. The source code and libraries you actually use represent the green, living leaves of the tree. Dead code represents the brown, dead leaves of the tree that are consumed by autumn. In order to get rid of the dead leaves, you have to shake the tree, causing them to fall.”

Tree shaking is simply the process of eliminating unused dependencies, dead code, from a project.

In certain situations, this can greatly reduce the overall size of a web application. For example, if a developer pulls in multiple large, third-party libraries to aid in the development of advanced site customizations but only uses a fraction of each library to accomplish the task, only the code utilized remains in the application after Webpack emits its bundles. Again, Webpack grants the assurance that only the code contributing to a user’s experience is shipped to the browser.
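As a small illustration (file names are hypothetical), consider a utility module with one used and one unused export; a production-mode build shakes the dead export out of the final bundle:

```javascript
// utils.js – two exports, only one of which is ever imported.
export function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}
export function legacyHelper() {
  // Dead code: never imported anywhere, so a production build
  // (mode: 'production') drops it from the emitted bundle.
}

// product-tile.js – only formatPrice survives tree shaking.
import { formatPrice } from './utils';
console.log(formatPrice(2599)); // "$25.99"
```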

The supply chain – pain points in master data

Capgemini
2019-12-24

In my first article, I outlined some of the challenges that organizations face in running their supply chain functions. This time, I’ll consider the difficulties specifically with master data, and how to address them.

The pain points in master data

There are many pain points across the supply chain, some of which are specific to the function, while others span broader process or technology issues. Some of these pain points relate to incorrect or incomplete master data used by the business processes. Supply chain processes are tightly integrated with reference and transactional master data, so it’s very important that the information used in business processes and day-to-day operations is of the highest quality. The success of an efficient and agile supply chain begins with complete and accurate master data.

In my experience, most master data management (MDM) issues in an organization stem from the basic fact that there is no single owner of the entire end-to-end lifecycle of master data. Instead, data creation and maintenance are fragmented within the organization. For example:

  • Data governance is not defined – the underlying issue with master data management in organizations is the lack of data ownership and governance. In most organizations, MDM is decentralized and spread across multiple functions
  • Multiple ways of working – because MDM is not centralized, the definition of data varies across regions or product groups. This leads to inconsistency and duplication in master records
  • Missing visibility – in the absence of a dedicated MDM process, no one in the organization has a clear view of the availability and correctness of master data records. In some organizations, the rigor is there when a new master record is created, but the attributes of the record are not reviewed throughout its lifecycle, resulting in inconsistent and inaccurate data.
  • Data roles and responsibilities are not clearly defined – there are no clear data roles and responsibilities defined within the organization. Typically, MDM requires a clear segregation of roles across data owners, data stewards, data creators, and data users to ensure appropriate accountability. There is no set standard for master data governance that fits all organizations; it varies in line with the size and desired level of maturity for master data management.

…and the answer lies in…

The best way to avoid issues is to set up an integrated master data management system, with dedicated governance over the correctness, completeness, and on-time availability of master data. Conducting a maturity assessment of current master data with regard to people, processes, technology, and governance will help organizations set a realistic goal for what needs to be done, with a time horizon in mind.

  • Define a MDM operating framework:
    • Create a global target operating model for the new MDM organization
    • Achieve clarity in roles and responsibilities
    • Create workflow-based master data, and keep it updated
    • Establish a standard input template.
  • Rule-based master data processing:
    • Implement a global data definition
    • Maximize rule-based derivation of data attributes to improve efficiency.
  • Data quality and process control:
    • Validation of MDM request against data definition and business rules
    • Duplication checks (see the sketch after this list)
    • Accurate classification of records
    • Periodic data checks for any discrepancies
    • SLAs and KPIs to measure MDM performance and its business impact.
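As a minimal sketch of what one of these controls can look like in practice, the duplication check referenced above might normalize key attributes of a master record before comparing them (the field names here are illustrative):

```javascript
// Build a normalized key so that "ACME Corp." and "acme corp"
// collide and are reported as potential duplicates.
function recordKey(record) {
  const normalize = (s) => (s || '').toLowerCase().replace(/[^a-z0-9]/g, '');
  return [record.name, record.city, record.taxId].map(normalize).join('|');
}

// Return pairs of records that share the same normalized key.
function findDuplicates(records) {
  const seen = new Map();
  const duplicates = [];
  for (const record of records) {
    const key = recordKey(record);
    if (seen.has(key)) {
      duplicates.push({ original: seen.get(key), duplicate: record });
    } else {
      seen.set(key, record);
    }
  }
  return duplicates;
}
```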

As with so many things in life, an external perspective can sometimes identify issues and potential solutions more readily than can be achieved from within. What’s more, that same external view can also bring with it a broad range of relevant experience that simply isn’t available inside the organization. A knowledgeable and seasoned service provider can help to unleash the full potential of master data, and perhaps even provide an end-to-end service, from advice, to implementation, to managing the services.

It’s worth considering. In an increasingly competitive business world, the efficiency of the supply chain is crucial not just to margins but to customer goodwill – and a great supply chain is built on great data.

To learn how Capgemini’s Digital Supply Chain solution can help ride the emotional rollercoaster of procurement, contact: abhishek-bikram.singh@capgemini.com

Abhishek Bikram Singh has over 12 years of industry experience (CPG, Manufacturing, Chemical, Retailers) in managing different supply chain functions. He has worked with clients across industries to define their current MDM maturity with respect to people, process, technology, and governance, and to develop their target operating model.

To learn more about how organizations across consumer products, manufacturing, and retail understand the digital initiatives they are adopting, the benefits they are deriving, and the way they are transforming their supply chains, download the report The Digital Supply Chain’s Missing Link: FOCUS.
