Written by Sarah Conway, August 15, 2011
Most organizations undertake Spend Analysis in order to gain visibility of suppliers, spend and compliance, and it remains a continuing point of attention for CPOs.
Spend analysis tools
Increasingly, tools are becoming available on the market that highlight the ease and speed automation can deliver, but does this change our approach to Spend Analysis? Over the last few years an abundance of vendors has entered the Spend Analysis market, ranging from those for whom the move into data enrichment was a logical step within their respective SRM suites, such as Ariba, Oracle and SAP, to more recent entrants into the Spend Analysis arena such as Emptoris, Zycus and SynerTrade. Our survey findings highlight that both sets of players have a part to play in meeting market demand, whether an organization wishes to focus purely on a one-off spend analysis (e.g. with Zycus) or to link the analysis into end-to-end procurement visibility with an ongoing opportunity to track and manage spend better. Each of these major players can conduct detailed spend analysis; however, our experience indicates that none of them supports an entirely automated process.
Technological and digital automation is increasing in all areas of our lives. Will we ever reach the point where robots and computers do away with the need for humans? As Supply Chain professionals we need to consider what technology and automation mean for us and our roles. Thinking about this in the context of Spend Analysis, we are left with one question: can we ever do away with the need for human intervention when conducting something as fundamental as Spend Analysis?
The Spend Analysis Process
Spend analysis is essentially a tool: the process of collating, cleansing, categorizing and analyzing expenditure information. The end goal of any Spend Analysis should be to provide clear, consistent spend information that can feed into the sourcing process, identify sourcing opportunities, enable benefits tracking and provide visibility for compliance monitoring. As a concept it should be regarded as the foundation of sourcing; however, the process can start from many different data points and leverage sources of widely varying quality. The complexity this drives often results in a highly arduous and time-consuming process. So how do the increasingly sophisticated software tools feed into this process? To make sense of data, various levels of intelligence can be applied, and data analysis tools use a variety of these methods to enhance data through rationalization and consolidation.
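The four stages described above can be sketched in a few lines of Python. This is a minimal illustration only; the supplier names, amounts and the tiny lookup table standing in for categorization rules are all hypothetical, and a real tool applies far richer cleansing and categorization logic.

```python
def cleanse(line):
    # Cleanse: normalize the supplier name (a real tool applies many more rules).
    return {**line, "supplier": line["supplier"].strip().lower()}

def categorize(supplier):
    # Categorize: a hypothetical lookup table; unknown suppliers fall through.
    rules = {"capgemini uk": "Consultancy", "acme supplies": "Office Supplies"}
    return rules.get(supplier, "Uncategorized")

def analyze(raw_lines):
    # Collate -> cleanse -> categorize -> analyze: total spend per category.
    totals = {}
    for line in (cleanse(l) for l in raw_lines):
        category = categorize(line["supplier"])
        totals[category] = totals.get(category, 0.0) + line["amount"]
    return totals

# Hypothetical collated expenditure lines.
spend = [
    {"supplier": " Capgemini UK ", "amount": 120000.0},
    {"supplier": "Acme Supplies", "amount": 4500.0},
    {"supplier": "Unknown Vendor Ltd", "amount": 250.0},
]
print(analyze(spend))
# {'Consultancy': 120000.0, 'Office Supplies': 4500.0, 'Uncategorized': 250.0}
```

Note that anything the rules cannot match lands in "Uncategorized", which is exactly where human intervention begins.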
At a basic level, spend data is extracted straight from the source system and displayed in reports. This is usually the starting point for realizing the need for Spend Analysis, as few organizations have the fundamentals of Master Data so thoroughly embedded that the data can be reviewed immediately and used to guide sourcing decisions, let alone have this done dynamically. Data often sits in numerous systems, so one source system alone will not provide a fully comprehensive picture of the spend landscape. To get the most effective result even at this basic level, multiple extracts from different systems are needed to provide the full data set. So from this very first stage, humans are needed to identify where the data should be gathered from: Accounts Payable, Purchase Orders or elsewhere.
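The point about multiple extracts can be made concrete with a small sketch. The two extracts below are hypothetical stand-ins for Accounts Payable and Purchase Order system exports; the supplier names are invented. The union of suppliers shows why one system alone gives an incomplete picture.

```python
# Hypothetical extracts: each source system sees only part of the spend.
accounts_payable = [
    {"supplier": "Capgemini UK", "amount": 120000.0},
    {"supplier": "Acme Supplies", "amount": 4500.0},
]
purchase_orders = [
    {"supplier": "Acme Supplies", "amount": 4500.0},       # also visible in AP
    {"supplier": "Contoso Logistics", "amount": 78000.0},  # PO-only spend
]

def suppliers_seen(*extracts):
    """Union of suppliers across extracts: one system alone is incomplete."""
    return {line["supplier"] for extract in extracts for line in extract}

print(sorted(suppliers_seen(accounts_payable)))  # AP alone misses the PO-only supplier
print(sorted(suppliers_seen(accounts_payable, purchase_orders)))
```

A human still has to decide which systems to pull from; the code only combines what it is given.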
Technology versus human
On their own, analytics tools cannot interpret the data, so the tools must be programmed and algorithms developed that 'tell' the software how the data should be mapped. To get the most effective result at the next level, the tool must be 'told' which data is synonymous and which is not. For example, some vendors might appear under different names but in reality be the same: Capgemini UK, Capgemini Plc, Capgemini Ltd, CapGem, etc. The data analysis tool uses synonyms and associated probabilities to consolidate and recognize data before presenting it to the user. The same applies to misspelled or mistyped data, which requires either a list of possible alternative spellings or a routine that checks alternative spellings automatically. Whilst some tools can recognize and highlight basic inconsistencies, human intervention is still required: someone must tell the system what the data looks like, where to find it, how to use it, how it could be misspelled, which synonyms are valid (and which flagged synonyms are not) and, ultimately, what rules to apply. That said, it is fair to assume that this type of logic- and business-rule-based application can be built into spend analysis toolsets, but a degree of human involvement is still required.
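A simple version of this synonym-and-probability consolidation can be sketched with Python's standard-library string matcher. The synonym table and the 0.75 threshold are assumptions a human would have to supply and tune; anything that does not score high enough is left untouched for review, which is precisely the human-intervention point described above.

```python
from difflib import SequenceMatcher

# Hypothetical synonym table: each canonical vendor maps to known variants.
# In practice a human must seed and maintain this list.
SYNONYMS = {
    "Capgemini": ["capgemini uk", "capgemini plc", "capgemini ltd", "capgem"],
}

def normalize_vendor(raw, threshold=0.75):
    """Consolidate a raw vendor string onto a canonical name, or leave it
    untouched (for a human to review) when no variant scores high enough."""
    name = raw.strip().lower()
    best_canonical, best_score = None, 0.0
    for canonical, variants in SYNONYMS.items():
        for variant in [canonical.lower()] + variants:
            score = SequenceMatcher(None, name, variant).ratio()
            if score > best_score:
                best_canonical, best_score = canonical, score
    return best_canonical if best_score >= threshold else raw

print(normalize_vendor("CapGem"))          # consolidated to 'Capgemini'
print(normalize_vendor("Capgemini Ltd."))  # minor typo still matches
print(normalize_vendor("Oracle Corp"))     # no match: left for a human
```

The similarity ratio stands in for the "associated probabilities" in the text; misspellings score slightly below 1.0 but usually clear the threshold.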
The next step towards the most effective spend analysis is a tool that can 'learn' how to make sense of data with minimal intervention from the end user. A tool that learns how to analyze data will not give great results at first, and will require the procurement professional to interpret the data and extract the required reports. The tool, however, tracks what the professional does to the data and, as usage progresses, can 'learn' from these actions what to do with it. If the professional has extracted data from two different systems to obtain purchase order and invoice data, the tool can do this automatically next time. If data for two product categories is put together manually, next time the tool can do this automatically. Further improvements to data analysis tools can come from methods like 'fuzzy logic', where the relationship between data is determined by rules that are less clearly defined than conventional ones. Google search is a familiar example: the engine interprets the user's input in multiple ways, including words that sound like, but are not spelled the same way as, the search term. The logic is based on (seemingly) fuzzy rules that at first glance don't make sense but do provide alternative interpretations of data that make sense to the end user.
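In its simplest form, the 'learning' described above is just remembering what the professional did and replaying it. The class below is a hypothetical sketch of that behavior, not any vendor's implementation: the first time a supplier appears the tool has no answer and the human must act; the recorded decision is then applied automatically thereafter.

```python
class LearningTool:
    """Remembers categorization decisions the professional makes so it can
    repeat them automatically the next time the same supplier appears."""

    def __init__(self):
        self._learned = {}  # supplier -> category taught by the user

    def categorize(self, supplier):
        # Returns None when the tool has not yet learned this supplier;
        # at that point the professional must step in.
        return self._learned.get(supplier.strip().lower())

    def observe(self, supplier, category):
        # Record the professional's manual decision for future reuse.
        self._learned[supplier.strip().lower()] = category

tool = LearningTool()
print(tool.categorize("Capgemini UK"))   # None: human intervention needed
tool.observe("Capgemini UK", "Consultancy")  # the professional acts once
print(tool.categorize("capgemini uk"))   # Consultancy, now automatic
```

Note that the human still defines what is learnable: the tool only generalizes across case and whitespace, because that is all it was told to do.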
We have now reached a level where human interaction with a data analysis tool is steadily diminishing. Current technology provides fuzzy logic and learning capabilities, though arguably 'learning' in only a very simple way. Human intervention is still required to tell the software what can be learned.
With more powerful computers and ever-larger databases, data analysis tools using the methods above can seem very advanced, but even 'learning' tools require a person to interpret the output and to monitor for invalid data, or even outright wrong results, that a tool can produce.
All of the aspects highlighted so far assume a minimum level of information, namely supplier name and item categorization, but what of scenarios where the available information is almost non-existent? It may be that the only information is a supplier name and a spend total. Can a tool take this data and provide a reliable, trusted output? Even the providers of spend analytics software would have to agree that this cannot be fully automated. For example, the tool may be programmed to state that Capgemini should be classified as a Consultancy; however, it could also be classified as an IT provider or an Outsourcing organization. At a recent client (a leading CPR Manufacturing company) there were over 3,000,000 lines of vendor and spend detail, all of which had to be categorized. Conversations with several spend analytics providers revealed that their approach would be to categorize manually in the first instance and then use the software to harmonize and cleanse the data, clearly demonstrating the importance of human intervention in the process. There may also be an additional level of analysis needed to re-classify existing information where items have been mis-classified or simply labeled "miscellaneous"; the software can only interpret the data to a finite level before human intervention is needed to validate and confirm the categorization. A viable alternative is a hybrid approach where, depending on the quality of the data, spend analysis tools are used to drive accelerated categorization prior to human intervention.
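The Capgemini ambiguity above suggests one way a hybrid approach can work in practice: let the tool commit only when its rules produce exactly one candidate category, and flag everything else for a human. The rule table below is hypothetical, and this is a sketch of the idea rather than any vendor's method.

```python
# Hypothetical rule set: some suppliers map to more than one plausible category.
CANDIDATE_CATEGORIES = {
    "capgemini": ["Consultancy", "IT Services", "Outsourcing"],
    "acme supplies": ["Office Supplies"],
}

def auto_categorize(supplier):
    """Return (category, needs_review). The tool commits only when exactly
    one candidate exists; ambiguity or no match is flagged for a human."""
    candidates = CANDIDATE_CATEGORIES.get(supplier.strip().lower(), [])
    if len(candidates) == 1:
        return candidates[0], False
    return None, True

print(auto_categorize("Acme Supplies"))  # ('Office Supplies', False)
print(auto_categorize("Capgemini"))      # (None, True): human must decide
```

On a data set of millions of lines, even a modest fraction of flagged rows is a substantial manual workload, which is why the providers quoted above start with manual categorization.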
Uncovering the subtleties of expenditure
The requirement for human validation of data leads onto the importance of understanding what level of data accuracy and information the end user needs in order to meet their sourcing requirements. Is it enough to know how much is spent and with which supplier? Does it need to go down to category level, or to sub-category level? It may be clear that spend with Capgemini is labeled as Consultancy; however, to be useful, the end user may need to know whether the spend is Technical Consultancy, Strategy or Supply Chain, and this supplier is likely to be treated differently across organizations. It is at this point that the human eye is needed to uncover the subtleties of the real expenditure. This can also highlight differences in interpretation: in some organizations the category Consultancy covers any services provided by a third party, be it temporary labor, project management or advisory work; in other industries and organizations 'Consultancy' may be restricted to Management Consultancy. The need for the human eye to uncover these idiosyncrasies leads onto another key aspect of Spend Analysis that requires human intervention: Focus Interviews. All Spend Analysis should be supported by Focus Interviews in order to put the data into context. As identified, every organization is different, and so the application of data rules will vary.
So, in answer to the question: can we ever do away with the need for human intervention in Spend Analysis? It varies by situation, but the amount of human intervention required will depend on the quality of the data, the complexity of the business-rule logic and, ultimately, the nature of the suppliers for the respective organization. Will a procurement professional ever be able to rely wholly on a piece of software to perform a full and proper data analysis without second-guessing the result? Again, this almost seems a rhetorical question, one that can only be answered with an emphatic "No". Whilst the analytical tools on the market really do play a part in accelerating categorization for procurement professionals and can expedite what has historically been a time-consuming process, the systems are not yet advanced or sophisticated enough to completely remove the need for human involvement. Spend Analysis continues to be a tool that enables us to buy better and, importantly, it is the input into the human-driven processes of sourcing, procuring and negotiating.