Our brain is similarly organized: it responds to new concepts through test and learn, leveraging a continuous process of accepting and responding to variation. Our ability to process images and act in real time on what we see, with all its complexity and continuous change, is testimony to this. This capacity to continuously adapt to uncertainty is a manifestation of our intelligence.
The IT and organizational model which underpins our society works from a different premise: we are always trying to eliminate error. Industrialization is built on repeatability, exemplified by Lean and Six Sigma. Our IT platforms are built on a binary foundation, and all our effort is focused on eliminating error. The net result is that when things do go wrong, the consequences can be catastrophic. A great example of this conflict is the idea that AI and cognitive computing are a natural extension of RPA (Robotic Process Automation). RPA focuses on creating lean, repeatable processes that eliminate error. For AI to work, it needs variation and error: it learns what the right answer is by trying alternatives, and then chooses that answer when needed.
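The contrast can be made concrete with a small, hypothetical sketch: an epsilon-greedy bandit only discovers the best of several options because it deliberately tolerates "errors", i.e. exploratory choices that an RPA-style process would be designed to eliminate. The function name, success rates and parameters below are illustrative assumptions, not taken from any particular product.

```python
import random

def epsilon_greedy_bandit(true_rates, epsilon=0.1, trials=5000, seed=42):
    """Learn the best of several noisy options. With probability epsilon
    we 'err' on purpose and try a random option (explore); otherwise we
    use the current best estimate (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)   # running estimate of each option
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))          # explore: accept variation
        else:
            arm = max(range(len(true_rates)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

# Illustrative run: three options whose hidden success rates are 0.2, 0.5, 0.8.
estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

Without the exploratory "errors" (epsilon = 0), the loop would lock onto whichever option it tried first; variation is what lets it find the right answer.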
So what do we do about this? In practical terms, we need to invest in leveraging change and error to our benefit rather than eliminating them. This means sensing, responding and learning as we go: fix before failure, not respond after failure. The DevOps world brings this opportunity to life, but it is often forgotten because there is more focus on Dev than on Ops. To be successful we need to break down organizational boundaries and have a tight feedback loop between the real world as monitored and what was intended or designed. This requires processing multiple sources of sensor data and interpreting the result, and it is here that AI and ML have a serious role. Netflix exemplifies this with its "Chaos Monkey" tool, which deliberately injects failures into the live environment to ensure this feedback loop is always working.
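As a hedged illustration of that sense-and-respond loop (a toy sketch, not Netflix's actual implementation), the code below injects random faults into a stand-in service and shows the feedback a retry loop records about what actually happened versus what was intended:

```python
import random

class FlakyService:
    """Toy stand-in for a production dependency. With probability
    failure_rate each call raises, simulating Chaos-Monkey-style
    injected faults."""
    def __init__(self, failure_rate=0.3, seed=1):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)

    def call(self):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected fault")
        return "ok"

def call_with_retry(service, attempts=5):
    """Sense and respond: retry on failure, recording every observation
    so the gap between intent and reality stays visible."""
    observed = []
    for _ in range(attempts):
        try:
            result = service.call()
            observed.append("success")
            return result, observed
        except ConnectionError:
            observed.append("failure")   # fault detected, loop responds by retrying
    raise RuntimeError("service unavailable after retries")

svc = FlakyService(failure_rate=0.3, seed=1)
result, log = call_with_retry(svc)
```

The `log` is the interesting part: it is the monitored reality that an ML layer could consume, and injecting faults guarantees it is never silently empty.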
At its core, the Internet upon which we all rely was created from a platform designed to accommodate errors and failure. The underlying TCP/IP protocol suite was developed under DARPA to ensure systems could be distributed and continue to work even if nodes were destroyed in an attack. As a result, it works very well across unreliable networks. Unfortunately, as networks have become more and more reliable, we have overlaid applications and services (such as voice) which are ill suited to the core idea of coping with errors. Nonetheless, we have a platform for building distributed services which can be leveraged in an environment of uncertainty and error.
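The underlying idea can be sketched in a few lines: treat loss as normal and retransmit until delivery is acknowledged. This is a deliberately simplified stop-and-wait model, not how TCP is actually implemented, and the loss rate and limits are illustrative assumptions.

```python
import random

def send_reliably(packets, loss_rate=0.4, max_tries=50, seed=7):
    """Stop-and-wait sketch of the TCP idea: every packet may be lost
    on the unreliable link, and loss is handled by retransmitting
    rather than treated as exceptional."""
    rng = random.Random(seed)
    delivered = []
    retransmissions = 0
    for pkt in packets:
        for _attempt in range(max_tries):
            if rng.random() >= loss_rate:   # packet survived the lossy link
                delivered.append(pkt)
                break
            retransmissions += 1            # loss detected (timeout) -> resend
        else:
            raise RuntimeError("link down: packet never delivered")
    return delivered, retransmissions

# Even with 40% simulated loss, all packets arrive in order.
delivered, retransmissions = send_reliably(list(range(5)))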
If we take this model into the business process and decision arena, we see that the bulk of applications of so-called AI are incremental additions to an existing business process. The net result is a slightly faster and more accurate business process aimed at eliminating error: a better mousetrap. Is this intelligence, or just a better way of eliminating error? So do you think big, or evolve from a simple idea while recognising there could be many different answers? As the system models we use move towards services exposed via APIs, we take on an additional overlay of uncertainty as to how an ecosystem of services will perform together.
Perhaps the answer lies in the distributed model and in the word ecosystem. If we assume each service will self-organize (and evolve) to a degree, given its particular inputs, then we need to think about how the overall ecosystem will behave. We would like to think the ecosystem could be managed and directed through some form of "meta service." The danger is that we underestimate the sophistication of each individual service and of how they could combine. Mathematical models are available to describe the possible behaviour of such ecosystems, or one could use Monte Carlo simulations to explore possible behaviours. We will explore some of these ideas in future blogs.
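As a hedged sketch of the Monte Carlo approach (the model, names and parameters are illustrative assumptions, not a validated ecosystem model), one can estimate how often a single fault cascades when failed services shift their load onto the survivors:

```python
import random

def simulate_ecosystem(n_services=5, base_fail=0.01, load_amplification=2.0,
                       runs=10_000, seed=0):
    """Monte Carlo sketch: each failed service shifts load onto the
    survivors, multiplying their failure probability. Returns the
    estimated chance that failures cascade into a majority outage."""
    rng = random.Random(seed)
    cascades = 0
    for _ in range(runs):
        up = [True] * n_services
        p = base_fail
        changed = True
        while changed:                      # keep going while failures propagate
            changed = False
            for i in range(n_services):
                if up[i] and rng.random() < p:
                    up[i] = False
                    p *= load_amplification  # survivors inherit extra load
                    changed = True
        if sum(up) <= n_services // 2:       # majority of services down
            cascades += 1
    return cascades / runs

cascade_rate = simulate_ecosystem()
```

Even this toy model shows the key point: individually rare failures combine non-linearly once services interact, which is exactly what a meta service would have to reason about.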
In summary, our world is full of uncertainty and change, and intelligence thrives on learning from and adapting to the consequences of error and uncertainty. AI and ML give us a model for leveraging errors rather than eliminating them through industrialisation. The consequence, however, is that we will end up with distributed networks and ecosystems of self-organising services. How these services could behave together needs very different thinking to our existing business and IT models.