The following is a guest post from a good colleague and friend, Sam Ceccola, who is effectively the CTO for the USA. I am sure it will prompt some good posts! Andy
I have been reflecting over Christmas on my experiences in the IT community, serving clients who are constantly pushing the envelope, using computer systems to gain intelligence in ways that really drive a business forward. These clients range from those that sell insurance to those that fight the war on terrorism.
There has been an ongoing struggle over the past thirty years over just how intelligent our computer systems can become. For the most part the pragmatists have won out; the perception is that we should run for the hills any time someone even mentions any type of artificial intelligence. I want to discuss the reality of the current situation and propose a different way of looking at the business objectives at hand…
Let’s start with a basic economic principle, one that has governed business for quite some time: the law of diminishing returns, which Wikipedia defines as the point where each additional unit of input yields less value than that unit costs. This basic and well-accepted economic principle is the key to understanding why AI concepts will be a successful part of delivering business innovation with today’s technology. Twenty-five years ago, artificial intelligence was all but destroyed by a series of movies (do you recall WarGames?) that made it seem as if AI concepts were impractical, unaffordable, in fact fictional.
Twenty-five years ago, when computers were unknown to the masses, there was an overwhelming perception that computers were always correct. We would ask a question of a computer and get the correct answer; after all, how could a computer be wrong? Today, of course, this is a standing joke widely used by comedians to get a laugh, and in it we see a very different perception of IT systems. As computer scientists and idealists we tried to engineer systems that were always right, and that took us down the path of needing ever more processing power in an effort to consume every piece of data we had (remember the data warehousing efforts?) and every single permutation of logic.
Pursuing this approach, banking on Moore's Law and ever cheaper processing power, caused the industry to see AI as just an academic R&D exercise, leaving it to pursue the path of creating a single version of the truth where one input equals one output. Today this too is a standing joke, as we now understand the futility of trying to keep pace with the rate at which data, in lots of new forms, is being created.
Back to my earlier point about the law of diminishing returns and the concepts of edge computing. Applied to computing, the law effectively says that there comes a point where the cost of considering one more piece of data, or one more instruction of logic, outweighs the value and impact of considering it. But we need at least one new factor in our definition: cost should also be measured in time. In an increasing number of cases, considering another piece of data adds to the processing time, causing the decision to act to be delayed until it is too late. That is why we have to reconsider our approach to fit the new demands that businesses have for ‘right time’ information.
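To make that concrete, here is a minimal sketch of such a stopping rule in Python. The function names, the halving value curve and the numbers are all my own illustration, not an existing product or API; the point is simply that we stop consuming data once the marginal value drops below the marginal cost, or once the ‘right time’ deadline passes.

```python
import time

def diminishing_returns_scan(items, value_of, cost_per_item, deadline_seconds):
    """Consume data items only while the marginal value of the next item
    exceeds its marginal cost, and never past the business deadline."""
    start = time.monotonic()
    total_value = 0.0
    for n, item in enumerate(items, start=1):
        marginal_value = value_of(item, n)       # value tends to fall as n grows
        if marginal_value <= cost_per_item:      # the law of diminishing returns
            break
        if time.monotonic() - start > deadline_seconds:
            break                                # a late answer is worth little
        total_value += marginal_value
    return total_value

# Illustrative use: each extra item is worth half the previous one,
# so the scan stops long before all 1000 items are read.
value = diminishing_returns_scan(
    items=range(1000),
    value_of=lambda item, n: 100.0 / (2 ** n),
    cost_per_item=0.5,
    deadline_seconds=0.050,    # a 'right time' budget of 50 ms
)
print(value)
```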
If we accept the premise that we do not have to consider every piece of data or every permutation of logic, then we can also accept the premises of edge computing and trust. Further, it allows us to move from the traditional hub-and-spoke architectures (data warehousing) to edge, peer-to-peer and federated models that apply the law of diminishing returns. And the connection with the whole Web 2.0 model, where external data is of as much value as internal, and the explosion in data volumes and types that this produces, reinforces my argument still further.
However, this by itself does not ensure success; we must jump another hurdle. In the past year we have realized that a one-size-fits-all approach does not work. The power of personalization and end-user collaboration on our business results is being demonstrated by the value of Web 2.0 technologies. As we build more intelligent systems, we need to embrace this paradigm shift and not rely on the computing logic itself to define our intelligence: we cannot code every single permutation into the logic, nor can we assume that one generic version fits all. So how do we provide edge services that are generic enough for the masses but specific enough for the individual?
In other words, how do we provide edge services that are relevant to the end user? In the spirit of Web 2.0, let’s allow the users to define the relevance. In my opinion this is best done by providing edge services that support a model-driven architecture, where the model is a vocabulary defined by the end user that captures relevance, relationships and the like: a semantic model, something that some call Web 3.0, arguing that without this approach we will be incapable of making use of the Web 2.0 environment in much the same way that search engines are struggling with Web 1.0.
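As a rough illustration of what a user-defined vocabulary model might look like, here is a small Python sketch. Every name, weight and relationship in it is my own invention, a thumbnail of the idea rather than any real semantic-web standard:

```python
from dataclasses import dataclass, field

@dataclass
class VocabularyModel:
    """A tiny end-user semantic model: important terms, relationships
    between terms, and how much the user trusts each kind of source."""
    terms: dict = field(default_factory=dict)      # term -> importance weight
    related: dict = field(default_factory=dict)    # term -> set of related terms
    trust: dict = field(default_factory=dict)      # domain suffix -> trust factor

    def relevance(self, text: str, source_domain: str) -> float:
        text = text.lower()
        score = sum(w for term, w in self.terms.items() if term in text)
        # Related terms count too, but at half the weight of the term itself.
        for term, weight in self.terms.items():
            score += sum(0.5 * weight
                         for rel in self.related.get(term, ()) if rel in text)
        # Scale by how much this user trusts the source of the text.
        for suffix, factor in self.trust.items():
            if source_domain.endswith(suffix):
                return score * factor
        return score

# Illustrative use with a made-up vocabulary:
model = VocabularyModel(
    terms={"bin laden": 2.0, "financing": 1.0},
    related={"financing": {"hawala", "wire transfer"}},
    trust={".gov": 3.0, ".mil": 3.0},
)
print(model.relevance("Report on Bin Laden financing via hawala", "www.state.gov"))
```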
Let’s look at an example: I need to find information on the Internet about a person, in order to determine whether this person has a relationship with Osama Bin Laden. I might first decide to use Google for that search. Google defines relevancy by the number of links to a web page, so the search may return 120 million results, with the ones near the top of the list the most relevant by Google’s definition. However, what is most relevant in Google’s terms may not be most relevant in mine: Google provides what is most relevant to the masses, not to the specifics of the individual’s context.
Again, the answer lies in another increasingly popular and widely understood concept: the Long Tail. If we apply the economics of the long tail, the results I am looking for are in the long tail. I should also define relevancy by trust, and for this search, involving a known terrorist, the web sites ending in .mil or .gov are the most relevant. If I could apply a model of relevancy to that same search, so that what is most relevant to me, not just to the search engine, appears at the top of the result set, the facts would be available quicker and in a better structure, and I would be able to make my decision quicker.
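Sticking with the same search example, a sketch of that trust-based re-ranking might look like the following. The trust weights, URLs and result format are hypothetical, chosen only to show the mechanics:

```python
from urllib.parse import urlparse

# Hypothetical trust weights a user's model might assign to domain suffixes.
TRUST = {".mil": 3.0, ".gov": 3.0}

def rerank(results, trust=TRUST):
    """Re-order a search engine's results by the user's trust model.
    Each result is a (url, engine_score) pair."""
    def personal_score(result):
        url, engine_score = result
        host = urlparse(url).hostname or ""
        weight = next((w for suffix, w in trust.items()
                       if host.endswith(suffix)), 1.0)
        return engine_score * weight
    return sorted(results, key=personal_score, reverse=True)

# Illustrative use with made-up results: the .gov and .mil pages,
# buried by the engine's own ranking, rise to the top for this user.
results = [
    ("https://example.com/profile", 0.9),
    ("https://www.state.gov/report", 0.6),
    ("https://archive.mil/briefing", 0.5),
]
print(rerank(results))
```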
So my conclusion is that we are, and will continue to be, driven towards a change in our approach; call it AI, semantics, or whatever, we need it. It is driven partly by scaling issues, but increasingly by time and by the need for contextual understanding. We can afford neither to wait for, nor to pay for, a 100% correct analysis (and in all probability the time taken to handle the task would mean new data had been created, so the result could never be 100% accurate anyway) when an 80% solution will answer the business need. The pieces are pretty much there today, and you can expect to see new players offering products that combine semantics (vocabulary modeling), artificial intelligence and the law of diminishing returns as industry-specific, business-driven solutions.
That is why the industry, whether in military systems, finance systems or healthcare systems, is now moving towards the ‘Intelligent Enterprise’. The new business intelligence is not the static, stale data that resides in the BI systems of today; it is the federated, real-time, model-driven systems of the future.
-Sam Ceccola-