Nowadays, the world accosts us with a veritable tsunami of stimuli, a wall of information that constantly demands our attention. It’s too much to keep up with, let alone remember. This overload leaves our amygdala overheated and stressed. The amygdala is a small part of the brain, close to the brain stem, that evolved as our survival mechanism: it saved us from the tiger in the bushes by making us run first and ask questions later, rather than the other way around, when it would already have been too late. Nowadays, however, this mechanism fails us, because almost everything around us sends out stimuli.
Because of this tsunami of stimuli, we have to be more selective. We have to funnel all the information flooding our brains. We have to filter. Do you know what is best suited for this? Artificial intelligence (AI). As the information tsunami will only continue to swell, we need AI to surface only the convenient information – convenient as in what serves our interests and reflects our needs. This filtering can be highly personalized or blended across a group.
Thanks to the Cambrian explosion of data, near-infinite computing power, and the fact that everything is getting connected, algorithms are in their element – machine learning and deep learning, for example. AI can now form neural networks, unlocking some awesome capabilities. To see some examples, check:
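At its core, a neural network is built from layers of simple trainable units. As a purely illustrative sketch (not from the original post, and with made-up training data), here is a single artificial neuron – a perceptron, the basic building block of such networks – learning the logical AND function from examples:

```python
def step(x):
    """Activation function: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0


def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single neuron with the classic perceptron learning rule.

    Each time the neuron predicts wrongly, its weights are nudged
    toward the correct answer. Weights start at zero, so the run
    is deterministic.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Nudge weights and bias proportionally to the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b


# The AND function: output 1 only when both inputs are 1.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = train_perceptron(and_data)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data]
print(preds)  # → [0, 0, 0, 1]
```

A deep learning system stacks thousands or millions of such units into layers, which is what lets it recognize faces or translate language rather than just compute AND.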
A common objection to this type of artificial neural network is the black-box effect. Are algorithms going to determine everything for us instead of supporting us? Will we wind up in a Matrix-style world where everything is determined by algorithms, or can we actually put AI to good use? Note that biochemistry is also a set of algorithms; since organisms run on biochemistry, organisms are algorithms too. What will this mean for designer species? We may be the last Homo sapiens. In fact, we may already be transhumans, moving from Homo sapiens to another kind of species.
Please be aware that the human influence on AI is bigger than we currently foresee. AI is designed and/or trained by humans (depending on the type of AI used). Although these humans may not intend to discriminate or actively build in a preference, they unwittingly make choices shaped by their upbringing, principles, and culture. And that is what the AI learns from. Remember the predictive policing trial run on all (American) Senate members? The AI identified a few senators who, according to its algorithm, should be arrested.
Currently, AI fails to bring us truly honest, sincere information. Most AI is built so that the information it displays captures our attention. This creates filter bubbles, and these bubbles create an uneven distribution of information. A Muslim and a Christian once swapped their Facebook accounts for two weeks. They were shocked at the information the other received about their respective religions: much of it wasn’t true at all, some was only partly true, and just a small portion was really true. Algorithms help keep things manageable; they can filter very well. The next step is to ensure that we are happy with the results – that the filter is sincerely personalized.
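The filtering mechanism itself is simple. As a minimal sketch (with hypothetical articles, tags, and a made-up scoring rule – real recommender systems are far more sophisticated), here is how ranking items by overlap with a user’s recorded interests keeps a feed manageable, and also how items outside those interests quietly disappear, producing a filter bubble:

```python
def score(item_tags, user_interests):
    """Count how many of an item's tags match the user's interests."""
    return len(set(item_tags) & set(user_interests))


def personalized_feed(items, user_interests, top_n=3):
    """Return the titles of the top_n items ranked by interest overlap.

    Items that never match any interest are pushed out of view –
    the mechanism behind a filter bubble.
    """
    ranked = sorted(
        items,
        key=lambda item: score(item["tags"], user_interests),
        reverse=True,
    )
    return [item["title"] for item in ranked[:top_n]]


# Hypothetical example data.
articles = [
    {"title": "Local sports results", "tags": ["sports"]},
    {"title": "New AI regulation", "tags": ["ai", "politics"]},
    {"title": "Deep learning tutorial", "tags": ["ai", "tech"]},
    {"title": "Election analysis", "tags": ["politics"]},
]

feed = personalized_feed(articles, user_interests=["ai", "tech"], top_n=2)
print(feed)  # → ['Deep learning tutorial', 'New AI regulation']
```

Note that the sports and election articles never surface for this user, however important they might be – which is exactly the uneven distribution of information described above.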
Do you want an extended trend reading or an interactive session on this topic? Reach out to us for more information, training, or to arrange a workshop.
This blog is part of the “Re-envisage” series.
Check the others here: