The boundaries between machine and human appear to be blurring. Distinctly human characteristics – adaptability, creativity, emotional awareness, and abstract thinking – are being built into new technologies at an accelerating rate. Mechanical tools, once entirely separate from ourselves, have morphed into constant digital companions like the smartphone, becoming an ever more intimate part of our everyday lives. For some, with advances in prosthetics, wearables, and experimental implants, technology has become an intimate part of their very bodies.
When combined with the masses of data each of us generates, are we not already becoming cyborgs of sorts? And could these machines, now being imbued with a growing capacity to mimic human abilities, become more human-like? What might happen should the development of human and machine converge?
The thought might not be as far-fetched as it first seems. Here are some reasons why:
- Ideas once ridiculed – humans flying, for example – are now taken for granted.
- Are you not already a kind of cyborg, enhanced by a powerful ever-present smartphone?
- Technological advancement has been accelerating exponentially over the years.
- Artificial intelligence has the potential to assist us in solving some of our more complex problems, removing barriers to progress and accelerating our technological advancement even further.
I would like to illustrate this with a chronological timeline. Dates when some biological aspect formed part of the technology are indicated in red. I wonder what conclusion you will reach.
150–100 BC The Antikythera mechanism is built. It’s believed to be a kind of mechanical analogue computer used to calculate the movements of stars and planets.
1822 The construction of the Difference Engine – a mechanical calculator – is proposed. That’s nearly 2,000 years after the Antikythera mechanism!
1940 Project Pigeon attempts to develop a pigeon-guided missile by training the birds to peck at targets shown on a screen inside the missile. The project is cancelled in 1953, once electronic guidance systems have proven reliable.
1959 The world’s first artificial neural network, named MADALINE, is applied to a real-world problem. A neural network is a type of machine learning system loosely modelled on the human brain.
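MADALINE was built from adaptive linear neurons (ADALINEs) trained with the least-mean-squares rule. As an illustration only – not Widrow’s original implementation, and the AND task and parameter values below are my own invented example – a single such unit can be sketched in a few lines of Python:

```python
# A single adaptive linear neuron (ADALINE), the building block of
# MADALINE networks. Illustrative sketch only.

def train_adaline(samples, lr=0.1, epochs=100):
    """Learn weights with the least-mean-squares (LMS) rule."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            output = w[0] * x[0] + w[1] * x[1] + b  # linear activation
            error = target - output                 # LMS error signal
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    # Threshold the linear output to get a binary decision.
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Teach the neuron the logical AND function (targets are -1/+1).
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_adaline(data)
```

The “learning” here is nothing more than nudging weights to reduce the error on each example – yet stack enough such units together and you get the ancestors of today’s deep networks.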
1965 One of the earliest brain-computer interfaces appears, using electroencephalogram (EEG) signals from a musician’s brain to trigger acoustic percussion instruments.
1997 IBM’s Deep Blue® beats the world chess champion by searching and evaluating hundreds of millions of possible positions per second. It’s a crushing defeat in an arena long thought to be inaccessible to machines.
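Deep Blue’s engine was far more sophisticated (pruning, custom chess hardware, a handcrafted evaluation function), but the core idea of exhaustive game-tree search is minimax, which fits in a few lines. The tiny tree below is my own invented example, not a chess position:

```python
# Minimal sketch of minimax game-tree search, the core idea behind
# engines like Deep Blue. Toy example only.

def minimax(node, maximizing):
    # Leaves are numbers: a score from the maximizing player's viewpoint.
    if isinstance(node, (int, float)):
        return node
    # Internal nodes are lists of child positions; players alternate turns.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-built game tree: lists are decision points, numbers are
# final position evaluations.
tree = [[3, [5, 1]], [[6, 0], 7]]
value = minimax(tree, maximizing=True)  # best achievable score: 6
```

Each player assumes the other plays perfectly: one maximizes the score, the other minimizes it, all the way down the tree. Chess simply has an astronomically larger tree, which is why Deep Blue needed purpose-built hardware.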
2002 A researcher controls a robotic arm using a micro-electrode implanted in his wrist. He’s also able to create an artificial sensation in another person implanted with a similar device.
2003 Paro, a robotic baby harp seal, appears in Europe and the UK as a therapeutic aid for those hospitalized or in care. The robot responds to touch and sound and learns to behave in the way its user prefers.
2004 An organic neural network of rat brain neurons is grown on a 60-electrode array in a petri dish. It’s linked to a jet flight simulator and learns to maintain a level flight path autonomously.
2008 Scientists at Reading University build a rudimentary robot without a microprocessor for its control, depending instead entirely on a rat embryo’s brain cells grown on a 128-electrode array.
2011 IBM Watson® beats two of the most successful Jeopardy! players of all time, using software able to process and reason about natural language while drawing on a massive store of information compiled before the competition.
2011 OpenWorm begins its attempt to create the world’s first whole digital animal – a digital twin of the nematode C. elegans – by modelling its complex neuronal activity in software.
2014 A paralyzed man regains the ability to move his hand and fingers with his own thoughts. Using neural bypass technology, an electrode array implanted on the motor cortex of his brain routes his intentions to stimulators on the muscles of his hand.
2014 A prototype of Pepper the robot is built. The technology develops to include an emotion engine that lets Pepper recognize people’s emotions by analyzing their speech, facial expressions, and body language, and then deliver appropriate responses.
2015 OpenWorm begins research into a biological robot body for its digital (twin) animal. It would then no longer be a truly digital entity.
2015 A software design company and a media company collaborate on an ambitious project to have an AI design a car. The AI – based on a generative algorithm – designs the Hack Rod, a prototype chassis unlike anything a human engineer would likely have drawn.
2016 AlphaGo, a deep-neural-network artificial intelligence, beats the reigning European Go champion 5–0, producing highly inventive winning moves of its own and teaching the world new knowledge about the game.
2017 An “emotional chatting machine” – a chatbot built by a Chinese team developing emotionally sophisticated software – provides coherent answers imbued with emotions such as happiness, sadness, or disgust.
2017 AlphaZero learns to play the complex game of Go starting from completely random play, then playing a vast number of games against itself. The same algorithm also masters chess and shogi.
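As a toy illustration of the self-play idea – vastly simpler than AlphaZero, which combines deep networks with Monte Carlo tree search – a program can learn a game from nothing but games against itself. Everything below (the game of single-pile Nim, the parameters) is my own example, not DeepMind’s code:

```python
# Tabular self-play learning for single-pile Nim: each turn a player
# removes 1-3 stones; whoever takes the last stone wins. The agent
# starts with zero knowledge and improves only by playing itself.
import random

def train(pile_size=10, episodes=20000, epsilon=0.2):
    Q = {}  # Q[(stones_left, move)] = average result for the mover
    N = {}  # visit counts, for incremental averaging
    for _ in range(episodes):
        stones = pile_size
        history = []  # (state, move) pairs, players alternating
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < epsilon:  # explore occasionally
                move = random.choice(moves)
            else:                          # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins. Walking backwards,
        # the reward alternates sign: +1 for the winner's moves, -1
        # for the loser's.
        reward = 1.0
        for state, move in reversed(history):
            n = N.get((state, move), 0) + 1
            N[(state, move)] = n
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + (reward - old) / n
            reward = -reward
    return Q

def best_move(Q, stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

random.seed(0)
Q = train()
```

After a few thousand self-play games the table converges on the known optimal strategy for Nim: always leave your opponent a multiple of four stones. No human example is ever shown to the program – the same principle, scaled up enormously, is what let AlphaZero surpass human play.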
2017 “Sophia”, a robot built to closely resemble a human in appearance, conversational capability, and ability to mimic human emotions, is granted citizenship – a human right – in Saudi Arabia.
2017 Neuralink launches with the long-term aim of enhancing humans by implanting tiny electrodes into the brain – a brain-computer interface – to effectively merge human minds with artificial intelligence systems.
2018 DeepMind begins research to develop abstract reasoning for AI inspired by human IQ tests.
2018 IBM builds a mixed hardware–software neural network, pairing software neural networks with circuitry on silicon chips that closely resembles neurons. It raises the question of whether analogue computation could work for deep neural networks, moving away from digital computing.
2018 CTRL-Labs unveils CTRL-Kit, a wireless, non-invasive electromyography device in a wristband, which translates neural signals so you can control devices with a thought and a twitch of a muscle.
2019 Google’s DeepMind team develops an AI agent that defeats human players in a modified version of the first-person game Quake III Arena, using reinforcement learning agents trained by playing, without human examples.
2019 Microsoft invests $1 billion in OpenAI with the intent of building artificial general intelligence – in layman’s terms, human-level or “strong” AI that can understand and reason about its environment as a human would.
In 2019, the artificial intelligence employed in civil society is level-one narrow AI, which is great at specific tasks but cannot yet be applied beyond the purpose it was built for; so AlphaZero is awesome at games such as chess and Go but cannot drive a car. Humankind is versatile and adaptable but normally slower to reach conclusions from complex data sets, and prone to fatigue and, well, human error. As technology becomes ever more capable, pervasive, and intimately entwined with our lives, the future could perhaps be so much more than just digital. The relationship, for now, appears to be a collaborative one.
Bringing people and technology together is part of the ongoing professional service we provide at Sogeti (part of Capgemini). The developments in technology and their effects on business are something we all feel, and many organizations are already in conversation with us for help. Send us a message, a tweet, or an email and we can discuss how we could bring that experience to benefit your organization too.