Of the many hot topics at the moment, anything related to artificial intelligence is obviously right up there. Like the holy grail, AI has always been just beyond our reach. But technology is rapidly approaching the threshold beyond which it becomes easy to confuse artificial logic with human intelligence. This leads to ethical discussions about human rights for digital beings, but comes no closer to actually defining these digital beings. The Saudi Arabian government's decision to grant citizenship, and with it certain rights, to a robot named Sophia is perhaps one of the most significant statements on this issue. But what does that really mean?
Humans are good at seeing human faces and human traits in even the simplest features. We immediately attribute emotions and characteristics when we see a human face. Two dots and a curved line are instantly recognized as an emoji, and the direction of the curve is instantly recognized as happy or sad. Cars look friendly, aggressive, happy, or sad depending on how their headlights and grille are arranged. Designers know this, and have been leveraging these human tendencies very effectively for a long time.
This anthropomorphization also applies to bots. Sophia is effectively a mechanized mannequin with a sophisticated chatbot behind it that runs somewhere on a cloud platform. This, in turn, calls on web-based services for whatever response is needed. So, what exactly in this contraption is the recipient of those rights: the physical doll, the machine code orchestrating the mechanical movements, the machine code calling the services, the services in general, or any service that Sophia provides in particular?
Anthropomorphization is a powerful tool for improving user interaction. The anime-style eyes of Pepper, SoftBank Robotics' humanoid robot, are immensely endearing to most people. The soft voices of Alexa and Siri provide an appealing conversational basis between human and machine. The launch of Google Duplex demonstrated how naturally its human counterparts accept the machine.
Humans are extremely gullible in attributing human traits based on simple tricks: eyes, a voice, a tactical pause in conversation, and we are sold. From my perspective, it is critical to separate any digital agent into two logical parts: the highly anthropomorphized user interface, and the digital logic driving the interaction. We must not mistake human appearance for human characteristics such as morality, ethics, emotions, and human logic. Alexa isn't your shopping buddy, and a self-driving car is not evil for running over a pedestrian.
The rapidly developing intelligence in machines might appear human because we are wired to believe it is. That said, regardless of how it is built, digital intelligence will always be vastly more alien than the intelligence of primates or even cephalopods. I believe we should not strive to replicate humans. We should strive to build entities that are exceptionally good at what they were intended to do: drive cars better than humans ever could, make coffee for us before we know we want it, filter through vast amounts of data and find patterns we can't even begin to grasp. And yes, all of this should happen through a humanoid interface, because it is the one form we immediately accept as a credible counterpart.
So tell me, how much of a friend is Alexa to you, really?