Millions of sign-language users can’t communicate with your organization

Capgemini
2020-09-30

Many sign-language users do not know any spoken languages, especially if they have been deaf since birth – they can't speak them and they can't write them. In many cases, giving a sign-language user a piece of paper and a pen is about as helpful as asking me to communicate using Chinese characters.

It’s a common mistake to assume that a sign language is just a way of communicating in French or English or some other language using hand gestures instead of sounds or written characters. In fact, sign languages are entirely different to spoken languages. A person able to communicate in French Sign Language, for example, does not automatically also know how to communicate in spoken or written French.

The sound of one hand clapping

There are no reliable statistics for the number of sign-language users globally, but extrapolating from the number of people born deaf suggests hundreds of millions of people use a sign language as their first and often only language.

Organizations can overcome spoken language barriers by providing written translations or automated text translation, but there is no such shortcut for sign languages. Right now, organizations that routinely provide multi-language services have no way of providing the same convenience for sign-language users.

We are not the first to notice this deficiency. Many attempts have been made over the years to automatically translate sign languages into spoken languages using gloves that sense sign-language users’ finger movements.

The problem with this approach is that it only works with a specialized part of sign languages called manual alphabets. Manual alphabets are used to spell out words from spoken languages in much the same way that anyone who knows the Latin alphabet can spell out a word in any language that uses that alphabet, even if they don't speak it.

Just because a sign-language user can spell a word doesn't mean he or she understands it, in the same way that an English speaker can spell out a word in Spanish without knowing what it means.

The bigger picture

Sign languages commonly include hand, arm, head, shoulder, torso, and facial movements as part of their articulation. A sign-language translation system that only looks at hand movements would be like a spoken-language translator that only translates nouns – you might get a broad sense of the communication, but you would miss a lot of the meaning.

This is why visual recognition systems are now being used in sign-language interpretation. Properly trained AI-based software can identify the hand and body movements of a sign-language user and translate them into a spoken language. We have built a solution that does exactly this, enabling more people to communicate seamlessly in retail businesses, schools, hospitals, and a wide range of other public places.

The solution consists of in-store kiosks equipped with video cameras and screens. The recognition platform, based on TensorFlow and PoseNet technologies, interprets the user's signs into text or audio for a remote or in-store customer service agent. The agent's reply can be in signs, text, or audio.
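To make that concrete, here is a minimal sketch of the general approach, assuming a PoseNet-style model has already reduced each video frame to a set of body keypoints: a small TensorFlow sequence classifier maps a window of keypoint frames to a sign gloss (a label for a single sign). The window size, keypoint count, and vocabulary size are illustrative placeholders, not the parameters of our production system.

```python
import numpy as np
import tensorflow as tf

# Illustrative dimensions only -- not the production system's parameters.
FRAMES_PER_CLIP = 48       # sliding window of video frames covering one sign
KEYPOINTS = 17             # PoseNet-style body keypoints detected per frame
FEATURES = KEYPOINTS * 2   # flattened (x, y) coordinates per frame
NUM_GLOSSES = 100          # size of a hypothetical sign vocabulary

# A small sequence classifier: pose keypoints per frame in, sign gloss out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(FRAMES_PER_CLIP, FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),           # skip zero-padded frames
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_GLOSSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# In the kiosk, each clip would hold the keypoints PoseNet extracts from the
# camera feed; random data stands in here so the sketch runs on its own.
clip = np.random.rand(1, FRAMES_PER_CLIP, FEATURES).astype("float32")
probabilities = model.predict(clip)
print("predicted gloss id:", int(np.argmax(probabilities)))
```

Recognizing individual glosses is only the first step, of course: assembling them into fluent sentences is a sequence-to-sequence translation problem in its own right.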

Sign up

It's important to note that translating from a sign language to a spoken language is just as difficult as translating from one spoken language to another, if not more so. We cannot claim to have built a solution that allows sign-language users to communicate flawlessly with non-sign-language users, but we have built one that makes a good level of communication possible in situations where it would otherwise often be impossible.

Any business that recognizes the competitive importance of giving their customers the best experience possible should also recognize that this applies to all their customers, not just the ones who know spoken languages. If your business is committed to inclusivity, get in touch with me and we can talk about building a hassle-free experience for every one of your customers.

For more information, please contact Robert Engels

Authors


Rohit Saproo
Head of AI – CPRD Scandinavia

An experienced Delivery Director with a history of working in the information technology, retail, ecommerce, banking, and services industries. Current Head of AI – CPRD in Scandinavia.
His core skills are focused on engagement management, account management, stakeholder management, business growth, innovation, and enterprise architecture.
His experience covers all aspects of managing ecommerce businesses and digital customer interactions.
Rohit is also responsible for managing the AI portfolio for Capgemini Scandinavia, with a focus on Consumer Products, Retail and Distribution. His responsibilities include setting up the AI chapter and creating and driving sales pipelines, covering lead generation, the full sales cycle, and eventual handover to delivery teams. He is also responsible for defining AI, insights, and data strategy for AI adoption within Capgemini and client organizations.


Robert Engels
CTO Europe I&D

Robert has a long-standing and deep interest in topics and tangible things related to machine learning and artificial intelligence. His wider interests include semantics, knowledge representation, reasoning, and machine learning (in all its different colours and shapes), and putting it all together in more (or less) intelligent ways.
Where technology meets people, a background in cognitive psychology comes in handy. That's where the fun starts, and that's where he wants to be: utilizing, explaining, producing, and creating scenarios, solutions, and understanding for new challenges and situations where AI and ML come around the corner.
Robert holds a PhD in Machine Learning from the Technical University of Karlsruhe (now KIT). He is a regular keynote speaker and has published articles on various topics in artificial intelligence, machine learning, semantic web technology, information representation, knowledge management, and computational linguistics.