The evolution of input devices in personal computing has been well documented: from the dark days of punch cards, through the era of the QWERTY keyboard, to the current age of multitouch and voice recognition in our pockets. The jury is still out on whether any one of the emerging input technologies (gesture, eye tracking or EEG neural control) will dominate, or whether we will find ourselves in a mixed economy.
While the race for dominance in personal computing continues, it is becoming harder and harder for the enterprise to keep up with user expectations. These technologies may be the darlings of the world's innovation labs, but we have seen limited enterprise adoption outside a few niche brand experiences.
In this blog we will focus on depth recognition and gesture technologies within the enterprise. I have recently been investigating the Intel RealSense camera. The combination of three different cameras and two microphones gives the device the ability to sense depth, detect gestures and recognise faces.
Currently you can find this device built into personal computers or get your hands on the standalone developer version. With an OEM version expected soon, now is the time for organisations to think about the role this technology could play in their enterprises.
Here are a few use cases that jump to mind:
- Training for hazardous machinery. For jobs that involve a manual operator controlling a piece of machinery in a dangerous environment, this technology would make it possible to practise in a virtual environment, reducing both training cost and risk.
- Testing that components fit. Because a RealSense device captures depth alongside the image, it is possible to accurately measure the distance between two points within a photo. This could be helpful for leak detection, identifying construction flaws, or reducing the need to bring engineers to site to diagnose faults.
- Extending the range in space-constrained branches, e.g. large white goods in high-street stores. A large-format display could show images of the stock, and a RealSense camera that identifies the demographic and age range of the approaching customer could select the most suitable product set to display. The camera could detect a swipe gesture so the customer can browse the complete range, while an additional camera trained on the face could register when the customer turns away and which products were preferred. The analytics generated by this activity could then be used to improve the range for future customers.
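To make the measurement use case above concrete, here is a minimal sketch of the underlying geometry. It is not RealSense SDK code; it assumes a simple pinhole camera model with hypothetical intrinsics (`FX`, `FY`, `CX`, `CY` are illustrative values; a real device supplies them from its calibration data). Given two pixels and the depth reading at each, we can deproject them to 3D points and take the straight-line distance between them:

```python
import math

def deproject(px, py, depth_m, fx, fy, cx, cy):
    # Pinhole model: map a pixel (px, py) plus its depth reading (metres)
    # to a 3D point in camera space.
    x = (px - cx) / fx * depth_m
    y = (py - cy) / fy * depth_m
    return (x, y, depth_m)

# Hypothetical intrinsics for illustration only; a real camera reports
# its own focal lengths (fx, fy) and principal point (cx, cy).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

# Two pixels 100 px apart horizontally, both read at 1.00 m depth.
a = deproject(300, 240, 1.00, FX, FY, CX, CY)
b = deproject(400, 240, 1.00, FX, FY, CX, CY)

gap = math.dist(a, b)  # ~0.163 m between the two points
```

The same two-pixel measurement on an ordinary photo is impossible without knowing the scene scale; the depth channel is what turns pixel separation into a physical length.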
As consumer adoption grows, it will be interesting to see which corporate use cases are deployed first. Intel looks set to continue developing this technology, adding further natural-interaction capabilities as well as facial-recognition security. Within Capgemini's Applied Innovation Exchange we will be experimenting with the use cases above, and we look forward to seeing the responses of our test groups.