Scientists From NTU Singapore Develop AI System For High-Precision Recognition of Hand Gestures

NTU Singapore AI System recognizing hand gestures
Image credit: NTU Singapore

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an Artificial Intelligence (AI) system that recognizes hand gestures by combining skin-like electronics with computer vision.

The team, comprising scientists from NTU Singapore and the University of Technology Sydney (UTS), published its findings in the scientific journal Nature Electronics in June.

The recognition of human hand gestures by AI systems has been a valuable development over the last decade and has been adopted in high-precision surgical robots, health monitoring equipment, and gaming systems, reports NTU Singapore.

AI gesture recognition systems that were initially visual-only have been improved upon by integrating inputs from wearable sensors, an approach known as ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as ‘somatosensation’.

However, gesture recognition precision is still hampered by the low quality of data arriving from wearable sensors, typically because of their bulkiness and poor contact with the user, as well as by visual obstructions and poor lighting. Further challenges arise from integrating the visual and sensory data, as they form mismatched datasets that must be processed separately and merged only at the end, which is inefficient and leads to slower response times.

To tackle these challenges, the NTU team created a ‘bio-inspired’ data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, and an AI approach that resembles the way skin sensing and vision are handled together in the brain.

The NTU scientists developed their bio-inspired AI system by combining three neural network approaches in one system: a ‘convolutional neural network’, a machine learning method for early visual processing; a multilayer neural network for early somatosensory information processing; and a ‘sparse neural network’ to ‘fuse’ the visual and somatosensory information together.
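To make the three-branch idea concrete, here is a minimal illustrative sketch in Python using PyTorch. This is not the authors’ implementation: the class name, layer sizes, input shapes, and the use of weight pruning to stand in for the ‘sparse’ fusion network are all assumptions made for illustration only.

```python
# Illustrative sketch only (not the NTU team's code): a CNN branch for vision,
# an MLP branch for strain-sensor signals, and a pruned (sparse-ish) fusion head.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class BioinspiredFusionNet(nn.Module):  # hypothetical name
    def __init__(self, num_gestures: int = 10, sensor_channels: int = 5):
        super().__init__()
        # Visual branch: a small CNN for early processing of camera frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32 visual features
        )
        # Somatosensory branch: a multilayer perceptron for strain-sensor readings.
        self.somatosensory = nn.Sequential(
            nn.Linear(sensor_channels, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),  # -> 32 sensor features
        )
        # Fusion head: a linear classifier over the concatenated features.
        self.fusion = nn.Linear(32 + 32, num_gestures)
        # Crudely mimic a 'sparse neural network' by pruning 80% of fusion weights
        # (an assumption; the paper's sparse fusion method is more sophisticated).
        prune.l1_unstructured(self.fusion, name="weight", amount=0.8)

    def forward(self, frame: torch.Tensor, strain: torch.Tensor) -> torch.Tensor:
        v = self.vision(frame)          # (batch, 32) visual features
        s = self.somatosensory(strain)  # (batch, 32) somatosensory features
        return self.fusion(torch.cat([v, s], dim=1))  # gesture logits

# Example usage with dummy inputs: one RGB frame plus five strain readings.
model = BioinspiredFusionNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 5))
print(logits.shape)  # torch.Size([1, 10])
```

The point of the sketch is the architecture: each modality gets its own early-processing network, and fusion happens inside the model rather than by merging two separately produced predictions at the end.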

Scientists from NTU Singapore
Image credit: NTU Singapore

The result is a system that can recognize human gestures more accurately and efficiently than existing methods.

Lead author of the study, Professor Chen Xiaodong, from the School of Materials Science and Engineering at NTU, said, “Our data fusion architecture has its own unique bioinspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”

“Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach onto the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” added Prof Chen, who is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU.

To capture reliable sensory data from hand gestures, the research team fabricated a transparent, stretchable strain sensor that adheres to the skin but cannot be seen in camera images.

As a proof of concept, the team tested their bio-inspired AI system using a robot controlled through hand gestures, guiding it through a maze.

Results showed that hand gesture recognition powered by the bio-inspired AI system was able to guide the robot through the maze with zero errors, compared to six recognition errors made by a visual-based recognition system.

The NTU research team is now looking to build a VR and AR system based on its AI gesture recognition system, for use in areas where high-precision recognition and control are desired, such as entertainment technologies and rehabilitation in the home.
