Researchers at Seoul National University (SNU) and KAIST have developed a soft wearable hand robot that helps people with disabilities use their hands, combining a machine learning algorithm with sensory hardware.
Professor Sungho Jo of KAIST and Professor Kyu-Jin Cho of SNU developed the robot at the Soft Robotics Research Center (SRRC) in Seoul, Korea. The wearable device predicts grasping and releasing intentions from user behavior, enabling spinal cord injury (SCI) patients who have lost hand mobility to pick up and place objects.
The method is based on a machine learning algorithm that predicts user intentions for wearable hand robots by utilizing a first-person-view camera, reports SNU.
The machine learning model used in the study is called Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net). It is built on the hypothesis that user intentions can be inferred from the user's arm behaviors and hand-object interactions.
VIDEO-Net is composed of spatial and temporal sub-networks: the temporal sub-network recognizes user arm behaviors, and the spatial sub-network recognizes hand-object interactions.
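To make the two-stream idea concrete, here is a minimal sketch of such an architecture in PyTorch. The layer sizes, input shapes, class labels, and late-fusion scheme are illustrative assumptions, not the published VIDEO-Net design; it only shows how a spatial stream (single egocentric frame) and a temporal stream (short motion clip) can be combined into one intention prediction.

```python
import torch
import torch.nn as nn

class TwoStreamIntentionNet(nn.Module):
    """Illustrative two-stream model: a spatial stream for hand-object
    appearance in a single frame, and a temporal stream for arm-motion
    history. Hypothetical sketch, not the authors' exact network."""

    def __init__(self, num_classes=3):  # e.g. grasp / release / no action (assumed labels)
        super().__init__()
        # Spatial stream: encodes one egocentric RGB frame (hand-object interaction)
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal stream: encodes a short stack of motion (e.g. optical-flow) frames (arm behavior)
        self.temporal = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=(3, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Late fusion of the two streams into a single intention prediction
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, frame, motion_clip):
        # frame: (B, 3, H, W) RGB frame; motion_clip: (B, 2, T, H, W) motion stack
        features = torch.cat([self.spatial(frame), self.temporal(motion_clip)], dim=1)
        return self.classifier(features)
```

In this sketch the streams are fused only at the classifier, a common and simple choice for two-stream video models; the actual VIDEO-Net may fuse its sub-networks differently.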
“This technology aims to predict user intentions, specifically grasping and releasing intent toward a target object, by utilizing a first-person-view camera mounted on glasses. (Something like Google Glass could be used in the future.) VIDEO-Net, a deep learning-based algorithm, is devised to predict user intentions from the camera based on user arm behaviors and hand-object interactions. With vision, the environment and the human movement data are captured and used to train the machine learning algorithm,” said Professor Kyu-Jin Cho.
“Instead of using bio-signals, which are often used for intention detection in people with disabilities, we use a simple camera to find out the intention of the user: whether the person is trying to grasp or not. This works because the target users are able to move their arm, but not their hands. We can predict the user’s intention to grasp by observing the arm movement and the distance between the object and the hand, and interpreting the observation using machine learning.”
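As a rough illustration of that idea (and not the authors' deep model), a classifier can be trained on simple kinematic cues such as hand-object distance and approach speed to decide whether a grasp is intended. The feature names and toy training values below are entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-frame features extracted from the egocentric view:
# [hand-object distance (px), hand speed toward the object (px/frame),
#  time spent approaching (s)]. Illustrative values only.
X_train = np.array([
    [220.0, 1.5, 0.2],   # far from object, slow approach  -> no grasp intent
    [150.0, 6.0, 0.6],   # closing in quickly              -> grasp intent
    [40.0,  8.5, 1.1],   # very close, fast approach       -> grasp intent
    [300.0, 0.4, 0.1],   # far, nearly stationary          -> no grasp intent
])
y_train = np.array([0, 1, 1, 0])  # 1 = grasp intent, 0 = none

clf = LogisticRegression().fit(X_train, y_train)

# At run time, the wearable could query the classifier every frame and
# actuate the glove's grasp when intent is predicted.
print(clf.predict([[60.0, 7.0, 0.9]]))  # -> [1], i.e. grasp intent
```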
People with loss of hand mobility, such as individuals who have suffered a spinal cord injury, stroke, cerebral palsy or other injuries, can benefit from this technology, the researchers say.