Improving gesture recognition algorithms in robotics
Teams around the world are working on robots that can recognize and respond to human gestures – an ability that would make them far better suited to everyday use around us. Researchers from the Agency for Science, Technology and Research (A*STAR) Institute for Infocomm Research in Singapore have adapted a cognitive memory model called the Localist Attractor Network (LAN) to bring this idea closer to reality.
If you have ever talked to an automated phone system with speech recognition, you have probably witnessed how easily it can misinterpret your words (especially in the early days of that technology). Enabling robots to interact with us based on our behavior and gestures is far more difficult, because even a very simple gesture such as waving a hand can vary a lot between individuals.
“Since many social robots will be operated by non-expert users, it is essential for them to be equipped with natural interfaces for interaction with humans”, said Rui Yan, lead researcher of the study. “Gestures are an obvious, natural means of human communication. Our LAN gesture recognition system only requires a small amount of training data, and avoids tedious training processes.”
Yan and his colleagues used the LAN model to build a system that requires very little training to recognize gestures quickly and accurately. The system can learn a new control gesture after the user demonstrates it only a few times.
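The paper's exact formulation is not reproduced here, but the general localist attractor idea can be sketched briefly: the network holds one stored prototype per known gesture, and an observed feature vector is pulled toward a responsibility-weighted blend of the prototypes until it settles onto the closest one. The sketch below is a minimal, hypothetical illustration of that dynamic – all function names, parameters, and the annealing schedule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def lan_recognize(x, prototypes, sigma=1.0, anneal=0.95, n_steps=50):
    """Minimal localist attractor sketch (illustrative, not the paper's code).

    x          -- observed feature vector (e.g. joint-angle trajectory features)
    prototypes -- array of shape (n_gestures, n_features), one stored
                  prototype per known gesture
    """
    z = x.copy()
    for _ in range(n_steps):
        # Responsibility of each attractor for the current state:
        # a softmax over negative squared distances (Gaussian posterior).
        d2 = np.sum((prototypes - z) ** 2, axis=1)
        q = np.exp(-d2 / (2 * sigma ** 2))
        q /= q.sum()
        # Pull the state toward the responsibility-weighted prototype blend.
        z = q @ prototypes
        # Annealing sigma sharpens the competition, so the state settles
        # onto a single winning attractor.
        sigma *= anneal
    return int(np.argmax(q))  # index of the winning gesture prototype
```

Because each gesture is a single stored prototype rather than a statistical model fitted over many examples, adding a new gesture amounts to storing a new prototype – which is consistent with the small amount of training data the researchers describe.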
To test their software without the need for complicated sensors, they used output recorded from the ShapeTape – a special jacket that uses fiber optics and inertial sensors to monitor the bending and twisting of the wearer's hands and arms. They programmed the ShapeTape to report the three-dimensional orientation of the shoulders, elbows and wrists 80 times per second, and used velocity thresholds to trigger gesture detection.
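Velocity-threshold triggering of this kind is straightforward: a gesture segment opens when the joint-angle stream starts moving faster than some threshold and closes when the motion settles again. A minimal sketch follows, assuming the 80 Hz rate mentioned above; the threshold values and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def segment_gestures(orientations, rate_hz=80, v_start=0.5, v_stop=0.2):
    """Hypothetical velocity-threshold segmenter for a joint-orientation stream.

    orientations -- array of shape (n_samples, n_joint_angles), sampled
                    at rate_hz; v_start/v_stop are illustrative thresholds.
    Returns a list of (start, end) sample indices for detected gestures.
    """
    # Per-sample speed: magnitude of the frame-to-frame change, scaled by
    # the sampling rate to get units of angle per second.
    velocity = np.linalg.norm(np.diff(orientations, axis=0), axis=1) * rate_hz
    segments, start = [], None
    for i, v in enumerate(velocity):
        if start is None and v > v_start:
            start = i                       # motion exceeded start threshold
        elif start is not None and v < v_stop:
            segments.append((start, i))     # motion settled: close segment
            start = None
    return segments
```

Each segment extracted this way can then be handed to the recognizer, which matches it against the stored gesture prototypes.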
Five different users wore the ShapeTape jacket and tested the system by controlling a robot with simple gestures representing commands such as forward, backward, faster and slower. The results look promising: the system correctly interpreted 99.15 percent of the gestures.
The next goal for the A*STAR Institute for Infocomm Research team is a more sophisticated sensing system that would allow gesture recognition without the user having to wear any special devices.
“Currently we are building a new gesture recognition system by incorporating our method with a Microsoft Kinect camera”, said Yan. “We will implement the proposed system on an autonomous robot to test its usability in the context of a realistic service task, such as cleaning!”
For more information, read the paper published in the IEEE Computational Intelligence Magazine: “Gesture recognition based on localist attractor networks with application to robot control”.