Georgia Tech Develops Technology for Locating People by Subtle Sounds

For safe coexistence with people, robots must be able to detect a person's presence and determine their location in order to avoid accidents and collisions. Until now, most robots have approached person localization with computer-vision techniques based on cameras or other visual sensors.

A research group from the Georgia Institute of Technology has developed an alternative person-localization method based on the subtle sounds people naturally make as they move through an environment. The method can be applied to a wide range of robotic systems.

The researchers' proposed acoustic-localization method is based on machine-learning algorithms. To train them, the team assembled a dedicated dataset consisting of 14 hours of high-quality audio recordings paired with time-synchronized video recordings.
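The article does not describe how the audio and video were combined for training. Purely as a hedged sketch of one plausible setup, person positions extracted from the synchronized video could serve as supervision labels for time-aligned audio clips; every name and parameter below is an illustrative assumption, not a detail from the research.

```python
# Hypothetical sketch: pair audio clips with person positions derived
# from the time-synchronized video, yielding (audio, position) examples.
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate
CLIP_SECONDS = 1.0     # assumed length of one training clip

def make_pairs(audio: np.ndarray,
               video_positions: list[tuple[float, float, float]]):
    """Yield (audio_clip, (x, y)) training pairs.

    audio:           shape (channels, samples), from the robot's microphones
    video_positions: per-frame (timestamp_s, x, y) of the person, e.g. from
                     an off-the-shelf tracker run on the video
    """
    clip_len = int(SAMPLE_RATE * CLIP_SECONDS)
    for t, x, y in video_positions:
        start = int(t * SAMPLE_RATE)
        clip = audio[:, start:start + clip_len]
        if clip.shape[1] == clip_len:  # skip clips truncated at the end
            yield clip, np.array([x, y], dtype=np.float32)
```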

The machine-learning technique developed by the researchers is trained to localize people from sound alone. Because it requires only audio captured by microphones, it can in principle be applied to any robot with an integrated microphone.
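The article gives no architectural details. As a rough illustration of the general idea only, an audio-only localizer might convert multi-channel microphone waveforms into spectrograms and regress a 2-D position; everything here (layer sizes, STFT settings, the four-microphone assumption) is hypothetical, not the authors' model.

```python
# Hypothetical sketch of an audio-only localizer: multi-channel
# spectrograms go in, a 2-D person-position estimate comes out.
import torch
import torch.nn as nn

class AudioLocalizer(nn.Module):
    def __init__(self, n_mics: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_mics, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),  # predicted (x, y) in the robot's frame
        )

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, n_mics, samples) raw float waveforms
        window = torch.hann_window(512, device=audio.device)
        spec = torch.stft(audio.flatten(0, 1), n_fft=512, hop_length=256,
                          window=window, return_complex=True).abs()
        # restore the microphone axis: (batch, n_mics, freq, time)
        spec = spec.unflatten(0, audio.shape[:2])
        return self.net(spec)
```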

The researchers also taught their model to disregard external, unrelated noises. In initial tests, they evaluated the technique on the Stretch RE-1, a compact mobile manipulator.
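How the model was taught to ignore unrelated noise is not specified. One standard way to encourage such robustness, shown here only as a hedged sketch, is to mix random background noise into training clips at varying signal-to-noise ratios; the helper below is hypothetical.

```python
# Hypothetical noise augmentation: mixing unrelated background noise into
# training clips at random levels pushes the model to rely on
# person-generated sounds rather than ambient ones.
import random
import numpy as np

def add_background_noise(clip: np.ndarray, noise_bank: list[np.ndarray],
                         snr_db_range=(0.0, 20.0)) -> np.ndarray:
    # Assumes each noise recording is at least as long as the clip
    # and has the same channel layout.
    noise = random.choice(noise_bank)
    start = random.randrange(max(1, noise.shape[-1] - clip.shape[-1]))
    noise = noise[..., start:start + clip.shape[-1]]
    snr_db = random.uniform(*snr_db_range)
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # scale noise so the mixture hits the sampled signal-to-noise ratio
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise
```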

In these initial tests with the Stretch RE-1, the team's technique proved roughly twice as effective as other acoustic-localization methods. The results suggest that acoustic localization scales well and is less intrusive than camera-based localization.

In the future, this human-localization technique could help improve the safety and performance of robots designed to work closely with people, while also preserving user privacy.
