A new open source tool called “SoundSpaces” has been launched. The tool is designed to help robots navigate more effectively by allowing them to analyze the sounds of their environment.

Facebook releases new open source tools that allow AI to analyze its environment and navigate autonomously

Audio is very useful for navigation. For example, if a user asks a hypothetical robot home assistant to retrieve a ringing smartphone, it may be much faster to track the sound to its source than to visually inspect every room in which the device might be located.

SoundSpaces is an audio rendering dataset built on acoustic simulations of 3D environments. Designed for Facebook’s open source simulation platform AI Habitat, the dataset provides a software sensor that makes it possible to insert simulated sound sources into scanned real-world environments.
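Conceptually, such a software sensor maps the sound source’s position and the agent’s current position to a precomputed acoustic rendering and returns it alongside the visual observation. The sketch below illustrates that lookup pattern only; the class name, grid snapping, and table layout are hypothetical and are not the actual SoundSpaces or AI Habitat API.

# Hypothetical sketch of an "audio sensor" lookup: precomputed room impulse
# responses (RIRs) are indexed by (source cell, listener cell) pairs on a grid.
# Names and data layout are assumptions, not the real SoundSpaces/Habitat API.
import numpy as np

class AudioSensorSketch:
    def __init__(self, rir_table: dict, cell_meters: float = 0.5):
        self.rir_table = rir_table          # {(source_cell, listener_cell): RIR array}
        self.cell_meters = cell_meters

    def _to_cell(self, position_xy):
        # Snap a continuous (x, y) position in meters to a discrete grid cell.
        return tuple(int(round(c / self.cell_meters)) for c in position_xy)

    def observe(self, source_xy, listener_xy) -> np.ndarray:
        # Fetch the precomputed rendering for this source/listener pair;
        # return silence (zeros) if no rendering exists for that pair.
        key = (self._to_cell(source_xy), self._to_cell(listener_xy))
        return self.rir_table.get(key, np.zeros(8000, dtype=np.float32))

# Toy table with one precomputed pair: source at (1.0, 0.0), listener at the origin.
table = {((2, 0), (0, 0)): np.random.randn(8000).astype(np.float32)}
sensor = AudioSensorSketch(table)
print(sensor.observe((1.0, 0.0), (0.0, 0.0)).shape)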

SoundSpaces provides a set of audio files that AI developers can use to train AI models for sound perception in simulation. Facebook said the audio files are not simple recordings but “geometric acoustic simulations.” These simulations include information about how sound waves reflect off surfaces such as walls, how they interact with different materials, and other data that developers can use to create realistic audio for training AI models.
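For intuition, the rendering step amounts to convolving a dry (anechoic) source signal, such as a phone ringtone, with a room impulse response that encodes those reflections and material interactions. The sketch below is a minimal illustration of that idea; the synthetic impulse response and signal are placeholders, not part of the SoundSpaces release.

# Minimal sketch: render the audio an agent would "hear" by convolving a dry
# source signal with a room impulse response (RIR). The RIR here is a synthetic
# placeholder; in practice it comes from a geometric acoustic simulation.
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 16000

# Dry (anechoic) source: a 1-second ringtone-like tone burst.
t = np.arange(sample_rate) / sample_rate
dry_source = 0.5 * np.sin(2 * np.pi * 880.0 * t) * (np.sin(2 * np.pi * 4.0 * t) > 0)

# Placeholder RIR: a direct-path spike plus a few decaying "wall reflections".
rir = np.zeros(sample_rate // 2)
rir[0] = 1.0                                      # direct path
for delay_ms, gain in [(23, 0.4), (41, 0.25), (87, 0.1)]:
    rir[int(delay_ms / 1000 * sample_rate)] += gain

# Convolution produces the reverberant signal heard at the listener's position.
heard = fftconvolve(dry_source, rir)[: len(dry_source)]
print(heard.shape, float(np.abs(heard).max()))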

Facebook research scientists Kristen Grauman and Dhruv Batra wrote in a blog post: “To the best of our knowledge, this is the first attempt to train deep reinforcement learning agents that can both see and hear to map new environments and locate sound-emitting targets. In this way, we can achieve faster training and higher navigation accuracy than with single-modality counterparts.”
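As a rough illustration of what an agent that “can both see and hear” looks like in code, the PyTorch sketch below fuses an RGB frame and a binaural audio spectrogram into a single set of action logits. The architecture, layer sizes, and observation shapes are illustrative assumptions, not Facebook’s published model.

# Illustrative multimodal policy: fuses visual and audio observations into
# action logits. Shapes and layer sizes are assumptions for this sketch only.
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Visual branch: encodes an RGB frame (3 x 64 x 64).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Audio branch: encodes a two-channel (binaural) spectrogram (2 x 65 x 26).
        self.audio = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Fusion head: concatenated features -> action logits (e.g. move/turn/stop).
        self.head = nn.Sequential(
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, rgb: torch.Tensor, spectrogram: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.visual(rgb), self.audio(spectrogram)], dim=1)
        return self.head(features)

policy = AudioVisualPolicy()
logits = policy(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 65, 26))
print(logits.shape)  # torch.Size([1, 4])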

In addition, Facebook said it has also open-sourced a tool called “Semantic MapNet,” which developers can use to give models a kind of spatial memory that improves navigation.
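The announcement does not detail how that spatial memory works internally, but the general idea can be sketched as an allocentric top-down grid that accumulates features as the agent moves through a scene. The grid size, feature dimension, and update rule below are illustrative assumptions only, not Semantic MapNet’s actual design.

# Illustrative spatial memory: an allocentric top-down feature grid that the
# agent fills in as it observes parts of the environment. All sizes and the
# update rule are assumptions for this sketch.
import numpy as np

class SpatialMemory:
    def __init__(self, grid_size: int = 128, feature_dim: int = 16, cell_meters: float = 0.1):
        self.cell_meters = cell_meters
        self.origin = grid_size // 2                       # agent starts at the map center
        self.features = np.zeros((grid_size, grid_size, feature_dim), dtype=np.float32)
        self.observed = np.zeros((grid_size, grid_size), dtype=bool)

    def write(self, world_xy: tuple, feature: np.ndarray) -> None:
        # Convert a world-frame (x, y) position in meters to a grid cell and
        # store the observation's feature vector there (last write wins).
        col = self.origin + int(round(world_xy[0] / self.cell_meters))
        row = self.origin + int(round(world_xy[1] / self.cell_meters))
        self.features[row, col] = feature
        self.observed[row, col] = True

    def coverage(self) -> float:
        # Fraction of the map the agent has observed so far.
        return float(self.observed.mean())

memory = SpatialMemory()
memory.write((1.5, -0.3), np.ones(16, dtype=np.float32))   # e.g. "saw a chair here"
print(memory.coverage())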
