For robots to see objects and pick them up, they are usually equipped with depth-sensing cameras such as the Microsoft Kinect. These cameras struggle with transparent or shiny objects, however, and scientists at Carnegie Mellon University have developed a workaround. A depth-sensing camera works by firing an infrared laser pulse at an object and measuring how long the light takes to bounce off the object's surface and return to a sensor on the camera.
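As a rough sketch of that time-of-flight principle (not the camera's actual firmware), the distance falls out of the round-trip time of the pulse, since the light travels to the object and back:

```python
# Illustrative time-of-flight calculation: depth = (speed of light * round-trip time) / 2.
# The function name and example value below are assumptions for demonstration only.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Convert the measured round-trip time of a laser pulse into distance in meters."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds corresponds to roughly 1 meter.
print(depth_from_round_trip(6.67e-9))  # ~1.0
```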

While the system works well on matte, opaque objects, it has trouble with transparent ones, because most of the light passes straight through them, and shiny objects scatter the reflected light. That's where the Carnegie Mellon system comes in: it adds an ordinary color camera that, in effect, doubles as a depth sensor.

The system uses a machine-learning algorithm trained on paired depth and color images of the same opaque objects. By comparing the two types of images, the algorithm learns to infer the three-dimensional shape of objects from color images alone, even when those objects are transparent or shiny.
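A minimal sketch of that training idea, assuming a PyTorch-style setup: a network learns to map a color image to a depth map, supervised by the depth camera's readings on opaque objects. The tiny architecture and random stand-in data here are placeholders, not CMU's actual model or dataset.

```python
import torch
import torch.nn as nn

class RGBToDepth(nn.Module):
    """Toy network mapping a 3-channel color image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # one channel: predicted depth
        )

    def forward(self, rgb):
        return self.net(rgb)

model = RGBToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in for a batch of paired (color, depth) images of opaque objects.
rgb_batch = torch.rand(4, 3, 64, 64)
depth_batch = torch.rand(4, 1, 64, 64)

for step in range(100):
    pred = model(rgb_batch)
    loss = loss_fn(pred, depth_batch)  # compare predicted depth to sensed depth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained this way, the network can be run on color images of transparent or shiny objects, where the depth camera itself would fail.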

In addition, although directly laser-scanning such objects yields only a small amount of usable depth data, whatever readings are collected can still be used to improve the system's accuracy.
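One plausible way to use those sparse readings, sketched below as an assumption rather than CMU's published method, is to keep the direct measurements wherever the laser did get a return and fill the gaps with the network's prediction from the color image:

```python
import numpy as np

def fuse_depth(sparse_measured: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Fill in missing sensor readings (NaN or 0) with predicted depth.

    This simple substitution rule is illustrative; a real system would
    likely blend the two sources more carefully.
    """
    valid = np.isfinite(sparse_measured) & (sparse_measured > 0)
    fused = predicted.copy()
    fused[valid] = sparse_measured[valid]  # trust direct measurements where available
    return fused

measured = np.full((4, 4), np.nan)
measured[0, 0] = 1.2                   # the laser got only one valid return
predicted = np.full((4, 4), 1.0)       # network's estimate everywhere else
print(fuse_depth(measured, predicted))
```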

In tests so far, robots using the new technology have been considerably better at grasping transparent and shiny objects than robots relying on a standard depth-sensing camera alone.

Professor David Held said: "Although we do sometimes miss, for the most part it performed well, better than any previous system for grasping transparent or reflective objects."
