Take a moment and think about all the things you picked up today: perhaps a coffee mug, your keys, a toothbrush, a spoon or fork. Now think about how much mental energy you put into each of those actions. Very little, right? For robots, though, picking up a variety of shapes poses a significant challenge. Thanks to work carried out at the University of California, Berkeley, at least one robot is handily overcoming that challenge.

While you might instinctively know the right place to grab your shoe so that it doesn't fall, robots don't. So the UC Berkeley researchers turned to deep learning to teach a pair of robotic arms to grasp odd-shaped items with 99-percent accuracy. Deep learning is a machine-learning technique in which a computer is trained on a large amount of data so that it can make decisions about new data it hasn't seen before.

In this case, the researchers built a database of contact points on 10,000 3D objects, 6.7 million data points in all. They then used that data to train a neural network, a system that makes decisions in a way loosely modeled on how our brains process information. That network was then connected to a pair of robotic arms equipped with a 3D sensor. They called the entire system DexNet 2.0, the next generation of a robotic system they had previously built.
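To make the training step concrete, here is a minimal sketch of what fitting a grasp-quality network on such a database might look like, assuming PyTorch and a synthetic stand-in dataset. The GraspQualityNet architecture, tensor shapes, and random data below are illustrative placeholders, not the researchers' actual code.

```python
# Hypothetical sketch: train a grasp-quality network on (depth crop, grasp) pairs
# labeled success/failure. Architecture and data are placeholders, not Dex-Net code.
import torch
import torch.nn as nn

class GraspQualityNet(nn.Module):
    """Predicts the probability that a candidate grasp will succeed."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(             # small CNN over a depth-image crop
            nn.Conv2d(1, 16, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                 # combine image features with grasp depth
            nn.Linear(32 * 5 * 5 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),                      # logit for grasp success
        )

    def forward(self, depth_crop, grasp_depth):
        x = self.features(depth_crop)
        x = torch.cat([x, grasp_depth], dim=1)
        return self.head(x)

# Synthetic stand-in data: 32x32 depth crops centered on a grasp, plus the grasp
# depth, labeled 1 (success) or 0 (failure) as in a simulated grasp database.
crops = torch.rand(256, 1, 32, 32)
depths = torch.rand(256, 1)
labels = torch.randint(0, 2, (256, 1)).float()

model = GraspQualityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(crops, depths)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```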

The sensor looks at each object placed in front of it, and the neural network chooses the best place to grasp it. Not only was the system nearly flawless in executing each grasp, it was also three times faster than the previous version.
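A rough picture of how such a system might rank candidate grasps at run time, assuming a trained quality model like the sketch above: sample grasp candidates from the depth image, score each one, and execute the highest-scoring grasp. The helper functions and the dummy scorer here are hypothetical, not the system's real interface.

```python
# Hypothetical sketch of the run-time loop: score candidate grasps sampled from a
# depth image and pick the best one. The scorer stands in for the trained network.
import numpy as np

def sample_grasp_candidates(depth_image, n=100, rng=None):
    """Sample candidate grasp centers and angles in image coordinates (placeholder)."""
    rng = rng or np.random.default_rng(0)
    h, w = depth_image.shape
    centers = rng.integers(0, [h, w], size=(n, 2))
    angles = rng.uniform(0, np.pi, size=n)
    return list(zip(centers, angles))

def grasp_quality(depth_image, center, angle):
    """Stand-in for the learned network: return a predicted success probability."""
    # A real system would crop the depth image around the grasp, align it to the
    # grasp axis, and run it through the trained CNN. Here we return a dummy score.
    return float(np.clip(depth_image[tuple(center)], 0.0, 1.0))

def plan_grasp(depth_image):
    """Pick the candidate grasp with the highest predicted quality."""
    candidates = sample_grasp_candidates(depth_image)
    scores = [grasp_quality(depth_image, c, a) for c, a in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Example with a synthetic depth image standing in for the 3D sensor's output.
depth = np.random.default_rng(1).random((480, 640))
(best_center, best_angle), score = plan_grasp(depth)
print(f"grasp at pixel {best_center}, angle {best_angle:.2f} rad, score {score:.2f}")
```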

As robots increasingly move into our everyday lives, their ability to manipulate a wide range of objects is critical. DexNet 2.0 joins other advances in robotic dexterity, including the autonomous-learning robotic hand developed by researchers at the University of Washington last year; the robotic gripper from the Swiss research institute EPFL that relies on electroadhesion to grasp delicate objects; and the inflatable robotic graspers invented by Disney.

An abstract of the paper describing the UC Berkeley advance, which will be presented at a robotics conference this summer, can be found here, and you can see the robot in action in the video below.