Depending on who you ask, robotic grasping has been solved for a while now. That is, physically grasping an object, not dropping it, and then doing something useful with it is something robots are quite comfortable with. The difficult part is deciding what to grasp and how to grasp it, and that can be very, very difficult, especially outside of a structured environment.

This is a defining problem for robotics right now: Robots can do anything you want, as long as you tell them exactly what that is, every single time. In a factory where robots are doing the exact same thing over and over again, this isn’t so much of an issue, but throw something new or different into the mix, and it becomes an enormous headache.

The background here will be familiar to anyone who has followed Abbeel's research at UC Berkeley's Robot Learning Lab (RLL). While the towel-folding work is probably the most famous research to come out of RLL, the lab has also been working on adaptive learning through demonstration, as with this robotic knot tying from 2013:

There are two important things demonstrated here. First, you've got the learning-from-demonstration bit, where a human shows the robot how to tie a knot without any explicit programming necessary, and the robot then generalizes that demonstration to apply the skill it represents to future knot-tying tasks. This leads to the second important thing: Since there are no fixtures, the rope (being rope) can start off in all kinds of different configurations, so the robot has to be able to recognize the configuration it's faced with and modify its behavior accordingly.
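
To make the learning-from-demonstration idea concrete, here's a minimal behavioral-cloning sketch in Python (an illustration of the general technique, not RLL's or Embodied's actual code): demonstrations are logged as observation/action pairs, and a small network is trained to reproduce the demonstrator's actions. The dimensions and the train_step helper are hypothetical placeholders.

```python
# A minimal behavioral-cloning sketch (illustrative, not Embodied's system):
# we assume demonstrations have been logged as (observation, action) pairs
# and fit a policy network to imitate the demonstrator.
import torch
import torch.nn as nn

OBS_DIM = 64   # hypothetical: features describing, e.g., the rope's current shape
ACT_DIM = 7    # hypothetical: a 7-DoF arm's commanded joint velocities

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_step(obs_batch, act_batch):
    """One supervised step: nudge the policy's output toward the human's action."""
    loss = nn.functional.mse_loss(policy(obs_batch), act_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the policy conditions on the current observation rather than replaying a fixed trajectory, the same trained network can respond to a rope that starts in a different configuration, which is exactly the generalization described above.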

While humans can do this kind of thing without thinking, robots still can’t, which is why there’s been such a big gap between the capabilities of humans and robotic manipulators. Embodied wants to bridge this gap with robots that can learn quickly and flexibly.

The idea is that with a flexible enough learning framework, programming becomes trivial, because the robot can rapidly teach itself new skills with just a little bit of human demonstration at the beginning. As Abbeel explains, “The big difference is that we bring software that we only have to write once, ahead of time, for all applications. And then to make the robot capable for a specific application, all we need to do is collect new data for that application. That’s a paradigm shift from needing to program for every specific task you care about to programming once and then just doing data collection, either through demonstrations or reinforcement learning.”
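
The "program once, then collect data" idea can be sketched in code, too. In the hypothetical snippet below, the trainer knows nothing about any specific task; switching applications means swapping in a different demonstration dataset, not writing new robot code. The datasets and the make_policy factory in the usage comments are illustrative assumptions.

```python
# Sketch of a task-agnostic training loop (illustrative only): this code is
# written once; each new application only contributes a new demo dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_policy(policy, demos, epochs=10, lr=1e-3):
    """'demos' (observation/action pairs) is all that changes per task."""
    loader = DataLoader(demos, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, act in loader:
            loss = torch.nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

# Hypothetical usage: the same trainer serves two unrelated applications.
# picking = train_policy(make_policy(), TensorDataset(pick_obs, pick_act))
# routing = train_policy(make_policy(), TensorDataset(route_obs, route_act))
```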

Teaching a robot new skills is a process that has been evolving rapidly over the last few years. As you saw in the knot-tying video, the way you used to have to do it was by physically moving the robot around and pushing buttons on a controller. Most industrial robots work the same way, through a teach pendant of some sort. It's time-consuming and not particularly intuitive, and it also creates a disconnect between what the robot is experiencing and what the human teacher is experiencing, since the human's perspective (and indeed entire perception system) is quite different from that of the robot being taught.

Based on some more recent research at RLL, Embodied is taking a new approach based on virtual reality. “What’s really interesting is that we’ve hit a point where virtual reality has become a commodity,” Abbeel says. “What that means is actually you can teach robots things in VR, such that the robot experiences everything the way that it will experience it when doing the job itself. That’s a big change in terms of the quality of data that you can get.”
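
Here's one way the VR data-collection loop Abbeel describes might look in outline (a hypothetical sketch; the vr, robot, and camera interfaces are invented for illustration). The key detail is that each logged observation comes from the robot's own sensors, so the training data matches what the robot will experience on the job:

```python
# Hypothetical VR teleoperation recorder (illustrative, not Embodied's code):
# map the operator's controller pose to an end-effector command, and pair each
# command with the observation from the robot's own camera.
import time

def collect_vr_demo(vr, robot, camera, hz=20, duration_s=30.0):
    """Record (robot observation, commanded action) pairs during teleoperation."""
    demo = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        pose = vr.controller_pose()         # hypothetical VR interface
        grip = vr.trigger_value()           # 0.0 (open) .. 1.0 (closed)
        obs = camera.read()                 # the robot's viewpoint, not the human's
        robot.servo_to(pose, gripper=grip)  # hypothetical robot interface
        demo.append((obs, (pose, grip)))
        time.sleep(1.0 / hz)
    return demo
```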

Source: IEEE Spectrum