Ken Goldberg talk – Cloud Robotics

@ken_goldberg

Here are some of my notes from Ken’s awesome talk last week:
Please add more 🙂

The talk centered on the advances in applications that become possible when robots can share information with each other via cloud computing. He is working on a “nominal grasp algorithm” which helps robots both identify objects and discover the best location at which to grasp them. It uses “belief space” statistics, which create probability distributions over the shape of the object. Google image search also plays a role here, as robots can access that database to confirm what they are looking at.
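To make the belief-space idea concrete, here's a rough sketch of my own (not Ken's actual algorithm): represent uncertainty about the object's shape as samples from a probability distribution, score each candidate grasp by the fraction of sampled shapes it would secure, and pick the best one. All the names and numbers below are made up for illustration.

```python
import random

def sample_shapes(nominal_width, sigma, n=1000, seed=0):
    """Sample possible object widths from a Gaussian belief over shape."""
    rng = random.Random(seed)
    return [rng.gauss(nominal_width, sigma) for _ in range(n)]

def grasp_success_prob(grasp_opening, shapes, tolerance=0.5):
    """Fraction of sampled shapes this grasp would secure: the gripper
    must open at least as wide as the object, but not by more than
    `tolerance`, or the object slips out (a toy success model)."""
    hits = sum(1 for w in shapes if 0 <= grasp_opening - w <= tolerance)
    return hits / len(shapes)

def best_grasp(candidates, shapes):
    """Pick the candidate gripper opening with the highest expected success."""
    return max(candidates, key=lambda g: grasp_success_prob(g, shapes))
```

For example, if the belief is that the object is about 4.0 cm wide, give or take 0.2 cm, a gripper opening slightly wider than 4.0 scores best across the sampled shapes. The point of planning in belief space is exactly this: the grasp is chosen to work across the whole distribution of shapes, not just the single most likely one.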

A funny, yet hugely important point he made was: what happens when robots fail? Ken believes we will have human call centers to help robots through these situations and they will be able to download new data.

What about latency?
- on-board processing is still important

He then went into the healthcare applications, which are very exciting. Tele-surgery is one possibility: the world’s best surgeon could dial in to a surgery across the world, with one robot following the surgeon’s hand movements while another robot at the patient’s location completes the actions.

They are still working out how a robot knows when to ask a human for help.
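One simple way to frame this open problem (my own sketch, not anything Ken described): the robot acts autonomously only when its best hypothesis is both confident and clearly ahead of the runner-up, and otherwise escalates to a human, like the call centers mentioned above. The thresholds here are arbitrary placeholders.

```python
def decide_action(hypotheses, confidence_threshold=0.8, margin=0.2):
    """hypotheses: dict mapping a label (e.g. object identity) to an
    estimated probability. Act autonomously only when the top hypothesis
    clears the confidence threshold AND leads the runner-up by `margin`;
    otherwise hand off to a human operator."""
    ranked = sorted(hypotheses.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    runner_up_p = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_p >= confidence_threshold and best_p - runner_up_p >= margin:
        return ("act", best_label)
    return ("ask_human", best_label)
```

So a robot that is 95% sure it sees a cup just grasps it, while one torn between “cup” and “bowl” phones home. The hard research question is where those thresholds should come from in practice.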

To be continued….

2 thoughts on “Ken Goldberg talk – Cloud Robotics”

  1. Ken Goldberg emphasized how limited robots still are. I think that we, the people of new-tech knowledge, take as a given the idea that robots are not as smart as advertised. However, Goldberg illustrated this quite starkly with video shot from a robot’s point of view while it trained on a cup-grasping task: the vision was sharp enough to discern objects and rough distances, but too blurry to judge the close-range distances needed to grasp the cup accurately with the robot’s mechanical claw. When the video progressed beyond the first blurry image, lo and behold, the robot failed at this simple task.

    Natasha asked Goldberg when robots will be able to teach us. For instance, a housecleaning robot might have a better cleaning technique available in its knowledge base; when asked, it should be able to offer that superior technique and perform accordingly. Goldberg allowed that this would probably be a fairly complex operation for a robot. It may look like just a Google search plus an alternate set of instructions for the machine, but it’s much more involved. However, Goldberg’s assertion of the greater difficulty of robot recommendation did not fully satisfy. Perhaps the difficulty arises when the full behavior is actually the robot evaluating cleaning results and offering an unprompted recommendation to a homeowner.

    Robots’ advanced abilities in auto and machinery assembly, in surgeries and other complex tasks, make us forget how much effort goes into their training and programming. The notion of “cloud robotics” brings the need for large intelligence pools (among other shared resources) into sharper focus. It also opens the possibility for greater open sourcing of robotics knowledge, techniques, APIs, etc. My superficial impression at the moment is that while some open sourcing of this knowledge and experience takes place, corporations, academia and the government are racing to perfect and improve techniques that will reap large monetary rewards such that a lot of new information may remain proprietary, private or classified. In this way, is the pace of innovation slowed?

    “Cloud robotics” could be the top level of an “Internet of things.” One can only hope that proprietary and academic robotics knowledge will be shared, along with the knowledge of increasingly sophisticated human hackers, to give us a new knowledge base that integrates physical, digital, cognitive and behavioral code and possibilities. However, I’m speaking from a tech-optimist point of view when I say that. Given corporate and government abuses of digital powers, it is just as likely that such powerful knowledge will not come with the easy access implied by “cloud robotics” or “internet of things.”

  2. I believe what Ken was probably saying in response to Natasha’s question (having heard some of it before) is that the tasks we think of as basic are actually complicated because of the number of variables. Taking house cleaning as an example, the environment to be cleaned changes radically from a robot’s POV. The objects in the space and their relative locations change every day, so a robot has to not only navigate the basic layout but also adjust to changes based on humans leaving things lying around. Then you throw in the complexity of simply grasping an object (see your post above), and you start to realize that the basic object recognition and motor skills of cleaning a house are very complex. Then add in the problem of coming up with a cleaning strategy, and you’re well beyond the software and sensor capability of most commercial robots.

    Robots used in industry generally work in much more controlled situations than the average home. They focus only on a well-structured assembly line, and even minor disruptions, like a tool left in the way, can cause problems for them. When the environment is this controlled, they do fine. But the physical space of everyday human life is nowhere near as controlled.
