Experts agree: There’s no real threat of a robot uprising anytime soon
The Twitterati have been freaked out for several days now over a video showcasing the creepy door-opening ability of the SpotMini, a dog-like robot from Boston Dynamics. The robot is so creepily lifelike that the clip quickly became the top trending video on Tuesday and prompted a flurry of half-joking tweets warning that the beginning of the end of the Age of Man had arrived.
Take heart, though, ye who fear the Singularity and the Rise of the Machines are at hand. David Held, an assistant professor in the Robotics Institute at Carnegie Mellon's School of Computer Science, is an expert in robots and robotic manipulation. He's also a moderating voice of reassurance: at least for now, the field of robotics is only just beginning to grapple with how to help robots master even the simplest, most basic human motor skills and cognition. There's still a long, long way to go, in other words, before robots get anywhere close to mastering perception and manipulation, seeing the world and handling the objects in it.
The TL;DR version of all that is as follows: Don't lose your mind over the SpotMini just yet, no matter how much that video calls to mind the scene in Jurassic Park where the raptors show they can open doors.
“A set of tasks so general you can easily transfer that knowledge to new tasks — people have not really been able to figure out what those tasks are” yet for robots, said Held, whose work includes developing methods for robotic perception and control. Specifically, perception and control that let robots operate in the messy, cluttered environments of daily life, where things don't always unfold according to a pattern or a previously encountered state.
To that end, he's designing new deep learning and machine learning algorithms that help a robot understand how the objects around it can move and affect its surroundings in service of a desired task. That work stands to improve a robot's capabilities on two fronts: object manipulation and autonomous driving.
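To make that concrete, here's a minimal sketch of the kind of learned control policy this line of work involves: a small neural network that maps a robot's observations to motor commands and is trained from data rather than hand-programmed. The dimensions and names are illustrative assumptions, not Held's actual models.

```python
import torch
import torch.nn as nn

class ManipulationPolicy(nn.Module):
    """Maps raw observations (say, a flattened depth image plus the
    robot's joint state) to motor commands. Sizes are illustrative."""
    def __init__(self, obs_dim: int = 256, act_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, act_dim),  # e.g., torques for a 7-joint arm
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Such a policy gets trained by trial and error (reinforcement learning)
# or by imitating demonstrations, rather than being hand-scripted for one
# fixed scene.
policy = ManipulationPolicy()
action = policy(torch.randn(1, 256))  # one made-up observation
```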
That work is complicated in part because a robot's base of knowledge doesn't build on itself the way ours does. A robot has to be taught or programmed everything, whereas humans can draw on what they've learned and previously experienced to make assumptions and successfully navigate challenges they're encountering for the first time.
So, the ominous robot knows how to open the door — now what?
“People have come up with a few benchmark tasks, where everyone is working to at least create some degree of standardization to compare algorithms — like, can you teach a humanoid robot to walk?” Held said. “How do you learn one thing and transfer it to someone else in a robotics context is still a big challenge.
“Take object manipulation, most of which is done today by robots in a factory setting where it knows exactly what objects are going to come down the pipeline, what their orientation is, exactly where they’re going to be. And the robots are basically programmed to perform precise motion that they repeat over and over. But if you want to have robot caretakers for the elderly or robots that help in disaster zones, things like that — there’s so many different types of variations that the robot will have to be able to handle. And that’s a big challenge for developing new robotic methods.”
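What that factory-style programming looks like in practice, roughly, is a fixed motion replayed over and over. The sketch below is an illustrative assumption rather than code from any real system; the waypoints are invented joint targets.

```python
# The factory approach Held describes: replay one precise, pre-programmed
# motion, because every part arrives in a known position and orientation.
PICK_AND_PLACE = [
    (0.0, -0.5, 1.2),   # move above the incoming part
    (0.0, -0.9, 1.2),   # descend and grasp
    (0.8, -0.5, 0.4),   # carry to the output bin
]

def run_cycle(move_to):
    """Repeat the same exact motion forever. Any variation in where the
    part actually sits breaks this approach, which is why caretaking or
    disaster-zone robots can't work this way."""
    for waypoint in PICK_AND_PLACE:
        move_to(waypoint)

run_cycle(lambda wp: print("moving to", wp))
```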
Still, we're getting closer to a world where robots can handle that kind of variation.
One indication comes from researchers at Brown University and MIT, who have developed a way to help robots plan tasks that take multiple steps to complete by building abstract representations of what they see around them.
Why projects like this one are so critical to the field's development is that, according to Brown assistant professor of computer science George Konidaris, a robot's “low-level interface with the world makes it really hard to decide what to do.”
“Imagine how hard it would be,” he said, “to plan something as simple as a trip to the grocery store if you had to think about each and every muscle you’d flex to get there, and imagine in advance and in detail the terabytes of visual data that would pass through your retinas along the way.”
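Konidaris's grocery-store example is the intuition behind planning over abstractions: the planner reasons about a handful of symbolic facts and skills instead of muscles and pixels. Here's a hedged toy sketch of that idea; the operators, facts, and plan are invented for illustration and are not the study's actual representation.

```python
from collections import deque

# Each operator is (preconditions, facts it adds, facts it deletes).
# With abstraction, the whole search space is a few symbolic facts;
# planning over raw sensor data or muscle commands would be intractable.
OPERATORS = {
    "walk_to_store": ({"at_home"}, {"at_store"}, {"at_home"}),
    "pick_up_milk":  ({"at_store"}, {"holding_milk"}, set()),
    "walk_home":     ({"at_store"}, {"at_home"}, {"at_store"}),
}

def plan(start: frozenset, goal: set):
    """Breadth-first search over symbolic states until the goal holds."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(frozenset({"at_home"}), {"holding_milk", "at_home"}))
# -> ['walk_to_store', 'pick_up_milk', 'walk_home']
```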
The researchers in the study placed a robot in a room with a few objects: a cupboard, a cooler, a switch that controlled a light inside the cupboard, and a bottle that could go in either the cooler or the cupboard. They gave the robot a handful of high-level motor skills for interacting with those objects, then watched it use the skills to explore everything in the room.
One thing the researchers noticed the robot “learned” is that it needed to be standing in front of the cooler in order to open it, and that it couldn't be holding anything at the time, since opening the cooler required both hands.
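In the symbolic terms of the sketch above, what the robot effectively learned is a precondition. Again, this is a hedged illustration with invented fact names, not the study's actual encoding.

```python
# "open_cooler" is only applicable when the robot is in front of the
# cooler AND both hands are free; its effect is that the cooler is open.
OPEN_COOLER = (
    {"in_front_of_cooler", "hands_free"},  # preconditions the robot learned
    {"cooler_open"},                       # facts the action adds
    set(),                                 # facts the action deletes
)

def applicable(operator, state) -> bool:
    """An action can run only if all of its preconditions hold."""
    preconditions, _, _ = operator
    return preconditions <= state

print(applicable(OPEN_COOLER, {"in_front_of_cooler", "hands_free"}))      # True
print(applicable(OPEN_COOLER, {"in_front_of_cooler", "holding_bottle"}))  # False
```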
In general, Konidaris went on, problems are often simpler than they first appear “if you think about them in the right way.” And so it is with robots. Researchers are teaching them how to learn, how to think in the abstract — the better to learn, develop, become more sophisticated.
Just hopefully not opening the door, of course, to something that proves Elon Musk right.
For more on this story and video go to: http://bgr.com/2018/02/17/boston-dynamics-robot-dog-door-opening-video-robot-uprising-no/