IEyeNews


There might finally be a driverless car with some common sense

By Will Knight, MIT Technology Review From Business Insider

Boston’s notoriously unfriendly drivers and chaotic roads may be the perfect testing ground for a fundamentally different kind of self-driving car.

An MIT spin-off called iSee is developing and testing an autonomous driving system using a novel approach to artificial intelligence.

Instead of relying on simple rules or machine-learning algorithms to train cars to drive, the startup is taking inspiration from cognitive science to give machines a kind of common sense and the ability to quickly deal with new situations.

It is developing algorithms that try to match the way humans understand and learn about the physical world, including interacting with other people. The approach could lead to self-driving vehicles that are much better equipped to deal with unfamiliar scenes and complex interactions on the road.

“The human mind is super-sensitive to physics and social cues,” says Yibiao Zhao, cofounder of iSee. “Current AI is relatively limited in those domains, and we think that is actually the missing piece in driving.”

Zhao’s company doesn’t look like a world beater just yet. A small team of engineers works out of a modest lab space at the Engine, a new investment company created by MIT to fund innovative local tech companies. Located just a short walk from the MIT campus, the Engine overlooks a street on which drivers jostle for parking spots and edge aggressively into traffic.

The desks inside iSee’s space are covered with sensors and pieces of hardware the team has put together to take control of its first prototype, a Lexus hybrid SUV that originally belonged to one of the company’s cofounders. Several engineers sit behind large computer monitors staring intently at lines of code.

iSee might seem laughably small compared to the driverless-car efforts at companies like Waymo, Uber, or Ford, but the technology it’s developing could have a big impact on many areas where AI is applied today.

By enabling machines to learn from less data and to build some form of common sense, iSee’s technology could make industrial robots smarter, especially when dealing with new situations. Spectacular progress has already been made in AI recently thanks to deep learning, a technique that employs vast, data-hungry neural networks (see “10 Breakthrough Technologies 2013: Deep Learning”).

When fed large amounts of data, very large or deep neural networks can recognize subtle patterns. Give a deep neural network lots of pictures of dogs, for instance, and it will figure out how to spot a dog in just about any image.

But there are limits to what deep learning can do, and some radical new ideas may well be needed to bring about the next leap forward.

For example, a dog-spotting deep-learning system doesn’t understand that dogs typically have four legs, fur, and a wet nose. And it cannot recognize other types of animals, or a drawing of a dog, without further training.

Driving involves considerably more than just pattern recognition. Human drivers rely constantly on a commonsense understanding of the world. They know that buses take longer to stop, for example, and can suddenly produce lots of pedestrians. It would be impossible to program a self-driving car with every possible scenario it might encounter. But people are able to use their commonsense understanding of the world, built up through lifelong experience, to act sensibly in all sorts of new situations.

“Deep learning is great, and you can learn a lot from previous experience, but you can’t have a data set that includes the whole world,” Zhao says. “Current AI, which is mostly data-driven, has difficulties understanding common sense; that’s the key thing that’s missing.” Zhao illustrates the point by opening his laptop to show several real-world road situations on YouTube, including complex traffic-merging situations and some hairy-looking accidents.

A lack of commonsense knowledge has certainly caused some problems for autonomous driving systems. An accident involving a Tesla driving in semi-autonomous mode in Florida last year, for instance, occurred when the car’s sensors were temporarily confused as a truck crossed the highway (see “Fatal Tesla Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up”). A human driver would likely have quickly and safely figured out what was going on.

Zhao and Debbie Yu, one of his cofounders, show a clip of an accident involving a Tesla in China, in which the car drove straight into a street-cleaning truck. “The system is trained on Israel or Europe, and they don’t have this kind of truck,” Zhao says. “It’s only based on detection; it doesn’t really understand what’s going on.”

iSee is built on efforts to understand how humans make sense of the world, and to design machines that mimic this. Zhao and other founders of iSee come from the lab of Josh Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at MIT who now serves as an advisor to the company.

Tenenbaum specializes in exploring how human intelligence works, and in using that insight to engineer novel types of AI systems. This includes work on the intuitive sense of physics exhibited even by young children.

Children’s ability to understand how the physical world behaves enables them to predict how unfamiliar situations may unfold. And, Tenenbaum explains, this understanding of the physical world is intimately connected with an intuitive understanding of psychology and the ability to infer what a person is trying to achieve, such as reaching for a cup, by watching his or her actions.

The ability to transfer learning between situations is also a hallmark of human intelligence, and even the smartest machine-learning systems are still very limited by comparison. Tenenbaum’s lab combines conventional machine learning with novel “probabilistic programming” approaches. This makes it possible for machines to learn to infer things about the physics of the world as well as the intentions of others despite uncertainty.
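The article doesn’t describe iSee’s algorithms in detail, but the core move behind probabilistic approaches to intent — combine a prior belief about what an agent wants with the likelihood of the behavior you observe — can be sketched with a toy Bayesian update. Everything here (the goals, the numbers, the `posterior` helper) is an illustrative assumption, not iSee’s actual model:

```python
# Toy illustration of inferring a driver's intent with Bayes' rule.
# Hypothetical goals and probabilities; not based on any real system.

def posterior(prior, likelihood, observation):
    """Return P(goal | observation) for each goal, via Bayes' rule."""
    unnorm = {g: prior[g] * likelihood[g][observation] for g in prior}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

# Prior belief about the other driver's goal, before observing anything.
prior = {"merge": 0.3, "stay": 0.7}

# How likely each goal makes each observable behavior (made-up numbers).
likelihood = {
    "merge": {"edging": 0.8, "steady": 0.2},
    "stay":  {"edging": 0.1, "steady": 0.9},
}

# Seeing the car edge into the lane shifts belief sharply toward "merge",
# even though the prior favored "stay".
belief = posterior(prior, likelihood, "edging")
```

The point of the sketch is the structure, not the numbers: uncertain evidence updates a structured belief about another agent’s goal, which is the kind of inference a purely pattern-matching detector doesn’t perform.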

Trying to reverse-engineer the ways in which even a young baby is smarter than the cleverest existing AI system could eventually lead to many smarter AI systems, Tenenbaum says. In 2015, together with researchers from New York University and Carnegie Mellon University, Tenenbaum used some of these ideas to develop a landmark computer program capable of learning to recognize handwriting from just a few examples (see “This AI Algorithm Learns Simple Tasks As Fast As We Do”).

A related approach might eventually give a self-driving car something approaching a rudimentary form of common sense in unfamiliar scenarios. Such a car may be able to determine that a driver who’s edging out into the road probably wants to merge into traffic.

When it comes to autonomous driving, in fact, Tenenbaum says the ability to infer what another driver is trying to achieve could be especially important.

Another of iSee’s cofounders, Chris Baker, developed computational models of human psychology while at MIT. “Taking engineering-style models of how humans understand other humans, and being able to put those into autonomous driving, could really provide a missing piece of the puzzle,” Tenenbaum says.

Tenenbaum says he was not initially interested in applying ideas from cognitive psychology to autonomous driving, but the founders of iSee convinced him that the impact would be significant, and that they were up to the engineering challenges.

“This is a very different approach, and I completely applaud it,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research institute created by Microsoft cofounder Paul Allen to explore new ideas in AI, including ones inspired by cognitive psychology.

Etzioni says the field of AI needs to explore ideas beyond deep learning. He says the main issue for iSee will be demonstrating that the techniques employed can perform well in critical situations. “Probabilistic programming is pretty new,” he notes, “so there are questions about the performance and robustness.”

Those involved with iSee would seem to agree. Besides aiming to shake up the car industry and perhaps reshape transportation in the process, Tenenbaum says, iSee has a chance to explore how a new AI approach works in a particularly unforgiving practical situation.

“In some sense, self-driving cars are going to be the first autonomous robots that interact with people in the real world,” he says. “The real challenge is, how do you take these models and make them work robustly?”


Read the original article on MIT Technology Review. Copyright 2017. Follow MIT Technology Review on Twitter.

IMAGES:

Uber driverless Ford Fusions sit in the Uber Technical Center parking lot on Sept. 22, 2016 in Pittsburgh, Pennsylvania. Jeff Swensen/Getty Images


For more on this story go to: http://www.businessinsider.com/isee-driverless-car-with-common-sense-artificial-intelligence-2017-9

