Mitsubishi Electric Lab Applying Computer Vision to Driverless Cars

Add Mitsubishi Electric’s North American research group to the list of companies working on driverless car technologies in Cambridge, MA. The area around MIT has quickly established itself as one of the hotbeds of this emerging and increasingly competitive field.

Toyota Research Institute and MIT startup nuTonomy are among the local players that have made splashy announcements this year. Meanwhile, the autonomous driving initiative at Mitsubishi Electric Research Laboratories (MERL) has flown under the radar. The storied corporate research and development outfit is celebrating its 25th anniversary, and Xconomy recently visited its office near Kendall Square to learn more about its driverless car project and other endeavors.

MERL’s team of 63 researchers conducts basic research and advanced development across a range of areas, including algorithms, electronics and communications, multimedia, data analytics, computer vision, and mechatronics. It’s an open lab that publishes papers and collaborates with outside research organizations, but it’s ultimately charged with advancing technology that might benefit MERL’s parent company, Japan-based Mitsubishi Electric. The manufacturing giant—which is separate from carmaker Mitsubishi Motors—produces automotive equipment, air conditioners, factory automation systems, network and satellite communications products, semiconductors, and more.

Driverless car systems are a new area of focus for Mitsubishi Electric. Part of its strategy involves taking the sensors and other equipment it developed for air-to-air missiles for the Japanese military, and repurposing that technology to help self-driving cars navigate obstacles, Bloomberg recently reported. Those products would reportedly hit the market by 2020.

At MERL, “we’ve been formally working on something with the title of automated driving for just a couple years,” president and CEO Richard Waters says. But, he adds, his organization has been making advances in computer vision—crucial to enabling self-driving vehicles—for about 20 years. One example is face-detection software research by Paul Viola and Michael Jones published in 2004, which MERL says has influenced smartphone cameras and photo organization software.
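MERL didn’t share its code, but the Viola-Jones approach underlies the Haar-cascade face detector that ships with the open-source OpenCV library, so a few lines give a feel for how that style of detector is used in practice. This is a generic illustration, not MERL’s software, and the image path is a placeholder.

```python
# Illustrative only: OpenCV's bundled Haar-cascade detector is based on the
# Viola-Jones approach. This is generic example code, not MERL's software.
import cv2

# Load the pretrained frontal-face cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # placeholder path to any test image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales; each detection is an (x, y, w, h) box.
faces = cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30)
)

# Draw a green rectangle around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
```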

Around 10 to 15 percent of MERL’s research efforts are dedicated to driverless car technology, Waters says. He wouldn’t share many details about the work, but he says it’s focused on software that can quickly detect objects, plan a car’s maneuvers, and direct its actions.

Driverless car efforts by Google and others depend on the quality of the available map and GPS data, Waters says. The problem is that if the car turns onto “a dirt road in Idaho that’s not on Google’s map,” then it’s in trouble, he says.

Richard Waters

“We’re working on [getting to] where a car can actually see well enough to drive without a map,” Waters says. “And this is a lot easier to say than to do.”

The key to good performance is accurate and lightning-fast computer vision software. The car must “be able to see you in a tenth of a second so I don’t ride over you,” Waters says.
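As a rough illustration of how the detect-plan-act pipeline described above might fit inside that tenth-of-a-second budget, here is a minimal sense-plan-act loop. It is only a sketch under assumed interfaces; every component and method name is a hypothetical placeholder, not MERL’s software.

```python
# Sketch of a detect/plan/act loop with a 100 ms cycle budget.
# All component and method names are hypothetical placeholders, not MERL's code.
import time

CYCLE_BUDGET_S = 0.1  # "see you in a tenth of a second"

def drive_loop(perception, planner, controller):
    """Run one perception -> planning -> control cycle per time budget."""
    while True:
        start = time.monotonic()

        obstacles = perception.detect_objects()  # perception: find pedestrians, cars, etc.
        maneuver = planner.plan(obstacles)       # planning: choose the next maneuver
        controller.execute(maneuver)             # control: steer, brake, accelerate

        elapsed = time.monotonic() - start
        if elapsed > CYCLE_BUDGET_S:
            # Missing the latency budget is itself a safety signal: slow down
            # or hand control back to the human driver.
            controller.degrade_gracefully()

        # Sleep off any remaining time so each cycle takes about 100 ms.
        time.sleep(max(0.0, CYCLE_BUDGET_S - elapsed))
```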

The stakes may be higher with driverless cars, but these are the same sorts of problems MERL has been trying to solve in other areas. It has developed algorithms to make Mitsubishi Electric laser cutters operate more efficiently, Waters says. MERL also developed software for particle beam therapy machines that can speed up the process of planning a cancer treatment procedure.

“Algorithms that are doing that are the tail that is wagging the dog,” Waters says. “By making those algorithms better, it makes that piece of equipment much more valuable than it otherwise would be.”

Over the years, MERL made a name for itself with advances in collaborative computer interfaces, digital audio, digital typography, computer graphics, image recognition, and other areas. Time will tell if it can make any significant contributions to driverless car systems. This might require a long slog of methodical progress for the research group—and the overall field.

“I think we are in the midst of a phased introduction” of driverless car technologies, Waters says. He points to incremental features already made available to consumers, including cars that can parallel park by themselves and systems that alert drivers when the vehicle is about to drift into another lane.

But we’re at least a decade away from cars that are capable of driving completely solo, says Jones, the computer-vision research scientist who has worked at MERL for about 15 years.

“I think we can get 95 percent of the way there, but the last 5 percent is really difficult,” Jones says. “You almost have to solve all of A.I. to have a car that drives with no driver.”

For example, creating software that can understand and predict the behavior of other human drivers is a huge challenge, and one that is outside the scope of MERL’s research, Waters says.

Michael Jones

Jones gives a hypothetical scenario in which a driverless car is perched behind a stop sign, trying to turn left onto a busy road that rarely has breaks in traffic. A human driver would do a sort of “social dance” where he or she inches forward, making eye contact with oncoming drivers and trying to find a safe enough opening to make the turn, he says. It would be hard to write software to handle that situation. “Really it’s reading the other drivers,” Jones says. “That kind of thing, I think, is very difficult to get a car to do safely.”

Another problem that must be solved, Waters says, is “how the car decides when it needs to stop being the autopilot—when it’s getting out of its depth. And how it notifies you fast enough and you take control fast enough to really be safe.”

Although things are moving fast in driverless car research, the field is still early and filled with hype. “We’re going to pass through valleys before we make it to the promised land,” Waters says.
