Mobileye Sets Safety-Based Rules of the Road for Self-Driving Tech


time, more and more Intel people are joining,” he explained. “One of the things we are getting from this acquisition is very fast scaling up in terms of manpower.”

Such growth has been “very difficult,” Shashua added, “because the kind of engineers we hire are very specific engineers”—the company specializes in computer vision and machine learning. Beyond the engineering challenges, though, Shashua said Mobileye has been working on all sorts of elements that go beyond developing technology—especially in terms of a formal safety model that Mobileye calls Responsibility-Sensitive Safety, or RSS.

Given that self-driving cars will be sharing the roads with human-driven vehicles for decades, Shashua said the underlying premise of RSS is that accidents involving self-driving cars are inevitable. He argued that such a model is needed to quickly determine whether or not the autonomous vehicle was at fault, “because when there is an accident, who is liable? It is not the passengers. It is the suppliers of the technology and the [auto] manufacturers…You need to have a model that clarifies the notion of blame, and formalizes the notion of what is dangerous, formalizes the notion of who is responsible.”

So, Mobileye has rolled out its formal safety model as the first step in establishing rules of the road for autonomous vehicles. Shashua said the model uses mathematical formulas to formalize a “common sense” approach to safety. The rules establish concepts of priority in various driving scenarios, setting parameters for maintaining speed and distance from other vehicles. For example, Shashua used a video demonstration to show how such rules would guide a self-driving car as it merges with freeway traffic.
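One of the best-known rules in the RSS framework is the minimum safe longitudinal following distance, published in Mobileye's academic paper on the model: the rear car must stay far enough back that, even if it accelerates for its full reaction time and the lead car then brakes as hard as possible, it can still stop in time. A minimal Python sketch of that rule follows; the parameter values (reaction time, acceleration and braking bounds) are illustrative assumptions, not Mobileye's calibrated figures.

```python
def rss_safe_distance(v_rear, v_front, rho=1.0,
                      a_accel_max=3.5, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe following distance (meters) between a rear car at
    v_rear m/s and a front car at v_front m/s, per the longitudinal
    formula in Mobileye's RSS paper.

    Assumed (illustrative) parameters:
      rho          -- rear car's response time, seconds
      a_accel_max  -- worst-case acceleration of the rear car during rho
      a_brake_min  -- minimum braking the rear car is guaranteed to apply
      a_brake_max  -- maximum braking the front car might apply
    """
    # Worst case: the rear car accelerates for the whole response time.
    v_rear_worst = v_rear + rho * a_accel_max

    d = (v_rear * rho                          # distance covered during response time
         + 0.5 * a_accel_max * rho ** 2        # extra distance from accelerating in that window
         + v_rear_worst ** 2 / (2 * a_brake_min)   # rear car's stopping distance, gentle braking
         - v_front ** 2 / (2 * a_brake_max))       # front car's stopping distance, hard braking

    return max(d, 0.0)  # a negative result means any gap is safe
```

For example, two cars both traveling at highway speed (30 m/s) need a gap of over 100 meters under these assumed parameters, while a stopped rear car needs none. Under RSS, a car that always maintains at least this distance cannot be blamed for a rear-end collision.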

“Merging into dense traffic requires significant ‘negotiations’ among [human] drivers,” he said. A self-driving car must emulate the same sort of assertiveness. If it waits for an opening, it can create a bigger problem by blocking other cars also attempting to merge.

With RSS, Shashua said, “We define in advance, what it means to be in a dangerous situation. What does it mean to have a proper response, and then guarantee that the actions our car would take would never cause an accident. So that if there is an accident, all you need to show is that you did not have a sensing mistake… If you show there is no sensing mistake, then the car’s actions cannot cause an accident. Therefore, the blame of the accident is on the other driver.”

Shashua said the company laid out its concept for RSS in an academic paper published last year, and it has been working with transportation regulators in a bid to establish RSS as an industry standard. In a speech last October in South Korea, Shashua called on industry leaders and policymakers to collaboratively construct standards that definitively assign accident fault when human-driven and self-driving vehicles inevitably collide.

Shashua’s presentation was perceived in some quarters as an attempt to avoid liability. But he told reporters the model is intended to address some basic questions about autonomous navigation, such as, “What does safety mean?” and “Can a self-driving car guarantee ‘safety?’”

“Without such a model,” Shashua said, “autonomous cars will remain a science project. They will never reach mass production.”


Bruce V. Bigelow was the editor of Xconomy San Diego from 2008 to 2018.
