What happens if a police officer wants to pull one of these vehicles over? When it stops at a four-way intersection, would it be too polite to take its turn ahead of aggressive human drivers (or equally polite robots)? What sort of insurance would it need?
These and other implications of what Google calls autonomous vehicles were debated by Silicon Valley technologists, legal scholars and government regulators last week at a daylong symposium sponsored by the Law Review and High Tech Law Institute at Santa Clara University.
As Google has demonstrated, computerized systems that replace human drivers are now largely workable and could greatly limit human error, which causes most of the 33,000 deaths and 1.2 million injuries that now occur each year on the nation’s roads.
Such vehicles also hold the potential for greater fuel efficiency and lower emissions — and, more broadly, for restoring the United States’ primacy in the global automobile industry.
But questions of legal liability, privacy and insurance regulation have yet to be addressed, and an array of speakers suggested that such challenges might pose far more problems than the technological ones.
Major automobile makers have already deployed advanced sensor-based safety systems that assist and, in some cases, correct driver actions. But Google’s project goes much further, transforming human drivers into passengers while coexisting with conventional vehicles driven by people.
Last month, Sebastian Thrun, director of Google’s autonomous vehicle research program, wrote that the project had achieved 200,000 miles of driving without an accident while cars were under computer control.
Over the last two years, Google and automobile makers have been lobbying for legislative changes to permit autonomous vehicles on the nation’s roads.
Nevada became the first state to legalize driverless vehicles last year, and similar laws have since been introduced in the legislatures of Florida and Hawaii. Several participants at the Santa Clara event said a similar bill would soon be introduced in California.
Yet simple questions, like whether the police should have the right to pull over autonomous vehicles, have yet to be answered, said Frank Douma, a research fellow at the Center for Transportation Studies at the University of Minnesota.
“It’s a 21st-century Fourth Amendment seizure issue,” he said.
The federal government does not have enough information to determine how to regulate driverless technologies, said O. Kevin Vincent, chief counsel of the National Highway Traffic Safety Administration. But he added:
“We think it’s a scary concept for the public. If you have two tons of steel going down the highway at 60 miles an hour a few feet away from two tons of steel going in the exact opposite direction at 60 miles an hour, the public is fully aware of what happens when those two hunks of metal collide and they’re inside one of those hunks of metal. They ought to be petrified of that concept.”
And despite Google’s early success, technological barriers remain. Some trivial tasks for human drivers — like recognizing an officer or safety worker motioning a driver to proceed in an alternate direction — await a breakthrough in artificial intelligence that may not come soon.
Moreover, even after intelligent cars match human capabilities, significant issues would remain, suggested Sven A. Beiker, executive director of the Center for Automotive Research at Stanford University. Today, human drivers frequently bend the rules by rolling through stop signs and driving above speed limits, he noted; how would a polite and law-abiding robot vehicle fare against such competition?
“Everybody might be bending the rules a little bit,” he said. “This is what the researchers are telling me — because the car is so polite it might be sitting at a four-way intersection forever, because no one else is coming to a stop.”
Because of the array of challenges, Dr. Beiker said he was wary about predicting when autonomous vehicles might arrive.
“Twenty years from now we might have completely autonomous vehicles,” he said, “maybe on limited roads.”
Legal liability and insurance are also unknown territory.
Potential liabilities will be huge for the designers and manufacturers of autonomous vehicles, said Gary E. Marchant, director of the Center for Law, Science and Innovation at the Arizona State University law school.
“Why would you even put money into developing it?” he asked. “I see this as a huge barrier to this technology unless there are some policy ways around it” — though he noted that there were precedents for Congress adopting such policies.
For example, liability exemptions have been mandated for vaccines, which are believed to offer great value for the general health of the population, despite some risks.
There will also be unpredictable technological risks, several participants said. For example, future autonomous vehicles will rely heavily on global positioning satellite data and other systems, which are vulnerable to jamming by malicious computer hackers.
Although they did not participate in any of the panel discussions, several Google engineers and employees attended the event. The company has declined to discuss what it might be planning to do with its autonomous vehicle research, and several participants said privately that they did not believe the company planned to become a provider of autonomous navigation systems to the automobile industry.
Several people with knowledge of the company’s plans said that Google’s lobbying for state laws to permit autonomous driving indicated that it hoped to introduce such vehicles soon — driverless delivery vans or taxis, as early as 2013 or 2014.
Several participants suggested that in addition to technological and legal challenges, autonomous driving could use a more consumer-friendly name. Some called the definition itself into question.
“It won’t truly be an autonomous vehicle,” said Brad Templeton, a software designer and a consultant for the Google project, “until you instruct it to drive to work and it heads to the beach instead.”