Reflections on legal studies of autonomous driving
Autonomous driving is no longer a hypothesis, but a reality. Photo: TUCHONG
One of the main drivers of legal development is the development of the technology itself. Several companies have removed safety drivers from their vehicles, much like climbing without a safety rope. This confident move introduces new risks and puts pressure on the legal framework.
Latest legal developments
At the international level, the UN is working on several fronts. The first is the development of international regulations on the design of automated driving systems, which countries can then incorporate into their own rules; this has the greatest impact in Europe. The US and Japan have also expressed interest in participating in rule-making with international reach. Autonomous vehicles will eventually cross borders, raising issues of cross-border data exchange and control. For example, the company operating a vehicle may be in one country while the remote agent it uses to monitor the system is in another. Or a vehicle may arrive in another country whose local law enforcement wants to access its data or issue instructions to it. International dialogue helps to exchange and coordinate ideas, which in turn opens up discussion of these transnational issues. At the UN, countries are debating whether a new treaty is needed; European countries generally support this approach, while others are hesitant.
In the US, at the federal level, the National Highway Traffic Safety Administration (NHTSA) and the broader Department of Transportation have spent years thinking about regulation. Although they have moved more slowly than expected, they have done some notable work. First, they began revisiting the Federal Motor Vehicle Safety Standards to adapt them to autonomous driving, something the executive branch can do directly without new legislation from Congress. Second, NHTSA has exercised its enforcement and investigative powers extensively to monitor and address safety issues in real time, including investigations into certain driver assistance systems and a requirement that all companies developing or deploying autonomous driving systems or specified driver assistance systems report large amounts of data for analysis.
Congress has not acted yet, but that does not mean much in practical terms, because the Department of Transportation already has the power to facilitate and regulate autonomous driving. Still, it is striking that Congress has done nothing for years. Perhaps the automotive industry prefers to wait for a more favorable outcome, or perhaps other interest groups, such as cities or trial lawyers, have different priorities. One major issue blocking legislation is that trial lawyers want to ensure autonomous driving companies cannot force customers to resolve their claims through arbitration, so that victims can still go to court.
The U.S. operates a self-certification system at the federal level. The federal government mandates a range of vehicle standards, and automakers attest that their vehicles meet those standards; if there is no standard, there is nothing to attest to. State governments also have some legal authority over vehicle safety, but it is mostly focused on operational safety: Is the vehicle registered? Is it properly maintained? Does the driver have a valid driver's license? States take different approaches to autonomous driving. I have long argued that autonomous driving may already be legal, a view shared by many states, such as Arizona, where autonomous driving was deployed long before any specific laws were passed. California has a complex regulatory system, while Texas and Florida have taken a hands-off approach. Interestingly, the relationship between what a state chooses to do legislatively and what kind of autonomous driving activity, if any, happens in that state is mixed. When California introduced regulations, there was a perception that car companies would avoid testing there. In fact, many companies chose to test in California precisely to show investors that they had received government approval.
Distribution of responsibility
At the two extremes of automation, responsibility is relatively simple. If humans are responsible for maintaining and driving their own vehicles, then the law may assume that humans are responsible in the event of a crash. When the system is fully automated, the responsibility of the human in the vehicle can be as small as that of a taxi passenger, and the company that develops and deploys the system should be held accountable. In the middle ground between these two scenarios, a number of conceptual questions and legal challenges arise: the human driver has a role, but the autonomous driving system may also be activated (i.e., Level 3 in the SAE taxonomy of driving automation). For example, a human may need to decide when to activate the system, and while they need not watch the road, they still need to heed the vehicle's prompts. When the system prompts the human to take over, the human must react quickly. The human may also need to stay alert for obvious vehicle malfunctions and for the sirens of emergency vehicles. This raises questions: What must humans do? Can they do anything else? How quickly must they react? Who is the driver if the human does not take over? Some countries suggest that humans remain responsible for ensuring that the vehicle is maintained and that passengers wear seatbelts, and for managing the aftermath of a crash.
This kind of discussion overcomplicates things. At the end of the day, either the person is driving or the company is driving. The company drives through some combination of its human agents and its machine agents until a human actively resumes driving. This approach puts the focus on the entity best equipped to ensure safety. It means that if the vehicle asks the human to resume driving and the human does not take over, the company is still the driver. That will spur companies to take the necessary measures, such as vibrating the seat, turning up the radio, sounding alarms, or calling emergency responders, and refusing to activate the self-driving mode if someone is not wearing a seatbelt or the tire pressure is too low.
Similarly, if we count on companies to respond to incidents, they will find ways to do so, whether by delegating tasks to users or by deploying automated alternatives, such as sending drones to deliver traffic cones or mobilizing call centers, human operators, backup systems, or even remote drivers.
If the autonomous vehicles of the future involve multiple companies, such as maintenance, software, and hardware companies, then we return to a basic principle: responsibility is not binary. Multiple entities can be held accountable under a variety of theories. But I believe the law should identify the company that is primarily responsible and let the companies settle the allocation of responsibility among themselves, while the victim can still hold all of these potential participants accountable.
Misleading marketing
Once the industry decides to roll the technology out to the public, it will combine the marketing power and deep pockets of the automotive and information technology industries to convince many people that autonomous driving is valuable. Autonomous driving technology will not be available to everyone, and, unfortunately, without policy changes it might not be available to those who need it most, but it will still have a significant impact.
Tesla named its assistance systems "Autopilot" and then "Full Self-Driving", and yet these systems require a human driver to supervise and, in effect, drive the car more attentively than ever before, which is paradoxical. Crashes that have already occurred can result from poor system performance, such as failing to detect stationary objects at the side of the road, or from a human driver who believes the system is smart enough and therefore becomes lax.
Tesla wants the best of both worlds: trading on the allure of self-driving while shifting blame to the user. Not long ago, a driver was criminally prosecuted for causing a crash while relying on the Autopilot system, and the outcome was unfavorable to the driver. Crash victims have also sued, claiming that Tesla's system caused the crash. Two early judgments went in Tesla's favor, and other claims were likely settled before reaching court, which means we will never know how they were resolved. Some juries do seem to focus more on the human driver and less on the environmental factors that can induce human laxity or error. Some governments have also launched investigations into the performance of Tesla's systems and its misleading marketing.
Trust and trustworthiness
The public is susceptible to marketing. Instead of asking whether the public trusts a new technology, we should ask whether a company can be trusted. This shifts the focus from technology to companies, and from an empirical question (trust) to a normative one (trustworthiness). It is essential to discuss public trust separately from the trustworthiness of the business: just because the public trusts a technology does not mean it deserves that trust, and equating the two is dangerous.
Recently, there was an incident in California in which a Cruise driverless vehicle dragged a pedestrian. Much of the focus has been on how Cruise's system failed; almost no one mentions that a human driver first struck the pedestrian and then illegally fled the scene. Tragedies will occur as the technology develops. At a minimum, we should expect the crash rate caused by the technology to be lower than that of human driving. Driving in the US is dangerous: about 100 people die every day in ordinary traffic crashes. So when talking about new technologies, not all attention should go to what is shiny and new; more attention should be paid to ordinary crashes.
While technical failures are inevitable, companies still have a chance to make amends. In this case, however, Cruise failed egregiously by concealing the truth of the crash and misleading the public: it showed reporters only video of the car coming to a stop after the collision, not the footage of the vehicle restarting and dragging the pedestrian for about 10 meters.
This is unforgivable, because the only way for the public to judge the safety of a system before it accumulates millions of kilometers of driving experience is to assess whether the technology company is trustworthy. These technologies are extremely complex, constantly changing, and uncertain, and not every drive yields the same result. Even regulators lack the resources, data, and expertise to understand them fully. So trustworthiness becomes a proxy for safety.
Trustworthy technology companies
Companies need to do three things to be trustworthy: share their safety philosophy with the public, make commitments, and keep them. First, explain in detail what the company is doing, why it reasonably believes this is safe, and why the public can trust it. This amounts to developing a public safety case and supporting it with evidence, not hype. Second, promise the public that the company will market only products it has reason to believe are safe, and be candid about the products' limitations and shortcomings. When a product fails, the company corrects the error. Third, technology companies should appropriately temper public expectations while monitoring products throughout their lifecycle so they can mitigate harm quickly, adequately, and openly. This means not concealing the facts and not seeking private settlements to cover up the fact that someone was injured in the vehicle.
We have seen industry overhype artificially and counterproductively inflate public expectations, leaving companies riding a tiger they cannot dismount. Instead, companies should be honest and say, "In some driving scenarios, the risk is slightly higher than that of a human driver." Then, a year later, they can report on the improvements made over that year. This practice builds credibility and ultimately benefits the company; trust follows as a byproduct rather than being pursued as an end in itself.
Bryant Walker Smith is a professor at the School of Law of the University of South Carolina.