Liability for harm caused by AI products: Are users at fault?
Once confined to specialized professional fields, artificial intelligence has become increasingly integrated into daily life, appearing in everything from household products to transportation. In this context, a pressing question arises: Should users be held accountable for damages caused by AI products during use, just as they would be for traditional products?
Differences in liability mechanisms
AI systems can be categorized as either “assistive” or “substitutive.” Assistive systems—such as medical diagnostic assistants or judicial material retrieval tools—support a human expert who retains final decision-making authority and responsibility. They are typically employed in specialized fields, where experts use AI to improve decision-making efficiency and to mitigate human bias and arbitrariness; every conclusion the AI provides must be reconfirmed by a human expert before it is acted upon. Substitutive systems—such as vehicles with conditional self-driving capabilities—reduce the need for real-time human intervention and the associated responsibility: they can replace human tasks within their operational design domain, though users must still take control in the event of system failure.
The integration of an AI system significantly alters the functionality and user experience of traditional products, but the user’s legal status remains unchanged. Whether an expert or an ordinary consumer, a user does not become the producer merely by using AI. The producer remains the central actor responsible for the AI system’s safety, including its design, manufacturing, maintenance, and risk management. The user, by contrast, is liable for damages resulting from their own misuse. Accordingly, the use of AI does not disrupt the existing liability framework, and the user’s legal status continues to be determined by their actions and fault, rather than by the autonomy of the technology itself.
User fault determination
Fault is primarily determined by whether the actor has breached a reasonable duty of care, with “foreseeability” as a prerequisite. Fault arises when an actor could have anticipated that their actions might harm protected legal interests but failed to take reasonable preventive measures. Foreseeability is assessed not according to the actor’s subjective abilities, but against an abstract “reasonable person” standard, which asks whether the actor should have anticipated the risk given the knowledge, experience, competence, and diligence typical of the relevant profession at the time. This standard requires the actor to proactively investigate and understand the risks associated with their actions, with “risk” referring to potential categories of harm rather than specific instances. The duty of care obliges the actor to take measures proportional to the level of risk.
The introduction of AI presents challenges to determining user fault. First, the “black box” nature of AI technology renders traditional foreseeability standards less directly applicable, and the focus of the user’s duty of care shifts from exercising caution over their own actions to reviewing, verifying, and supervising the AI system. Notably, a user’s ability to manage risks often depends on the provider fulfilling its duties of information disclosure and provision of countermeasures. If the provider fails to adequately inform the user of usage norms or provide necessary security updates, the user may not bear the corresponding duty of care. Therefore, although the use of AI does not completely overturn the fundamental principles of fault determination, the standards for assessing fault should be adjusted to distinguish between experts and ordinary users.
Experts, equipped with greater knowledge and skills, engage in more specialized activities. Accordingly, their duty of care when using AI systems should exceed that of ordinary users. The duty of care for ordinary users varies according to the type of AI system, the context of its use, and the user’s ability to manage risk. For assistive AI, users remain involved in decision-making and supervision, thereby bearing a higher duty of care. For substitutive AI, the focus shifts to system inspection and compliance with usage norms.
In the future, as AI technology continues to advance—particularly with improvements in interpretability and transparency—the difficulties in determining user fault are expected to be gradually overcome. To keep pace with these technological shifts, legal standards for the user’s duty of care must also continuously adapt. In the early stages of technological development, a lower duty of care can encourage the adoption and diffusion of new technology. As the technology matures and interpretability mechanisms become well-established, users may be required to assume higher duties of supervision and intervention to balance freedom of action, victim protection, and risk control.
Dou Haiyang is a research fellow at the Institute of Law, Chinese Academy of Social Sciences.
Editor: Yu Hui