Who should be liable for generative AI infringements?

Unauthorized or improper use of image generation technology may constitute various legal violations. Photo: TUCHONG
In recent years, the rapid rise of generative artificial intelligence has challenged the existing legal framework for tort liability. Current regulatory rules attempt to clarify the allocation of liability for generative AI within the traditional tort law framework, distinguishing between its “provision” and “use,” with technology service providers as the primary focus of regulation. Yet as generative AI becomes increasingly embedded in diverse applications, this binary classification has proven inadequate. It is therefore imperative to revisit the principles guiding liability in cases of generative AI infringements.
Expansion of liable parties
Amid deep industrial integration, the traditional dual structure that categorizes liable parties as either “providers” or “users” no longer captures the full spectrum of actors involved. Generative AI output is shaped jointly by models, training data, and the parties who adapt and operate them, so liability should be allocated in proportion to each actor’s degree of fault. Deployers, who adapt foundational models to specific application scenarios through API invocation, local deployment, or system integration, play a pivotal role in shaping outputs. They are simultaneously users of AI technology and providers of AI-powered professional services. Forcing them into either traditional category blurs the scope of their duties and the applicable standard of reasonable care, complicating liability attribution.
It is therefore necessary to establish three categories of liable parties: service providers, who develop and supply foundational models; deployers, who tailor models to specific applications; and users, who ultimately employ AI-generated content. Obligations and liabilities should be reconstructed in accordance with each party’s actual degree of control and influence.
Difficulties in fault determination
Determining fault in generative AI infringements is complicated by the tension between AI’s “black box” nature and the legal requirement for interpretability in liability attribution. Under current technological conditions, a reasonably structured framework for presuming fault could be implemented through several approaches.
First, regulatory authorities could require service providers and deployers to disclose the design principles of foundational algorithms, the types of training data used, and the range of potential biases, as well as to submit algorithmic registration and transparency reports. Second, fault can be inferred indirectly by comparing a system’s outputs with those of similar systems, or with what a “reasonable person” would be expected to do under comparable circumstances. Third, by examining system logs and historical output records, regulators can review whether a model has previously generated similar infringing content and whether corrections have been implemented through feedback mechanisms.
In content-generation chains involving multiple actors, harm rarely results from the actions of a single party. When service providers or deployers knew, or should have known, of risks but failed to implement reasonable preventive measures, they may be held jointly liable.
Articles 1168 to 1175 of China’s Civil Code allow considerable interpretive flexibility in applying joint and several tort liability to scenarios involving generative AI. Nonetheless, such liability should be applied cautiously to avoid dampening incentives for innovation. From the perspectives of technological controllability and economic efficiency, liability should not be traced entirely back to the original developer unless a foundational model exhibits obvious design defects or unpatched known vulnerabilities.
Shifting grounds for exemption
Some scholars argue that the “notice-and-takedown” rule applied to traditional online services can be fully extended to generative AI systems. Yet this view risks overlooking the particularity of AI-generated content. For instance, if a user deliberately inputs another person’s name along with defamatory descriptions and induces an AI model to generate a corresponding image, this may constitute an infringement of personality rights. In such cases, completely exempting the AI art platform from liability merely because it deleted the image upon notification would not adequately remedy the harm suffered by the victim.
Therefore, a limited, tiered exemption mechanism should be established, emphasizing “reasonable control capacity” and “technical feasibility.”
Under such a tiered exemption framework, service providers or deployers who can demonstrate that they have applied all reasonable measures—such as data cleansing, compliance filtering, risk alerts, and real-time monitoring—yet could not prevent certain infringing content may qualify for partial or full exemption from liability. Liability may also be mitigated if a party promptly removes infringing content or adjusts the model upon notification. Technology research and development enterprises that have implemented comprehensive compliance procedures and adhered strictly to industry standards could similarly be eligible for reduced liability for unforeseeable infringements.
Deng Jianpeng is a professor in the School of Law at Central University of Finance and Economics.
Editor: Yu Hui