
Algorithmic discrimination requires further regulation

Author: Zou Ju | Source: Chinese Social Sciences Today | 2020-11-19

In the context of artificial intelligence, algorithms are a double-edged sword, widening existing gaps in society while bringing convenience to people's lives. Photo: FILE

In the context of artificial intelligence, algorithms are a double-edged sword. As data about people's identities and behaviors is collected on a massive scale, their health conditions, preferences, purchasing power, and development potential come under the control of algorithm users. By precisely identifying different types of social subjects, algorithms can discriminatorily confine people to their own data cocoons, widening existing gaps in society even as they bring convenience to people's lives.

Reasons

Human biases and intentions are among the factors that contribute to algorithmic discrimination. Fundamentally, algorithms are value-laden products. Although machine decision-making is described as automatic, it is humans who decide which data features affect the output, so value orientations and biases deeply rooted in human society can easily be incorporated into the design of technical systems.

Apart from biased input at the design stage, bias can also take hold if the training of an algorithmic system incorporates discriminatory cases; its discriminatory outputs then feed back into future training data, forming a bias cycle.

Algorithms can also reproduce unintentional, technical discrimination. Once data is fed in, the output easily goes awry if the training data was drawn from groups already affected by bias, or if the collected data does not reflect the real conditions of the target group.
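
To make this mechanism concrete, here is a minimal, hypothetical sketch in Python: a toy decision rule "trained" on historically skewed approval records simply reproduces the disparity it was given. All groups, numbers, and thresholds are invented for illustration, not taken from any real system.

```python
from collections import defaultdict

# Historical decisions, already skewed against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": tally the approval rate observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group):
    """Approve whenever the historical approval rate exceeds 50%."""
    approvals, total = counts[group]
    return approvals / total > 0.5

print(predict("A"))  # True  -- group A continues to be approved
print(predict("B"))  # False -- the historical skew is now automated
```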

During machine learning, the path from data collection to the generation of knowledge is one of machine translation and analysis based on correlation, which does not always align with specific situations or human intuitions.
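
The following sketch, again with invented data, illustrates how correlation-based reasoning can discriminate even when the protected attribute is never an input: a correlated proxy feature (here, a postal code) lets the rule split outcomes exactly along group lines.

```python
# Toy records in which postal code and group are perfectly correlated.
records = [
    {"postcode": "1001", "group": "A", "approved": True},
    {"postcode": "1001", "group": "A", "approved": True},
    {"postcode": "2002", "group": "B", "approved": False},
    {"postcode": "2002", "group": "B", "approved": False},
]

def decide(postcode):
    """A rule mined purely from correlations; the protected attribute
    ("group") is deliberately excluded from its inputs."""
    approvals = [r["approved"] for r in records if r["postcode"] == postcode]
    return sum(approvals) / len(approvals) > 0.5

# Outcomes nevertheless split exactly along group lines, because the
# postal code acts as a proxy for the protected attribute.
for r in records:
    print(r["group"], decide(r["postcode"]))
```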

Compared with discrimination in the traditional sense, the two types of algorithmic discrimination described above are far harder to detect, posing new challenges to governance. Algorithmic decision-making is intrinsically complex: though the data input and output are visible, the logic and mechanisms of machine learning lie largely beyond the knowledge of ordinary people.

Sheltered behind professional technologies and algorithmic black boxes, human biases and underlying political and economic intentions become harder to discern and verify. In addition, the protection of intellectual property rights and business confidentiality clauses hinders legal intervention in the internal review of algorithms.

As artificial intelligence develops, the black-box effect of these technologies will grow more severe. Without regulation, the power imbalances and social risks brought by algorithmic discrimination will continue to worsen.

EU and US regulations

Some countries have implemented regulations on algorithmic discrimination tailored to their own legal systems and governance traditions. European Union (EU) countries have mostly chosen the regulatory path of individual empowerment.

Since algorithmic discrimination is grounded in data collection and processing, EU member states have expanded their regulatory framework from traditional anti-discrimination law to the General Data Protection Regulation (GDPR), seeking to empower individuals to influence automated decision-making by granting them new data rights.

For example, the GDPR grants data subjects rights of access, erasure, and data portability. Article 22 even stipulates that the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Moreover, the data subject has the right to obtain human intervention, to express his or her point of view, and to contest the decision.

The EU model values fairness, placing its emphasis on extending individual rights to check the power of algorithms. However, given the covert nature of algorithmic discrimination, data subjects' actual ability to identify discrimination and produce evidence of it falls far short of what combating discrimination through institutional design could achieve. Furthermore, discrimination by algorithms sometimes exceeds the scope of personal data protection regulations, because intelligent analysis based on public data can also lead to discrimination.

Whereas the EU model pursues fairness in the allocation of rights, American lawmakers seek justice in algorithmic outcomes. Following this idea, they take a regulatory path centered on external accountability. Policies seek to establish specialized administrative or regulatory bodies; algorithm controllers are obligated to report on the reasonableness of their algorithms along with risk-prevention measures, and government agencies are entitled to investigate algorithm controllers.

This model requires algorithms to be explainable and auditable. On the premise of strengthening the obligations of algorithm controllers, external regulators push them to respond to problems through internal self-regulation, thereby reducing the risks posed by algorithmic outcomes.
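
As a rough illustration of what "explainable and auditable" might mean in practice, the sketch below logs every decision together with its inputs and the rule that fired, so that an external reviewer could later reconstruct how an outcome was produced. The decision function, its threshold, and the log format are all hypothetical.

```python
import json
import time

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def decide_and_log(applicant):
    """Make a decision and record inputs, rule, and outcome for review."""
    approved = applicant["score"] >= 600  # hypothetical threshold rule
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "inputs": applicant,
        "rule": "score >= 600",
        "outcome": approved,
    })
    return approved

decide_and_log({"id": 1, "score": 640})
decide_and_log({"id": 2, "score": 555})
print(json.dumps(AUDIT_LOG, indent=2))  # the record a regulator would inspect
```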

Currently, the accountability acts released by New York City in 2017 and Washington State in 2019, as well as related federal government reports published in 2016, focus on algorithmic decision-making in the public sector. The Algorithmic Accountability Act, still being debated in Congress, attempts to broaden the scope of accountability to include large private organizations.

The pioneering practice of establishing an algorithmic accountability system for the public sector conforms to the US tradition of marketization, under which the private sector is left largely to market forces; whether the system can be extended to the private domain remains to be seen. Meanwhile, challenges of technological transparency and barriers in intellectual property law remain to be resolved.

Generally speaking, EU countries regulate algorithmic discrimination by building a fair governance environment that strengthens public control of data and algorithms, while the US intensifies institutional incentives to orient algorithmic technologies toward justice through external regulation and internal cooperation. How effective the two models will prove requires further observation, but both reflect a shift from accountability for actual damage toward risk prevention.

In the case of China

The Constitution of China is built on the fundamental principle that all people are equal before the law, but legal provisions addressing the new problem of algorithmic discrimination were not in place until 2019. The E-Commerce Law, for example, mandates that e-commerce operators provide consumers with options that do not target their personal characteristics. From the perspective of legal improvement, several measures can be taken.

First, it is essential to achieve fairness in code design and data use. Code is the language tool through which algorithmic goals are realized, and as such it is technically capable of regulating algorithms. This requires experts to internalize the principle of fairness in code design, paying particular attention to embedding law and ethics into the technology at such stages as data screening, cleaning, and feature selection.
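
One way such fairness checks might be internalized at the design stage is sketched below: a simple demographic parity test run during data screening or feature selection. The data, group labels, and tolerance threshold are all assumptions chosen for demonstration.

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Run during data screening or feature selection: flag the design
# if the gap exceeds a tolerance chosen by the review team.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]  # 1 = favorable decision
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
TOLERANCE = 0.5  # illustrative threshold only
assert gap <= TOLERANCE, f"fairness gap {gap:.2f} exceeds tolerance"
print(f"demographic parity gap: {gap:.2f}")
```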

Fairness in data use manifests itself in empowering data subjects while requiring algorithm controllers to minimize the scope of data collection and democratize data use. Zhang Xinbao, a professor of law at Renmin University of China, pointed out in the Personal Information Protection Law (Expert Proposal) the necessity of restricting information practitioners from gathering and analyzing sensitive personal information. Practitioners should inform individuals and obtain their consent when making algorithmic decisions on credit granting, insurance acceptance, and job recommendations, while individuals retain the rights to intervene and to withdraw.
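
A minimal sketch of how data minimization and consent requirements of this kind might look in code, with hypothetical field names, decision types, and placeholder decision logic:

```python
ALLOWED_FIELDS = {"age_band", "income_band"}          # data minimization
SENSITIVE_DECISIONS = {"credit", "insurance", "job"}  # consent required

def collect(raw_profile):
    """Keep only the fields the stated purpose actually needs."""
    return {k: v for k, v in raw_profile.items() if k in ALLOWED_FIELDS}

def decide(profile, decision_type, consent):
    """Refuse sensitive algorithmic decisions without explicit consent."""
    if decision_type in SENSITIVE_DECISIONS and not consent:
        raise PermissionError("explicit consent required for " + decision_type)
    return "decision issued for " + decision_type  # placeholder logic

profile = collect({"age_band": "30-39", "religion": "x", "income_band": "mid"})
assert "religion" not in profile  # the sensitive field was never stored
print(decide(profile, "credit", consent=True))
```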

Moreover, efforts should be made to reasonably enhance the transparency of algorithmic design and operation. For technical and legal reasons, it is unrealistic to demand full transparency of algorithms, so transparency requirements should be limited in extent and range, striking a reasonable balance between innovation and regulation.

Across industries and application fields, transparency requirements should vary according to factors such as the complexity of the architecture, the probability that discrimination will occur, the scope of a decision's influence, and the severity of the potential harm.

To implement the transparency principle, it is crucial to establish dedicated, professional regulatory bodies that investigate practitioners in a refined, scenario-based manner, combining ex ante and ex post, regular and ad hoc investigations.

Furthermore, accountability for damages caused by algorithmic discrimination should be reinforced. Liability should be borne by the natural or legal persons who develop or use the system, and information about the personnel involved in algorithm development should be disclosed to the public.

This is not only a matter of legal liability; it also makes the public's rights of intervention and withdrawal feasible, allowing timely remedies for damage. Once damage occurs, the hidden nature of algorithmic discrimination normally prevents individuals from perceiving it and producing evidence of it. Some scholars have proposed introducing a class action system to alleviate the information and resource asymmetry between victims and algorithm operators, lower litigation costs, and improve procedural efficiency.

Regarding the burden of proof, the GDPR's approach of reversing the burden of proof, as in defective-product liability, is instructive. Such a reversal can narrow the gap in knowledge and technology between the two sides, reflecting fairness, while also prompting algorithm operators to enhance algorithmic transparency and to evaluate and mitigate potential discriminatory harm.

As the renowned technology historian Melvin Kranzberg observed, technology is neither good nor bad, nor is it neutral; it generates different results under different circumstances of application. As strategic emerging technologies that will usher in the future, artificial intelligence and algorithms should be oriented toward goodness. The law should follow closely, warding off the detrimental consequences of algorithmic discrimination and making artificial intelligence a genuinely positive force for the equal development of humanity.

 

Zou Ju is an associate professor at the School of Journalism and Communication, Nanjing Normal University.

Editor: Niu Xiaoqian
