
AI reconsidered: Thinking outside Pandora’s box

Author: Du Yanyong | Source: Chinese Social Sciences Today | 2016-05-17

When Google’s AlphaGo computer program defeated the South Korean professional Go player Lee Sedol in a historic five-game match in March 2016, it marked a milestone for artificial intelligence. For the first time, a program beat a world-class human player at Go, the traditional Chinese strategy game, by adapting and developing strategies of its own, prompting a new round of reflection on the future of the technology.

This is only the latest in a series of matchups between computers and human players. In May 1997, an IBM supercomputer known as Deep Blue humbled the reigning world chess champion Garry Kasparov, who had earlier boasted that he would never lose to a machine. In 2011, IBM’s Watson supercomputer defeated its human rivals on the quiz show Jeopardy!

However, this most recent match was arguably an uneven contest. AlphaGo was built on the latest research and the most advanced techniques. It far surpasses human players in the speed and accuracy of its calculations, as well as in the sheer volume of training it can undergo. Moreover, it can improve its skills both by learning from matches with human players and by playing against itself.
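
AlphaGo’s actual training combines deep neural networks with Monte Carlo tree search, the details of which are beyond the scope of this article. The underlying idea of improvement through self-play, however, can be illustrated with a toy sketch. In the illustrative Python program below (a hypothetical example for exposition, not AlphaGo’s real method), a simple tabular learner plays the game of Nim against a copy of itself, and the shared value table it learns from improves with every game played:

import random
from collections import defaultdict

# Toy illustration of learning through self-play -- NOT AlphaGo's actual
# method. A tabular Monte Carlo learner plays Nim against a copy of
# itself: 7 stones, each move removes 1-3, whoever takes the last wins.
Q = defaultdict(float)            # Q[(stones_left, move)] -> value estimate
MOVES = (1, 2, 3)
ALPHA, EPSILON, GAMES = 0.5, 0.1, 50000

def choose(stones):
    """Epsilon-greedy move for whichever copy of the agent is to act."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)  # occasionally explore a random move
    return max(legal, key=lambda m: Q[(stones, m)])

for _ in range(GAMES):
    stones, history = 7, []
    while stones > 0:                 # both sides use the same policy
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The side that took the last stone won (+1); its opponent lost (-1).
    # Credit the alternating moves backwards through the finished game.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# After self-play training, the greedy policy finds the optimal opening:
# take 3 stones, leaving the opponent a losing multiple of 4.
print(max(MOVES, key=lambda m: Q[(7, m)]))    # prints 3

Scaled up enormously, with the table replaced by neural networks and the exploration guided by tree search, this same basic loop of self-improvement is, roughly speaking, what lets a program grow stronger simply by playing against itself.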

In addition, it is immune to the nervousness and anxiety that human players tend to suffer. Humans, though, retain distinct advantages over AI. AlphaGo is very capable of learning the game of Go, for example, but it cannot carry that learning over to other domains by analogy. Humans can readily apply experience gained in one area to others.

The fall of the game of Go as “the last high ground of human wisdom” shows that AI has surpassed humans in some respects. While human intelligence is the product of millions of years of accumulated evolution, AI reached this level in mere decades. It is therefore reasonable to believe that AI may one day catch up with or even surpass human intelligence. How, then, should we view increasingly smart AI?

Nowadays, many robots have been built for a wide range of uses, including military, caretaking and agricultural tasks. Their use, however, rests on the premise that robots remain under human control, which raises the question of whether AI could develop autonomy.

It’s worth noting that scientists tend to think more about how to develop the capacities of AI and less about how to control it. People jokingly say that if we pull the plug, then AlphaGo will stop working. But there is a possibility that AI could one day provide its own energy supply.

Oxford philosopher and transhumanist Nick Bostrom once highlighted the ethical issues raised by advanced AI through the now-canonical thought experiment of the paperclip maximizer: a superintelligence whose sole goal is something completely arbitrary, such as manufacturing as many paperclips as possible. Such a system would resist any attempt to alter this goal, ultimately transforming the entire planet into one giant paperclip-manufacturing facility. The thought experiment shows how a general artificial intelligence, even one designed competently, without malice and with apparently innocuous values, could pose an existential threat to humanity.

Though scientists are generally optimistic about the future of AI, they cannot maintain absolute control over how their inventions are used; nuclear weapons are a case in point. In this sense, the rapid development of technology exposes people to ever greater threats. If scientists cannot control the application of their inventions, they should be cautious in developing them.


Du Yanyong is an associate professor from the School of History and Culture of Science at Shanghai Jiao Tong University.

Editor: Yu Hui
