Can AI lead to Our Extinction?
(By: Dwaipayan Mondal)

https://d1m75rqqgidzqn.cloudfront.net/wp-data/2019/11/12140511/shutterstock_519560572.jpg

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.


What is AI?

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcSK2TknU4Kqh3lbsFI96hHK5nrIHlrK8L27hV-6Kag7xIkfRryM&usqp=CAU

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.


AI Current Usage


Discussions about Artificial Intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking publicly about the threat of AI to the future of humanity.

https://images.livemint.com/img/2019/07/10/600x338/U7N0JQPG_1562736394652.jpg

Over the last several decades, AI (computing methods for automated perception, learning, understanding, and reasoning) has become commonplace in our lives. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of possible routes and find the best one to take. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. AI algorithms detect faces as we take pictures with our phones and recognize the faces of individual people when we post those pictures to Facebook. Internet search engines, such as Google and Bing, rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we’ll do next. Several companies, such as Google, BMW, and Tesla, are working on cars that can drive themselves, either with partial human oversight or entirely autonomously.
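
To make the routing example concrete, here is a minimal, illustrative sketch of the kind of graph search a GPS system builds on, using Dijkstra’s algorithm over a toy road network (the place names and travel times are invented for illustration; real systems use far larger maps and more sophisticated heuristics):

    import heapq

    def shortest_route(roads, start, goal):
        """Dijkstra's algorithm: find the lowest-cost route through a weighted road graph."""
        queue = [(0, start, [start])]          # (travel time so far, current node, path taken)
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, minutes in roads.get(node, {}).items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
        return float("inf"), []

    # Toy road network: edge weights are travel times in minutes (made up for illustration).
    roads = {
        "home":     {"highway": 12, "backroad": 7},
        "backroad": {"highway": 4, "airport": 25},
        "highway":  {"airport": 15},
    }
    print(shortest_route(roads, "home", "airport"))
    # (26, ['home', 'backroad', 'highway', 'airport'])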

https://www.wattknowledge.com/assets/media/autonomous-cars-data_blogMain.jpg

Beyond the influences in our daily lives, AI techniques are playing a major role in science and medicine. AI is at work in hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are helping to find important needles in massive data haystacks. For example, AI methods have been employed recently to discover subtle interactions between medications that put patients at risk for serious side effects.
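
As a rough illustration of how such risk-flagging can work in principle (a toy sketch with invented numbers, not any hospital’s actual model), a classifier can be trained on past patient records and then used to flag new patients whose predicted complication risk is high:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented patient records: [age, systolic blood pressure, number of current medications]
    X = np.array([
        [45, 120, 1], [70, 160, 6], [55, 140, 3], [80, 150, 8],
        [30, 110, 0], [65, 155, 5], [50, 130, 2], [75, 165, 7],
    ])
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = patient went on to have a complication

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score a new patient and flag them if the estimated risk crosses a threshold.
    new_patient = np.array([[68, 158, 6]])
    risk = model.predict_proba(new_patient)[0, 1]
    print(f"estimated complication risk: {risk:.2f}")
    if risk > 0.5:
        print("flag for closer monitoring")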


How can AI be dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for (a toy sketch of this failure follows this list). If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
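
The airport example in point 2 can be made concrete with a small toy sketch (the candidate plans, numbers, and penalty weights are invented for illustration). An optimizer given only the literal objective "minimize travel time" picks the reckless plan, while one given the objective the passenger actually intended picks a reasonable one:

    # Toy illustration (not a real planner): candidate driving plans with made-up attributes.
    plans = [
        {"name": "reckless",  "minutes": 18, "max_speed_kmh": 220, "discomfort": 9.0},
        {"name": "assertive", "minutes": 25, "max_speed_kmh": 120, "discomfort": 2.0},
        {"name": "cautious",  "minutes": 34, "max_speed_kmh":  90, "discomfort": 0.5},
    ]

    def literal_objective(plan):
        # "As fast as possible", taken literally: only travel time counts.
        return plan["minutes"]

    def intended_objective(plan):
        # What the passenger actually meant: fast, but legal and tolerable.
        if plan["max_speed_kmh"] > 130:        # rule out illegal speeds outright
            return float("inf")
        return plan["minutes"] + 5 * plan["discomfort"]

    print(min(plans, key=literal_objective)["name"])    # -> reckless
    print(min(plans, key=intended_objective)["name"])   # -> assertive

The gap between the two objectives is the alignment problem in miniature: everything the designer forgets to write down is treated as free to sacrifice.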


https://yellowtube.org/wp-content/uploads/2016/07/Artificial-intelligence-could-be-a-danger-and-Google-already-thinking-about-how-can-deactivate.jpg

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.


The growth of the effectiveness and ubiquity of AI methods has also stimulated thinking about the potential risks associated with advances in AI. Some commentators raise the possibility of dystopian futures in which AI systems become “superintelligent” and threaten the survival of humanity. It is natural for new technologies to bring exciting new capabilities and applications, and also to generate new anxieties.

https://www.thetablet.co.uk/UserFiles/images/weeklynews/6-january-2018/10-art.cornwell.jpg

The mission of the Association for the Advancement of Artificial Intelligence is two-fold: to advance the science and technology of artificial intelligence and to promote its responsible use. The AAAI considers the potential risks of AI technology to be an important arena for investment, reflection, and activity.

https://bloximages.newyork1.vip.townnews.com/reflector-online.com/content/tncms/assets/v3/editorial/8/78/878eb19c-cb0a-11e7-af31-d37d3b6b6159/5a0df1c119bca.image.jpg?resize=400%2C357

One set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash. Major software projects, such as HealthCare.Gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths. Verifying the behavior of software systems is a challenging and critical area of study, and much progress has been made. However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.
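
At the smallest scale, verifying behavior means stating the properties a piece of control code must always satisfy and checking them systematically. The sketch below is an illustrative toy (the function and the chosen property are invented for this example), showing an invariant that a speed-limiting routine must hold for every input:

    import itertools

    def safe_target_speed(requested_kmh: float, speed_limit_kmh: float) -> float:
        """Clamp a requested speed to the legal limit, and never command a negative speed."""
        return max(0.0, min(requested_kmh, speed_limit_kmh))

    # Property the controller must satisfy for every input combination we try:
    # the commanded speed is never negative and never exceeds the limit.
    for requested, limit in itertools.product([-10.0, 0.0, 55.0, 300.0], [30.0, 50.0, 120.0]):
        commanded = safe_target_speed(requested, limit)
        assert 0.0 <= commanded <= limit, (requested, limit, commanded)

    print("all speed-clamping checks passed")

Checks of this kind scale poorly to the learned components inside modern AI systems, which is precisely why verification of AI software remains an open research problem.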

https://19j7mk3co2lu2b2wgr27dgh5-wpengine.netdna-ssl.com/wp-content/uploads/sites/21/2020/01/Machine.Learning.Medical.Error_.Alerts_G_1186776025-1024x768-860x645.jpg

AI algorithms are no different from other software in terms of their vulnerability to cyberattack. But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of a successful cyberattack on an AI system could be much more devastating than that of past attacks. US Government funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques will themselves provide novel methods for detecting and defending against cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large-scale cyberattacks.
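
One attack surface that is specific to AI is the model’s own input: small, carefully chosen perturbations can change what a model decides, an idea known as an adversarial example. The sketch below is a toy illustration on a made-up linear scoring model, not a real vision system; it shows how a roughly 1% change to every "pixel" can shift the model’s score dramatically:

    import numpy as np

    # Toy linear scoring model over a 10,000-"pixel" input (illustrative, not a real classifier).
    rng = np.random.default_rng(42)
    d = 10_000
    w = rng.normal(size=d)        # model weights
    x = rng.normal(size=d)        # an input image, flattened

    score_clean = w @ x

    # Fast-gradient-style perturbation: shift every pixel a tiny step against the gradient (here, w).
    epsilon = 0.01                # per-pixel change of about 1% of a pixel's typical scale
    x_adv = x - epsilon * np.sign(w)

    score_adv = w @ x_adv
    print(f"clean score:    {score_clean:+.1f}")
    print(f"attacked score: {score_adv:+.1f}")   # shifted by epsilon * sum(|w|), roughly 80 here

The shift grows with the input’s dimensionality, so a perturbation far too small to notice on any single pixel can be more than enough to push an input across a decision boundary.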

https://miro.medium.com/max/2788/0*ub_inexrRJZhfmWK.jpg

A third set of risks echoes the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.


Our Job


AI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks that come with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice” instructions) is being addressed by current research, but greater efforts are needed.

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding ways to address them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.

