Artificial Intelligence – Five Fears Explained

Richard Benjamins    19 October, 2018

While there are many great applications of Artificial Intelligence, a disproportionate amount of attention is given to concerns about AI: robots taking control, people losing their jobs, malicious use, bias, discrimination, and black-box algorithms. While some of those concerns are legitimate, others are unrealistic or not specific to AI. Moreover, we should not look at AI in isolation, but compare how things work with AI to how they work without it. Why are we afraid of AI? And should we be?

No technology is without risk. The fear of Artificial Intelligence is based partly on legitimate concerns, and partly on movies and limited understanding.

Fear 1 is about humanity losing control to robots that take over the world. This fear comes from science fiction movies, and from confusing narrow AI (a machine that performs one specific task very well) with general AI (a machine able to perform a wide range of tasks, and possibly conscious). Today, and for the foreseeable future, we are in the era of narrow AI. There is no need to fear that machines will take over, unless you believe in the technological singularity, or think that we humans are machines ourselves …

Fear 2 is about AI taking our jobs by automating many tasks that are currently carried out by people. History has shown that every large technological revolution (electricity, motorised transportation) affects jobs. Some jobs will disappear, but most will change in nature, and new jobs will be created (many of which are still unknown to us). This fear is legitimate for those workers whose jobs will be largely automated and who are unable to develop the skills needed for the changing and new jobs.

Fear 3 is that more and more decisions about people are made or supported by AI: decisions about hiring, acceptance by insurers, granting of loans, medical diagnosis and treatment, etc. Such AI systems are trained on large data sets, and those data sets can contain undesired bias or sensitive personal data. The concern is that this may lead to discriminatory outcomes. Moreover, the algorithms of AI systems are sometimes black boxes, which justifies the concern that decisions are made without people being able to understand them. These fears are justified: the creators of AI should be aware of, and transparent about, these concerns, and do everything they can to address them. If they cannot, then the AI system should not be used for decisions that significantly impact people’s lives.
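
To make the bias concern concrete, the sketch below shows one simple check a creator of such a system might run: comparing the rate of positive decisions (for example, loans granted) across groups defined by a sensitive attribute. The data, group labels and the choice of this particular check are illustrative assumptions, not taken from the article or any real system.

```python
# A minimal sketch of one bias check: comparing an AI system's
# positive-decision rate across groups (a demographic parity check).
# All data below is made up for illustration.

from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Share of positive decisions (1) for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between groups."""
    rates = positive_rate_by_group(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative loan decisions (1 = granted) and a sensitive attribute.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(positive_rate_by_group(decisions, groups))   # {'A': 0.8, 'B': 0.2}
print(f"gap = {demographic_parity_gap(decisions, groups):.2f}")  # gap = 0.60
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal that should trigger a closer look before such a system is used for decisions that significantly impact people.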

Fear 4 is that sophisticated AI systems can cause huge harm in the hands of malicious people: think of AI-based cyberattacks. This is definitely true, but it is not specific to AI; it applies to any powerful technology.

Fear 5 is about losing one’s privacy to the many apps and companies that collect massive amounts of personal data, often in a less than transparent manner. This is a well-recognised issue and one of the reasons the GDPR exists. However, this fear does not only apply to AI systems, but to most digital systems that operate with personal data. We warn against an unfounded fear of AI: there are many more good uses than bad ones.

As we can see, while two of the five fears of AI are legitimate (jobs, and discrimination/transparency), they have limited scope and solutions can be foreseen, either societal/organisational (fear 2) or technical/organisational (fear 3). Fear 1 (super-intelligence) is more of a philosophical debate, as well as the subject of movies; it is not a reality. Fear 4 (malicious use) and fear 5 (privacy loss) are very real and will happen, but they are not specific to AI.

It is human nature to pay attention to fear; it has helped put us at the top of the evolutionary chain. But let us not forget that AI can be used for countless good things to improve our world. Think of AI for Social Good, helping to achieve the UN’s Sustainable Development Goals (no poverty, no hunger, peace, health, education, equality, climate, water, etc.).

Sometimes we think that we humans are a great species, but we should not forget that most of the misery, pain, destruction and wars in the world have been, and still are, created by humans. We do need to worry about machines becoming more intelligent and autonomous, but sometimes we might ask ourselves whether the world would be a better place if fewer humans and more machines made decisions.

