
Are you afraid of AI? Should you be?

Is AI dangerous or helpful? When and how should one pay attention to ethical questions concerning the usage of AI?

Artificial intelligence (AI) and machine learning are making our lives easier by automating many activities that used to require more of our attention or effort. In our everyday lives as consumers, an AI system might assist or encourage us in making simple and often harmless decisions, such as which music record to listen to next or which items to buy from a web shop – or even help with tasks that would cause unnecessary stress when done manually, such as parking a car in a tight slot on a busy street. But whose responsibility is it to decide what kinds of recommendations or automations are really “harmless”? And when should one pay attention to such ethical questions when designing, implementing or using automation?

Recently, I asked in my own social media feeds what people think about AI: what do they spontaneously see as ethical threats in using AI, and what are their biggest fears about AI in general? I was a bit surprised by how much discussion there was on Facebook, especially from people who do not necessarily have any keen professional interest in AI. On the other hand, I would have expected the discussion to be more active on LinkedIn, where I have a great network of professionals working with AI. I have to admit, though, that for a technical person like me, discussing ethics does not feel very comfortable.

Spontaneous opinions and thoughts from my social media contacts

Here are some excerpts from what people commented on social media. Note that I have slightly interpreted or translated many of them. Not all of them touch on the actual topic of ethics, of course, but I wanted to understand the fears and doubts about AI in general.

  • Outsourcing decision-making to machines: people becoming dumber as they get too used to having AI capabilities at hand
  • Optimization without asking what is right: for example, deciding to stop all treatment for terminally ill people because they often would not survive anyway
  • Increased use of technology for remote warfare
  • People losing their jobs
  • Consumers and citizens losing access to real people when receiving services such as healthcare or legal advice
  • Spending enormous amounts of electricity on simple tasks that add no value to society, exhausting our energy resources and thereby speeding up climate change
  • Cognitive automation evolving without sufficient control and ending up committing crimes as the AI gradually changes its behavior: who would be legally responsible?
  • People letting AI systems make decisions that require moral or ethical judgment
  • AI systems using other AI systems as data sources and in decision making: these chains may end up running in closed circles or bubbles, self-reinforcing their logic and decisions faster than we can even notice. If an ethical AI used data or advice from an AI with no ethical considerations, the results might not be ethical.

“Deep learning” in AI systems

Since this post is about potential threats and problems of using AI, it is worth defining what I mean by artificial intelligence in this context. Most of the threats listed above concern the use of machine learning, and especially ‘deep learning’, in automating decision making of all kinds.

In somewhat simplified terms, deep learning is a kind of complex mathematical algorithm that aims to produce desired outputs for given inputs, and typically uses historical data and examples to figure out how to handle new inputs in real-life use. For example, if you provide a suitable deep learning system with enough sample photos that contain a cat and enough that do not, it will come up with a function that may become very good at recognizing whether a photo contains a cat. Some might even claim that you can create a machine that is better at cat recognition than people, i.e. a superhuman cat recognition capability.
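For the technically curious, here is a minimal sketch of what such a cat/not-cat classifier might look like. It assumes PyTorch, and the random tensors, layer sizes and training loop are illustrative stand-ins of my own choosing – a real system would train a much larger network on thousands of actual labeled photos.

```python
import torch
import torch.nn as nn

# Toy stand-in for a photo dataset: 64 random "images" (3x32x32),
# half labeled "cat" (1), half "not cat" (0). A real system would
# use thousands of actual labeled photographs.
images = torch.randn(64, 3, 32, 32)
labels = torch.cat([torch.ones(32), torch.zeros(32)])

# A small convolutional network: layers of learned filters followed
# by a head that outputs a single "cat-ness" score.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: repeatedly adjust the filters so outputs match the labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()

# The trained model maps ANY 3x32x32 input to a "cat" probability --
# even inputs, like satellite images, that are nothing like cat photos.
print(torch.sigmoid(model(images[:4]).squeeze(1)))
```

Note the last line: the network happily produces a “cat-ness” score for any input of the right shape, which is exactly the narrow-AI limitation discussed next.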

Cat recognition is also a good and simple example of what is meant by saying that current systems represent narrow AI. A narrow AI system might be, or become, great at the task it has been trained for, but it may never be anything more than that. The cat recognition system has not the slightest clue about what a cat is, or where you would and would not expect to find one. If you feed the system a bunch of satellite images, it might still make a high-confidence guess that it sees a cat in one of them. Would the world’s best cat-recognizing human make that mistake? Or would your child? By the way: a cat recognition system trained on cat samples will never identify dogs – unless you also train it with photos of dogs and non-dogs.

I hope this is enough to explain the topic we are talking about.

Unlearning from narrow Cat-astrophes! 

Below, I list some of the problems that are increasingly being discussed wherever the strengths and weaknesses, opportunities and threats of AI systems are considered.

  • Deep learning is a black box: it is often impossible to describe in human language why the system came to a certain decision. If your system categorizes a satellite image as a cat, you can try to fix it by adding some photos taken by telescopes as “not cat” samples – and then test whether that worked.
  • Deep learning models can become erratic when they receive unexpected inputs. Even the slightest modification to an input, invisible or irrelevant to human eyes, can affect a deep learning system in very surprising ways (the first sketch after this list shows the idea). If you put a cat sticker on a traffic sign, will the AI system see a cat, a traffic sign, or something else that is neither? And how can one control all the potential occurrences well enough? Feeding thousands of samples of traffic signs with cat stickers to your algorithm will probably not fix the problem for good.
  • Biases in data and AI systems can create a feedback loop that amplifies unfair prejudices over time. If you have historical data that discriminates against a certain group of people and use that data to train an AI system, you can expect the system to learn and perpetuate that discrimination. Later, when you retrain or further develop your system, you may use the latest data to create “a better model” – and voilà, the amplified discrimination gets even more weight in the next release. Even if your data has no explicit parameter saying “this person belongs to a discriminated group”, something else in the training dataset may do the same trick. Even a zip code in your data might be enough to get these kinds of biases started (the second sketch after this list illustrates this).
  • People might think that AI systems are a lot more intelligent than they actually are – or that they have the same kind of common sense we humans do. This is why people need to be trained in how AI will assist them, and in how to keep using their human instincts and knowledge in the future. The question of the human in the loop is increasingly relevant. How is quality assurance for AI systems performed? Will quality assurance be part of everyday work after deployment? How are humans trained to work with AI recommendations? How will they react when they see troubling (or even any relevant) mistakes emerging from the algorithm? If your employees repeatedly see satellite images categorized as cats, would they simply exclaim “Gosh, here we have yet another bunch of satellite images in our cat collection!”, or would they know how to report the error and request a fix?
  • When implementing AI and especially when using AI for important, potentially life affecting decisions, we really need to think carefully about the potential biases and their effects:
    • Should the system be allowed to take the decision alone? If yes, then in which instances?
    • Is there something in the training data that might cause mistakes, biases or even discrimination? Is anyone educated enough to even validate that?
    • If the system makes a decision that affects someone negatively, what is the procedure for complaining about the decision and reversing it? What happens after a complaint is received? How long must the person live with the potential mistake? And if the system is clearly at fault, how will it be retrained, modified or fixed?
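To make the fragility of these models concrete, here is a minimal sketch of the classic fast gradient sign method. Everything in it is an illustrative assumption of mine – the small untrained network, the input size and the epsilon value – but it shows the mechanics: with a real trained model, perturbations this small and invisible to the eye can flip a confident decision.

```python
import torch
import torch.nn as nn

# Hypothetical classifier with the same shape as the earlier sketch.
# For illustration it is untrained; in practice you would attack a
# fully trained model, where the effect is far more dramatic.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 1),
)

# A stand-in "traffic sign" image we want the model to misread.
image = torch.rand(1, 3, 32, 32, requires_grad=True)

# Fast gradient sign method: nudge every pixel one tiny step in the
# direction that most increases the "cat" score.
score = model(image).squeeze()
score.backward()
epsilon = 0.01  # far below what a human eye would notice
adversarial = (image + epsilon * image.grad.sign()).detach()

print("original cat score: ", torch.sigmoid(model(image)).item())
print("perturbed cat score:", torch.sigmoid(model(adversarial)).item())
print("max pixel change:   ", (adversarial - image).abs().max().item())
```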
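And here is a sketch of the proxy-variable problem. The dataset is entirely synthetic and hypothetical (the column names, distributions and numbers are my own inventions), and it assumes NumPy and scikit-learn. The model is never given the group membership, yet it learns to discriminate through the zip code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical loan data. The protected attribute "group"
# is NEVER shown to the model, but zip code is a noisy proxy for it,
# and past decisions discriminated against group B.
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
zip_code = group + rng.binomial(1, 0.1, n)  # noisy proxy for group
income = rng.normal(50 + 5 * (1 - group), 10, n)
# Biased historical labels: group B was rejected far more often than
# income alone would justify.
approved = ((income + rng.normal(0, 5, n) - 15 * group) > 45).astype(int)

# Train only on income and zip code -- no explicit group column.
X = np.column_stack([income, zip_code])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The model still reproduces the discrimination via the proxy.
pred = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(name, "approval rate:", round(pred[group == g].mean(), 2))
```

If you then fed this model’s own decisions back as training data for “a better model”, the gap between the groups would only get reinforced – exactly the feedback loop described above.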

A great conversation about AI with my Humanist-Technologist colleague

Going back to the earlier social media summary, I had a longer discussion on the topic with my colleague Kasimir Pihlasviita from Idean. Here is an excerpt:

Jaakko: What do you now consider as the biggest ethical threat in ever increasing usage of artificial intelligence?

Kasimir: The first would be the lack of transparency with deep neural networks: there is no way to know why any given decision was made. The second threat is how human biases get carried over to AI through biased training data – and how those biases can be very hard to notice, let alone recover from.

Jaakko: I guess your opinions lead to further questions: in some processes, are you even allowed to make a decision that you cannot explain in human logic and language?

Kasimir: At least decisions made in the public sector should be explainable, e.g. why you did not get a certain benefit or why you were denied an expensive treatment for your disease. Explainability would be nice also when you apply for insurance or a loan. And it is especially important when you are sentenced for a crime you have committed.

Jaakko: Do you think that a machine imitating human biases would be any more evil than the people who have been acting in a similar way earlier?

Kasimir: I don’t know about evil – after all, cognitive biases are the heuristic enablers of our decision making, and we would be utterly incapacitated without them in any complex situation. And cognitive biases often lie behind our social biases and prejudices. But when an AI exhibits those, the consequences are quite unfortunate. People expect other people to have biases; they can often spot them and demand corrections, which may result in the decision being reconsidered and, in fortunate cases, in some prejudices being shed. Unfortunately, people often falsely expect AI to be neutral and unbiased, so its biases go unnoticed more often. And because the decision making is opaque, changing it is much harder, or even impossible.

Thriving with AI starts from awareness

The discussion with Kasimir nicely wraps up what many of the current AI systems are, and what they are not. No AI is evil. As far as I know, there is no system that would know how to be evil, or even how to obey a command to become evil in a selected process. I want to believe that almost all people who train AI systems, build AI applications, or amend their current systems and processes with AI-based functionality have solely good intentions. At its best, current narrow AI can help us become better, faster and more accurate at many complex tasks that have been solely human-powered for decades or centuries. Overcoming the challenges and threats of AI starts from awareness: knowing what can go wrong with too carefree an approach to AI hopefully gets us at least halfway to avoiding the worst pitfalls. AI is here to stay, here to make us more efficient, and we need to learn how to use it the right way.

 

Jaakko Lehtinen

Acknowledgements:

Thanks to the following friends and contacts (in alphabetical order), whose social media comments I have used with their kind permission in creating this blog post: Sami Erholtz, Tomi Paapio, Ristomatti Partanen, Heini Pensar, Kasimir Pihlasviita, Patrik Sjögren, Lotta Toivonen, Sari Vesiluoma. If I used narrow AI to optimize my future discussions about the topic, and if I had no common understanding of how the world and causality work, I might well get, and believe, a recommendation to only reach out to people whose names contain the letter T or S – as clearly no one else has anything relevant to say about the topic.

CONTACT
  • Jaakko Lehtinen
    Director of Automation, AI and Analytics
    +358 41 501 3917