The Ethics of Artificial Intelligence

I’ve been thinking lately about the ethics of Artificial Intelligence (AI). This came about because I recently spoke on the topic of AI at a deanery synod Zoom meeting. Apparently, it was something of a success, so I’ve been invited back. The trouble is, I’d used all my ideas the first time round, so I’ve needed to investigate the topic more deeply. As I’ve done so, I’ve realised that AI raises many distinctly different ethical questions. Perhaps we’ve worried about AI taking over the world, or about it taking over our jobs. But here are six more issues that are not so often thought about.

Applications – What should AI be used for? Clearly there’s no problem with the AI in your car satnav, or with the AI systems that are used in resource planning. Things get trickier when the AI is, for example, making recommendations about your medication. Is it OK to trust a machine? I’d say it is, but not everyone agrees. For example, some Muslims believe that it’s contrary to the will of God to artificially extend life. So, they would reject such uses of AI. And many people would have concerns about AI being used to autonomously operate weaponry.

Equal Access – Who should get access to AI? Developed Western countries have sometimes been accused of hoarding resources for themselves. Might the same be said about AI? It isn’t just about equitable access to the computer hardware; intellectual ideas should be shared, too. Also, much AI is built using vast repositories of data – the kind that’s collected whenever we interact with Google, Amazon, and the other tech giants. It’s been said that this data is the new oil. But if the data is hoarded by a few powerful operators, then others cannot use it to develop AI systems. Of course, technology companies would argue that they should have privileged access to the data that they have obtained.

Moral AI Systems – What should a self-driving car do when faced with the choice: either (a) kill a group of pedestrians, or (b) swerve out of the way into a wall and kill the driver? That’s the kind of conundrum that AIs will need to deal with. So, how do we encode ethics into our computer systems? While this seems like a difficult problem – maybe impossibly difficult – it will be a crucial one to solve if we’re ever going to allow AIs out onto the road.
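
To make the difficulty concrete, here is a deliberately crude sketch of my own (not how any real self-driving car is programmed) of one candidate rule, “choose the action with the fewest expected casualties”, written out as code. All the names and numbers are invented for illustration:

```python
# A toy sketch of "encoded ethics": pick the action that minimises expected
# casualties. Everything here is invented purely for illustration.

def choose_action(actions):
    """Return the action with the fewest expected casualties."""
    return min(actions, key=lambda action: action["expected_casualties"])

options = [
    {"name": "continue ahead", "expected_casualties": 3},        # the pedestrians
    {"name": "swerve into the wall", "expected_casualties": 1},  # the driver
]

print(choose_action(options)["name"])   # -> "swerve into the wall"
```

Even this toy example smuggles in a moral judgement: that one death is always preferable to three, whoever dies. Whether that rule is right, and who gets to decide it, is precisely the ethical question.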

Biased Data – Are AIs learning in a non-discriminatory way? Many AI systems (but not all) use so-called “deep learning” in order to work out for themselves the patterns that govern the world. For example, a suitably arranged AI can be given thousands of pages of text in English and the equivalent in German, and then work out for itself how to translate between the two languages. But what happens if the data is flawed or biased? The AI learns to be biased. This has been the case with, for example, a system designed to select suitable candidates for a job. It ended up faithfully copying human bias.
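
To see how directly the bias transfers, here is a toy sketch of my own (far simpler than the real recruiting system, which I know only from reports): a “model” whose idea of suitability is learned purely from past hiring decisions. The records below are fictional:

```python
# A toy illustration of biased training data: the learned rule simply
# reproduces whatever pattern the historical records contain.
from collections import defaultdict

past_decisions = [                     # (group, was_hired) -- fictional data
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

hired, total = defaultdict(int), defaultdict(int)
for group, was_hired in past_decisions:   # "training": count past outcomes
    total[group] += 1
    hired[group] += was_hired

def predicted_suitability(group):
    """The learned 'suitability' score is just the historical hiring rate."""
    return hired[group] / total[group]

print(predicted_suitability("men"))     # 0.75
print(predicted_suitability("women"))   # 0.25 -- the old bias, learned afresh
```

The model hasn’t been told to discriminate; it has simply learned, perfectly, the pattern it was shown.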

Trusting the Unknowable – What should we do when we can’t tell how an AI works? I said that an AI can learn to translate between English and German by parsing pages of text. To do this it uses an artificial neural network. These come in a variety of flavours – Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs). It’s not important to understand the details, but it is important to realise that once these networks have learned how to solve a problem such as translation, it is practically impossible for anyone to explain how they work. Even their designers don’t know. So, should we use a system if we don’t know how it works? Should we trust that system? Well, in actual fact we do: whenever we use speech recognition on our phones, we’re using just such a system.
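
To give a feel for why these networks are opaque, here is a tiny one of my own (vastly smaller than any translation system) trained to compute the logical function XOR. After training it typically gets the answers right, yet its entire “understanding” amounts to a few matrices of unexplained numbers:

```python
# A minimal two-layer neural network trained on XOR by gradient descent.
# A toy illustration only, not production machine-learning code.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20_000):                    # plain gradient-descent training
    h = sigmoid(X @ W1 + b1)               # hidden activations
    out = sigmoid(h @ W2 + b2)             # the network's predictions
    g_out = (out - y) * out * (1 - out)    # gradient at the output
    g_h = (g_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]: XOR learned
print(W1)                     # the network's "reasoning": opaque numbers
```

Inspecting W1 tells you nothing a person could recognise as a rule – and that’s with just two inputs. A translation network holds hundreds of millions of such numbers.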

Limits of AI – Does it matter that AIs can’t form relationships? You may love your smart speaker, but it doesn’t love you back. It can’t. If there’s one thing that’s still completely beyond the computer scientists, it’s giving AI emotion, feeling or empathy. While the AI might be awesome at playing chess, it will never experience the joy of winning or the frustration of making a poor move. For many tasks, that’s not a problem. But if the AI is in a robot that’s designed to help a vulnerable person in the home, a sense of feeling and empathy would seem to be valuable. Would you want an unfeeling human carer going in to look after Granny? As long as this problem remains unsolved – and maybe it’s unsolvable – we should continue to use AIs as mere machines and not as human replacements, however much we’re tempted to treat our Amazon Alexa as a friend.
