AI's challenges lie in ethics, not employment

Alex Penk

Auckland, June 4, 2018

You may have heard that robots are coming to take our jobs, thanks to Artificial Intelligence (AI).

But a recent New Zealand report, ‘Artificial Intelligence: Shaping a Future New Zealand,’ says that only 10% of “normal job creation and destruction” will be due to AI.

The biggest issues with AI may be about ethics, not employment.

Unbiased decisions

In theory, AI offers an impartial tool to make evidence-based decisions, instead of leaving them up to the foibles and prejudices of an individual.

Associate Professor Colin Gavaghan of Otago University points out that AI often has a “veneer of objectivity because people think machines can’t be biased.”

The trouble is that the biases of developers can be built into the tool itself.

For example, the Artificial Intelligence report notes that judges in the US have been using AI to help sentence offenders.

The AI they were using turned out to be biased against black defendants because it was trained on ‘historical sentencing data’ that reflected the biases of past human decisions.

Militaries around the world are also considering the development of ‘lethal autonomous robotics,’ which, once activated, would be able to kill humans without any direct human control. When and how machines should be empowered to kill is a fraught ethical question.

AI benefits

There are other issues with AI, like simple failure, but AI isn’t a bad thing by itself. There will also be benefits. AI could be used to carry out the kind of number-crunching necessary to detect complex fraud, as the New Zealand report ‘Determining Our Future’ has pointed out.

The problem is that development of AI is running well in advance of public awareness, ethical reflection, and legal and regulatory frameworks that could make the most of the benefits and minimise the risks.

Technological imperative

This is a common problem with technology because it’s hard to come up with good ways of thinking about things that haven’t been invented yet.

Unfortunately, this gap in our ethical thinking is often replaced by what’s known as the “technological imperative,” the belief that if new technology exists, we should use it.

This can lead to us deploying technology before we’ve worked through all the implications.

For example, the AI report says that ‘robo-advisors’ are coming to New Zealand.

These AI advisors may be able to give consumers financial advice more cost-effectively and quickly than a real person, but before we start to use them we need to answer questions like: who is responsible if the advice they give is wrong? The person who created them, the person who chose to rely on them, or someone else?

Multi-disciplinary Group

‘Determining Our Future’ called for the creation of a multi-disciplinary “high-level working group” featuring “expertise in science, business, law, ethics, society and government,” and the recent creation of an AI and Public Policy Centre at Otago University is a positive step. These are the kinds of steps that could help our ethical and legal frameworks catch up with the technological development already taking place, and prevent the technological imperative from pushing us into places we don’t want to go.

Alex Penk is the Chief Executive of the Maxim Institute, based in Auckland.
