AI 'not intelligent enough' for moral decisions
Humans are succumbing to the dangerous assumption that technology is capable of making intelligent moral and political decisions, says tech ethicist Shannon Vallor.
Speaking ahead of her talk at The Federation in Manchester, Vallor shared with BusinessCloud what it means to be 'human' in a tech-focussed world.
The internationally acclaimed author and academic is based in the heart of Silicon Valley in the department of Philosophy at Santa Clara University and is also a consulting AI ethicist supporting Google Cloud AI.
Vallor believes the consequences of continuing to use tech in the way we have been will start to seem increasingly unacceptable.
“The kinds of decisions it is appropriate to delegate to machines, and those best left to humans, will become increasingly visible,” she said. “There will also be more and more cases of injustice or unfairness as a result of the mindless uses of automation and AI.
“I’m definitely not against the use of those things in very well thought-out contexts but the idea that machines can make the kinds of moral and political decisions that we’re responsible for making is an incredibly dangerous illusion.
“Unfortunately a lot of people right now are succumbing to that illusion, so we’re going to see some unfortunate consequences.”
Ideally, says Vallor, humanity will wise up before it pays the price of entrusting machines with too much responsibility, and it’s this theme which drives her work.
“My colleagues and I are trying to show that we’ve already hit a wall we should’ve seen coming, with respect to social media and political discourse, so let’s not do that again,” she explained.
“Let’s try and see the danger before we go off the cliff and take a better path this time.”
There is still hope that this will happen, believes Vallor, especially in light of the recent ‘moment of reckoning’ tech companies have faced following the political impact of scandals such as Cambridge Analytica.
“In the last 12 months tech companies have gone into listening mode – and, increasingly, action mode,” she said.
“They’re starting to develop initiatives around bringing ethics into product design, research and evaluation and that’s great to see.”
While there’s still a lot to be done, changing how we measure success – both as companies and as consumers – is key, says Vallor.
“We need to focus more on the extent to which the tech is serving the public interest instead of looking at narrow metrics like user base growth and engagement,” she said.
“Investors will hopefully start paying attention too and holding companies accountable, and consumers can do the same – we’re already starting to see a lot of pushback.”
Employees are also recognising their power, as top tech talent is a prime resource in short supply, she says.
“They’re starting to realise they don’t just want to work for a big company; they want to work for one they can be proud of, and if they want that to happen they may need to exert some pressure.”
Consumers who support more ethical companies but are reluctant to give up the fast shipping and low prices they’re used to must also weigh short-term goals against more sustainable ones.
“We know from an environmental perspective that some of our practices really aren’t compatible with sustainable industry,” said Vallor.
“If you have a product that magnifies false beliefs or spreads harmful conspiracies then that’s not a sustainable product either.”
Ultimately, ethics is something humans have to learn to be good at and, like most things we learn to be good at, it can help us enjoy life in a different way – in this case, one with less fake news and addictive technology.
“Ethics is something that brings us to that future – and that’s worth wanting,” she concluded.