Killer drones. Evil AI. How do we make sure tech remains a force for good?
For six years in a row, Amazon came top of the UK Customer Satisfaction Index. But when the 2019 rankings were published at the start of the year, they were not first, or second, or third, but fifth. The reason? A new category had been added, and it relates to ethics.
The Institute of Customer Service, which ran the poll, could not find space for Amazon in its top ten for ethics, but insisted it was not trying to ‘shame’ any of the brands featured. The Chief Executive of the ICS, Jo Causon, said the ratings were ‘about helping organisations to get better.’ ‘It’s not, per se, a league table,’ she commented.
The news coincided with the release of a new Apple ad campaign - ‘What happens on your iPhone, stays on your iPhone’ - which was emblazoned on a billboard at the mammoth Consumer Electronics Show.
Ignore for a moment what the ad implies about your last 10 years of iPhone use. The point is that after a series of scandals involving gross data mismanagement, hacking, the spread of hate online and more, ethics has become the hottest thing in tech, and those deemed to be underperforming in that area will be taken to task.
Better late than never. Taking shape in labs and universities from California to China is artificial intelligence, whose full realisation may be the single most world-changing event in the history of the human race.
Some of the greatest thinkers of our time, from Sam Harris to Elon Musk, have described nightmarish sci-fi scenarios in which man, having dared to play God, is usurped or destroyed entirely by his greatest creation. Nick Bostrom, the Oxford philosopher and author of Superintelligence, says we are ‘like children playing with a bomb’.
That’s to say nothing of the potential of drones to do harm. Militaries around the world have been quick to invest in drones and the people to control them, and the travellers stranded at Gatwick over the Christmas period found out first hand just how elusive they can be. Then there are advanced hacking techniques and malware, such as the Stuxnet worm famously analysed by Symantec.
For good reason, then, the question of how to make sure tech remains a force for good in our lives, and in the retail space, is asked time and again. Technology has no moral code, and lawmakers, who (let’s face it) are unsure even where to start with regulation, have no hope of keeping up with changes in the field.
The responsibility for making tech something that continues to make lives better (and despite what you might read, it does) falls to the maker, the user and the wider public, at least in the short term.
It’s always useful to remember that technology is a tool. The kind of technology I’m describing here looks very different from a screwdriver or a hammer. But technology is, still, just a tool, and like any tool, it can be used for good or bad.
A minority of people will use it unethically, but most won’t, and that will remain the case so long as we ourselves lead by example, and call out those, including companies, who don’t. This will help to bring about an ethical ‘digital culture’ in which companies understand that making their technology ethical is the bare minimum.
But in order for people to use technology as a tool, to use it in ethical and productive ways, they need to know what they’re dealing with. That’s why education is so important: it empowers people to make the right decisions. This starts in schools and homes. But the most powerful tech companies in particular also have a responsibility to inform the public about the dangers of technology, including theirs.
When people started to use social media, for instance, they didn’t realise the impact it was having on their mental health (or the impact their comments were having on the mental health of others). It wasn’t until Sean Parker accused Facebook of exploiting ‘a vulnerability in human psychology’ that the problems of social media began to be taken seriously.
Of course, there is only so much we can do. The truth is that we can’t make sure that tech remains a force for good. And so dizzying is the potential of AI that, aside from encouraging the best minds in computer science and maths to make AI safety and security a priority, there is little we can do to guarantee Nick Bostrom’s bomb won’t go off.
But I’m quietly optimistic. Through education, public scrutiny and individual effort to use technology ethically, I’m confident that we can bring about a future in which tech improves our lives in increasingly impactful and rewarding ways.