Lords' review highlights the real danger of AI: Automating Inequality
As syndicated in the Sunday Telegraph
While the Lords may have made headlines last week voting against the Government’s latest Brexit Bill, they were also tackling an issue that could prove to be an even greater challenge for the UK in the coming decade.
The Lords select committee on artificial intelligence released its latest report into the opportunities and challenges of this new technology, and, at a hefty 183 pages, you could forgive a few people for dozing off on the red benches. However, unlike many reports on artificial intelligence, the committee suggested that the greatest threat to the UK was not mass unemployment or rogue terminators, but rather the ethical implications of adopting such complex software.
Rather than claiming artificial intelligence would radically change our society as we know it, the committee described a much more “complex and mundane” set of issues.
There was less discussion of robot butlers and flying cars, and instead a more sensible exploration of how complex decisions, from the advertising you are shown on Facebook to your suitability for a mortgage, will increasingly be made by sophisticated but opaque autonomous systems.
While this may not be as exciting a concept as killer drones, it is in many ways a far more important discussion.
If we are going to rely more and more on such systems in our lives, then who develops the algorithms behind them, how we hold them accountable, and how they can be used to everyone’s benefit are all serious and urgent questions.
Take the citizens affected by the Windrush deportation crisis. While this appears to have come down to some pretty glaring human errors in the Home Office, over the next decade similar immigration decisions will increasingly be handled by faster and cheaper artificial intelligence systems. That might be good for the tax bill, but these systems are only as good as the data they are trained on. If these British citizens were recorded as being here illegally in the Home Office’s training dataset then, put simply, that bias could be baked into the software from the start.
This becomes an even bigger issue when we think about the NHS. While it may seem premature to talk about the NHS using such advanced technology, it has in fact already adopted AI in areas as diverse as automatically spotting cancers in X-rays and nudging people to smoke less. These systems are built using huge quantities of data, and the NHS stands to benefit hugely from such breakthroughs. But we need to make sure that they are transparent, open to scrutiny and built to the same high ethical standards to which we hold our healthcare professionals.
And this isn’t just a problem for government. Businesses large and small are increasingly using powerful new software, trained on huge datasets, to automate decisions and their day-to-day work.
The Lords aren’t alone in exploring this. The Government is setting up a Centre for Data Ethics and Innovation to monitor how AI is used in both the public and private sectors; companies like Google’s DeepMind have set up oversight bodies to scrutinise their own work; and even the Vatican, an institution far older than the House of Lords, has a padre leading its work on artificial intelligence.
While we are already used to the frustration of hearing “the computer says no”, breakthroughs in artificial intelligence and the huge amount of data that we now produce daily mean that such abstract decision-making processes are only going to become more prevalent and powerful.
If we want to avoid simply automating inequalities and injustices like those faced by the Windrush generation, we’re going to need to add some real ethics to our artificial intelligence.