Artificial Intelligence – The End of the Human Race?

Yesterday, I attended a New Scientist Instant Expert Event on Artificial Intelligence (AI) in London. It was packed, and it had some fascinating speakers. It started with the excitement generated in the press by Stephen Hawking’s 2014 claim that “The development of full artificial intelligence could spell the end of the human race.” Could things really be that bad?

This is a field moving so fast that society is not keeping up with the ethical and privacy questions it raises. Lilian Edwards (University of Strathclyde) gave a fascinating talk on the legal issues around AI. She said that although many questions are raised, our laws are mostly good enough at the moment. A robot or AI is not a legal personality like a human or a limited company. It should be thought of as a tool – and we have had industrial robots around for years that fit without problem into the existing framework for Health and Safety, for example. One question that is very relevant today: if an autonomous vehicle injures someone, who is to blame? The driver, the manufacturer, the algorithm creator, the provider of the training set, a third-party hacker? But actually we have a similar situation already: when there is an accident, different parties may each take part of the blame – driver, manufacturer, mechanic or road maintenance. So this is not a new type of issue in law. We solve it currently with compulsory insurance, and the insurance companies fight it out. Of course, that doesn’t fix the injury.

Another interesting area explored was privacy when AI is applied to Big Data. One example was the girl whose pregnancy was inferred by an algorithm at Target before her father was aware – he found out when vouchers for baby products started arriving. Another was smart electricity meters that can detect power being used upstairs in a house – even when the occupant is receiving benefits for being disabled and unable to go upstairs. We should all be asking where our data is going and how it is going to be used.

Irina Higgins from DeepMind (Google) talked about the astonishing achievement of AlphaGo beating the world champion, Lee Sedol, at the game of Go. She and Simon Lucas from the University of Essex talked about why AI researchers focus so much on games. Games are designed to be interesting to humans and so presumably encapsulate something of what it is to be human. They can also be set up and played many times without any safety concerns (which you might get with a robot flailing around in a lab). There was a great example of AI software tackling games it had not seen before – after playing them many times, it worked out strategies to become ‘super-human’, including strategies no human had found before to win the game. Irina also shared how DeepMind’s AI had been used to reduce the energy consumption of Google’s cooling systems by 40%. Human intelligence is difficult to define, but one of its attributes is that it is general – it can tackle not just one task well but completely unrelated ones too. AI and robots can be super-human in specific areas, but it will be a long time before they have this general intelligence. What AI can do is help humans be faster and smarter in a world with too much information and great system complexity. We should see it as a tool.

Kerstin Dautenhahn from the University of Hertfordshire talked about the use of robots and AI to help people – whether infirm people living at home or people with autism. With her background in biology, Kerstin brought an interesting slant to the discussion.

The final session was a debate on questions submitted by the audience, and it was a lively affair. Among the big questions: can a robot be truly conscious? Does so much military funding of robotics taint the field? Should there be a tax on companies that rely on AI for their profits? Should sex robots be allowed?

The final question of the debate was very revealing. The five panellists were asked whether they believed we would see AI equivalent to human intelligence in the next 70 years. Three gave an unequivocal no, one a qualified no, and one an unequivocal yes. So while AI is a very fast-moving field, it seems that on balance the experts (on this panel at least) think human-level intelligence is a long, long way off. My takeaway is that AI offers huge promise for mankind – we should not view it as a coming apocalypse that will end our race. But we do need more debates like this. As a society we need to be discussing the big questions and the ethics so that we can minimise the unintended consequences of this fantastic opportunity.
Text © 2017 Dorricott MPI Ltd. All rights reserved.