Daniel Susskind: The AI fallacy


We’re asking the wrong questions about AI, according to University of Oxford economist and author Daniel Susskind

Speaking at the Learning Technologies 2019 conference, Susskind shared some of the experiences of AI recounted in his book The Future of the Professions. Researchers in the “first wave” of AI in the 1980s could not have predicted that machines would overtake humans, he explained, because they believed the technology would never accurately mimic human skills.

“The general approach towards artificial intelligence […] was the same; if you wanted to build a machine that could outsmart a human expert you had to sit down with an expert, get them to explain to you how they solved the problem, and then try and capture that human explanation in a set of instructions or rules for machines to follow,” said Susskind.

He gave the example of AI’s ability to play chess: “If you sat down with a chess player and asked them how to be the best, they could show you all of the various moves, but ultimately they would say it requires judgement, instinct, intuition… they might not be able to explain to you why [they’re] so good at chess.”

However, in 1997 chess champion Garry Kasparov was beaten by IBM’s chess-playing computer Deep Blue. “He was blown out of the water by its super high-processing capabilities and very high storage. The system was playing a very different game to him, and when we spoke to one of the founding fathers of AI for our book he summed it up by saying: ‘there are lots of ways of being smart that aren’t smart like us’,” Susskind continued.

Susskind explained that this is an example of the “artificial intelligence fallacy”: the belief that the only way machines can be intelligent is by replicating human skills.

“Rather than asking whether a machine can perform a task better than us, we should be asking what problems we have where someone needs a judgement from us and the answer is uncertain. So really the most important question is: can a machine deal with uncertainty better than we can? And the answer is that, in many cases, yes it really can,” he said.

Susskind warned that as machines become increasingly advanced, there is a risk of humans falling behind if organisations fail to invest in skills. “From a public policy view there are two strategies going forward: either you want to be the sort of person who can do the things machines cannot do, [or] despite everything I’ve said there are clearly areas that remain out of reach of automation.”

He explained that in 2017 the Organisation for Economic Co-operation and Development reviewed adult literacy, numeracy, and problem-solving skills, and found no education programmes capable of preparing adults to use those skills at a level beyond what computers are close to reproducing. Just 13% of adults use these skills at a high enough level to outperform machines.

“We recognise that these skills are important and yet we’re falling short,” Susskind said.

One way to tackle this skills gap between people and machines is to remove the stigma surrounding later-life learning, suggested Susskind. “There is still a very strong sense that education is something you do at the start of your life, that you spend a lot of time, money, and energy building up human capital as an asset, and then as you move through your working life you’re done, you’ve built up that human asset,” he said.

“We have to move away from that. There is a huge amount of uncertainty so it’s really important that people are able to reskill and train with the same intensity that we treat education at the start of our careers.”
