A Background to this IEEE Summit

The founding thinkers of Artificial Intelligence (AI) (including Alan Turing, John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon) were optimistic about the future. Turing, the English mathematician, codebreaker and father of modern computer science, wrote in 1950 that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”. Simon, a Nobel Prize laureate and one of the most influential social scientists of the twentieth century, predicted in 1965 that “machines will be capable, within twenty years, of doing any work a man can do”. Minsky, co-founder of the MIT Artificial Intelligence Laboratory (a forerunner of today’s CSAIL, the MIT Computer Science and Artificial Intelligence Laboratory), wrote in 1967 that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”.

Sixty years after the legendary Dartmouth Summer Research Project on Artificial Intelligence, and eighty years after Turing published the paper recognised as the foundation of modern computer science, we are very much on the cusp of a new technological revolution.

As with all such revolutions, there are inevitably ethical, social, legal, security and even existential concerns (such as those expressed by Stephen Hawking, Elon Musk, Bill Gates and Nick Bostrom) that need to be addressed so that we can reap the actual and perceived benefits that AI can deliver.

In his 2014 book ‘Superintelligence: Paths, Dangers, Strategies’, Bostrom, the Swedish philosopher and director of the Future of Humanity Institute at the University of Oxford, argues that once machines surpass human intellect, they could mobilise and decide to eradicate humans extremely quickly using any number of strategies. He warns that the world of the future could become a “society of economic miracles and technological awesomeness, with nobody there to benefit – a Disneyland without children”.