Beware of Artificial Intelligence! say (some) experts

Article by
Michael Szollosy

A few notable names have issued warnings about the dangers of artificial intelligence in the last few months. Bill Gates apparently cannot understand why we are not more concerned about this impending threat. Professor Stephen Hawking has recently come out and completely endorsed the notion of the Singularity – that AI, once autonomous, would take over designing itself and so improve at an exponential rate, uninhibited by the biological limitations of humans, developments that could eventually lead to ‘the end of the human race’.

Stephen Hawking warns of the dangers of AI

And Elon Musk, founder of PayPal and SpaceX, has even gone so far as to put not inconsiderable sums of money behind his fear, in case we didn’t believe him, donating $10 million to The Future of Life Institute, an ‘organization working to mitigate existential risks facing humanity’, focussing at present, they explain, on ‘potential risks from the development of human-level artificial intelligence’. (This was reported by Forbes as ‘Elon Musk puts down $10 million to fight Skynet’.)

With so many influential names coming forward and robustly warning us to be afraid, be very afraid, of the looming AI threat, you would think that we would be acting with more urgency on this apparent consensus. But not everyone agrees that AI is such an imminent menace, and questions are being asked: how much of this fear is genuine, how much is the product of misinformation and of optimistic (yes, optimistic) evaluations of our current and future technological prowess, and how much is simply the inevitable side-effect of hype and mass-marketing machines?

Gates, whose futurist credentials have a certain legitimacy (he did found Microsoft, though that was some time ago now), is directly contradicted by Microsoft’s present research chief, Eric Horvitz (who therefore also has a reasonable claim to know whereof he speaks). Horvitz claims that while he believes AI will achieve consciousness, he does not think this is something we need to worry about, and he has co-authored an essay with Tom Dietterich, of the Association for the Advancement of Artificial Intelligence, in direct response to some of this celebrity-induced paranoia.

Horvitz and Dietterich have also, it is worth noting, signed the FLI open letter that inspired Musk to give away so much of his money, so they aren’t unambivalent champions of the brave new world. But perhaps that is the correct position to take on artificial intelligence – and maybe, if one were to hazard a guess, on most prophesying about the utopia/doom (delete as appropriate) that we face. This is not to say that there isn’t something to be concerned about (Horvitz and Dietterich suggest three important areas where we perhaps need to start worrying), but it is probably worth listening to some of the less alarmist voices in this conversation, such as Professor Tony Prescott, director of Sheffield Robotics, on this very blog.

But these voices, alas, are not what we tend to be interested in. These voices don’t scream headlines that help shift newspapers. And we live in a (post-)modern culture, characterised by a hermeneutics of suspicion, where we assume that sober voices of authority (politicians, scientists, etc.) are hiding things from us, so we tend not to listen to the less extreme opinions anyway. The FLI focus on AI – a potential danger that might one day materialise – ignores the much more real and immediate danger posed to human existence by climate change, for example. But that’s a much harder kind of fear to sell to people.

It is a well-known axiom that ‘sex sells’. So too, it seems, do rampantly genocidal intelligent robots.

Horvitz and Dietterich’s essay is perhaps the most sober, realistic assessment. They do not indulge in the instinctive panic induced by science fiction and by the unrealistic expectations (and perhaps, too, the aspirations) of some in the science community. However, they also recognise that there are important steps that need to be taken to ensure that certain risks are mitigated, or – perhaps more importantly – that the public is reassured that the potential risks associated with AI do not pose a significant or material threat.

AI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. […]

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research.

But perhaps the most important warning we should heed isn’t telling us to beware of AI at all, but something else entirely. Consider this assessment from a session at this year’s World Economic Forum in Davos:

Natural stupidity will beat out artificial intelligence any time for really screwing things up. We have plenty of natural stupidity. And the combination of natural stupidity and artificial intelligence can be a really dangerous combination.

Davos Report: Fear Natural Stupidity, Not Artificial Intelligence
