Robots ‘will reach human intelligence by 2029 and life as we know it will end in 2045’.
This isn’t the prediction of a conspiracy theorist, a blind dead woman or an octopus, but of Google’s director of engineering, Ray Kurzweil.
Kurzweil has said that the work happening now ‘will change the nature of humanity itself’.
Tech company Softbank’s CEO Masayoshi Son predicts it will happen in 2047.
And it’s all down to the many complexities of artificial intelligence (AI).
AI is currently limited to Siri or Alexa-like voice assistants that learn from humans, Amazon’s ‘things you might also like’, machines like Deep Blue, which beat grandmasters at chess, and a few other examples.
But the Turing test, where a machine exhibits intelligence indistinguishable from a human, has still not been fully passed.
Not yet at least…
What we have at the moment is known as narrow AI: intelligent at doing one thing, or a narrow selection of tasks, very well.
General AI, where machine intelligence is comparable to a human’s, is expected to show breakthroughs over the next decade.
Such systems would be adaptable, able to turn their hand to a wide variety of tasks – just as humans have areas of strength but can accomplish many things outside those areas.
This is when the Turing Test will truly be passed.
The third step is ASI, artificial super-intelligence.
ASI is the thing that the movies are so obsessed with, where machines are more intelligent and stronger than humans. It has always felt like a distant dream, but predictions suggest it is getting closer.
By 2029, it is said, machines will be as powerful as the human brain and people will be able to upload their consciousness into them; ASI – the singularity – will follow, Kurzweil predicts, in 2045.
There are many different theories about what this could mean, some more scary than others.
What is the singularity?
In maths and physics, a singularity is a point at which a function takes an infinite value – the way f(x) = 1/x blows up as x approaches zero.
The technological singularity, as it is called, is the moment when artificial intelligence takes off into ‘artificial superintelligence’ and begins improving itself.
As self-improvement became more efficient, each round would come faster than the last, the machine growing ever more intelligent ever more quickly.
In essence, at the extreme end of this theory, a machine with God-like abilities recreates itself, each time more powerful than the last, an infinite number of times in less than the blink of an eye.
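The feedback loop described above can be sketched as a toy model. All the numbers here – a ten-year first round, each round 20% shorter than the last – are illustrative assumptions, not anyone’s actual prediction; the point is that infinitely many rounds of improvement can still finish in a finite stretch of time, which is exactly the ‘singularity in the blink of an eye’ idea.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each round of improvement takes `speedup` times as long as the previous
# one, so the round lengths form a geometric series.

def time_for_rounds(rounds, first_interval=10.0, speedup=0.8):
    """Total time taken by `rounds` successive rounds of self-improvement,
    where each round lasts `speedup` times as long as the one before."""
    total = 0.0
    interval = first_interval
    for _ in range(rounds):
        total += interval
        interval *= speedup
    return total

# Because the series converges, total time never exceeds
# first_interval / (1 - speedup) = 50 "years", no matter how many
# rounds are squeezed in.
print(round(time_for_rounds(10), 2))    # -> 44.63
print(round(time_for_rounds(1000), 2))  # -> 50.0
```

In other words, the mathematical core of the singularity claim is not that each step is fast, but that the steps shrink quickly enough for an unbounded number of them to fit inside a bounded window.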
‘We project our own humanist delusions on what life might be like [when artificial intelligence reaches maturity],’ philosopher Slavoj Žižek says.
‘The very basics of what a human being will be will change.
‘But technology never stands on its own. It’s always in a set of relations and part of society.’
Society, however that develops, will need to catch up with technology. If it doesn’t, then there is a risk that technology will overtake it and make human society irrelevant at best and extinct at worst.
One of the theories asserts that once we upload our consciousness into a machine, we become immortal and remove the need to have a physical body.
Another has us unable to keep up with true artificial intelligence, so humanity is left behind as infinitely intelligent AI explores the Earth and/or the universe without us.
The third, and perhaps the scariest, is the sci-fi one: once machines become aware of humanity’s predilection for destroying anything it is scared of, AI acts first to preserve itself at our expense, and humanity is wiped out.
All this conjures up images of Blade Runner, of I, Robot and all sorts of Terminator-like dystopian nightmares.
‘In my lifetime, the singularity will happen,’ Alison Lowndes, head of AI developer relations at technology company Nvidia, tells Metro.co.uk at the AI Summit.
‘But why does everyone think they’d be hostile?
‘That’s our brain assuming it’s evil. Why on earth would it need to be? People are just terrified of change.’
Many people still struggle with the idea that their fridge might know what it contains.
Self-driving cars, which will teach themselves the nuances of each road, still frighten a lot of people.
And this is still just narrow AI.
Letting a car drive for us is one thing, letting a machine think for us is quite another.
‘The pace of innovation and the pace of impact on the population is getting quicker,’ Letitia Cailleteau, global head of AI at strategists Accenture, tells Metro.co.uk.
‘If you take cars, for example, it took around 50 years to get 50 million cars on the road.
‘If you look at the latest innovations, it only takes a couple of years – like Facebook – to have the same impact.
‘The pace of innovation is quicker. AI will innovate quickly, even though it is hard to predict exactly how quickly that will be.’
But, as with all doomsday predictions, there is a lot of uncertainty. It turns out predicting the future is hard:
‘Computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible or just around the corner or anything in between,’ the Machine Intelligence Research Institute wrote.
Steven Pinker, a cognitive scientist at Harvard, puts it more simply:
‘The increase in understanding of the brain or evolutionary genetics has followed nothing like [the pace of technological innovation],’ Pinker has said.
‘I don’t see any sign that we’ll attain it.’
Yet there are already those who think we’re part of the way there.
‘We’re already in a state of transhumanism,’ author and journalist Will Self says.
‘Technology happens to humans rather than humans playing a part in it.’
The body can already be augmented with machinery, either internally or externally, and microchips have been implanted in at least one workforce.
On a more everyday level, when you see people just staring at their phones, are we really that far away from a point when humans and machines are one and the same?