
Singularity: beyond prediction

6 November 2019


There’s a well-known saying: ‘It’s difficult to make predictions, especially about the future.’ And making predictions becomes even harder if you listen to the current generation of futurists, who are all talking about ‘singularity’. But what does singularity mean, and do you need to think about it?

This article was written in collaboration with academics from Ghent University and is part of our 'Co-thinking about the Future' magazine.

There have always been advocates and opponents of new technologies, but recently the debate has become fiercer than ever. The reason is that some people believe new developments in robotics and AI could spell the end for humanity.

An intelligent supercomputer

One term that always crops up in these discussions is ‘singularity’. But what does it mean? Thomas Verschueren from Realdolmen explains: ‘Put simply, singularity is the point in the future beyond which, with the best will in the world, we cannot make any more meaningful predictions. New developments will be so advanced that they will trigger runaway technological growth, making literally anything possible.’ We’re mainly talking about developments in robotics and computer technology, but there will be new discoveries and inventions in other fields too, for example in genetics and nanotechnology. Experts believe the crucial advances will come predominantly from artificial intelligence (AI). Progress in this field has accelerated dramatically in recent years, leading some people to start dreaming of AI with its own consciousness. ‘Think of a supercomputer with unimaginable processing power and access to pretty much every source of information in the world,’ explains Verschueren. ‘Add algorithms that enable the technology to recognise patterns in all that data, perceive problems and discover opportunities, and you have something like Skynet from the Terminator films. Especially if that computer system is connected to a huge number of robots that slavishly execute the system’s instructions, because that’s what robots do.’


Nightmare

Widely respected scientists and entrepreneurs, such as Stephen Hawking, Elon Musk and Bill Gates, lie awake at night over such a scenario, because it doesn’t leave humanity in a very good position. From a purely rational point of view, humans can, after all, be considered the greatest threat to the planet, so a coldly judicious AI system might want to eliminate us as quickly as possible. But things don’t have to go that way, argues Verschueren: ‘In contrast to these pessimists, there are also experts who mainly see the good sides of current developments in AI and related technologies. Take the transhumanists, who believe that humanity can and even must improve itself with the help of technology. Ray Kurzweil, who carried out pioneering work in scanner and speech technology, has pretty much become the face of this movement. In his vision, singularity is the point at which humanity starts using technology to unlock its true potential, so we can solve all our problems, such as hunger and poverty (and heavy workloads and traffic congestion). This vision of the future even holds out the prospect of immortality and the colonisation of other planets.’

Not far away

Verschueren believes it’s as good as certain that singularity will happen: ‘Dramatic advances are now being made at such a rate in so many domains that the chance of unimaginable consequences is increasing all the time. Some scientists even dare to put a date on it: Kurzweil thinks singularity will take place around the year 2045. That calculation is based on extrapolations of Moore’s Law, which suggest that by then the processing speed of computer chips will be fast enough to emulate the human brain in real time.’ (A rough sketch of that extrapolation appears at the end of this section.)

You might think 2045 isn’t that far away, so it’s already time to start preparing. But that’s the main problem with singularity. ‘How can you prepare for the unpredictable?’ ponders Verschueren. ‘There’s nowhere to start. The only thing you can do is think up all the possible “what if?” scenarios and try to formulate responses to them. Maybe singularity won’t be caused by AI at all, but by nanotechnology. It’s not inconceivable that we’ll one day reach the stage where we can reproduce any raw material we like. That would collapse our entire economic system, which is built on the scarcity of raw materials. Creating an unlimited supply would cause a huge shock, with incalculable consequences.’
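Purely as an illustration of how such a Moore’s Law extrapolation works, here is a minimal back-of-the-envelope sketch in Python. None of the figures come from the article: it assumes chips deliver roughly 10^12 operations per second today, that performance doubles every two years, and that real-time brain emulation needs roughly 10^16 operations per second.

```python
import math

# Back-of-the-envelope Moore's Law extrapolation, in the spirit of
# Kurzweil's 2045 estimate. All figures below are illustrative
# assumptions, not numbers taken from the article.
START_YEAR = 2019
CHIP_OPS_TODAY = 1e12      # assumed chip performance now (operations/second)
BRAIN_OPS_NEEDED = 1e16    # assumed cost of real-time brain emulation
DOUBLING_TIME_YEARS = 2.0  # assumed Moore's Law doubling time

# Doublings needed to close the gap: BRAIN_OPS_NEEDED / CHIP_OPS_TODAY = 2**n
doublings = math.log2(BRAIN_OPS_NEEDED / CHIP_OPS_TODAY)
year = START_YEAR + doublings * DOUBLING_TIME_YEARS

print(f"Doublings needed: {doublings:.1f}")     # ~13.3
print(f"Brain-scale chips around: {year:.0f}")  # ~2046
```

Shift any one of these assumed figures and the predicted year moves by a decade or more, which is exactly why dates like 2045 remain so contested.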


Real world or science fiction?

Can we do anything other than simply wait for an economic apocalypse to erupt? Verschueren thinks we can: ‘It wouldn’t do any harm for scientists and entrepreneurs to stop and think more about the ethical and social consequences of new technological developments. We need to be aware that every new step forward could set singularity in motion, which means it could also happen by accident. It may sound like science fiction, but the chance of it actually happening is more real than ever.’

Wondering what AI can do for you in the future?

Find out within the hour at our webinar on 29 November.
