The future of humankind is bound up with an unprecedented wave of emerging technology. Increasingly, robots are being used in our homes, in factories, and in war zones. Nanotechnology could help develop “personalized medicine” that replaces conventional treatment. Artificial intelligence researchers are making strides in understanding how the brain works by emulating it in the lab.
While these advances are exciting, there is always another side to the equation. What risks do they pose? What precautions do we have to take? What unintended consequences might any given technology have?
A group of philosophers, scientists, and entrepreneurs is working to establish the Centre for the Study of Existential Risk. The aim is a research center that tackles the hard questions about the risks technology poses to humanity, the idea being that the more we know, the better we can prepare ourselves to deal with new technologies.
The center is being established by Huw Price, Bertrand Russell Professor of Philosophy at Cambridge; Martin Rees, Emeritus Professor of Cosmology & Astrophysics at Cambridge; and Jaan Tallinn, co-founder of Skype.
In August, Price and Tallinn published an article laying out their concerns about artificial intelligence. At the end of the article, they point to a philosophical paper, “The Singularity: A Philosophical Analysis,” that fleshes out those concerns. Rees, in an op-ed in the Guardian, explains why he thinks the CSER is necessary:
Almost all innovations entail risks: in the early days of steam, people died when poorly designed boilers exploded. But something has changed. If a boiler explodes, it’s horrible but there’s a limit to just how horrible. But new hazards are emerging that could be so catastrophic that even a tiny probability is disquieting. For instance, global society depends on elaborate networks – electricity grids, air traffic control, international finance, just-in-time delivery and so forth. Unless these systems are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns cascading through the system. And the threat is terror as well as error; concern about cyber-attack, by criminals or by hostile nations, is rising sharply. Synthetic biology, likewise, offers huge potential for medicine and agriculture – but it could facilitate bioterror. And, looking further ahead, we should even consider the sci-fi scenario that a network of computers could develop a mind of its own and threaten us all.
I’m worried that in 2050 we’ll be desperately trying to minimise these existential risks. It’s not too soon to start thinking about them – even though they are imponderable, and hard to quantify. So I’m delighted that my Cambridge colleague, the philosopher Huw Price, has taken the initiative to gather a cross-disciplinary group to address these issues.
It’s hard to argue against concern for the future, particularly when humans have been so good at being so bad for ourselves. Then again, it’s just as hard not to be excited about the potential advances.
I’m interested in following what the CSER produces, but I would also ask whether a cautious tone would serve better than outright pessimism, or outright optimism for that matter. Emerging technologies do carry risks while they are developing, but we are the ones pressing the buttons, and we can direct where those technologies go. After all, if they are man-made, they can be unmade by man, too.
We reached out to Huw Price, but he was unavailable for comment. In the coming months we hope to reach the founders or advisers to the project and learn more about this very interesting and hopefully illuminating undertaking.