[Introductory note: this post’s author does not work for a life insurance company.]
Armageddon, The War of the Worlds, The Day After Tomorrow… Hollywood overflows with doomsday screenplays which, though they eventually lead to a happy ending, feature a devastated world whose population has been cut at least in half. Why do we love to simulate our own end?
Mankind has indeed shown a genuine interest in imagining how its reign over Earth might end. It all comes down to religion. The Old Testament prophesies the moment when every individual will be brought before God at the Last Judgement. Scandinavian mythology foretells Ragnarök, a massive cycle of destruction in which gods and legendary beasts throw themselves into an epic fight, with skyscraper-high flames in the background. (Picture the Devil playing electric guitar next to this and you’ve got pretty much every heavy metal album cover.)

Islamic tradition holds that the end of our world will be announced by a one-eyed Antichrist; even Buddhism, perhaps the most relaxed of all major religions, patiently waits for seven suns to appear in the sky and burn our planet away.
All of which is to say that the sermons regularly delivered at Mass could get far more epic if they focused on religious or mythological scenarios like these.

Surprisingly, bidding farewell to the Pale Blue Dot is neither the main concern of Hollywood screenwriters nor the favorite preaching topic of monotheist prophets. Scientists, in fact, would easily carry off the laurels, so strong is their commitment to warning us that doomsday is approaching. Dear readers, we have the worrying privilege of living in a world that could end sometime soon. (Remember: in 2017, the Doomsday Clock is set at two and a half minutes to midnight.)

Researchers at the Future of Humanity Institute, located within Oxford University, estimated as early as 2008 that mankind had a 19% chance of disappearing before the year 2100.
The first scenario involves something known as ‘molecular nanotechnology’. If you already know what this is – well, first, congratulations on your Nobel Prize – you can skip the next paragraph.
Molecular nanotechnology is a construction process occurring at the molecular (i.e. very tiny) scale. To put it simply, picture an assembly line on which workers put together components to assemble a finished product, such as a car. Now imagine the same process at the microscopic scale: a set of Legos where every brick is a different atom. If we mastered the adequate technology, we could build complex structures of molecules and thus create anything we want – a chair, cherry yoghurt, a virus… To keep the metaphor going, one just needs the Lego assembly instructions (how to build the chosen microscopic structure), the corresponding bricks (the atoms needed for that very construction) and a kid to assemble them (a sufficiently advanced molecular nanotechnology).

The problem is that, while this amazing technology comes with great promises for the future of humanity – if we could handle atoms and stick them together at will, we could, for instance, end global starvation, cure diseases, reverse the aging process and do many other cool things – it could also doom mankind. Weapons of mass destruction could be built with it, especially in the form of self-replicating robots looting all of Earth’s resources to build more copies of themselves. (This scary scenario is known as ‘grey goo’, a term coined by Eric Drexler.) According to the experts of the Future of Humanity Institute, it has a 5% chance of wiping out humanity before 2100.
The second scenario most likely to eradicate humanity – and to worry scientists – involves robots; more specifically, a super-intelligent A.I. that would exceed mankind’s intellectual capabilities and escape its control. Given that brilliant people such as Stephen Hawking have publicly raised the alarm on the subject, the idea that robots could take over no longer seems like one of Asimov’s daydreams, but a real concern for the scientific community. Surely, detractors will say: “Right, robots could take over – but what if we just unplug them?” The thing is, it wouldn’t just be Siri screaming at you from your phone. Equipped with super-intelligence, bots could easily fix their own weaknesses and become self-sufficient; and none of them would have moral or ethical qualms. That is a human thing.

(Some also believe the robots would dramatically fail to take over the planet, including Randall Munroe, the genius behind xkcd, who wrote: “If all [my] experience has taught me anything, it’s that the robot revolution would end quickly, because the robots would all break down or get stuck against walls. Robots never, ever work right.” This post needed at least one reassuring note.)

Less likely but just as scary, scientists also warn us about global nuclear war – such as the one we stood on the verge of during the Cold War. (In that case, current geopolitics is far more useful than scientific evidence in proving the concern justified.) A global pandemic – natural or engineered in a lab – could also wipe out humanity, as could nuclear terrorism. But added together, the probabilities of these scenarios driving our species extinct come to about 3 chances in 100 (in the FHI survey, roughly 1% for nuclear war, 2% for an engineered pandemic and a mere 0.03% for nuclear terrorism) – nowhere near the ‘self-replicating Legos’ and ‘computers rise to power’ scenarios. Yet when you turn on the TV news, it is deadly viruses and nuclear escalation that always make the headlines.
Science has spoken. You’re very welcome.
By the way – doomsday scenarios, as ancient religions and mythologies conceived them, are not only synonymous with chaos and destruction. They usually mark the end of a cycle and the beginning of another, in which life will sprout again…
“Without us, Earth will abide and endure; without her, however, we could not even be.” – Alan Weisman

Sources:
- https://www.webcitation.org/6YxiCAV0p?url=http://www.fhi.ox.ac.uk/gcr-report.pdf
- https://en.wikipedia.org/wiki/Global_catastrophic_risk
- http://www.spirtech.com/flv/nano/
- https://en.wikipedia.org/wiki/Grey_goo
- https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
- http://www.independent.co.uk/news/science/stephen-hawking-ai-could-be-the-end-of-humanity-9898320.html
- https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
- https://what-if.xkcd.com/5/
Interesting and witty article! But what about climate change? It seems to me this is the most probable doomsday scenario…
Hi Lily, thanks for your comment! As you rightly point out, climate change is also a serious concern for experts investigating doomsday scenarios (the researchers at the Future of Humanity Institute listed it among ‘other risks’). It wasn’t mentioned here, however, because it is not considered likely to wipe out humanity before 2100; rising sea levels, ozone-layer damage and higher temperatures (among other effects of climate change) do have the potential to kill us all, but over the long run – so hopefully we still have time to sort this out.
Fair enough. To be honest, I do feel pretty desperate, considering how stubborn the fat cats in power are in their pursuit of short-term profit at the expense of the future of our ecosystems. But I’m glad to hear scientists still believe there is hope.