Thanks to CuriosityStream for supporting this episode of SciShow.
[INTRO ♪]
Let’s face it: we’re all going to die ... someday.
Death is the inevitable end for absolutely every living thing, and it holds true for entire species, too.
99.9% of all species that have ever lived are extinct already, and it’s unlikely ours will be special somehow.
If anything, the fact that we’re uniquely meddlesome animals has scientists thinking we’ll probably bring about our own demise.
And while that might sound more like a science fiction plot device than a real concern, there are entire groups of researchers at very serious universities like Oxford looking into the most salient threats—
what they call the existential risks to humanity.
Those researchers were surveyed in 2008 to get a rough idea of how likely the experts think the death of all humanity is.
And that survey—published as “The Global Catastrophic Risks Survey”—estimated that we have roughly a 1 in 5 chance of rendering ourselves extinct by 2100.
We’re not just talking nuclear wars or fury roads, either, though the effects of climate change, war, disease, and famine will likely play a role in basically any doomsday scenario.
Today, we’re going to talk about five less-than-obvious ways we could drive ourselves to extinction in the next hundred years — and how we can keep them from becoming our fate.
If we’re going to imagine the most cinematic version of the end of the world, we might as well start with a robot apocalypse.
This common plot device is not as implausible as you might think—the Global Catastrophic Risks Survey estimates about a 5% chance of a Skynet-style apocalypse befalling us by 2100.
Global AI experts surveyed at the International Conference on Machine Learning and the Neural Information Processing Systems Conference in 2015 estimated that artificial intelligence is on course to totally eclipse our own intelligence within a century.
And while this is awesome and could have benefits for all of us, the negative implications range from robots taking all of our jobs to computers outright overthrowing humanity.
Yes, that could really happen.
Advancements in AI are leading us toward the development of a superintelligence—a form of artificial intelligence that outperforms humans in just about every possible way.
Not just in task efficiency or math, but in abstract thinking and general intelligence as well.
We’re talking super calculators with logic, reasoning, social skills, and street smarts.
And those experts say that there’s about a 50:50 chance that this will happen in less than 50 years.
In the wrong hands, such a powerful tool could lead to unbelievable destruction.
But more to the point, a superintelligence could easily become autonomous—and it wouldn’t necessarily see things our way.
It might use its sophisticated self-awareness to deem humankind a threat, for example.
Luckily, a superintelligence is far more likely to use its problem solving skills to help us with major crises than to eradicate us.
And as we develop AIs, we typically take steps to make sure that they are safe and not fueled by vengeance.
I mean, no one spends more time thinking about death by robots than the engineers who make them.
So they do things like program them so that when human lives are on the line, only a human can make a kill decision.
Or the engineers otherwise limit the AI’s programming to be specific to the intended tasks so that there’s no room for violent improvising.
It’s much easier to set up failsafes now than to try to reason with a computer that’s smarter than us.
Don’t let technological hubris get you too worried about computers that have outgrown their use for us, though—
We could get wiped out by tech that isn’t aware of us at all.
That’s the demise experts are worried might come from nanotechnology.
Nanotechnology isn’t just fancy miniature robots; it’s any human-designed tech that works at the molecular level.
Mostly, this just means giving cooler properties to the stuff we make, like scratch-resistant glasses, antimicrobial fabrics, durable coatings for machine parts—all practical and straightforward stuff.
But we’re not talking about death by futuristic sunscreen.
Nanotech enters the existential risk pool because of a process called molecular manufacturing.
That’s the nano-scale building of useful technology from simple raw materials.
And it opens a whole world of possibilities, including medical, industrial, and other applications.
But it also opens the door to cheap mass production for illicit purposes.
Rogue factions could use molecular manufacturing to stockpile weapons.
So the cost and time normally involved in developing a nuclear arsenal could drop dramatically, but the ensuing war could be just as destructive.
And, there’s a slightly more extreme but also disastrous possibility known as the gray goo problem.
It comes about because one proposed application for nanotechnology is to build nanomachines capable of self replication—in order to scale up the molecular manufacturing process.
A slight malfunction with a self-replicating nanomachine and suddenly you’ve got yourself a swarm of nanobots that can mindlessly make more of themselves, potentially consuming the entire biosphere as raw material in no time flat.
Luckily, scientists say making self-replicating machines is extremely complicated, and it’s even more difficult to make them endlessly unstoppable—you’d basically have to be trying to make gray goo; it wouldn’t just happen by accident.
Also, it’s probably not necessary to make totally self-replicating nanobots in the first place if your goal is just cheap mass production of things.
So, there probably won’t be any smoke monsters coming out of secret labs on private islands anytime soon.
But it’s still something scientists are keeping an eye on—
overall, the Global Catastrophic Risks Survey gives a 5% chance that we’ll all be killed off by nanotech weapons, and a mere 0.5% chance of armageddon via gray goo or other nanotech accident.
Genetic engineering could help us in all sorts of ways, but one of the most intriguing is its potential to harness the destructive power of pathogens—basically, letting us fight diseases with other diseases.
Unfortunately, a potential side-effect of this kind of genetic tinkering is the risk of a global pandemic.
Basically, a zombie apocalypse, except no rag-tag team of survivors ... or zombies.
Engineering pathogens to work for us mostly means tapping into what they already do best.
Researchers have noticed that infections sometimes take out cancers, for example.
But no one wants to give a person life-threatening sepsis, even if it’s to treat life-threatening cancer.
So, for decades, scientists have been tweaking the genome of bacteria like Salmonella typhimurium—a common cause of food poisoning—
making them mostly harmless to healthy cells while equipping them with cancer-fighting weapons, so they can take down tumors with extreme prejudice.
And the strategy seems to work—at least in animal models—so it’s not hard to imagine a future where this kind of thing is done in all sorts of currently-deadly infections.
After all, bacteria and viruses have fairly simple genomes, so they’re relatively easy to reprogram.
They also reproduce quickly, which is helpful for testing any new lines we engineer as well as distributing them if they work.
But that also means they’re prone to mutations and might do things we don’t anticipate, which is why experts fear a misstep in engineering could unintentionally lead to a super disease.
Studies suggest it’s theoretically possible to create a strain of avian flu that could be easily transmitted to humans, for example, or a vaccine-proof strain of smallpox.
And there’s also always the unfortunate possibility that a humanity-ending pathogen will be created intentionally, as a biological weapon.
The experts in existential risk say there’s an approximately 2% chance of this kind of Resident Evil-style pandemic wiping out humanity by 2100.
Of course, governments around the world are well aware of this threat, which is good, because responsible regulation of this kind of technology is our best bet for preventing either an accidental or deliberately engineered pandemic.
That’s why, in 2014, the US government placed a moratorium on funding research that gives new functions to major disease viruses.
And that’s why the 182 countries that swore not to develop or stockpile biological weapons by signing the Biological Weapons Convention back in the 1970s continue to meet and discuss global security concerns and ways to prevent this exact type of doomsday scenario.
Thankfully, this kind of engineering is currently out of reach for all but the most well funded labs, making it far less likely that, say, a terrorist cell will start developing a super plague.
Though, as time goes on, technologies are becoming simultaneously more advanced and cheaper.
So it’s probably good that outbreak response protocols are also rapidly improving, forming our second line of defense if a superbug were to somehow emerge.
While it might seem like there’s water all over this planet, surprisingly little of it is freshwater that we can drink or use to water our crops.
And we’re already overusing what we’ve got.
Add our current overuse to ever-increasing atmospheric carbon dioxide levels and, well, we could be headed for worse-than-Tatooine conditions sooner than you’d think.
Right now, about a third of the freshwater we use comes from underground reservoirs called aquifers, but we're pumping them dry at a dangerous rate.
The rest of our water comes from lakes, rivers, and streams, and we’re sucking those dry, too.
For example, we use so much water from the Colorado River that it has rarely flowed all the way to the ocean since the 1960s.
Straight-up exploitation of water resources, especially by agriculture, is already causing havoc in places like California, and the bottled water industry is adding to the problem by taking advantage of weak water laws.
And remember, this is all just what’s happening right now.
One of the predictions from future climate modeling is an increase in both drought frequency and severity—which we’re already seeing.
And that’s just the beginning, because recent simulations now suggest that rising temperatures will wear away cloud cover.
Meaning, yes, we could just stop having watery clouds.
The researchers in a 2019 study in Nature Geoscience modeled what is likely to happen to stratocumulus clouds in particular as temperatures climb.
Those are the low, lumpy grey ones that make places like Seattle so chilly all the time, as they reflect a lot of the sun’s warming rays.
And despite the increased moisture in the atmosphere as the air warms, the modeling suggests stratocumulus clouds become less stable with more carbon dioxide, until we lose them entirely at a concentration of about 1200 parts per million CO2.
The researchers estimated that this could happen in about a century at our current emission rates.
And if we lose our clouds, things really heat up.
Because of the loss of all that reflective surface area, the 2019 study predicted a global average temperature rise of about 12 degrees Celsius—8 degrees more than we’d see from CO2 alone.
And no, Paul, you can put away your maker hooks: you won’t be riding Shai-hulud in this desert.
Already, scientists have predicted that climate change could mean up to 30% less seasonal precipitation in many areas by 2100, and a loss of clouds would only make that worse.
It would probably push already-stressed freshwater resources past the tipping point.
And ultimately, the loss of accessible freshwater would mean the collapse of our food production systems, billions of people dying of thirst, and perhaps even the end of humanity.
If there’s a silver lining here, it’s that while water doom is a genuine possibility in the all-too-near future, it’s avoidable through careful planning, regulation, and even just some mindful personal use.
Perhaps more than any of the previous scenarios, this is one where we really need to reflect on what we’re all doing, right here in the present, to prevent the end of days.
There’s also another way our wanton use of fossil fuels could extinct-ify us, and it’s totally separate from global warming or any effects on climate.
That’s because carbon dioxide has direct physiological effects on human bodies.
And the way we’re headed, we could literally poison ourselves to extinction.
CO2 isn’t usually dangerous to humans directly—I mean, we’re constantly exhaling the stuff.
But when there’s too much of it, it can cause asphyxiation and even respiratory acidosis, a condition in which the blood becomes too acidic from holding onto so much CO2.
The symptoms of CO2 poisoning range from reduced cognitive function to seizures and heart failure.
At least, that’s what we see in cases of acute exposure when people are breathing in high concentrations for short durations of time.
Less is known about the physiological effects of chronic exposure—elevated but less extreme levels of CO2 experienced for very long durations.
And, in fact, we simply haven’t studied the health consequences of lifetime exposure to CO2 at the levels we can expect in our atmosphere in the coming years.
We’re at a little over 410 parts per million CO2 now, but in the next 80 years that could climb to over 1000 parts per million.
At 800 parts per million, we could see widespread dizziness, headaches, loss of concentration, and generally not having a good time.
By 1000 ppm, cognitive function starts to drop off and pulmonary and neurological problems begin.
And given that these symptoms are known from short-term studies, it’s very possible that continuous, decades-long exposures will mean even more severe effects before concentrations get that high.
So by the time atmospheric carbon dioxide reaches the levels projected for 2100, huge swathes of the population could die from heart and respiratory failure, even if none of the other four horrible scenarios we discussed come true.
That said, it’s also possible that our bodies will be able to adjust to the increase in carbon dioxide, like how some people have adapted to the low oxygen at high altitudes.
But the rate of increase for atmospheric CO2 is getting less gradual all the time, making a “just get used to it” solution less likely.
Our only real move here is to cut back on our emissions, like, yesterday, ideally.
In the end, though things may look scary, it’s important to remember that we’re not necessarily doomed yet.
And we certainly don’t need to completely freak out and, say, stop all technological advancement.
An extinction event caused by advanced technology is nowhere near as likely as those same technologies dramatically improving our lives.
We’ve just got to look out for the Dick Joneses and Dennis Nedrys of the world.
And we’ve got to plan ahead in general, and take sensible steps in avoiding ushering in the apocalypse—
things like using water and other resources in sustainable ways, being careful and thoughtful about new technologies, and committing to reducing our carbon emissions on a global scale.
You know, the kind of stuff that we’re either already doing or should be doing, even if the end of times is a long way off.
If you want to learn more about these and other existential threats to humanity or the planet, you might like the videos offered by CuriosityStream.
CuriosityStream is a subscription streaming service that offers over 2,000 documentaries and nonfiction titles from some of the world's best filmmakers, including exclusive originals.
They have videos on nature, history, technology—even society and lifestyle.
And many dive deep into the big issues facing our species and the Earth in general in the near future.
For example, you could learn more about molecular manufacturing by watching their Curious Minds: Nanotechnology video collection.
These short videos feature experts in the field explaining various aspects of nanotechnology, from what kinds of products already use it to how it will revolutionize medicine and even whether it could be used to create advanced weapons.
You can get unlimited access to this collection and thousands of other videos for as little as $2.99 a month.
And as a SciShow viewer, your first 30 days could be completely free if you sign up at curiositystream.com/SciShow and use the promo code ‘scishow’ during the sign-up process.
[OUTRO ♪]
