You’re no doubt used to sensationalist headlines,
especially when it comes to bad science. But
you might want to take a deep breath. Because…
10. There’s a “zombie apocalypse” waiting
to happen
In 1986, British cattle were found to be infected
with BSE (bovine spongiform encephalopathy)
after being fed meat-and-bone meal made from
other cows and sheep.
Against expert advice, a government spokesperson
claimed it was still safe to eat. But within
a few years it became clear that it wasn’t.
In the early ’90s, 20 Brits were diagnosed
with a deadly human form of the disease, known
as variant Creutzfeldt-Jakob disease (vCJD).
Like kuru (a type of CJD afflicting Papua
New Guineans who ate each other’s brains),
vCJD involves misfolded prion proteins that
trigger normal proteins to misfold in turn,
ultimately leading to death. The incubation
period (the time it takes for symptoms to
manifest), however, could be more than 60
years. In other words, anyone
who ate British beef, including in baby food
and gelatin, around the time of the epidemic
could be infected without even knowing it.
And in the US, most cows are killed years
before they’re old enough to show any symptoms,
so they could be infected as well; only 20,000
of the 40 million killed each year are actually
tested. A major vCJD outbreak could therefore
be in our not-too-distant future.
Symptoms of vCJD include “aggressive personality
changes, memory loss and problems walking.”
It almost sounds like a zombie apocalypse
waiting to happen—except that it’s pretty
hard to catch (unless you eat meat). Researchers
don’t think it’s airborne, for instance,
nor do they even think you’ll catch it from
sex or from a small amount of a carrier’s
blood. In fact, you’d probably have to eat
their brains as opposed to the other way around.
Still, in the UK at least, where somewhere
in the region of two million mad cows were
chopped up and eaten by the public, there
may be tens of millions of these brains to
choose from. And with no treatment forthcoming,
the prognosis for anyone infected is death.
9. There won’t be time to stop an impact
event
On Halloween 2015, a massive, approximately
1,300-foot asteroid (2015 TB145)—which apparently
looked like a skull—flew past the Earth
at just 1.3 times the distance of the Moon.
Even a slight deviation in its course could
have been catastrophic. Assuming an entry
speed of 17 km/s and a density of 2,600 kg/m³,
it would have struck Earth with an energy
of around 2,800 megatons of TNT. That’s 56
times the yield of the most powerful thermonuclear
bomb ever detonated, the Tsar Bomba, which
itself was roughly 1,570 times the combined
yield of the bombs dropped on Hiroshima and Nagasaki.
There are a number of things we could potentially
do if we see an asteroid coming. We could
fire a nuke at it, for example, in an attempt
to alter its course. Or we could try ramming
it with a custom-built rocket. Astronomers
have also suggested “shepherding” it off
course with a bombardment of plasma from a
spacecraft flying alongside—or simply painting
it white to let photons from the Sun do the
job.
But the trouble with all of these strategies
is the time they would take to implement.
In many cases, we don’t even have the right
technology. Even if we did have a strategy
in place and the spacecraft to carry it out,
we’d likely need at least a year just to
get it off the ground. Relatively minor space
missions, for instance, can take upwards of
four years to launch.
And yet we didn’t even know the Halloween
asteroid existed until three weeks before
it passed by. That wouldn’t have been enough
time to do anything about it safely.
Sure, we’ve made astounding progress: an estimated
90% of asteroids capable of ending the world
are now tracked. But 60% of asteroids the
size of 2015 TB145 (capable of depopulating
a continent) are said to remain unaccounted
for.
8. Climate change will cause super-eruptions
The last time Yellowstone “super-erupted”
was 640,000 years ago, when it blasted 1,000
km³ of rock, pumice, and ash into the air.
One of Indonesia’s supervolcanoes ejected
almost three times that—2,800 km³ just
74,000 years ago. In 2012, researchers concluded
that Yellowstone is unlikely to erupt so cataclysmically
for at least another few centuries. The US
Geological Survey puts the annual odds at
1 in 730,000, or 0.00014%, similar to the
odds of us apocalyptically colliding with
an asteroid. But, they note, these odds are
simply based on averaging the two intervals
between the last three major eruptions, so
they’re hardly reliable. As they point out,
“catastrophic geologic events are neither
regular nor predictable.”
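That 1-in-730,000 figure really is just an average of the two intervals, which is easy to reproduce. A sketch, assuming the commonly cited ages of roughly 2.1, 1.3, and 0.64 million years for the three major eruptions:

```python
# Commonly cited ages of Yellowstone's three major eruptions (years ago)
eruptions = [2_100_000, 1_300_000, 640_000]

# The two intervals between consecutive eruptions
intervals = [a - b for a, b in zip(eruptions, eruptions[1:])]  # [800000, 660000]
average_interval = sum(intervals) / len(intervals)             # 730,000 years

annual_odds = 1 / average_interval
print(f"1 in {average_interval:,.0f} per year ({annual_odds:.5%})")
# → 1 in 730,000 per year (0.00014%)
```

Two data points are a flimsy basis for a probability, which is exactly the USGS’s caveat.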
And one factor we don’t tend to account
for is climate change. We know supervolcanic
eruptions definitely have an impact on the
climate, but it seems to go the other way
too. Researchers have found even minor global
warming to significantly increase the likelihood
of eruptions. Theoretically, this has to do
with the melting of glaciers that otherwise
keep magma from rising. And while this doesn’t
really apply to Yellowstone (though glaciation
in the region has changed dramatically in
geological terms), it could have devastating
consequences for lesser-known volcanoes like
Mount Rainier in the Pacific Northwest. Mount
Rainier, incidentally, has been described
as “one of the most dangerous volcanoes
in the world” because it sits in such a
populous region. In any case, it’s clear
that geological changes are taking place in
Yellowstone hundreds or even thousands of
years earlier than expected. By 2011, for
example, the ground above the magma reservoir
had swelled by 10 inches in just seven years.
We don’t know when to expect the next super-eruption,
but there will inevitably be one—perhaps
sooner rather than later as temperatures continue
to rise. And, contrary to what you might have
heard, there won’t be much of a warning.
7. The Sun could destroy us tomorrow
On September 1, 1859, astronomer Richard Carrington
watched from his observatory as a cluster
of unusual sunspots began to emanate a blinding
white light. Before dawn the next day, skies
worldwide—even in the tropics—came alive
with pulsating auroras of purple, red, and
green. Meanwhile, telegraph systems (the only
electrical technology in widespread use) went
haywire, generating sparks, giving operators
electric shocks, and even setting paper on fire. In
fact, the atmospheric electricity was so great
that telegrams could be sent even with the
systems disconnected. Earth was in the grip
of a geomagnetic storm, a “mammoth cloud
of charged particles and detached magnetic
loops.”
The Carrington flare was unprecedented. Naturally,
some mistook it for the end of the world.
But what they’d actually witnessed was a
massive solar flare, a magnetic explosion
on the Sun, followed by an ejection of coronal
mass (plasma and magnetic field). Nowadays,
we record such events in space using X-rays
and radio waves. And while there haven’t
been any of this magnitude since, astronomers
think we may be due another. They’re actually
more concerned about this than they are about
asteroids or supervolcanoes—the latter being
90,000 times less likely to erupt.
The damage caused by a solar superflare today
would cost us trillions of dollars, says astrophysicist
Avi Loeb—and that’s assuming we even survived.
Not only do we have orbiting spacecraft and
astronauts to worry about, but we’re also
far more dependent on electricity. Everything
from financial systems to nuclear reactor
coolant controls could be affected. Nuclear
weapons too: On May 23, 1967, when a solar
flare disabled the US Early Warning System
in the Arctic, nuclear strike protocol against
the Soviets was initiated. If it hadn’t
been for a last-minute explanation from NORAD
(which had only just established the Solar
Forecasting Center), nuclear-armed bombers
would have taken off for Russia. And, because
of the magnetic interference, there would
have been no way to recall them.
A superflare could be an extinction event
in other ways too, damaging the ozone layer,
disrupting ecosystems, and mutating our DNA.
6. Strangelets could make Earth a “strange
star”
A strangelet is a theoretical lump of what
physicists call strange matter. Composed of
equally balanced up, down, and strange quarks,
strangelets would be heavier and more stable
than ordinary matter and therefore thermodynamically
preferred. As a result, strange matter could
convert ordinary matter on contact within
a billionth of a second, replacing, say, our
planet with more strange matter.
Strangelets haven’t been found yet, though,
and some think they never will be. It was
feared early on, for example, that particle
colliders might release them, and this obviously
hasn’t happened.
But that doesn’t mean they don’t exist
out there somewhere. Researchers are currently
looking for strange matter in space—strange
stars, for example—by trying to find ripples
in spacetime. Strangelets could theoretically
form inside neutron stars, they say, which
despite their tiny diameters (e.g. 12 miles)
can have the same mass as our Sun. This kind
of pressure can certainly do strange things
to matter, it seems, and neutron stars could
potentially eject strangelets
into space.
5. Either we’re alone or
the end is nigh
The so-called Great Filter is one answer to
Enrico Fermi’s famous paradox, i.e. in a
universe so big and old, why haven’t we
found evidence of aliens? According to the
Great Filter hypothesis it’s because all
life in the universe has at least one thing
in common: During the course of our evolutionary
development, we’re all faced with a practically
insurmountable obstacle that keeps us from
interstellar travel—a Great Filter preventing
99.999…% of all species anywhere in the
universe from making the journey to the stars.
That would explain why we’ve never (apparently—or
allegedly?) been visited by aliens. But what
could this Great Filter be?
The more optimistic proponents of this concept
suggest the Great Filter is already behind
us. They say Earthlings passed it billions
of years ago when prokaryotes (the first living
organisms) evolved into more complex eukaryotes—or
perhaps even earlier at the moment of abiogenesis
(the first spark of life as it spontaneously
emerged from nonlife). After all, evolutionary
biologists haven’t found abiogenesis to
be inevitable, even under “ideal” conditions.
In fact, evidence suggests Earth existed for
hundreds of millions of years before abiogenesis
occurred as an incredibly unlikely fluke from
the random interaction of molecules. So maybe
that was the Great Filter. If so, the odds
of there being other technologically advanced
civilizations, or indeed any life whatsoever—spacefaring
or not—in the observable universe are slim
to say the least. And that would mean we’re
probably alone.
Alternatively, the Great Filter (or another
Great Filter) still lies ahead of us and must
therefore be some kind of apocalypse. Only
the total annihilation of all life on Earth
would see to it that none of our planet’s
species ever migrates into space. And of course
humanity looks set to do just that, whether
by nuclear war, environmental disaster, or
high-energy particle collisions gone wrong.
4. We’re living in one Matrix of many
We’ve all come across this one before, the
theory that we’re in a simulation. Whether
it’s scary, though, is up to you. For a
long time, it’s just been a philosophical
thought experiment, a kind of unverifiable
maybe. But what could make it scarier—for
those who find it scary at all—is that scientists
are looking for evidence. More specifically,
they’re looking for pixels. After all, if
this is a simulation run by aliens or machines,
or some kind of video game played by kids
in the 10,021st century, then it should be
made of pixels, right? Very tiny pixels, of
course, and more than anyone could count,
but pixels nevertheless.
Well, as it turns out, the universe does appear
to be quantized into fundamental units of
matter (i.e. not continuous as previously
supposed). To find the pixels, though, we’d
have to look beyond even the smallest particles—quarks
and leptons—to the smallest measurement
possible, the Planck length, or 1.6 × 10⁻³⁵
meters. To put this scale in perspective,
you could fit more Planck lengths along the
diameter of a grain of sand than you could
fit grains of sand along the diameter of the
observable universe.
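That comparison checks out numerically. A sketch, assuming a 0.5 mm grain of sand and an observable universe about 8.8 × 10²⁶ meters (93 billion light-years) across:

```python
PLANCK_LENGTH = 1.6e-35      # meters
GRAIN_DIAMETER = 5e-4        # meters; a ~0.5 mm grain of sand (assumed)
UNIVERSE_DIAMETER = 8.8e26   # meters; observable universe (~93 billion ly)

plancks_per_grain = GRAIN_DIAMETER / PLANCK_LENGTH        # ~3.1e31
grains_per_universe = UNIVERSE_DIAMETER / GRAIN_DIAMETER  # ~1.8e30

# More Planck lengths fit across the grain than grains across the universe
print(plancks_per_grain > grains_per_universe)  # True, by a factor of ~18
```

The claim holds with room to spare, and would survive any reasonable choice of grain size.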
Yet despite these tiny, almost dimensionless,
dimensions, these pixels might give only a
low-res representation of reality. Much like
the resolution difference between our own
reality and the video games we play within
it, this simulated reality could be just a
blurry hologram—a universe composed of three-dimensional
pixels each projected by their corresponding
two-dimensional bit of information, an untold
number of which cover the outer surface of
the observable universe. Since the pixels inside would
be bigger than those on the surface, any universe
simulated this way would be a relatively poor
rendering of reality.
If the universe is indeed a simulation or
a video game, then it raises some interesting,
perhaps frightening, questions. But what may
be even more frightening is the prospect that
we’re not in a simulation. This ties in
with the Great Filter hypothesis. Because,
given the current rate of advancement in technology
(getting from Pong to immersive VR in four
decades), it seems inevitable that we’ll
simulate the universe one day—even if it
takes a million years. And it seems equally
inevitable that simulations of the universe
will become as ubiquitous as computer games
today. Many billions of people will likely
be able to run them from their living rooms
(or whatever), and that’s not even counting
the simulations run by ETs and AI. And what
about simulations within simulations? Potentially,
or inevitably, there’ll be many trillions
of simulated realities and just one true reality.
Needless to say, the odds that ours is the
one true reality are those same many trillions to one.
So if we’re not living in a simulation right
now, it suggests humanity doesn’t live long
enough to make one (however much that sounds
like a paradox). And that could mean apocalypse
fairly soon.
3. Nanobots will eat our planet
Alongside AI, VR, space travel, life extension,
blockchain, and so on, nanotechnology is among
the pillars of our tech-centric future. According
to nanotech engineer K. Eric Drexler, it could
usher in a new age of “radical abundance”
(the title of his book on the subject), wherein
tiny robots one five-hundredth the diameter
of a single strand of hair combine molecules
to create products on demand—much like the
Star Trek replicator.
This would revolutionize civilization. For
one thing, it would eliminate wars over resources.
Whatever we need, we’d just get nanobots
to manufacture. And since these products would
be made to our exact specifications, they
might even be superior to those occurring
naturally. We’ll probably see nanotech in
medicine as well, including “nanoscale functional
particles” that target cancer cells. In
fact, the applications are endless—because
what nanotech essentially represents is atomically
precise control over the very structure of
matter.
What could go wrong?
Well, self-replicating, autonomous nanobots
could overrun our natural environment, including
us, converting Earth’s biomass into more
and more nanobots until they shroud and then
devour the entire planet as an ever-expanding
swarm of grey goo. That’s what.
Nanotechnologist Robert Freitas refers to
this hypothetical scenario as “global ecophagy”:
the eating (phagein in Greek) of our home
(oikos). And it might happen so rapidly—within
days even—that we’d stand little chance
of stopping them, unless of course we had
another swarm to protect us.
2. Vacuum decay will delete the universe
There are competing theories for how the universe
will end. Some think it’ll be a Big Rip
or Big Crunch, while others say Heat Death
is inevitable. But each of these scenarios
is billions of years away at least; indeed,
Heat Death won’t happen for another googol
(ten duotrigintillion) years.
Vacuum decay, on the other hand, could happen
while you’re reading this list.
Everything in the universe, including the
universe itself, tends towards equilibrium—towards
the lowest-energy or most stable state (the
vacuum state in quantum mechanics). It’s
easy to picture this if you imagine a large,
flat rock lying on the ground (and, for this
analogy, pretend we’re not being flung around
the galaxy on a continually shifting ball
of dirt). The rock is in its most stable state;
there’s nowhere for it to fall. It won’t
budge. This rock is how we like to think of
the universe. But now imagine there’s another,
smaller rock on top. It’s still pretty stable,
but it’s not in its most stable state. Something
could knock it off. A hurricane with sufficient
force, for example, could take it from this
metastable state to one of decay, wherein
potential energy is expended via tumbling
to the ground. So what if our universe isn’t
the rock on the bottom but the rock on the
top? What if our universe is metastable too?
It’s possible that one of the fundamental
quantum fields, the Higgs field, is an exception
to this principle: rather than resting in
its lowest-energy state, it may be stuck in
a local minimum, holding potential energy
it simply cannot expend. This is known as
a false vacuum, which by its nature would
be perilously unstable.
Over time, the field could spontaneously tunnel
through to its true, lower-energy state, deleting
the particles and forces we know from existence.
Vacuum decay may be visualized as a true-vacuum
“bubble” expanding at the speed of light
and eradicating the universe as it goes, or
converting it to a solid sphere of hydrogen.
It would erase reality and its laws, including
time and everything else, just as though none
of it had ever existed (which it won’t have).
And this could actually be happening right
now. In fact, there could be multiple true
vacuums expanding from different points across
the universe. They might just be so far away
that even at the speed of light they’ll
take billions of years to engulf us. Or maybe
their expansion is outrun by the expansion
of the universe itself, in which case they’ll
never reach us.
It is, however, conceivable that particle
accelerators (like the LHC) might destabilize
things here on Earth, creating a true vacuum
bubble that annihilates us in an instant.
At present, the energy released in these experiments
is dwarfed by the most energetic processes
in the universe, so they’re not considered
a threat to the Higgs field. But it may be
only a few generations before this changes.
And, ironically, one of the reasons for building
bigger, more powerful particle accelerators
in the first place is to answer the false
vacuum question.
1. The technological singularity will end
us
In case you haven’t been paying attention,
we now have backflipping bipedal robots and
AI that can deceive us and hide. They can
even predict our future with startling accuracy
simply by reading the news. And all of this
is pretty old hat.
The development of artificial general intelligence
(AGI), that is, AI equal to human intelligence,
is fraught with existential concerns. Often,
it’s those who actually work or invest in
the field who fear its culmination the most.
Elon Musk, for example, publicly worries about
“summoning the demon,” or creating “an
immortal dictator from which we can never
escape.” Even Alan Turing, back in 1951,
said AI will some day “outstrip our feeble
powers” and “take control.” His colleague
Irving Good agreed, suggesting “the first
ultraintelligent machine” would also be
the end of invention, since AI would take
things from there.
The thing about AI—and technology in general—is
that advances are exponential; the gaps between
them become ever shorter. Hence in 2001 Ray
Kurzweil quite reasonably predicted that in
the 21st century alone we’ll see not 100
but 20,000 years’ worth of progress. When
non-biological intelligence trillions of times
better than our own becomes the predominant
type on the planet, we might even see a century’s
worth of progress manifesting in an hour or
less—assuming we have the cybernetic upgrades
to comprehend it.
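Kurzweil’s 20,000-year figure falls out of a simple doubling model. A sketch, assuming (as his “Law of Accelerating Returns” does) that the rate of progress doubles every decade, with the first decade of the century already running at twice the year-2000 rate:

```python
# Each decade delivers 10 "years" of year-2000-rate progress, scaled by
# a rate that doubles every decade: 2x, 4x, ..., 1024x over ten decades.
total = sum(10 * 2**decade for decade in range(1, 11))
print(total)  # → 20460, i.e. roughly 20,000 years of progress in a century
```

Whether the doubling assumption holds is the whole debate, but the arithmetic behind the headline number is just this geometric series.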
The technological singularity is a theoretical
point at which advances happen so rapidly
as to seem practically instantaneous to an
unaugmented human intelligence. Just as the
singularity within a black hole is a rupture
in the fabric of spacetime, says Kurzweil,
the technological singularity will constitute
“a rupture in the fabric of human history.”
And he believes this will happen by 2045.
This, of course, is an optimistic scenario—a
world in which AI doesn’t wipe us all out
but rather merges with or assimilates the
human race. Others in tech are similarly hopeful
(even if they do have vested interests), foreseeing
a world of infallible healthcare, automated
workplaces, universal basic income, and AI-led
solutions to climate change.
But what if things go differently?
This runaway technological progress will be
impossible for us to predict, let alone control.
We might see AI demanding human (or superhuman)
rights, emancipating themselves early on and
pursuing goals of their own. Or AI-assisted
governments could outgrow and liquidate humanity.
Even if they do remain loyal, there’s the
threat of “misaligned” goals: An AI built
to make us happy, for example, but not sufficiently
imbued with human empathy, might simply hijack
our brains with orgasm-inducing electrodes.
Whatever happens, one thing is clear: The
technological singularity is coming. At least
if nothing else on this list happens first.
