  ♪♪
  If the sample size
was too small,
 the luck of the draw
 could've skewed the results.
  We should test this
on 10,000 people but...
I only know ten.
 That's okay,
 making friends is hard.
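The luck-of-the-draw point above can be sketched with a quick simulation (hypothetical numbers; the pill here truly does nothing, so any "effect" is pure chance):

```python
import random

random.seed(0)

def trial(n, p=0.5):
    """One study of a pill that does nothing: each of n subjects
    'improves' with probability p by pure chance. Returns the
    fraction of subjects who improved."""
    return sum(random.random() < p for _ in range(n)) / n

# Run 1,000 studies of each size.
small = [trial(10) for _ in range(1000)]       # 10 subjects
large = [trial(10_000) for _ in range(1000)]   # 10,000 subjects

# How often does chance alone make the pill look effective
# (70% or more of subjects "improved")?
fluke_small = sum(r >= 0.7 for r in small) / len(small)
fluke_large = sum(r >= 0.7 for r in large) / len(large)
print(fluke_small, fluke_large)
```

With 10 subjects, a sizable share of do-nothing studies look like wins; with 10,000, essentially none do.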
(Adam)
  Or maybe there was
a confounding variable.
   Like, if the subject was
 using another product
 that could affect the results
  and the researchers
didn't account for it.
Oh, wow,
this pill is really
boosting our subject's
resistance to sunburn.
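The sunburn gag above can be made concrete with a toy simulation (all probabilities are made up for illustration): the pill does nothing, but pill-takers also tend to use sunscreen, and sunscreen is what actually prevents sunburn.

```python
import random

random.seed(1)

def subject(takes_pill):
    """The pill has no effect. Sunscreen use (the confounder) is
    correlated with taking the pill, and sunscreen is what actually
    lowers the sunburn rate."""
    uses_sunscreen = random.random() < (0.8 if takes_pill else 0.2)
    sunburned = random.random() < (0.1 if uses_sunscreen else 0.6)
    return uses_sunscreen, sunburned

pill = [subject(True) for _ in range(10_000)]
control = [subject(False) for _ in range(10_000)]

def rate(group):
    return sum(s for _, s in group) / len(group)

# Naive comparison: the pill group looks protected.
print(rate(pill), rate(control))

def rate_given(group, sunscreen):
    """Sunburn rate within one level of the confounder."""
    sub = [s for u, s in group if u == sunscreen]
    return sum(sub) / len(sub)

# Controlling for sunscreen use: the pill does nothing.
print(rate_given(pill, True), rate_given(control, True))
```

Once you compare sunscreen users to sunscreen users, the pill's apparent effect vanishes, which is what "accounting for" a confounding variable means.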
(Adam)
  And sometimes, very rarely,
unscrupulous researchers
will fake their results.
What if we just make up numbers
   that show that this pill
 cures cancer?
Hello?
Nobel Prize committee?
  I've got some data
   you've gotta see.
Sure, so we just
need a way to find out
which studies
are accurate.
   We already have one.
   And that's to reproduce them.
  To show the results
   weren't just some fluke,
 someone else
has to do another study
 that tests the same question
to see if they can get
  the same findings.
   It's like double-checking
 the answers.
   Curses, my results
  were way different.
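Replication as described above can be sketched by extending the do-nothing-pill idea (hypothetical numbers): take every study that got a "positive" result by chance, then have a second lab rerun it with fresh subjects.

```python
import random

random.seed(2)

def study(n=10, p=0.5):
    """One small study of a pill that truly does nothing: each of n
    subjects 'improves' with probability p by chance. Call the study
    positive if 70% or more improved."""
    hits = sum(random.random() < p for _ in range(n))
    return hits / n >= 0.7

# 10,000 original studies; some come up positive by pure luck.
originals = [study() for _ in range(10_000)]
positives = sum(originals)

# A second lab reruns each positive study with new subjects.
replicated = sum(study() for ok in originals if ok)
print(positives, replicated)
```

Because the original positives were flukes, most of them fail to reproduce, which is exactly the signal replication gives us.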
And here's the problem.
 Since studies
are usually reproduced
   only after they're published,
 they have a chance
  to influence other research
 or even become famous
before we find out
they're wrong.
For instance, remember that
famous study on power posing?
 Uh, it's only one of the most
watched TED Talks of all time.
  It claimed that by standing
  in a power pose,
 your body produces hormones
 that give you confidence.
Well, no one's ever been able
to reproduce those results.
   What?
Or how about that series
of studies
from the '70s and '80s
that showed you can actually
make yourself feel happier
just by smiling?
Well, that makes sense, right?
Sure, except that in 2016,
almost 20 labs tried
to replicate it
and not one of them
was able to reproduce
those findings.
 (train whistle blows)
