This is CBS News color coverage of man on the moon
sponsored by Kellogg's
Kellogg's puts more in your morning.
Here from CBS News Apollo headquarters
at Kennedy Space Center
correspondent, Walter Cronkite.
Fate has ordained that the men who went
to the moon to explore in peace
will stay on the moon to rest in peace.
These brave men,
Neil Armstrong and Edwin Aldrin
know that there's no hope for their recovery.
Hello Neil and Buzz
I'm talking to you by telephone from
the oval room at the White House.
And this certainly has to be the
most historic telephone call
ever made from the White House.
Lift-off. We have a lift-off.
Thirty-two minutes past the hour.
Lift-off on Apollo 11.
What a moment, man on the way to the moon
As we approach the moon, the moon
will gradually grow larger and larger in size
To this decision that I have made
To this decision that I have made
Neil Armstrong and Edwin Aldrin
know that there is no hope for their recovery
Will stay on the moon to rest in peace.
These brave men, Neil Armstrong and Edwin Aldrin
know that there is no hope for their recovery
but they also know that there is hope
for mankind in their sacrifice.
Love it.
And I didn't expect to,
I was kind of waiting to get annoyed
because there's been a number of pretty
irresponsible, deepfake creations
but I found myself focusing much more on
the speech and enjoying the speech
and feeling what an amazing opportunity
it was to see that speech spoken
in the way that felt believable.
I mean, I was kind of moved.
I was like, if that was me and I wasn't dead.
I was like, 'Oh no, what happened to them?'
I think it's very convincing and
the quality of the fake videos they
generated is superb, so it was very,
very difficult to find any, you know,
visible artifacts in the videos.
If I'd walked into the room and looked at it,
I would've thought it was Nixon.
People believe things
because of contextual clues.
They rely on the fact you've got all
the peripheral cues of reality.
You have the CBS News, you
have the anchor beforehand.
And those make a huge difference
because we rely on those cues 
to tell us if something's real.
So if you get all those cues right, and then
you put a bit of fakery in the middle,
it's quite likely people will be persuaded.
Because I knew it was a fake.
I was looking for some of the telltale
signs that if I was watching it on an
even bigger screen, I might have been able to see
missed movements of the mouth.
When the camera zoomed in, the shadow on
the collar was something that
in my mind looked a little artificial,
but that's not something I would clock
if I was watching it on a telephone
or if I had been watching it on a tube television.
I found myself thinking
about the value of deepfakes
in immersing people into an alternate reality.
So long, of course, as they know that, that's
what they're about to be immersed in.
I first and foremost found myself just
feeling like, props for the techniques
and the work that goes into that
simulacrum.
And again, a kind of curious anticipation
of the fun and education 
that could be had with it.
I also couldn't help, in a way
that maybe I wouldn't have
if I'd just been reading the text as
Bill Safire wrote it, thinking about
the oddity of their supposedly being still alive
but running out of oxygen, unable to return.
And speaking about everything
they've done in the past tense,
like what an odd speech that you may only pick up
if you see it delivered that way.
It was a great fake.
I've seen a lot of bad ones
and we've seen the technology develop so quickly.
So that deepfake was incredibly
convincing and provokes a conversation
not only about that moment in time
but of course about the problem of misinformation
which I think you're ably doing here.
Deepfakes are audiovisual works in which
the image of somebody is synthesized
and can be used to make people look
as if they're saying or doing something
that they might not have really said or done.
The term deepfake is a combination,
or a portmanteau, of the terms
deep learning and fake.
It's a fake because it's synthesized,
it's computer generated
and it's deep because it's based on
a particular artificial intelligence technique
called deep learning.
Deep learning is an advance in artificial
neural network based approaches.
Originally, these were based on a very
much simplified model of the neuron.
In fact, we now know that the neuron works
with much more nuance
than we have in artificial neural networks.
In typical artificial neural networks,
we try to find statistical patterns,
regular patterns within the data,
and map them to some kind of output.
So what do those data points
have to do with the video data?
Imagine every single pixel in the video data
when you freeze it at
one particular instant;
then you can say that a video
is a sequence of these frames.
Each one can be decomposed into
a number of different pixels.
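The picture the speaker sketches here, a video as a sequence of frames and each frame as a grid of pixels, can be mocked up in a few lines of Python. The frame count and resolution below are illustrative assumptions, not details from the film's actual data:

```python
# A video is a sequence of frames; each frame is a grid of pixels.
FRAMES, HEIGHT, WIDTH = 120, 64, 64   # ~4 seconds at 30 fps (assumed sizes)

def black_frame():
    # One pixel is an (R, G, B) triple; one frame is a HEIGHT x WIDTH grid.
    return [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

video = [black_frame() for _ in range(FRAMES)]

# "Freezing" the video at one particular instant picks out a single frame.
frame_0 = video[0]
print(len(video), len(frame_0), len(frame_0[0]))  # 120 64 64
```

Any real pipeline starts from some representation like this before reducing it further.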
The input data in this case, the source is
the image of an actor, Lewis D. Wheeler.
He is giving the same speech
that Nixon would have given
if the moon landing had been
unsuccessful, you know, tragically.
The output data or the target
is a synthesized image of
Richard Nixon giving that same speech.
So instead of having the full video data
you have a smaller set of data points
that might represent the key features
such as the chin, the mouth, the nose, the eyes.
It might also recognize other patterns
that a human wouldn't necessarily 
recognize within the data
that the computer can latch onto, so to speak.
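The reduction described here, from a full frame down to a handful of key-feature numbers, is in the spirit of the encoder half of the face-swap models. A toy sketch, where fixed pseudo-random weights stand in for the weights a real network would learn, and all sizes are hypothetical:

```python
import random

random.seed(0)  # deterministic for the sketch

PIXELS = 64 * 64   # one 64x64 grayscale frame, flattened (assumed size)
LATENT = 16        # the handful of "key features" (assumed size)

frame = [random.random() for _ in range(PIXELS)]  # stands in for real pixel data
# Fixed pseudo-random weights stand in for weights a network would learn.
weights = [[random.gauss(0, 1) for _ in range(PIXELS)] for _ in range(LATENT)]

# Project 4,096 pixel values down to just 16 feature values.
latent = [sum(w * p for w, p in zip(row, frame)) for row in weights]
print(len(frame), "->", len(latent))  # 4096 -> 16
```

The point is only the shape of the computation: rich input, small learned summary.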
The case of audio works in a 
very similar kind of way.
You take that rich data, which is the audio data
of the actor giving the speech.
And then you reduce that down to key features.
You can think of that as an audio
voiceprint of the initial actor
then you take that audio voiceprint and
you're trying to synthesize it
to match the audio voiceprint of Richard Nixon
himself, giving the speech.
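As a toy illustration of the "voiceprint" idea, raw audio samples can be boiled down to a few summary numbers. Real systems use learned speaker embeddings; the band-energy summary here is only a stand-in, and the tone is synthetic test data:

```python
import math

SAMPLE_RATE = 8000
# One second of a pure 440 Hz tone stands in for real speech audio.
samples = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE)]

def voiceprint(signal, bands=4):
    """Split the signal into time bands and return the mean energy of each."""
    size = len(signal) // bands
    return [sum(s * s for s in signal[i * size:(i + 1) * size]) / size
            for i in range(bands)]

print(voiceprint(samples))  # four energies, each ~0.5 for a unit-amplitude tone
```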
You could imagine the deepfake as
creating a synthetic Nixon
that takes this rendering of this beautiful speech
and Richard Nixon's other speeches
and merges them together.
So now you have that speech being given
in the way that Richard Nixon 
normally would speak.
In the case of deepfakes
there's a lot of anxiety about
them being used for misinformation.
People think about fake news.
They think about all types of manipulation.
Whether that's political or otherwise.
And so there's a lot of apprehension
around this technology.
So when we look at the
information ecosystem now
it's hard not to be depressed
because of the scale of different types of
misinformation and disinformation.
So whether it's conspiracies, lies,
false content labeled as satire when it's not.
Is it imposter content when somebody
uses a logo that you recognize
but it's completely fabricated?
Is it a trending topic on Twitter, but that's
actually been pushed by a ton of bots?
Is it an outrage meme on Facebook
that's circulating widely?
There's so many different elements of this.
And then deepfakes have taken
all of that and said,
you know, plus, plus, plus, plus.
If deepfakes had come along 10 years ago, say, and
there hadn't been this moment of recognizing
how people had weaponized the internet
we would've been like, oh, new technology,
like Photoshop, but worse.
But now it's like,
Oh, are you kidding me?
I mean, to be clear,
deepfakes can be really valuable.
It's the deepfakes that harm, that hijack careers,
that ruin people's reputations,
that so distort reality.
You could have a deepfake
that, if timed just right
in a very sensitive situation,
could lead to a riot.
We're in a tinderbox right now in
US polarized politics and society.
And we've got to be on the lookout
because it may be not just the ruined IPO
and not just the thrown election
but the physical harm that comes with riots.
The deepfake is the promise of using
whatever it might be
generative adversarial networks or
other machine learning techniques
to make it push-button.
You don't need to hire
and find the actors.
You don't need James Cameron and
a quarter of a billion dollar budget
to pull off Titanic.
It's somehow taking it
from the layers of sophistication
down to the high school science fair,
down to an app, a face swap,
so that there can be as much or more
of what you are subjected
to during the day that is fake
as is real, in realms that we
thought of as the real before,
because anybody can generate it.
Maybe that's what gets people nervous.
I don't know if we're completely understanding
what it means to give this
capacity out to the public.
It could open up a whole
other realm of creativity
but at the same time, I think
about the people I study
who are people who do media manipulation
and disinformation for a living.
For them, it's not about creativity.
It is about hoaxing, scamming,
manipulating vast publics
into mass hysteria in some instances.
And I don't know if we're entirely
as a society ready for
that kind of deployment
given that we are also in the midst of a
different kind of communication crisis
with the loss of gatekeepers and the
rise of social media as a distribution system.
Deepfakes are posing a quite urgent threat
to the trustworthiness of our online environment.
Deepfake videos per se,
just the fake videos themselves
or the audios themselves, are
actually not that hard to detect,
even by visual inspection,
many of them, if they're not, like
what I mentioned, highly crafted.
But the problem is they're coming to us
at volume and with very fast speed.
And I think particularly important is
that it doesn't need to be perfect.
I think a lot of the emphasis on deepfakes
in that kind of when we look at them as like
wow, these beautifully made objects
like your deepfake
which is amazing, it's totally convincing,
but in a lot of the contexts I work in
it doesn't need to be totally
convincing to do damage.
And in that case, you know,
the easier it gets to make deepfakes,
the cheaper it gets
all those attack vectors open up as well as
the ones that use really well done
deepfakes in very specific ways.
Right now we're, I would say, on a precipice.
You need a little bit more
data than most people have.
You need a little bit more computing
power and skills than most people have
but those who are motivated
to use deepfakes by and large use them
to make pornography.
So of course the biggest use of deepfakes
that never gets discussed is
actually taking women's faces
or bodies and using them and
creating pornographic content.
And I think most women don't necessarily
recognize how vulnerable they are
because of the amount of imagery most
people have out about them online through
Instagram posts, or, you know,
it just isn't a conversation that people
are necessarily having.
Technologies, as they come on board, are often
used to abuse people at their most vulnerable.
What drew us was what I
would call invasions of sexual privacy:
misusing, exploiting, appropriating
someone's sexual identity
and forcing them into being a
porn star that they never chose to be.
As someone who's written a
whole lot about online harassment
stalking, threats, and
nonconsensual intimate imagery.
To me this was of course yet another story
about using a technology
to embarrass, shame, terrorize women,
you know, marginalized people.
What happened was that the deepfake
Reddit forum, the porn basis of deepfakes,
then got lumped in with very,
very, very smart academics
who'd built algorithms with
really well-meaning intentions.
And these two worlds have merged
and there's this kind of
unintended consequence
of some of those algorithms
getting out into the wild
from really nerdy, lovely academics who
never thought about how it might get used.
Women's lives are being impacted every single day
and I think if men were having
their bodies used in the same way
we would have a lot more legislation;
there'd be more action if it was happening to men.
And right now deepfake sex videos
are actually pretty big business.
There are four websites that
have about 15,000 videos.
99% of those videos have women's faces
inserted into porn without their consent.
As we start to move out in scale
there will have to be
and there will be cottage industries for this
where people are making deepfakes
that are highly personalized
and do the same kind of damage.
So you can imagine a company saying
Yeah, send us half a dozen pictures of
your ex or whoever, and
we'll make a deepfake of them.
And the other company might also
make a deepfake of them
riding a horse through Fantasia,
what have you, as a birthday card.
And so you can imagine all
these different use cases of it
where we actually haven't
worked out what's at stake
which is, in many respects, identity theft:
not your financial identity, but
your actual personhood.
We don't even talk about it like identity
theft, but that's what it is, right?
The essence of who you are is actually
what you look like and how you show
up in public and the words that you use.
So for someone to be able to
steal that and change that,
your identity, it's more than
who you are, it's your legacy.
And if this is what you become known for,
and if this is what is public about you
it can be very damaging.
And that to me is what's most concerning
about how this industry might develop
if there are no guardrails.
Deepfakes are detectable.
So wherever aspects of the physical world,
or aspects of the human body,
or the generation process, or the models,
leave a certain kind of trace in all the signals,
all these little things will show up,
probably very subtly, in
the final produced videos.
We know where to look, we can use
algorithms to detect those things.
Do you think it would be possible,
since you have some detection technologies,
to run those detection technologies
on the Nixon deepfake?
We could try. We never, we haven't tried it.
We'll try and let you know.
Alright.
So we took the two videos you
sent to us and we did the analysis,
ran our deepfake detection algorithm on them.
So here are the detection results.
What you are seeing here
on the left panel is the video itself;
on the right is our analysis.
It's analyzing every frame in that video
so we ran the first video
and as you can see, it is highly
likely to be an original video
because it doesn't have the kind of artifacts
we typically see in a deepfake video.
On the other hand, when we analyzed the other video,
we saw very different results.
And so you're seeing here
the score is mostly concentrating around zero.
And that means that the algorithm actually
detects a lot of artifacts that it
usually sees from deepfake videos
in the training video set.
So bottom line, the real one appears to be real
via your detectors and the fake one
appears to be fake.
The fake one appears to be fake, yes.
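The frame-by-frame analysis described above can be caricatured in a few lines: score each frame, then summarize. The scoring numbers and the threshold here are made up for illustration; a real detector derives its per-frame scores from a trained model:

```python
def classify(frame_scores, threshold=0.5):
    """Scores near 1.0 suggest no artifacts; scores near 0.0 suggest many."""
    mean = sum(frame_scores) / len(frame_scores)
    return "likely real" if mean >= threshold else "likely fake"

# Hypothetical per-frame scores, mimicking the two analyses described above:
real_scores = [0.92, 0.88, 0.95, 0.90]   # few artifacts detected per frame
fake_scores = [0.05, 0.12, 0.03, 0.08]   # scores concentrating around zero

print(classify(real_scores))  # likely real
print(classify(fake_scores))  # likely fake
```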
We have an adversary.
We have a group of people trying
to beat our detection algorithm.
What we are trying to do is
not completely eliminate
or stop deepfake videos.
Even in the ideal case, for completely
automatically generated fake videos,
our algorithm can never achieve
a hundred percent accuracy.
So there's always some room for mistakes.
We're just trying to raise the bar.
So that if somebody wanted to make a fake video
and they want to sort of make this
convincing for a lot of people
then correspondingly this person has to
put a lot of effort into generating it.
Aside from that truly kind of cat and mouse:
Oh, how was it generated?
Oh, can we trust the Geiger counter we have on it
that says it's radioactive
with deepfake-ishness?
What's the provenance?
Where did this thing come from?
If this speech were really delivered
wouldn't there be an old New York Times
headline from the next day
that says Nixon gave this speech?
And in fact, even 10 seconds
of sort of checking it out
would let you know that either the entire world
is wrong, and this is the one thing that's right,
or this is the thing that's wrong and
you know it's a deepfake.
How do you train people to spot deepfakes?
It's to really embed it within questions:
Do you understand where this came from?
Is it trying to elicit an emotional reaction
and make you share it quickly?
Are there corroborating sources?
So it's media literacy but really reinventing it
for how social media works
which is like the seven-second
moment where you share.
And I think that's when you also have to
look at the responsibility of companies.
Platforms make all the difference
in the spread, the virality
of powerful, especially
negative and novel deepfakes.
So I've been working with
Facebook and Twitter for the last
I want to say 10 years on online safety.
The advice that we gave is that not
all deepfakes should be banned
but only deepfakes that cause harm,
individual harm and social harm
and that do not constitute parody,
satire, or part of a historical lesson.
And we can't trust an
algorithm to solve the problem.
We need an expensive solution by which
I mean human beings, content moderators.
So the reason that people are frustrated
with the platforms is because they say
Why can't we just tweak the algorithm
and make all the false stuff go away?
Well, the problem is the majority of
content that we see is not true or false.
It's somewhere in the middle.
And it's this concept of the
weaponization of context:
genuine content used out of context.
The most effective disinformation
is that which has a kernel of truth.
That's why the platforms struggle so much.
And I feel like there's a certain naivety
around deepfakes and synthetic media
which is like, well, that is a clear
bucket and we will not allow that bucket.
Does that mean they're not going to
allow your film? Maybe. I don't know.
And I think we need more of
those case studies.
I think we will see better and more
effective labeling because
human brains require these heuristics and mental
shortcuts.
And right now on Facebook, Twitter, TikTok,
there are no heuristics.
It's just what you see with your eyes.
And that's why we're struggling so much.
So we have all these studies about
social psychology, about how
even if you say something is a lie,
if it confirms your own beliefs
confirmation bias, we still do believe the lie.
Even if we're told it's a lie.
We're kind of the bug in the code.
Human beings, we're the ones
liking clicking sharing
we're the ones buying it and we're
the ones passing it on.
It's not the technology itself.
We are the problem. Right?
And so yes, education could be
counterproductive
because the liar can seize on it.
Don't believe your eyes,
it's all nonsense, fake news.
I think we're more sophisticated than that.
We're more sophisticated customers.
So my hope is that with enough
lead time and education
I'm not going to say we're going to
forestall damage, but we can minimize it.
And that's, I guess, my greatest hope.
I find myself thinking in the realms
of disinformation as a societal danger.
What we've seen so far has been people,
some people, ready to be
persuaded by extremely thin evidence,
generated quite shallowly.
For that group of people inclined
to believe that way,
susceptible to believing that way,
better verisimilitude through deepfakes
isn't really the variable.
Of course, we also live in a world in which
people may not look at the story at all.
They're only looking at
the headline.
And again, if that's the case, the deepfake
doesn't need to be so deep to get people
having a notion in their mind that
maybe takes them in a direction away from reality.
I think we're already starting to see the way
the idea about manipulated media or deepfakes
is being weaponized, irrespective
of whether it is a deepfake.
So there's all this controversy about, you know
should this video that's edited or this
video that looks like it's slowed down
or this video that's maybe deepfaked
should it be taken down?
So we're going to see it this year,
lots of debates where all sides
weaponize the idea that it could be fake
or that anything that looks like
it might be fake should be addressed
for their own political ends or their own ends.
I think on a two-year horizon it does seem
likely we are going to see more
used in a range of malicious settings,
just because of the technology trends.
Like the way in which we're
seeing an increase in access,
the app-ification of tools
the speedy improvements in audio
and also in kind of multimodal techniques,
the ones that are video and audio combined
those are the ones where
once you start to have that
that's really sort of giving
you a whole package.
So it does feel like on a two-year
horizon, we're likely to see it.
Whether that is anywhere near the scale of
all the other ways people manipulate media
probably not,
but it'll just be another part of this
that we have to deal with.
Yeah. In my imagination
everybody gets a soul and decides 
that they're going to play fair.
In the dystopia that I live in every day
we are headed towards
hyper-personalized content marketing
that we have never asked for
but are subject to.
And that could include
anything from
inserting people's pictures into
advertisements for X, Y and Z products
all the way to personalized pornography.
And if you can't figure
out how to do it at home
there will be industries that will do this for you.
Any deepfake has within it this kernel of
a very nefarious and dangerous
problem for society that we
haven't really thought much about
and I would hate for us to get into
the position where there are
so many social harms caused that we have
to shut down most of what
we value about social media, which is
the free and open communication
in order to stop some of these businesses and
political operatives from winning.
So we've always had this
essential trust in what we see.
And there's lots of literature about
how we are more trusting of visuals
even though we can critically say, well maybe
it's been cropped and framed and edited.
We still have this trust.
The fear with deepfakes is that
that trust that we have developed,
whether it's about believing a war crime
or police brutality or any kind of
protest footage, will all of a
sudden have people say,
we don't know whether that's true.
And so it's kind of the shrugging
that I'm most concerned about,
which is, it won't be any
one particular moment
it will just be a gradual sense of
you probably can’t trust that.
Deepfakes are coming. We can't stop them.
They are absolutely coming. 
Because it's all happening so quickly.
I worry that we're going to take the wrong turn.
We're not going to do
something we should have done.
And the historians will look back in 2050
and say they had this really sweet moment
between 2020 and 2022 and
they totally dropped the ball.
Yep.
I think we're going to see both the
really destructive uses of the technology
and really pro-social uses of it.
A, you don't want to ban technologies, because
it's really about what we as human beings
building them do with them.
A knife is really safe and wonderful when
you use it in the kitchen
to cut up a chicken you're going to roast
but it's actually a problem
when you use it to stab someone.
And the same is true with this
deepfake technology.
We can use it in really pro-social
ways to serve as lessons of history.
That was fascinating.
Seeing a speech he didn't give
gets us to think about humanity
and the importance of Apollo,
what it meant for all of us.
And at the same time you're teaching us a lesson
about how we can be so taken
by a video and believe it, and it's not true.
And in many ways it's such an important
part of the human rights story
using video and audio to show the world
viscerally human suffering
and human rights violations.
It's been incredibly important to the story of
the recognition of human rights violations.
So my great hope is that we don't
lose that as an important tool.
I don't think we're going to change our
gut reactions to audio and video anytime soon.
I think there is going to be something
interesting about how people use deepfakes
for satire and parody to
poke fun or challenge the powerful.
That is kind of the flip side
of mis and disinformation.
And often they get confused
or people gaslight you into
believing one is the other.
So I hope we'll see that
and I think that's going to be one of the
most interesting spaces: working out
how we protect that while also
not letting it turn into an excuse that
anything goes to present people
saying something they never did.
Yeah, I think it's a magic piece of film.
And I think it's a reminder that whilst
there's so much fear around this technology
there are also amazing opportunities
that I think maybe we haven't explored as
much as we should have done.
We've never been very good at teaching people
how to make sense of media that they consume.
And I would argue back in the days of Photoshop
we never really explained what Photoshop was.
Most people couldn't afford it
and didn't know that it existed.
And then there was this whole
conversation with young girls being like
my thighs don't look like the
thighs of the front cover of Vogue.
And it was like, well, you know, they have
cellulite, but they get Photoshopped out.
And people were like
what's Photoshop?
Then there was this kind of like awakening
in the late 1990s of what Photoshop was.
And so I think my challenge with deepfakes is
I don't think we should be shutting them down
I mean, I think this film in particular
is just a reminder of how magical
they could be and will be.
But then we have to recognize, how are
we going to label that?
We need to help people understand,
give them context
recognize that when we're overwhelmed,
we don't stop and think
we need to build in friction to the system.
There's a whole host of different
things to do, rather than go,
Oh my goodness, we can't have deepfakes.
Because we as humans have gone through
different moments where we've had to evolve;
it's just that the speed of evolution now
is such that we can't cope.
And that's partly why we're talking in
the ways that we're talking about speech
where we should say,
this is a good thing, but
we're not doing it, I don't think, in a
responsible way, we're doing it too quickly.
We're not researching this adequately.
When you hear that elegiac speech
and that speech that imagines this
alternate history that could have happened,
then to me, you're looking at the
expressive potential of the medium.
It's not just about creating
a deepfake to fool people.
It's creating a deepfake
to ask broader questions.
It's asking questions about,
what would have happened?
It's asking questions about
what's the role of this technology
within our society these days?
And it's asking questions about
what can we do as individuals in
the face of this technology?
And it's ultimately aiming to empower
people, that is to say, that
you can be critically aware.
You can be discerning around this technology
and helping you to better understand
what role you as an individual can have.
And I think that's one of
the great powers of the arts.
That's one of the things that I most
love about this project is that it's
using deepfakes as a medium in the arts
to address the issue of
misinformation in our society.
