(presenter)
Our second keynote of the morning.
So, hands: how many of you
use Google Chrome?
How many of you use Chromium?
How many of you
remember that moment
when Flash crashed the page
that you were looking at
in Chrome or Chromium,
but the browser didn't crash,
and you had never
seen that before?
The open source project, Chromium,
and then Google Chrome,
how they package it
as their browser,
is a really impressive feat
of engineering,
and this morning,
we have one of their engineers
to talk with us.
Parisa Tabriz is the engineer
who leads the team of engineers
who work on Google -- Chromium's
and Google Chrome's security.
Parisa Tabriz.
[applause]
(Parisa Tabriz)
Good morning.
Good morning, good morning.
Cool, the slides work.
Good morning.
I'm really excited to be here.
It's my first PyCon.
I had an incredible time yesterday
talking to people,
meeting new people,
soaking up the Python community
which was really especially
welcoming and friendly.
So, thanks for that.
Thanks to Brandon for the intro
and inviting me,
Thanks to Alex Gaynor
who also invited me last year --
I couldn't make it.
And Audrey Roy Greenfeld.
Check out the Cookiecutter booth
if you haven't already.
But, that's awesome.
And thank you all for coming
and sticking around.
I didn't know I was going to be
actually talking after Guido.
I think I'm even more nervous
than I was coming in,
but excited to help kick off
the second day of PyCon.
So, my talk is titled
"The Hacker Spectrum,"
and for this talk,
I'm going to share some stories
about people that break software,
about hackers,
and why I think
being hacker-friendly
can lead
to better software security.
So, I worked at Google.
I've worked at Google
for the past 10 years.
This is actually my job title,
and I chose -- when I started
at Google,
my title was
Information Security Engineer
in the information security
engineering team,
which I thought was just sort of
boring, drab, and meaningless.
And so I picked
a more fun job title.
At some point,
I went to a conference in Tokyo,
and my colleague said,
"Well, business cards
are a big thing there,
"so you should get
business cards made."
And the title stuck.
My job has changed.
My day-to-day work has changed
a lot over the past 10 years.
I started as an engineer
in a team of hired hackers,
with just sort of
this broad mandate of:
make Google's products
more secure --
which at the time
was just Gmail and Search.
I worked as an engineer
doing that for about five years
and then moved to Chrome,
and I manage the Chrome
security engineering team today.
So, this talk
is going to go over
some of the things I've learned
about software hackers,
security hackers,
and their motivations
as well as methods.
We should start
with a definition.
The term "hacker" is thrown around
to mean a lot of different things.
In fact,
in the keynote yesterday,
Lorena actually proposed,
you know,
her definition of hacker, too.
I don't expect to converge
on any single definition ever.
It's a really overloaded term.
But for the purpose of my talk,
we're going to --
I'll propose that a hacker
is just someone
who thinks outside of the box.
I don't assume
too much more than this.
So Bruce Schneier
is a cryptographer, a
widely regarded
security specialist,
and he says
that a hacker is someone
who discards
conventional wisdom.
It's someone
who sees a set of rules
and wonders what happens
if you don't follow them?
A hacker is someone who
experiments with the limitations
of systems
for intellectual curiosity.
Anyone identify
with that broad definition?
Cool, lots of hands.
I want to talk to you
about computer security today.
So I'll propose that the hackers
that I'm going to talk about
are people who figure out
how to use computers or software
in unintended ways.
So usually they accomplish this
by exploiting some bug, or flaw,
or vulnerability in the design,
logic,
or implementation
of a software system.
And you'll notice
that I don't assume
any single motivation
or intention here.
Who are these guys?
I'm sure many of you
recognize them
as a much younger Steve Wozniak
and the late Steve Jobs.
So, trivia.
Before starting Apple,
these two actually built and sold
digital blue boxes,
which were devices
that exploited flaws
in the telephone systems
to make free calls.
In particular, you were able
to make free long distance calls
by exploiting different frequencies
that were used as control logic.
These guys were phone phreaks.
That was the term for hackers
that exploited telephone systems
at the time,
which was about the 60s or 70s.
And indeed,
they stole phone service,
like all of the other
phone phreaks of the day.
But that probably wasn't
their main driving force to do it,
and it wasn't the driving force
of the rest of the
phone phreak movement either.
Their motivation wasn't damage,
but power and knowledge.
Knowledge of how
the telephone systems worked
and the challenge of figuring out
how to bend it to their will.
Computer hackers
are from the same mold,
but aim to explore
the maze of the Internet
instead of telephone systems.
Now, if you have an iPhone
or a MacBook in your lap today,
and I see tons of them
in the audience,
you'll have to admit
that there is some benefit
to actually tolerating
and nurturing the hackers.
When I use the term "hacker,"
I'm not assuming
any single motivation or objective.
And I also don't assume
any simplistic value judgment here.
So it would be really convenient
to think of a world
of good hackers and bad hackers
or "white hats" and "black hats"
as they're sometimes referred to.
But we all know that real people
are just a bit more complex
than that.
To me, "hacker" is a skillset
and a mindset more than anything
and one that can be developed
and that I'll encourage you all
to develop.
But I also think it's important
to look at
some of the
different objectives
I've recognized
for why people hack software today.
And that's
what we'll go over next.
So I want to go over
each of these objectives
with some supporting stories.
Some hackers are motivated by
more than one of these objectives,
but it's definitely not the case
that all hackers
are motivated by all of them,
in my own experience.
And I can't even promise
this is comprehensive,
but it probably hits
most of the primary objectives.
So let's first talk about hackers
that break software
because they ultimately
want more secure software.
How do you decide what's actually
installed on your computer,
on your laptop, or your phone?
Maybe you have some trust built up
in the brand.
What is that trust based on?
Or maybe you have, you know,
that paranoid security friend
who you go to to ask,
"Is this secure?
"Should I use this or this?"
At some point
down the transitive trust chain,
it's likely that someone
has assessed
the actual security
of the software themselves
by trying to actually
hack into it.
So threat modeling
is the name of one technique
a lot of security professionals
will use
to assess the security of a system
or piece of software.
You can perform threat modeling
in a lot of different ways,
but it usually boils down to
a couple of core, you know, tasks.
One: figure out the assets
in the system
that you're trying to protect
and that someone
may be trying to get at.
Two: decouple the system
into individual components
so you can reason about
how components work together.
Three: identify
possible threats.
And then four: assess
the robustness of defenses,
and where you potentially
need to add more.
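The four tasks above can be sketched as a tiny data model. This is just an illustration, not any standard tool: the Threat class and the login-flow entries are hypothetical names I'm making up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    asset: str                     # step 1: what you're protecting
    component: str                 # step 2: which part of the system
    attack: str                    # step 3: how it might be attacked
    defenses: list = field(default_factory=list)  # step 4: what's in place

# Hypothetical model for a simple web login flow; all names are illustrative.
model = [
    Threat("user passwords", "login form",
           "credential stuffing", ["rate limiting", "two-factor auth"]),
    Threat("session cookies", "HTTP transport",
           "man-in-the-middle", ["HTTPS everywhere", "Secure cookie flag"]),
    Threat("payment data", "third-party card reader",
           "skimming"),  # no defenses listed yet
]

# The payoff of step four: threats with no defenses are where to invest next.
gaps = [t for t in model if not t.defenses]
```

Even a list this small makes the fourth step mechanical: anything that lands in `gaps` needs a defense, or an explicit decision that the attack is too unlikely to bother with.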
So, understanding the weaknesses
of software and systems
and how to exploit them
is one of the best ways that you
can actually convince yourself
that something is secure.
So let's do a quick example together,
all 3000 of us.
And I want you
to play the part of hacker
and think about how you would
hack into this vending machine
to steal snacks,
the primary asset
of this vending machine.
I'm assuming that most of you are
familiar with the vending machine,
and at least some of you
have done this in real life.
But I've yet to give this example
with someone not saying,
"Well, this is how you do it,
and how I did it when I was young."
So, would you attack the
physical security of the machine,
using the trusty
sledgehammer attack?
Perhaps you would want to be
a little bit more stealthy,
avoid attracting attention
and go after the mechanical
or electrical components
of the machine.
A lot of machines
now have a credit card reader,
and so maybe
you can go after that
and do some sort
of skimming attack,
which we know other large companies
have been victim to.
Is this machine
connected to the Internet?
Did we hit
the remote accessibility jackpot?
Lots of things
that you might think of.
There's no right or wrong answer
with this.
And I use this often in classes
when talking to anywhere
from engineers to policy makers
who are thinking about technology
to just show that it really is --
each person brings
their own perspective
to an exercise like this.
And there's a lot of different ways
you can do it.
I've heard some
extremely creative answers.
One person suggested filling up
the whole machine with water
and then letting the water out
and all of the snacks
to come with it.
I'm not sure if it would work,
but I would love
to see it in action
[laughter]
Anyways, you know,
if you look online
there's tons of information
you can find
on how to actually hack into
specific vending machines:
finding the administrative code,
lock picking,
attacking the electrical
components of the system.
Where am I going with this?
A classic approach
that just involves basic currency
is using a coin
that's made from a similar blank,
but has a vastly different value.
This is a 2 Euro coin.
It's worth approximately
two US dollars and a quarter.
And these are
a couple of examples
of coins that are
active international currency today
but are worth vastly less.
So: travel tip.
I highly recommend
double-checking your change
when you go to Europe,
because I actually received
a Mexican peso at some point
when I was in Europe.
If you leave
knowing nothing else,
know that a lot of currency
fits that same mint.
So, to be clear,
I'm not advocating anyone go
and rip off vending machines.
This is just a toy example
to get you to think about
possible ways
to break into a machine,
because it will help you
enumerate defenses
and mitigations to such attacks
by real attackers.
Or just say, "You know what?
"This attack is so unlikely
that it's not worth
implementing a defense."
But it gets you thinking
about what possibly could go wrong.
Now, how easy
was this exercise for you?
Is it something that comes,
you know, sort of naturally
and that you found
a fun thing to do?
Most hackers
can't even help
thinking about how to break the system.
And I think this is really distinct
from the engineering
or more traditional
developer mindset
where you're thinking
about how things can be created
or made to work.
And I know
that's simplifying things,
but really the hacker thinks,
you know, almost obsessively
about how things
can be made to fail
and what assumptions
and what layers of abstraction
have holes in them
that, you know, can be exploited
to get this system to do something
that wasn't intended
by the people that created it.
Now, you don't have to exploit
the flaws that you find,
and this is my own
personal opinion coming through:
I don't think you should.
I think you should use those
to make the systems better.
But if you don't actually
look at systems that way,
then you won't notice
a lot of the security problems
in the first place.
So, I'd encourage you all
to exercise your hacker mindset.
Exercise it today
as you walk around the conference
or work on projects.
And I think, even if you don't
want to be doing this full time,
thinking about how things can break
and security,
and defenses
that are put in place --
it can actually make you
a more sophisticated consumer
and citizen as well.
All right.
The next hacker group
I want to talk about
are those that most closely align
with the term's meaning
when it was originally coined.
In the 50s, on MIT's campus,
a great hack
meant a practical joke
or a great feat
of technical skill.
The word "hacker" began to mutate
during the late 80s and early 90s,
and what once had been considered
a compliment among programmers
had essentially become
a byword for cybercrime.
The hackers that do this
for the enjoyment of the challenge
are people
that use playful cleverness
to maybe achieve some goal,
but they're not motivated
by profit or malice.
They appreciated the art
and beauty of computers
and the challenge to do something
that, again, wasn't expected.
This is a picture
of George Hotz.
He goes by
the hacker handle geohot.
In June of 2007,
George became the first person
to carrier unlock an iPhone,
and he went on to develop
a number of software tools
to do iPhone jailbreaking.
Jailbreaking is when you remove
the limitations imposed
by the operating system.
In 2010, he announced his retirement
from jailbreaking,
saying that it just wasn't as fun
as it used to be
and people were taking too seriously
something that he used to
just do for fun.
Towards the end of 2009,
geohot announced efforts to hack
the Sony PlayStation 3,
a gaming console,
which at the time
was widely regarded as being
the only fully locked
and secure gaming system.
Geohot started a blog
to document his progress,
and about five weeks later,
he announced success,
and published details
of how he was able to get root
on his own Sony machine.
Sony took geohot to court.
Subsequently, geohot made a video
on YouTube,
pretty much a rap
dissing Sony,
and Sony, in turn, demanded
that social media sites,
including YouTube, hand over
all of the IP addresses
of people that visited this --
George's social page and videos.
So, let's summarize this:
Hacker breaks into
the gaming system
that Sony promotes
as unbreakable,
publishes how to do it
in his blog and videos,
and then the company
goes after the hacker
for violating the DMCA,
copyright law,
and the Computer Fraud and Abuse Act.
A CMU professor ended up issuing
a statement of support for George,
framing his work
as an expression of free speech.
Ultimately, they ended up
settling out of court,
and as part of the agreement,
George agreed to never hack
Sony products again.
George has been employed
at Facebook,
he's done internships at Google,
and he has shared some really
creative research with us in Chrome
about how something was vulnerable
and could be exploited,
that we use to make Chrome safer,
by not only fixing the flaws
that he found
but also introducing mitigations
that kind of address a class
of issues that he found.
So, to me, he's someone
that has certainly helped us
make software more secure.
In 2011,
hackers actually
broke into the PlayStation Network
and stole personal information
of some 77 million users.
At the time, it was, I think,
one of the largest
personal data breaches.
George denied any responsibility
for the attack
and was quoted as saying,
"Running Homebrew
"and exploring security
on your own devices is cool."
"Hacking into
someone else's server
"and stealing databases of users
is definitely not cool."
I include this as just really
one anecdote
of a hacker
that I know personally
but as someone who really is
doing this for the challenge
and the enjoyment of it.
There's no interest in profit,
but in being able to do something
that you were told
could never be done.
If you want
free security assessment,
just tell a hacker
that they can't break into it
and you'll get it.
[laughter]
Now he's building
a self-driving car
from his own garage.
Anyways, most of the hackers
that are motivated
by these first two objectives
are responsible for some of the
brilliant and secure technology
that we benefit from today.
And a lot of the people
I work with,
or that we've hired to break software,
do it because of the challenge
or the desire to secure it.
And so, personally, I think
that our policies
really need to encourage
and foster the skills
of these creative people.
And today I do worry
about some of the policies in place
being overly broad
and underspecified,
where violations
of a Terms of Service
can actually be
considered a crime
and subject
to overly harsh penalties.
Of course, not all hackers
are altruistic.
So, let's talk about
some of the objectives of hackers
that have caused harm
to the Internet and its users.
Our third category is the hackers
that have something to say
and use their skills
for political protest or action.
So I was planning on
talking about Anonymous,
and I imagine that a lot of you
recognize Anonymous
as the loosely organized network
of hacker activists,
or hacktivists.
And especially because Anonymous
and activities
associated with Anonymous
are still popping up
in current events.
For example, the group allegedly
launched a campaign called Op Trump,
aimed at taking down the presidential
candidate's online footprint,
back in December.
But then yesterday,
I actually learned
that there was a keynote
at Montreal PyCon last year,
and Gabriella Coleman
talked extensively about Anonymous.
So I'll just point you to her video,
if you haven't seen it already,
which goes over some of the more
notable acts of hacktivism
by them,
and, in particular,
in defense of free speech
or democracy.
Anonymous is just one
of the notable actors
in the hacktivist movement.
Other prominent collectives
include the Syrian Electronic Army,
LulzSec, and the Lizard Squad,
all of whom have conducted
well-publicized stunts
and distributed denial-of-service
attacks on government,
religious,
and corporate websites.
Coincidentally, I actually noticed
a report on hacktivist trends
published two days ago.
So, this graph is from that.
I haven't had a chance
to look into the sources,
but their title is,
"2016 Seems to Be
a Down Year in Hacktivism,"
which sounded like
a stock report to me.
But I feel like, also,
there should be some way to, like,
take your middle name
and street address
to figure out what hacktivist group
you would belong to.
But, that's I guess for the --
maybe a project.
Anyways, acts of hacktivism
tend to be a significant nuisance.
And they often induce
less financial damage
because their motivation
is really just to make a point
and draw attention to it.
But because anonymity
on the Internet
makes attribution really hard,
and in some cases impossible,
it's really difficult
to effectively thwart,
especially because it can be
really easy
to bring down a single website
with a denial of service attack
if you command a botnet
or some other form
of large computational resources.
All right.
Hackers exploiting systems
to steal money or steal data
that they then can monetize,
gets a lot
of mainstream attention.
So, cybercrime today,
it results in huge amounts
of financial damage.
It's really hard
to get good data on this.
It's often flawed
or hard to come by,
but the most recent
plausible figure I found
was published by
The Washington Post,
which said that
we lose about $500 billion annually
due to cybercrime,
which represents
about 1% of global income.
This is an example
of a piece of common malware.
Hopefully, no one here
has run into it,
although I would be surprised
if that was the case.
What happens is the user
is infected with malware.
And it encrypts all of their
most important files
on the victim computer:
pictures, movies,
music files, documents.
The malware then demands payment
via Bitcoin
and installs a countdown clock
on the victim's computer
that ticks backwards
from about 72 hours.
Victims who pay a ransom
will receive a key
that unlocks their encrypted files,
and those that let the timer expire
before paying
risk losing access
to all of their files, permanently.
So, I've actually read about
some recent variants of this malware
that are more lax in the time
they allow people to pay,
or will accept payment
via other forms.
But I imagine it's only
because no one can figure out
how to use Bitcoin in 72 hours.
[laughter]
But I do -- actually I do find
that the usability of malware
has really improved
over time too.
[laughter]
Which is interesting,
just taking an agnostic view
about how this evolves
just as well as other software.
Anyways, it's unfortunately
an example, really,
of how easy it can be
to conduct mass extortion
on the Internet.
Now, there are more legitimate ways
to make money too.
This is a Photoshop
of a technical program manager
at Google,
advertising,
kind of internally,
our Vulnerability Reward Programs.
These are programs where vendors
will actually reward hackers
that responsibly disclose
security vulnerabilities to them,
and what that typically means
is a security researcher or hacker
will let the vendor know
about a security bug they found,
under testing conditions
that are specified.
You can't go after
some other victim's account.
You have to play by the rules
of the program.
The vendor will then
reward that researcher
based on typically
the severity of the bug.
There's a link for Google's
Vulnerability Reward Programs,
but there's hundreds
of software vendors
that reward hackers
in this space today.
I remember when
we launched this for Chrome
and it was actually
fairly controversial to do this.
But today, again,
hundreds of such programs.
And just a few months ago,
the Pentagon actually launched
their own
Hack the Pentagon program.
So we're even seeing government
pick up this explicit intention
to work with hackers to try to make
their software more secure.
And I think
it's generally considered now
to be an industry best practice.
Another means for hackers
to make money
is to actually sell their bugs,
or weaponized bugs,
also known as exploits or 0-days.
This is a cover
from Time Magazine.
They did an article in 2014
on the growing market
of selling exploits or 0-days
as weapons.
The Time piece interviews
Aaron Portnoy.
He started a company in Texas
called Exodus Intel.
And the company's
mission statement,
I'm just going
to read this verbatim, is,
"To provide clients
with actionable information,
capabilities and context
for our exclusive
zero-day vulnerabilities."
What does that actually mean?
It means they find bugs
and sell exploits
that give customers access
to a victim's computer.
Now, exploits
in a popular application
can go for as much as
hundreds of thousands of dollars.
So this isn't small change.
They only sell the vulnerabilities
with working exploits,
and don't bother
telling the vendors
about the actual bugs,
even for the ones that they don't use.
And they take pride
in the quality of their exploits,
just like
any other software vendor
takes pride in the quality
of their software.
And in the case of exploits,
quality means reliability:
if it's sold as something
that can exploit Internet Explorer
on certain operating systems,
they take pride
in making sure it does.
I expect that some of you
followed the public saga
over encryption that played out
earlier in the year,
where the FBI
asked for Apple's help
to break encryption
on an iPhone
owned by a terrorist
in San Bernardino.
The FBI implied
that they paid $1.3 million
in exchange for the exploit
that they eventually used
to hack into the iPhone.
Again, we're not talking about
small amounts of money.
It is hard
to get concrete data on this.
It's not like
you can go to Amazon
and actually find
a full market of sales for this.
So a lot of the data is gathered
from, you know, connections,
or, in this case, actually
came straight from Comey,
but it's a lot of money.
And exploit-for-hire
is becoming an increasingly
"acknowledged" service, I guess?
And it certainly adds
an interesting dimension
to the economics of software
security and revenue stream
that is available for hackers that
are primarily motivated by money.
So, the last group of hackers.
This group of hackers
I want to talk about
are those
seeking information or data.
Now that's a broad term.
And it can certainly include
just the quest for knowledge
about how a system works.
But a more nefarious objective
might be seeking out information
on others,
perhaps their e-mail,
or their online browsing patterns.
So we've seen a lot of headlines
over the past two years
about intelligence gathering,
and a lot of public discussion
around the means
and legality
around current methods.
I'm not going to dive
into the controversy
around US intelligence gathering.
So I want to share an example
of a pretty massive incident
launched from another country
that was very likely motivated
by information gathering.
This is a screen shot of a Chrome
security interstitial
for an invalid server certificate.
In particular, it's a screen shot
of the interstitial
you would have possibly seen
if you were trying to access Gmail
in 2011 in Iran.
So, the screen shot
that we have now
looks a little bit different.
But I'm sure that many of you
have seen, at some point,
a warning that says,
"You are trying to access a site
over HTTPS
"and there's something wrong
with their certificate.
"Do you want to proceed?"
and I'm guessing that a lot of you
clicked that you want to proceed.
And possibly, and hopefully,
it was a benign situation.
Maybe you were at the hotel
or at the airport
and trying to access a page
over SSL
before clicking through
the captive portal page.
Anyways, a lot of people
have seen this,
and it is also
potentially evidence
of a man-in-the-middle attack.
So SSL relies on certificates
to actually establish
a secure connection.
And if there's something wrong
with the certificate,
the browser will show
this warning,
because it can't tell
if it's actually
an active
man-in-the-middle attack,
or if there's something wrong
with the website.
We end up
showing this warning a lot.
People click through it,
and we know that warnings
aren't a silver bullet.
So, a couple of years ago,
we actually
added a feature in Chrome
that's called
certificate pinning.
And what this does
is it actually restricts
which certificates
the browser will accept for a site,
and alerts Google
when it sees rogue certificates
for domains
that shouldn't have them.
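Conceptually, a pin check boils down to comparing a digest of the server's public key against an expected set. This is only a sketch: the function names, the pin set, and the key bytes are all made up for illustration, and real pins ship preloaded inside the browser rather than being built at runtime.

```python
import hashlib

def spki_hash(public_key_der: bytes) -> bytes:
    # Browsers pin a digest of the certificate's public key info rather than
    # the whole certificate, so a reissued cert with the same key still matches.
    return hashlib.sha256(public_key_der).digest()

# Hypothetical pin set for one domain (placeholder key bytes).
PINS = {"mail.example.com": {spki_hash(b"legitimate-key-bytes")}}

def pin_check(host: str, public_key_der: bytes) -> bool:
    pins = PINS.get(host)
    if pins is None:
        return True  # host not pinned: fall back to ordinary validation
    # A rogue certificate from a compromised CA carries a different key,
    # so its digest won't be in the pin set, and the check fails.
    return spki_hash(public_key_der) in pins
```

The key property is that a compromised certificate authority can mint a certificate that passes ordinary validation, but it cannot forge the pinned public key, which is what made the attack below detectable.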
An early version of this feature
led to detecting a large-scale
man-in-the-middle attack
targeting Google users in Iran,
presumably to monitor
citizens' email.
This is a picture taken
from a post-mortem of the incident,
showing about 300,000
unique requesting IPs
for these rogue certificates,
more than 99%
originating from Iran.
After some investigation,
we discovered the certificates
had been issued by DigiNotar,
which is
a certificate authority
which had been compromised
months earlier
and has since
gone out of business.
This was a quote
left by a presumed Iranian
about what digital monitoring
can result in for him.
I'll just read it...
...Seeing something like this
is very scary and sobering.
I'm actually half Iranian,
but have lived
my whole life in the US,
and really just had
a very different perspective
on threats to information,
growing up here,
than if I had grown up in Iran.
So, things like this
are very scary and sobering to me.
Here's the list once more.
Again, some hackers
have many of these objectives.
Some focus on one,
but almost all
think outside of the box
and approach problems
from a different point of view.
People are complex;
hackers are complex.
Now, I want all of you
to be hackers, too.
Even if you don't want
to do this full time,
I want you
to think like a hacker,
because I know that it ultimately
will make
whatever you're working on
more secure and robust,
and reliable.
I want to point
a couple of resources out
that I think would be especially
relevant for this audience.
There's tons more online.
Now, information security
and hacking
is becoming more
of an established curriculum
with certifications and degrees,
but there's so much information
that's online,
so use your favorite
search engine.
Anyway, this is the first thing
I want to point out,
and I will, of course,
share these in the slides
and on Twitter,
but this is the XSS war game.
Anyone know
what cross-site scripting is?
Awesome, OK.
So, hopefully you're using
a templating system
that takes care of
most of this for you.
But if not, cross-site scripting
is the most common type
of security bug found today, by far,
not just in web applications --
all over.
This is a training game
that not only teaches you
about cross-site scripting
and the different ways
that it can occur
in a web application,
but also guides you through
increasingly difficult levels
to exploit cross-site scripting.
And, I think it's really
the best way to understand
how, when building
the application,
you need to sanitize
and escape input.
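To make that concrete, here's a minimal sketch of a reflected XSS bug and its fix using Python's standard html.escape. The greet functions are hypothetical names for the example; a real templating system does this escaping for you automatically.

```python
import html

def greet_unsafe(name: str) -> str:
    # Vulnerable: user input is interpolated straight into markup,
    # so a <script> payload executes in the victim's browser.
    return "<p>Hello, " + name + "!</p>"

def greet_safe(name: str) -> str:
    # Escaping turns markup metacharacters into inert entities.
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>alert(1)</script>"
```

With the payload above, greet_unsafe emits a live script tag, while greet_safe emits only harmless text like `&lt;script&gt;`, which the browser renders instead of executing.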
There's actually
an intro before this
called "Learn the Web
Like a Hacker,"
that we ran
at a children's version of Defcon,
which is a large
security conference.
And I think five years old
was the youngest
that we had someone doing this.
But they learned
about all the HTML tags,
and how you can actually
build up a web page,
and do cross-site scripting
on that, too.
So, if you're
finding this challenging
or if you have any kids that you
want to also learn about XSS
and web security like a hacker,
we have a precursor to this
as well.
We published a code lab
a couple of years ago.
It's called Gruyere.
This is something I worked on
with two other engineers.
It's written in Python,
hosted on App Engine.
Why is it called Gruyere?
Well, Gruyere
has teeny tiny holes in it,
and the application is a code lab
with intentional
security holes
that you go through a guide
to learn about,
then try to find,
and then try to fix.
Why is it called Gruyere?
The second reason --
well it was actually
called Jarlsberg at first.
And this is a cheese
that has much more distinct holes.
We launched
and the Jarlsberg manufacturers
came after us
with trademark infringement.
[laughter]
And so, we had to find another
holey cheese to name it after.
Technically, I'm going to
just preface this,
because I know that everyone
always points it out to me.
Technically, not all Gruyere
has holes.
French Gruyère
must have holes,
according to French agricultural rules.
Swiss Gruyère doesn't.
But yes, that is
the backstory on that.
This was published
a couple of years ago.
And for better or worse,
it's just as vulnerable
and also just as relevant today
as it was back then,
so check it out.
It will give you more training
in cross-site scripting
as well as teach you
about some
of the other vulnerabilities
that are common
in web applications
like cross-site scripting,
cross-site request forgery,
and other types of input injection.
And once you're good
at finding web security bugs,
then go on over
to either some of Google's
Python-based applications
or other companies
that have a
vulnerability reward program
and try to make some money.
I know we've paid out
for XSS and XSRF in YouTube,
and I'm sure
there's other bugs, too.
So participate
in a reward program,
and that's kind of
the best practice.
How many of you
are familiar with pickle?
Lots of people?
Are any of you using pickle
on untrusted data?
You're all liars.
[laughter]
I am sure
that there is at least --
at least one person
in a group this large
that is using pickle
on untrusted data.
So, in the documentation,
there's a warning with a red border,
which means
you definitely shouldn't do it.
But that doesn't mean
that some people won't.
Pickle is not secure against
maliciously crafted inputs.
Again, maybe nobody in here
is doing it,
but I'm sure there are projects
on Github that are.
There was a great talk in 2011
about how to exploit pickle
and write malpickles
which are pickle-based exploits.
Despite the security warning
in the documentation,
again, I'm sure
people are using this.
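Here's a minimal sketch of why unpickling untrusted bytes is dangerous: pickle's __reduce__ protocol lets a payload name any callable to run at load time. The Malpickle class is my own harmless illustration (it just evaluates 2 + 2); a real malpickle would invoke something like os.system instead.

```python
import pickle

class Malpickle:
    # pickle uses __reduce__ to learn how to rebuild an object:
    # returning a (callable, args) pair means unpickling CALLS that callable.
    def __reduce__(self):
        return (eval, ("2 + 2",))

# The attacker serializes the object...
payload = pickle.dumps(Malpickle())

# ...and a victim who unpickles the bytes runs attacker-chosen code.
result = pickle.loads(payload)
```

Note the victim never needs the Malpickle class: the payload bytes alone carry the instruction to call eval, which is why the docs say never to unpickle data from an untrusted source.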
So, I encourage you
to verify your own usage,
or look around on GitHub
for projects
whose security you can improve.
And, that's all I had.
So, if you took a small nap,
and I bored you during the time,
wake up.
And my goal would be
for you to leave
with kind of these points.
One: hacker is a mindset
and a skillset.
Curiosity is not,
in and of itself, a crime.
Hackers (people)
are complicated.
There's no such thing
as black hats and white hats.
There's a lot
of different objectives
for why people get into this.
And there are a ton of people
who do it
just because they want
their software to be more secure
or because they enjoy
the creative challenge
of finding out
how to use software
in ways that weren't intended.
I really do think
that the hacker mindset
can be developed.
There's a ton
of interactive code labs to do it,
but really the best way,
is to think about your project
with the hacker hat on
and instead of thinking about
how to create something
or making something to work,
think about how you
might actually break it
if you did have
some nefarious purpose.
And then last but not least,
pickles are delicious
and convenient,
but dangerous to use
on untrusted input.
So, I have time for questions,
but I know we may be
a little bit off schedule,
so if that is of interest,
I'm happy to do it.
I leave it to our MC.
(presenter)
The, uh -- it looks --
Thank you very much.
[applause]
We are at time so we --
if you have questions,
discussions, come on up.
Two quick notes.
I told you yesterday,
that we would have two boards
for Open Spaces,
one for the next day
and one for the current one.
We don't have two boards.
We looked through all of our stuff,
and we only have one,
so sign ups will be each day
signing up
for that day's Open Spaces.
The PyLadies auction is tonight.
Even though it raises money
for PyLadies,
it is open to anyone.
It is down in the F rooms
at the other end of this facility,
6:30 p.m.
It's not sold out.
So you'll be able to just show up,
pay the $5 charge
for the snacks you'll be eating
and get into it.
If you have a talk
in the first tier of talks,
head on over to the green room.
Otherwise, head on over
to the Expo Hall
and have your morning snacks;
see you later.
