So with that, let me briefly introduce Professor Boneh.
It seems like every time we introduce him,
there's a new line added to his resume.
So the first thing to say is that
his research focuses on applications of cryptography and computer security.
This work includes cryptosystems with novel properties,
web security, security for mobile devices, and cryptanalysis.
Here at Stanford, he heads the applied cryptography group
and co-directs the computer security lab,
and more recently he's a co-director
and co-founder of the Center for Blockchain Research.
He's the recipient of numerous awards, including the ACM Prize and the Gödel Prize.
In 2011, he received the Ishii Award for
Industry Education Innovation. I've already touched on it.
I think Professor Boneh will touch on it as well.
But I just want to emphasize
the amount of work he does to make his research
available to the public.
In addition to the online graduate courses that he teaches,
he's co-developed the self-paced Online Certificate in Computer Security.
He's created a massively popular MOOC.
He's talking about doing more.
He's also co-written
a free graduate-level textbook on cryptography that you can find online.
So his contributions are numerous.
It's not just the webinar.
And with that,
I'm going to hand the presentation over
to Professor Boneh.
Awesome. Thank you very much,
and thanks all for joining us.
Excited to be here. Welcome to this webinar.
I guess I'll talk a little bit about some of our recent work and also welcome you to our,
our courses later this year.
So with that, let's get started.
Um, so actually we're gonna turn off the video just so you can focus on the content.
In my experience, it's easier to focus on content if
you just listen to my words rather than watch me flail my hands.
Uh, so the first thing I wanted to mention is that
we have a pretty active security lab here at Stanford University.
Uh, this is the, uh, computer security lab,
and it's part of the Computer Science Department.
So I just listed some of the faculty who work in the lab.
So myself, you're gonna hear more about my work in a minute.
Uh, Zakir works on measuring security on the Internet.
Uh, Matei works on big data and security.
Dawson works on static analysis for code.
David Mazieres works on
consensus protocols and building secure systems from insecure components.
Phil Levis works on IoT Security.
John Mitchell works on protocol design,
and Mendel Rosenblum works on virtual machines and security.
So there are a lot of
different angles of computer security being addressed by different faculty.
Uh, as you can see here, my work primarily focuses on applied cryptography, blockchains,
and web security, and that of course expands to
many different areas of security as well.
So I'll tell you a little bit about our recent work and,
um, then I'm happy to take questions and we can do a Q&A after.
I'm really looking forward to your questions,
so please go ahead and kind of submit your questions as we go through the material.
Uh, I did want to mention some of the courses that we teach.
Uh, so, let's see.
The courses span all the way
from freshman undergraduates
to the graduate program.
So you can see that
there are our core main courses on
computer security that we expect all undergraduates in computer science to take.
Uh, they're all recorded and available to you as well.
Um, we teach our
Internet cryptography class, which is
basically how to use cryptography in the real world.
More recently, we started teaching
a blockchain technology class which is a lot of fun to teach.
It's basically a combination of
distributed systems, cryptography, and of course economics and
legal aspects of cryptocurrencies.
That's a lot of fun to, uh, teach.
Actually we're teaching this, uh,
class CS251 right now this quarter.
And then we have a number of graduate courses
in cryptography and computer security as well.
Great. Um, Pax, do you wanna say something
about the graduate certificate, or would you like me to go over it?
Uh, sure. So we have two certificates,
the graduate certificate, as Professor Boneh pointed out.
There's a core set of
graduate offerings that are available online.
And so this is a certificate you can earn by taking,
I believe, four graduate offerings.
If you complete all four of them, you earn the certificate,
you get Stanford academic credit,
and you'll get a Stanford transcript.
So it is the same class that on-campus students are taking;
it's just that you're watching the lectures remotely.
And we also have a professional certificate,
and we'll circle back to this at the end.
Uh, there are links to these in your console.
The professional certificate is a little bit less rigorous.
It's still technical, just a little bit shorter.
And it's
basically self-paced online modules that teach you the fundamentals of computer security.
Great. Okay. Thank you. All right.
So with that, let's dive right in.
So I'm excited to tell you about
some of our recent work.
So here's what I'd like to cover, and hopefully we'll have enough time for
all of it. Let's just start off by talking about
second factor authentication and what are called universal second factor tokens,
some security issues with those, and ways to
potentially improve the security of these second factor tokens.
So we'll talk about that. Uh, then I'll talk about,
uh, two more topics more briefly.
I'd like to tell you a little bit about how to
accelerate ML classification when that's
being done in the cloud.
And so this is kind of a cute application of cryptography.
And of course, you know, these days,
we can't give a talk on cryptography
without talking a little bit about blockchains.
And so I'll tell you a little bit about
an approach to scaling the blockchain using a concept called an accumulator,
as well as what are called SNARKs.
So we'll discuss this at the very end.
And hopefully we'll have a little bit of extra time,
in which case there's a fourth topic I'd like to tell you about.
I'm just gonna talk through it without slides.
It's kind of a cool way of
encrypting data in the cloud.
So yeah, these are the recent things
we've been working on this past year,
so let's dive right in. All right.
So the first topic I'd like to start with is
second factor authentication tokens and
their security. This is a system called True2F that
we developed with
a couple of our students,
Emma Dauterman and Henry Corrigan-Gibbs,
along with my colleague David Mazieres and
Dominic Rizzo, who's with Google.
And this was published
last May at
IEEE Security and Privacy.
So in case you want to read more about what I'm about to say,
you can always go back to the paper and
get the full details.
Okay. So let's
get right to it.
So, uh, I'm sure you've- you all know what second factor authentication tokens are.
I have a picture of one of them here.
And it's kind of remarkable that last year,
Google put out these really
powerful press releases saying that since
they have mandated the use of these tokens on
all their work devices,
essentially their employees have no longer been phished.
Yeah, this is good protection
for long-term credentials
against phishing attacks
or malware attacks, and so on.
And so you can see these press releases kind of explain that
Google has been very successful in getting these deployed all across their campus.
And in fact, as a result,
they've had essentially zero
successful phishing attacks against them.
This is kind of a remarkable statistic.
So I just wanted to share that fact,
or at least those press releases, with you.
So let's talk a little bit about how these U2F tokens
work and what
security threats they're trying to protect against.
So the problem that we're dealing with is that,
when you log in to a remote system through your browser,
you're typing your password into the browser, but you have no idea, really:
maybe your browser is infected and there's malware that's
actually stealing your password as you type it in.
Maybe you're actually not at the right site.
You're trying to log in to your bank, but actually you're
sitting at some phishing site, and now you're just typing
your banking password into
the phishing site and you've just given away your password.
So the problem is that essentially we can't really trust
passwords that are being typed into the browser.
We're kind of putting them at risk,
and we'd like to protect those passwords.
So second factor authentication tokens,
these U2F tokens, are essentially a way to ensure that even if the browser is infected,
so this is the symbol that I'll use for malware,
even if the browser is infected,
the malware cannot steal your long-term credential.
Maybe they can get your password, but
the password is not enough to log in to your account.
So in addition,
there is this U2F token.
The U2F token
stores a secret credential as well, and in order to log in to
your account you have to use both your password
and the secret credential stored on the U2F token.
Yeah. So the U2F tokens have
a couple of security capabilities.
The first one is:
the last thing we want is for
these U2F tokens to make it possible to track you across different websites.
So even though this U2F token presents a credential to the website,
it's important that it's a different credential for every website.
So you cannot be tracked
just because you're using a U2F token.
So in this example I have down here,
you can see the U2F token on the left.
You're trying to log in to a service, and I'll use GitHub.com as
the classic service that
would use these U2F tokens.
But you can imagine the same U2F token would also be used
to log in to your bank, and you want to make sure that
the bank, working with GitHub, can't tell that it's you connecting to both locations.
It's just an
interesting privacy property that these tokens provide.
The second property they provide is what are called these counters.
And the reason for the counter is essentially to prevent the device from being cloned.
So if you leave the U2F token
on your desk,
somebody takes it and clones it and then gives you back
the original token, but now they have a clone of your token,
the idea is that they should not be able to use
the clone of the token to log in as you, because
the token presents a counter on every login that it
sends to the server.
If the service ever sees the same counter twice, or it sees a gap in the counter sequence,
that means there are two tokens being used as you, and
it sounds the alert to tell you
that your token has been cloned and you have to reset your token.
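To make the counter check concrete, here's a minimal sketch of the server-side logic. This is hypothetical illustrative code, not from the paper; note that in the actual U2F spec the counter is global per token, so real services typically only flag repeated or decreasing values rather than gaps.

```python
class TokenRecord:
    """Server-side state for one registered token (illustrative sketch)."""

    def __init__(self):
        self.last_counter = -1  # highest counter value seen so far

    def check_login(self, counter: int) -> bool:
        # A repeated or decreasing counter means some other device has been
        # authenticating with the same credential: a likely clone.
        if counter <= self.last_counter:
            return False  # reject the login and alert the user
        self.last_counter = counter
        return True
```

For example, logins with counters 1 and 2 succeed, but a clone that replays counter 2 (or anything lower) gets rejected.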
Okay. So those are kind of
the security threats that these tokens are trying to protect against.
So: A, it ensures that
theft of your password alone is not sufficient;
B, it doesn't violate your privacy;
and C, it
ensures that a cloning attack will be detected.
So I want to tell you a little bit more about how this protocol actually works.
How does U2F
actually log you in to a remote site?
And so in the U2F protocol,
there are basically two steps.
Actually, there are three steps.
The first step is to initialize the device, so let's do that.
When you initialize the device,
the device basically
generates a secret key that it stores locally.
Then, once you've initialized the device, there are two steps.
Yeah. The first step is to register the device,
the token, with the remote service.
Again, GitHub.com in this case.
So let's walk through abstractly how this works.
I'm only giving a simplified view of the protocol,
just so we understand the basic idea of how it works.
So effectively what happens is the service will send the identity of the service,
GitHub.com in this case, to your browser.
Your browser will send that to the device,
along with a certain challenge which I won't talk about here.
What the device will do then is generate a service-specific key.
Or rather, to be more precise,
the device will generate what's called a secret-public key pair: it will
store the secret key internally and send the public key
back to the service.
Okay. So what it sends over is the public key that
the service will use.
It sends a signature indicating, yes,
this is a key that was really generated by the device,
and then it sends over this handle.
So the handle is kind of interesting.
These devices don't have a lot of storage on them,
so it's unreasonable to expect them to
store the secret keys for all the services with which you authenticate.
So instead, what the device will do is
encrypt this secret key, this sk_ID.
It will encrypt it using its long-term secret key,
the SK that's on top,
and then the handle basically will contain the
service-specific secret key encrypted under the general secret key stored on the device.
And you can see this handle is gonna be sent all the way back to
the service and the service will store the handle,
uh, for the device.
Okay. So the service will get the public key,
the signature indicating that this is an authentic public key,
and the handle, which is an encryption of the sk_ID for the service.
So this is the registration phase.
Yeah, this is where you register the token with the service.
Later on, when you want to authenticate... oh, by the way,
I should mention that this public key being sent to the service is
different for every service, and that's the part that prevents
linking your
identity across multiple sites.
Okay. The next thing that happens is when you want to actually log in to a service,
log in to GitHub.com for example,
what GitHub will do is send its identity,
yeah, the GitHub.com string,
a challenge, and the handle
back to the device.
The device will use the handle to recover
the service-specific secret key, and then it
sends back a signature on the challenge saying,
yes, this is me,
proving that I know the secret key and
actually authorizing a login to the service using my identity.
You can see that there's a counter attached here;
this is the anti-cloning counter that I was mentioning.
This counter is incremented on every login to the service.
So if the service ever sees the same counter value twice,
it can tell the device has been cloned and then
the device needs to be reset.
But the point is, the service will then verify that
the signature that you see here
really is a valid signature on the challenge under the registered public key, and
if so, it allows the
login to proceed.
Okay. So that's the U2F protocol,
again a simplified version, but you see how it works.
Yeah, it has these two parts:
one for registration and one for authentication.
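To tie the two phases together, here's a toy end-to-end sketch of the simplified flow just described. It's illustrative only: a real token generates an ECDSA P-256 key pair per service and sends the public key, whereas this stand-in uses a symmetric HMAC key as the per-service credential (so the "signature" is a MAC the server verifies with a stored copy of the key), and it wraps the key handle with an XOR pad instead of authenticated encryption. All names here are made up for the sketch.

```python
import hashlib
import hmac
import os


class Token:
    def __init__(self):
        self.master_key = os.urandom(32)  # long-term device secret (SK)
        self.counter = 0                  # anti-cloning counter

    def _pad(self, app_id: str) -> bytes:
        # keystream used to wrap/unwrap the per-service key
        # (real tokens use authenticated encryption here)
        return hmac.new(self.master_key, app_id.encode(), hashlib.sha256).digest()

    def register(self, app_id: str):
        sk_id = os.urandom(32)  # fresh per-service secret (sk_ID)
        # key handle: sk_id wrapped under the master key, so the token
        # itself stores nothing per service
        handle = bytes(a ^ b for a, b in zip(sk_id, self._pad(app_id)))
        # sk_id plays the role of the verification key the server stores
        # (in real U2F the server stores the ECDSA public key pk_ID)
        return sk_id, handle

    def authenticate(self, app_id: str, handle: bytes, challenge: bytes):
        # unwrap the handle to recover the per-service secret
        sk_id = bytes(a ^ b for a, b in zip(handle, self._pad(app_id)))
        self.counter += 1
        msg = app_id.encode() + challenge + self.counter.to_bytes(4, "big")
        sig = hmac.new(sk_id, msg, hashlib.sha256).digest()
        return sig, self.counter


def server_verify(stored_key, app_id, challenge, sig, counter) -> bool:
    msg = app_id.encode() + challenge + counter.to_bytes(4, "big")
    return hmac.compare_digest(sig, hmac.new(stored_key, msg, hashlib.sha256).digest())
```

Registration hands the server a verification key and the opaque handle; at login the server sends the handle back, the token unwraps it, and signs the challenge together with the incremented counter.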
Great. So now that we understand how the protocol works,
let me just mention that there are some problems with it.
And the problem in particular is that the security model that
U2F is intended to defend against is the browser being infected.
And notice that if the browser is infected,
that does not let the attacker steal
the device's secret key, because the infection does not affect the device.
However, what's been shown is that there are
sometimes flaws in the device itself that can end up leaking your secret key.
So this is a well-publicized example from a year or two ago, where there was
an issue with how Infineon generated its public keys.
These Infineon chips were used in some of these U2F tokens.
As a result, those U2F tokens were, let's just say, not as secure as one would expect.
This actually caused a massive recall of a lot of the tokens.
They had to be reissued, and so on.
Yeah, so this is one example where the token itself can go
wrong, and that compromises the security of the U2F protocol.
But there have been a lot of other examples of issues with tokens.
For example, the token might use weak randomness, and if it does,
the secret key the token generates is not going to be
particularly secure, and a remote attacker might
be able to extract the secret key on
the device just by listening to its interaction with a remote service.
Okay. So there are all sorts of issues,
security issues with the tokens.
And so what we'd like to propose is sort of a strengthening of... oh, actually, sorry.
Let me first explain how these things can actually cause harm.
So the problem is
not just bugs in the token itself.
In fact, you know,
these tokens are manufactured in some third-party facility,
and in some sense users are expected to trust
that when these devices
appear in their mailbox, the devices themselves are
trustworthy enough to log them in to remote services.
So you may have heard there have been some issues with
supply chain attacks. So what if the device itself has been engineered,
engineered to actually leak your secret key?
So for example, what happens if the device shows up in your mailbox,
you initialize it,
but in fact the attacker
has engineered it so that even though you've initialized the device,
the attacker knows the secret key that's going to live on your device?
And if the attacker knows the secret key on your device,
they can effectively log in as you
to any site where they also know your password.
Yes, so we'd like to make sure that cannot happen.
It turns out it's actually not so difficult to defend against this, to make
sure that the secret key on the device is unknown to the attacker.
I'll show you how to do that in a minute.
Um, but it turns out even that's not enough, right?
So suppose the secret key is secure on the device.
There are even worse problems that can happen.
So I told you that for every service,
the device has to generate a public key and then send it to the service.
Well, what if the public key is generated in such a way that the bits of
the public key leak the device's long-term secret key, right?
That's another way for
a rogue device to leak the secret key that's on the device.
Even worse, what happens when the device generates a signature?
The signature itself can leak the long-term secret key.
And let me remind you that U2F uses a signature scheme called ECDSA.
In ECDSA, randomness is used to generate signatures.
Well, what if the randomness the device uses is such
that the resulting ECDSA signature leaks the device's secret key?
Again, no one can tell that this is happening,
but the remote server will learn your secret key.
Okay, so these are the kinds of concerns we have
about the security of the devices.
And our goal, of course, is to secure devices against those types of attacks, okay?
So that's where we're headed.
So, um, yeah, we're not saying this is actually going on in the real world.
It's just that it would be nice if,
as a customer, when a device appears in my mailbox,
I have some guarantee that even if the device is malicious, it
cannot leak my long-term second-factor secrets, okay.
So this is where the system True2F comes into play:
it augments U2F.
And so let me tell you what True2F does.
So just to review,
the basic U2F security model says:
''If your browser is infected, then malware on your browser cannot steal
your U2F long-term secrets.''
What True2F does is it says: ''Yes, this is great,
this is a wonderful security model, and as we saw it's very successful.
When Google deployed it, it was very successful. But we can do better.
So how can we do better?
What we'd like to do is actually augment the security model and
say that even if the device is malicious, as long as the browser has not been infected,
even in that case the user should have security,'' okay?
So the malware actually lives in the device,
and even though the device is now completely compromised,
an attacker still will not be able to compromise the user's second factor.
Yeah, that's kind of our goal.
So it's not replacing U2F,
it's basically augmenting the security model that U2F is designed to protect against.
Now, one thing I'll point out:
if both your browser and your token are compromised,
you know, then we can't help you.
We give up,
and at that point the attacker has you. If you compromise
both your browser and the token,
there's no root of trust left and we can't really protect you from the attacker.
But as long as one of them is safe and the other one is compromised,
True2F will still provide security. So how do we do it?
Well, so there are sort of two principles that we follow in the design.
So again, this is an augmentation of U2F;
it's backwards compatible with U2F,
just trying to improve the security that the token provides for you.
Okay. So the first principle is that
every response the token sends
back to the browser can be verified by the browser, okay.
The browser does not know the token's secrets.
But even though the browser doesn't know the token's secrets,
it can verify that everything the token did is correct.
Yep, so for example,
when the device generates an ECDSA signature,
the browser should be able to verify that that ECDSA signature
was generated using truly random randomness, okay.
That's the goal we're heading towards.
And the way we do that, by the way, is using
techniques that come from the world of zero-knowledge proofs,
where effectively the device can prove to
the browser that everything it did, it did correctly.
So that's principle number one:
the device will prove to the browser that it did things correctly.
So as long as the browser is honest,
the device cannot cheat.
The second principle:
there are places where the device needs randomness.
For example, when the device generates its initial secret, it needs randomness, okay?
And today we basically rely on the device to generate that randomness on its own.
In True2F, what happens is
the browser contributes randomness to the process:
some randomness is generated in the browser,
some randomness is generated on the device,
and the combination of the two sources of
randomness is used to generate the secrets on the device.
Yeah, so as long as one of them is truly random,
the resulting secret on the device will be truly random.
And the interesting thing here is that the device has to
prove that it didn't just ignore the randomness that came from the browser;
it has to prove that it actually used that randomness, okay.
And again, this goes into the realm of
zero-knowledge proofs, where it proves that it generated its
public key correctly using the randomness from
the browser, without revealing the secret key to the browser.
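As a rough illustration of this "combine two randomness sources" idea, here's a commit-then-reveal sketch. This is my own simplification: in True2F itself the token never reveals its randomness to the browser, it proves correct use of it in zero knowledge.

```python
import hashlib


# Token side: sample randomness and commit to it BEFORE seeing the
# browser's contribution, so the token can't choose its share adaptively.
def token_commit(r_token: bytes) -> bytes:
    return hashlib.sha256(r_token).digest()


# Both sides: the device secret is derived from the two shares together,
# so it's unpredictable as long as EITHER share is truly random.
def derive_secret(r_token: bytes, r_browser: bytes) -> bytes:
    return hashlib.sha256(r_token + r_browser).digest()


# Browser side: after the token reveals r_token, check both the commitment
# and the derivation. (Revealing r_token is the simplification here; the
# real protocol does this check via a zero-knowledge proof instead.)
def browser_verify(commitment, r_token, r_browser, claimed_secret) -> bool:
    return (token_commit(r_token) == commitment
            and derive_secret(r_token, r_browser) == claimed_secret)
```

The commitment step is what forces the token to actually use the browser's randomness rather than ignore it.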
So how do we do it? Well, there are three cryptographic tools that
we employ here, yeah.
So again, I'm gonna keep this at a high level.
I'll walk through these
at a very high level.
If somebody is interested at the end,
I can maybe go into a bit more detail, but for now
I'll just describe these at a high level, okay.
So the first step, you remember, is that the device has to
generate a secret key at initialization time.
We'd like to make sure that secret key
has high entropy
and that no one outside the device knows what the secret key is.
The way we do that, again like I mentioned, is
we use entropy that's generated both on the token and in the browser.
That joint entropy
is used to generate the device's long-term secret,
and then the device proves that it actually used that entropy.
Yeah, so again this uses the concept of zero-knowledge proofs, where
the device can prove to the browser that it did
things correctly without actually revealing any of its secrets to the browser.
By the way, I have to say, zero-knowledge proofs are
a truly fascinating area of cryptography.
It's kind of a remarkable fact that
anything you can prove,
you can actually prove in zero knowledge.
This is something we discuss in
our crypto class in quite a bit of depth, and
it's become
a fairly important tool, especially in the world of blockchains.
Zero-knowledge proofs allow you to
do private transactions on a public blockchain.
Yeah. So it's quite fascinating that this
tool, which has been developing in the crypto world for
20-30 years, has become very practical
and very widely needed in the world of blockchains, okay.
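To give a tiny flavor of what "proving without revealing" looks like, here's a toy Schnorr-style proof of knowledge of a discrete log, made non-interactive with the Fiat-Shamir heuristic. This is a classic textbook construction, not the specific proofs used in True2F, and the group parameters are deliberately tiny and insecure.

```python
import hashlib
import secrets

# Toy group: g = 4 generates the subgroup of prime order q = 1019 in Z_p^*.
p, q, g = 2039, 1019, 4


def challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the verifier's challenge from a hash
    return int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big") % q


def prove(x: int):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)   # one-time nonce
    t = pow(g, r, p)           # commitment
    c = challenge(y, t)
    s = (r + c * x) % q        # response; s alone reveals nothing about x
    return y, (t, s)


def verify(y: int, proof) -> bool:
    t, s = proof
    c = challenge(y, t)
    # g^s == t * y^c holds for an honestly generated proof,
    # since g^(r + c*x) = g^r * (g^x)^c
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier is convinced the prover knows x, yet sees only (t, s), which it could have simulated on its own; that simulatability is the "zero-knowledge" part.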
But that's only the first part.
So collaborative key generation is the first part.
The second part: remember, when we
register the device with a particular website,
we have to generate a public key,
this pk_ID that's specific to the website, okay.
So we'd like to make sure that the public key is random, so you cannot be tracked.
At the same time,
it needs to be verifiable by the browser.
So here we have a cryptographic mechanism we call a verifiable identity family.
What it does is give the device a way to generate a unique public key per
service, derived from its long-term secret key,
that the browser can verify was generated correctly.
Again, the browser doesn't know the secret key, but it can verify that this pk_ID,
this service-specific public key, was generated correctly and is truly random.
Yeah. So that we do
on the browser.
So that's the second step; that's during registration.
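Here's one way to sketch the flavor of such a scheme in a toy group, loosely in the style of BIP32-type key derivation. This is my own illustrative construction, not the actual True2F VIF: the token holds the master secret x, the token and browser share a derivation key k, and the browser can check each per-service public key against the master public key without ever learning x or the per-service secret.

```python
import hashlib
import secrets

# Toy group: g = 4 generates the subgroup of prime order q = 1019 in Z_p^*.
p, q, g = 2039, 1019, 4


def tweak(k: bytes, app_id: str) -> int:
    # per-service scalar tweak, derived from the shared key k
    return int.from_bytes(hashlib.sha256(k + app_id.encode()).digest(), "big") % q


# Setup: token holds master secret x; mpk and k are given to the browser.
x = secrets.randbelow(q)
k = secrets.token_bytes(32)
mpk = pow(g, x, p)


def token_derive(app_id: str):
    """Token: derive the per-service key pair from the master secret."""
    sk_id = (x + tweak(k, app_id)) % q
    return sk_id, pow(g, sk_id, p)


def browser_check(app_id: str, pk_id: int) -> bool:
    """Browser: verify pk_id using only mpk and k, never seeing x or sk_id."""
    return pk_id == (mpk * pow(g, tweak(k, app_id), p)) % p
```

Outsiders who don't know k can't link pk_id values across services, while the browser can still catch a token that substitutes a key of its own choosing.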
And finally, during login,
during authentication, we have to make sure that
the ECDSA signature is generated correctly.
And again, we use something similar to
collaborative key generation
to make sure that the ECDSA signature was generated correctly.
When we put these three components together,
the interesting thing is that this forces changes on the device, yes.
So you have to
augment the design of the device,
and you have to
augment the browser.
So we have a browser extension that sits in your browser and
interacts with the device using the tools that we just described here.
Interestingly, the service does not need to change at all.
So GitHub.com doesn't change at all.
These are really only client-side changes,
which of course makes it easier to deploy, yep.
So that's the idea.
The goal here is to augment
the U2F security model to provide security also against a malicious device.
So, you know, of course we'd like to implement and experiment with things.
So we implemented this,
as you can see here,
on a test development device.
On the right of the picture
you can see the USB token that plugs in.
But this is a development device;
obviously you can take this and shrink it into
the regular form factor of a U2F token.
But for us, the development device was
sufficient just to demonstrate that this works.
And so you can see the performance numbers here.
I won't go into too much detail here.
I'll just tell you that the top row shows that
registration takes 234 milliseconds as opposed to 204 milliseconds.
So you add 30 milliseconds to registration time to do all this extra work
to randomize and verify.
30 milliseconds is not something that the ordinary user will notice,
so it seems quite reasonable.
During authentication, you can see the time goes up from 147 to 171 milliseconds.
So again, we're adding roughly 30 milliseconds to authentication.
Again, not something that a
typical user will notice.
So what this says is essentially that using the existing hardware on
U2F tokens, just by changing the software on the tokens and
embedding an extension in the browser,
you can actually provide security against a malicious token basically for free.
The user will not notice any of these changes,
and they will strengthen the system, okay.
So we hope this will actually get deployed and adopted.
It demonstrates that using these kinds of more advanced cryptographic schemes,
we can achieve better security, okay.
So I have a link to the paper here in case you're
interested in learning more about how this works.
Um, yeah, so it's basically a backwards-compatible way to make
U2F tokens more resistant.
We hope these ideas will actually get adopted by the U2F standards.
So we'll see if that happens,
and then we can all have stronger U2F tokens.
One thing I'd like to say about the future
here: I imagine many of you actually use
U2F tokens today, and many of you are probably annoyed that you have to sacrifice
a USB port for the token, right?
You put the token into the USB port,
and then you can't use the port anymore.
Well, so I'm hoping this is coming in the short term, where these
second factor authentication tokens will actually
be built into our laptops and built into our phones.
So all we would have to do is, you know,
use, say, the MacBook fingerprint reader,
and that's how we'd authenticate with the U2F token
built into our laptop, to authenticate ourselves beyond the password.
The interesting thing is, even if we live in a world
where U2F is embedded in our laptops and in our phones,
even then we still need U2F tokens as
stand-alone devices,
because we need them for backup, right?
If somebody steals our laptop,
we'd still like to have some U2F token that we can go back to.
Maybe we store it in a safe somewhere, or maybe in our drawer at home.
We'd still like to have a U2F token that we can go back to, to
re-login and re-authenticate once we buy a new laptop.
And so these tokens are still going to be with us,
and so we still need to implement something like True2F
to protect us from tokens that we might not fully trust.
So that's interesting. The other point I'd like to make is that
once we have U2F tokens embedded into our laptops and phones,
it's interesting that exactly the same technology can be used for other things as well.
So in particular, it's fascinating that the whole True2F technology
I just described, using an untrusted token to log in on your behalf,
that exact set of tools could also be used to support a cryptocurrency wallet,
a crypto wallet, on our phones and laptops, if you are so inclined.
So what do I mean by that?
This approach to collab- collaborative key generation
is what the wallet would use to generate its,
um, ah, public key where funds will be deposited.
Again, you'd like that to be done collaboratively
between the laptop and the security token.
This verifiable address generation, well,
that's not something that's actually done in crypto wallets today,
but we'd like addresses to be verifiable there as well,
and so that could also be used for crypto wallets. And finally, you'd like
your ECDS- ECDSA signatures, which are used in many blockchain
projects, to be properly generated and not leak your secret keys.
So again, that would also be done in a collaboration between the token and the browser.
And so if you were so inclined to run a wallet on your laptop,
everything that I talked about applies equally well to crypto wallets.
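To make the first of those pieces concrete, here is a minimal sketch of the collaborative key generation idea. This is not True2F's actual protocol (which works over an elliptic curve and adds verifiability of the token's contribution); the group, generator, and parameter sizes below are toy assumptions for illustration only.

```python
# Toy sketch of collaborative keypair generation: token and laptop
# each contribute a secret share, and the combined public key can be
# computed from the public shares alone, so neither party has to
# trust the other's randomness. Parameters are NOT cryptographic.
import secrets

P = 2**127 - 1          # toy prime modulus (illustrative, not real-world sized)
G = 5                   # toy generator (assumption for illustration)
ORDER = P - 1

def keygen_share():
    """Each party independently picks a secret share and publishes g^share."""
    x = secrets.randbelow(ORDER - 1) + 1
    return x, pow(G, x, P)

# The token and the laptop each contribute randomness.
token_secret, token_pub = keygen_share()
host_secret, host_pub = keygen_share()

# The combined secret key is the sum of the shares; the combined
# public key is the product of the public shares, since
# g^(a+b) = g^a * g^b. Either side can check this without ever
# learning the other side's secret.
combined_secret = (token_secret + host_secret) % ORDER
combined_pub = (token_pub * host_pub) % P

assert combined_pub == pow(G, combined_secret, P)
```

The point is that a backdoored token cannot bias the combined key on its own, because the laptop's share is mixed in.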
So it's interesting when you work on one thing for authentication
and it turns out to be applicable to a completely different and unrelated area.
Yeah, so that was kind of a coincidence that, um,
we were- were happy that happened.
Okay, that's all I wanted to say about U2F tokens,
um, hope you guys have some questions,
I'm happy to discuss this in more detail later on.
But I want to switch gears now and talk about something completely different, okay?
So now for something completely different.
So the next thing I'd like to talk about is something that we did
a few months ago on a way to do- a way to outsource um,
Machine Learning classification to the Cloud.
So this is a system called Slalom that was developed with one of
my PhD students; the paper appeared recently at ICLR.
Let me explain what the problem is,
so we're shifting gears to something completely different,
but again I'd like you to know that these tools exist and I hope
these tools will be useful to you in your- in your job.
Okay, so what is the problem we're trying to address here?
So, um, so here's the- here's a very common situation.
Imagine you have the task of classifying images,
I don't know maybe you have a database of thousands of images
and you'd like to run a Machine Learning classification model on those images.
So you've already built your model, um,
you just like to run it on those images.
So very often what people do is they will upload the model to the Cloud.
The Cloud has, you know,
racks of GPUs that are ava- available for classification.
So you can run your model on the thousands of images that you
have using the Cloud's GPUs and get the results quickly.
Yes, you can see the input is x, the output is y.
You know everybody's happy,
the Cloud generates the results quickly because it has all the GPUs,
maybe you don't have the GPUs, um, ah,
and so you leverage
the Cloud's hardware and you get classification done quickly.
There's only one problem with this picture, which is, well,
you know, not everybody fully trusts the Cloud,
and so, in this picture, basically you have to upload your model to the Cloud,
and you have to upload all your data to the Cloud.
The question is can we do any better?
Yes, so what we'd like to do is we'd like to A,
make sure that we have integrity.
In other words, the results we get back from the Cloud are correct, right?
So we can trust that the Cloud correctly ran the classification algorithm.
We'd like to have privacy, right,
where the Cloud can't actually see the data that we're trying to classify,
and we might also like to have model privacy, where
the Cloud doesn't even learn the model that we're using.
So a very common approach for doing that-
this is again- this is a very standard,
ah, concern in industry, you know,
we're- we're outsourcing all of our crown jewels to the Cloud,
and we'd like to have some better control over what the Cloud does with all this data.
So a very common approach, or at least an approach that's up and coming, is, well,
let's try to, um,
let's try to limit what Cloud administrators can see by using hardware enclaves, okay?
So imagine what we do is we put everything in an enclave.
So what's an enclave? I imagine many of you have heard, for example, of Intel SGX.
So that's a standard enclave architecture where the idea is that, um,
inside the processor you can sor- sort of create an enclave where, ah,
the code running in the enclave and the data that
the code operates on are sort of tamper-proof,
ah, so the outside world can't see
the data and cannot modify the data and cannot modify the code.
And so this allows you basically to
encrypt your model and only decrypt it inside of the enclave,
encrypt your data and only decrypt the data inside of the enclave,
run the classification job inside of the enclave,
and send the results encrypted back to the client;
the client will decrypt and get the results in the clear.
So by doing this, everything in the Cloud is encrypted except what's
inside of a hardware enclave, at which point
Cloud administrators can't actually see the model,
they can't see the data, and they can't interfere with,
ah, the classification process.
There's only one problem with this picture:
the Cloud invested all this money in building the GPU infrastructure,
and now that we've put everything inside of an Intel enclave,
the GPU infrastructure is
sitting there completely idle.
We can't use it anymore because everything has to run inside of the enclave.
By the way, I wanted to mention that Intel SGX is only one enclave architecture;
there are many other enclave architectures available.
AMD has an enclave architecture,
RISC-V has an enclave architecture,
and so these enclaves are going to be all around us.
They're going to be all available,
and we might as well- we might as well, ah,
put them to use, and this is one example where we'd like- we'd like to put them to use,
but you can see the problem: GPUs don't have enclaves.
Yes, and so we'd like to take advantage of
untrusted GPUs even though everything is operating inside of an enclave.
Okay, so with this picture we have security but now we've lost performance.
We have a trade-off here between security and performance, and these kinds of
trade-offs are exactly wher- where cryptography can come in and help,
and so, um, what Slalom does,
is effectively a cryptographic protocol between
the enclave and the GPU rack that allows the GPU to
speed up the classification process without looking at
the data in the clear and without being able to tamper with the results.
Okay, so let me explain what I mean by that.
So the classification process is essentially running through the layers of a neural net.
Every layer of a neural net is some sort of a linear process, yes,
kind of a very big matrix-vector product.
So what we can do is effectively um,
do everything inside of the enclave where things run slowly.
But the matrix-vector products that are done at the different layers of the neural net,
those we're going to outsource to the GPU, but we're gonna do it in
such a way that the data being outsourced to the GPU is
encrypted, so the GPU cannot see the data in the clear, and
the results that come back from the GPU can be verified quickly by the enclave.
Thus we're not relying on the GPUs for
integrity, and the GPU itself does not see the data in the clear.
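To make this concrete, here is a toy sketch of the blind-outsource-verify pattern. This is not Slalom's actual implementation; the field, matrix sizes, and batching below are illustrative assumptions.

```python
# Toy sketch of the Slalom idea: the "enclave" blinds its inputs with
# precomputed one-time pads, the untrusted "GPU" does the expensive
# matrix-vector products, and the enclave checks the whole batch with
# one cheap Freivalds-style random linear combination.
import random

P = 2**61 - 1  # toy prime modulus

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) % P for row in W]

def blind(x, r):
    return [(a + b) % P for a, b in zip(x, r)]

def unblind(y_blinded, Wr):
    return [(a - b) % P for a, b in zip(y_blinded, Wr)]

n, m, batch = 4, 3, 5
W = [[random.randrange(P) for _ in range(m)] for _ in range(n)]

# Offline, the enclave precomputes pads r_i and W @ r_i for each query.
pads = [[random.randrange(P) for _ in range(m)] for _ in range(batch)]
Wr = [matvec(W, r) for r in pads]

# Online: blind each input, send it to the GPU, unblind the responses.
xs = [[random.randrange(P) for _ in range(m)] for _ in range(batch)]
gpu_out = [matvec(W, blind(x, r)) for x, r in zip(xs, pads)]  # untrusted step
ys = [unblind(y, wr) for y, wr in zip(gpu_out, Wr)]

# Freivalds-style check over the batch: combine the queries with random
# coefficients s_i and verify W @ (sum s_i x_i) == sum s_i y_i with a
# single trusted matrix-vector product.
s = [random.randrange(P) for _ in range(batch)]
combined_x = [sum(si * x[j] for si, x in zip(s, xs)) % P for j in range(m)]
combined_y = [sum(si * y[i] for si, y in zip(s, ys)) % P for i in range(n)]
# Passes with high probability only if the GPU answered honestly.
assert matvec(W, combined_x) == combined_y
```

The point is the amortization: the enclave pays one trusted matrix-vector product to check the whole batch, instead of one per query.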
The one issue here is that for this to work
the model- the GPU does need to know what the model is.
Okay, so in fact, in Slalom the model does have
to be public, so we don't achieve model privacy,
but we do achieve integrity and privacy.
Okay? So this is well, I hope this is relevant um,
to things you are trying to do in your own organizations,
and if this is something you're interested in please come talk to us,
we'd be very happy to help deploy this in the real world.
So let me tell you, just in terms of performance, what we achieve by doing this.
So the experiments we did used Intel SGX and an Nvidia GPU,
and the comparison here is comparing,
effectively, Slalom, which uses the SGX enclave along with the GPU,
to just doing everything inside of the enclave without using
the GPU, which is what you would need to do today if you were gonna do things securely.
And so we have here three examples of neural nets:
VGG, MobileNet, and ResNet.
Those are standard image classification neural nets, um,
and you can see the- the bar on the left,
ah, means that we just use Slalom for integrity.
So we just wanna make sure that GPUs are giving us
the right results back, but we don't care- care about privacy.
You can see we're getting an order of
magnitude speed-up over doing everything inside of the enclave,
and the bar on the right shows us we do integrity and privacy.
You can see, ah,
the first number on the slide is supposed to be 10, not 1.
Yes, so we're still getting an order of magnitude improvement over
doing everything in the- in the enclave; somehow the 10 on the slide became a 1.
It was supposed to be a 10, not a 1.
So you can see that these kind of methods really do speed
things up over doing the- everything inside of an enclave.
And so again, I hope this is something that you can put to use
and we'd be more than happy to help make it real.
One thing that I wanted to point out is security is never ever,
ever free and so if you compare this to doing things insecurely, that is,
just doing everything on the GPU so that there's no data privacy and no data integrity,
things are much faster.
So even Slalom, even though we're able to outsource a lot of the work to the GPU,
Slalom is still slower than doing everything on the GPU.
So, uh, that's, ah,
my brief description of Slalom and now we're gonna move on to yet another topic.
Yeah, so I wanna tell you a little bit about blockchains.
I guess these days we can't give a cryptography talk and not talk about blockchains.
So let me tell you about, we have many, many different,
uh, cryptographic blockchain projects that we're doing.
You know the- it's amazing, the blockchain is
an incredible consumer of advanced cryptographic techniques,
so we're having a lot of fun designing cryptographic techniques that can then be used,
uh, by blockchain projects.
Here I want to tell you about just one of those,
um, one of those projects,
it has to do with scaling the blockchain, uh, which again
is- is a tool that I think many of you should know about,
uh, so let me explain how this works.
So first of all,
let me remind you how the blockchains- how blockchains work today.
So this is how- this is the typical picture for a public blockchain.
So Bitcoin, Ethereum, and many, many others.
Effectively, you know, we have a set of miners that create blocks on
the blockchain, and then we have a- a set of users that create transactions.
The way transactions are submitted to the blockchain today is effectively
by the users sending all their transactions to the miners.
The miners then verify
all these transactions and the transactions get posted onto the blockchain.
Effectively, this is kind of, uh, how things work.
It's not- it's not exactly how things work, but you can think of it,
um, abstractly as though that's what's happening.
So, everybody verifies, uh,
transactions, and then those transactions get- get posted.
So you look at this picture and you say,
well, this is- that's so odd.
Why is everybody verifying- verifying every transaction? Uh,
it seems like it would be enough for
just one person to verify the transaction, and then everybody else,
potentially, could trust that one person.
Well, today we can't do that because we don't have
a single- we don't want to have a single point of trust.
So this approach, of course,
is expensive and creates a lot of replicated work.
So the question is, can we do better? Yeah, can we do better?
And the answer is we can,
again using crypto magic.
Okay. So let me talk a little bit about this crypto magic that's used here.
And this is a beautiful, beautiful idea due to Barry Whitehat from last year;
it's a project called Rollup.
And the idea here is to have basically
one server, called the Rollup server, that's gonna verify all the transactions.
So here the picture now is,
all the transactions will get sent to this Rollup server.
Uh, however, the Rollup server is the one that verifies the transactions.
Wait a minute, when I- when I say verify transactions, what I mean is that,
you know, money is not being created,
money is not being lost,
there is enough balance to cover the transactions by the payers,
uh, and so on and so forth.
Basically, the rules of the blockchain are being followed.
So the Rollup server will verify that all the transactions are valid,
but how are miners going to trust that that was done correctly?
And the answer is, the Rollup server will verify the transactions and
produce what's called a short cryptographic
proof that that verification was done correctly.
Okay. So this is done using an absolutely magical cryptographic,
uh, tool called a SNARK.
SNARK stands for Succinct Non-interactive ARgument of Knowledge.
Um, you know, it's a complicated, uh,
phrase, but what it does fundamentally is not difficult to- to explain.
Effectively, this proof pi is a SNARK that proves that the Rollup server did
the verification correctly, and this proof pi
is short and is very easy for the miners to verify.
Yeah, so the Rollup server does all the work,
and the miners can very quickly verify that the proof is correct.
Okay. So what happens next is, this,
um, proof gets sent- you know,
a summary of the transactions plus the proof gets sent to the miners,
and now the miners just verify the proof, and that is very easy to do.
Yeah. If you like, uh, a little bit of math here,
uh, if the time to verify
the transactions is T,
the time to verify the proof is logarithmic in T.
Yeah. So effectively, it's
an exponential speedup in terms of the work that the miners need to do.
Yeah, we went from linear time to verify
all the transaction- transactions to logarithmic time to just verify the proof.
Yeah, this is the magic of SNARKs, and that- that's what the Rollup server does.
Okay. So in effect, if you batch 1,000 transactions together into one proof,
effectively, this is a 1,000x
reduction in the amount of work that the blockchain has to do.
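To make the savings concrete, here is a toy cost model of the claim above; the unit costs are purely illustrative assumptions, not measurements.

```python
import math

# Toy cost model (assumed unit costs): without rollup every miner
# verifies every transaction; with rollup each miner only checks one
# succinct proof, whose verification time is logarithmic in the total
# verification work T.
n_tx = 1000
t_per_tx = 1.0                 # assumed cost to verify one transaction

T = n_tx * t_per_tx            # per-miner work without rollup: linear in n_tx
t_proof = math.log2(T)         # per-miner work with rollup: O(log T)

print(f"without rollup: {T:.0f} units, with rollup: ~{t_proof:.1f} units")
```

Under these assumed costs, each miner's work drops from 1,000 units to roughly 10, which is the sense in which the speedup is exponential in the batch size.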
So it's kind of an amazing way to scale, uh,
to scale the blockchain, and I can tell you there are actually a number of projects
en route now to deploy this, so we'll have this running soon.
Okay. So, uh, so this is beautiful, beautiful, beautiful idea.
Uh, there's one problem and the problem is generating the SNARK proof,
generating the proof pi is kind of expensive,
expensive in terms of computing time.
So the Rollup server effectively would have to use
lots and lots of GPUs to generate this proof pi
in a reasonable amount of time, so that the transactions along with
a proof can be posted to the blockchain without too much delay.
Right? If the- if the Rollup server took 24 hours to generate the proof,
that would be impractical, right?
Because, um, the transaction would get posted, uh,
to the Rollup server but it would only appear on the blockchain
24 hours later, while we'd like to have it appear on the blockchain right away.
So we'd like to have these proofs be as fast and
as cheap as possible to generate and that's basically,
uh, where, uh, our work comes in.
So we have a way, uh,
to speed up this proof generation by an order of magnitude over,
over what Rollup does.
Specifically, the cryptographic tool that we use is what's called an accumulator.
I don't- I don't think I'm going to have time to explain what an accumulator is
but it's one of these things that we discussed in the crypto class.
So if you want to learn more you can either read our paper or,
you know, again we're very happy to discuss these things in the class.
Uh, so these are, again,
kind of very pretty cryptographic ideas that can-
can be used to scale up things like
a cryptographic blockchain, but potentially
they could be useful in other scenarios as well.
All right. So I was thinking of telling you about
one more topic but I see that we're at time now, so, uh,
maybe I'll just mention it as a teaser and if somebody is interested I can answer this,
I can talk about it in the Q&A session.
Uh, it's basically a way to do, ah,
uh, key rotation in the Cloud more efficiently than what we can do today.
Uh, so I can either talk about that in
a future seminar- webinar or I'd be happy to point to our papers that,
uh, that describe this.
But I think with that I'm gonna stop.
I hope you enjoyed this, uh, short presentation.
It was a quick- quick review of things that we've
talked about, and the tools that are,
that are used in the work that I described
are the things that we cover in our various cryptography classes.
So if you're- if you want to learn more,
I'd be- you know, I'd be thrilled and happy to see you in any of our classes.
So I will stop here and hand it over back to you- [OVERLAPPING]
Thank you. Er, thank you, Dan.
So with that, I'm going to turn it over to some Q&A with Professor, uh, Boneh.
So- and then maybe coming to some general questions.
So the first place,
uh, to start would be, um,
the U2F, ah, tokens.
So a couple of questions came in regard to compromised hardware.
So just in general,
you said that if the hardware and the browser
are both compromised, then you're out of luck.
What do you see as the future in terms of supply chain attacks and protecting hardware,
um, and- and verifying that hardware is not affected when it comes,
sort of, brand new?
I think that's, uh, that's a really wonderful,
wonderful topic and it's a wonderful topic
actually for future research- research as well.
So you probably have heard that there have been
supply chain attacks on, ah, motherboards, where,
uh, extra chips were planted on
the motherboard and would have opened up a backdoor if those boards had been deployed.
Um, and so it's pretty clear that supply chain attacks on crypto- on,
uh, computing hardware in general,
uh, are coming, and we need- we need to do something about this.
We focused here specifically on supply chain attacks
on one type of hardware namely the U2F token,
but I think the principles we developed here would apply more generally, right?
So the principle is everything that the hardware
does needs to be verified by a different piece of hardware.
Yes. And, uh, so that- that principle I think, uh,
can take us a long way although there's a lot of
research that needs to be done to make sure this,
uh, this actually is- is possible and works.
Uh, so in particular, if we're dealing with, uh,
hardware that manages secrets,
it's much harder to have another piece of hardware verify that, uh,
the first piece of hardware is doing its job correctly,
but that's where the- the role of zero-knowledge proofs steps in.
Yeah, you can prove that you did things correctly
without revealing anything about the secrets that are on the hardware.
If the hardware doesn't manage any secrets,
you know, then there's, uh, then there's, uh,
there's a paradigm called N-variant programming where you basically run
multiple copies of- of the code, potentially on different,
uh, pieces of hardware, and you compare those one next to the other.
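A minimal sketch of that redundant-execution idea; the device functions here are hypothetical stand-ins, not a real hardware-verification framework.

```python
# Run the same computation on several independent "devices" and only
# accept a result that a majority agrees on, so a single compromised
# unit can't silently corrupt the output.
from collections import Counter

def majority_vote(results):
    """Return the result most devices agree on; fail if no majority."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority -- hardware disagreement detected")
    return value

def honest_unit(x):
    return x * x

def compromised_unit(x):
    return x * x + 1  # silently tampers with the result

# Two honest replicas outvote one compromised replica.
devices = [honest_unit, honest_unit, compromised_unit]
print(majority_vote([run(7) for run in devices]))  # -> 49
```

This is exactly why it's expensive: every unit of work is paid for two or three times over, which is the trade-off mentioned next.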
So this is all very expensive.
Yeah, obviously, you know,
triplicating your data center and running everything three
times is super expensive, but, uh,
you know, if we can't trust the supply chain,
those are the types of steps that,
uh, would have to be taken.
So, you know, this is gonna take-
I think it's the beginning of a long, long line of research:
what's the best way to defend against, uh,
malicious hardware? And I think we are only at the beginning of this.
I- I wish I had, uh,
the- the full answer here,
but there's a lot more research that's needed here.
And yeah, I mean, that's the sort of things that,
uh, we love to work on.
And you'd mentioned sort of building in the- the token into the hardware itself,
into the computer, into the laptop.
I mean, are there other ways to do this,
could you use a Bluetooth enabled sort of authenticator?
Oh yeah. A lot of these,
a lot of these U2F tokens today are already Bluetooth-enabled,
uh, or if not Bluetooth,
other wireless protocols are enabled on the token itself,
and that's needed because of, uh, phones, right?
Phones don't have USB ports.
[LAUGHTER] So if you're gonna use a U2F token with
your phone, they have to communicate over some wireless protocol.
And so these U2F tokens already supports- support wireless protocols.
And you mentioned that they have limited storage capacity,
is there an opportunity for them to be infected, you know,
when you plug it in- into an infected machine,
is there a way for information to be stored in
the U2F token to then compromise it at that point as well?
Well, again, potentially. These U2F tokens are designed to, uh,
e- effectively not be compromised by the machine they're plugged into, but of course,
you know, hardware and software is hardware and software; it might happen.
Although presumably, if they're well-designed,
these devices are simple enough that, uh,
they're not going to be affected by malware running on the laptop.
There is concern about the other way around, right?
There's concern that you order your U2F token,
it appears in- in your mailbox,
you plug in into your laptop and the U2F token actually has
malware on it and it might affect your laptop, uh, that way.
So there is some, there's quite a bit of concern about that.
Uh, we basically try to make sure that the U2F token cannot leak
your second-factor secrets but
the U2F token might just infect your machine outright, right?
And that is, uh, again, something that's, uh-
that's a matter of operating system security, where the operating system on
your laptop needs to defend itself against tokens plugged into USB ports.
Um, and that's just a matter of time.
I mean, things get- things, uh,
get locked down over time. We're not there yet.
You know, today if you find a USB token in the parking lot,
the last thing you wanna do is plug it into your laptop.
Yeah. The best thing to do is give it to lost and found, [LAUGHTER] I suppose.
Um, and- and so, uh, so we're not there yet.
We still are not at the point where we can plug, uh,
untrusted devices in our laptop and assume things will just work.
But hopefully, operating systems will get to the point where this is- this is possible.
I think, uh, given time,
I wanna move on to the next one.
So, uh, Slalom, am I pronouncing it correctly?
Yeah. Slalom, yes.
So, uh, just a couple of questions.
And now this is where it's, um,
hitting against my knowledge.
So there's a question saying that there are a number of works on
private inference but not a lot on private training.
Can you comment on this?
What are some limitations on private training that we need to overcome?
Actually, there's a- there's- there's a fair amount of work on private training.
Uh, this- this falls into an area called federated learning.
And so this is something that you can- that, you know,
our audience can Google and,
uh, and read some papers about this.
Uh, so there are some beautiful examples of
private datasets that you would like to train on,
but those datasets can never be combined into a single location.
So there are protocols for doing federated learning.
Um, the results today are actually- [LAUGHTER] obviously,
they're not as, uh,
accurate as if you do centralized learning.
Um, but this- this- this is possible.
By the way I- I should say that another aspect of
privacy during training time is where you'd like to
train and make sure that the model that you produce at
the end of the training process doesn't leak the training data.
Yeah. And there's a huge amount of work on that.
Uh, so this is another a- aspect of private training,
um, and this is where, uh,
differential privacy techniques come into play.
There's actually again, a fair amount of work on this.
Again, uh, lots of
very, very pretty ideas on how to make
sure that the model does not leak the training data.
Um, that I woul- I'd be happy to go into
more detail but that actually is a topic for a whole other webinar because this is a,
you know, privacy and security of training is like
a pretty big topic in its own right, okay?
[OVERLAPPING] Anyway, I- I mean, maybe I'll mention one more sentence there.
You know, there have also been beautiful attacks,
very interesting attacks on the training process, where literally,
um, corrupting one data item out of
a huge dataset is enough to completely poison the training process.
It's quite a remarkable- remarkable,
um, uh, result there.
And so that could be a topic.
You know, security in machine learning,
um, is a huge area.
This actually goes by the name of adversarial machine learning.
Um, and it's all about how do we attack and defend the training
process, as well as the classi- classification process.
We could easily do a whole webinar just on that.
That's a huge, huge area now.
That's great. I- I'm making a note of that.
[LAUGHTER] Sure, sure.
Um, then let's move on to blockchain. Uh-
Sure.
-so blockchain- you mentioned this I think within zero-knowledge proofs.
Uh, so the question is, right,
when you're publishing something to- to the blockchain, it's public,
it's currently open to everyone.
How do you do that in a way that's encrypted or that's private?
Oh, you're asking the question about how do you run,
uh, private transactions on a public blockchain?
Yeah, yeah.
Yeah, so generally- great, I love that question.
[LAUGHTER] So ge- ge- that's generally kind
of the area where- where zero-knowledge proofs shine.
Yeah. So the idea there is instead of putting your transaction data on the blockchain,
what you actually put on the blockchain is a commitment to your transaction data.
You can think of it as sort of, uh,
an encryption of your transaction, but where there is never a need to decrypt.
Yeah. So you put a commitment of your transaction on
the blockchain that leaks nothing about your actual transaction.
And then, um, what you do is you- you provide, uh,
in addition to the commitment,
a zero-knowledge proof saying that- that this transaction is valid:
whatever is committed is in fact
a valid transaction relative to the current state of the blockchain, yeah?
So this allows the blockchain to sort of, uh,
be universally or publicly verifiable.
Everybody can verify that the transactions are- are correct.
So money is not being created,
money is not being lost, and the rules are being followed.
Um, but they learn nothing about the content of the transaction, okay?
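To make the commitment idea concrete, here is a toy hash-based commitment. Real private-transaction systems use commitment schemes compatible with the zero-knowledge proof system, so this is illustrative only.

```python
# Toy commitment: commit = H(randomness || transaction). The
# commitment reveals nothing useful about the transaction, but the
# committer can later open it, and anyone can check the opening.
import hashlib
import secrets

def commit(tx: bytes):
    r = secrets.token_bytes(32)                 # blinding randomness
    c = hashlib.sha256(r + tx).hexdigest()
    return c, r                                 # publish c, keep r private

def verify_opening(c: str, r: bytes, tx: bytes) -> bool:
    return hashlib.sha256(r + tx).hexdigest() == c

c, r = commit(b"pay 5 coins from A to B")
assert verify_opening(c, r, b"pay 5 coins from A to B")
assert not verify_opening(c, r, b"pay 500 coins from A to B")
```

In the private-transaction setting, the opening is never published; instead, a zero-knowledge proof convinces everyone that the committed transaction follows the rules, which is the point made above.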
So this is- this is like a killer app for,
um, for zero-knowledge proofs systems.
You know, again, I wish I could tell you more about that in this short webinar.
It's again, a wonderful, wonderful area.
Um, you know, in our blockchain class we're literally just doing this right now.
So we just devoted three weeks to explaining how these, uh,
zero-knowledge proofs work and how they can be used,
um, to do private transactions
on the public blockchain.
I can kind of sketch the idea,
but it's not something that I can explain in one minute.
It's- it's, uh, a deeper concept to explain.
Um, and for that, you know, I hope, I really hope to see you in
our classes where we go into much more depth about this.
So one- one concept of the blockchain is that you have
the data distributed, so that if one server is corrupted,
it- it doesn't necessarily corrupt the others.
Am I- I may be misunderstanding this,
but by taking the Rollup server approach,
are you centralizing the information back?
No, no, so we're not. We're not. Yeah. So that's the whole point.
The Rollup server is not a trusted server.
Even if the Rollup server is corrupt,
it cannot issue fake transactions, right?
Because for every transaction that it, uh, produces,
it has to produce a proof that that transaction is valid.
So this is- this is really important to understand:
the Rollup server is a compute server.
It does not do anything trusted.
Everything that it does is publicly verifiable.
That's the whole point of the rollup server.
And what we were interested in is basically, how do we speed that up?
How do we make it easier to produce these- these proofs?
Are there any current deployed blockchains using a rollup server that you're aware of?
Uh, so in fact,
Rollup is being- uh,
it's what's called a layer-2 mechanism.
So it's being implemented on top of the Ethereum blockchain.
And there are a number of projects actually underway to,
uh, to make this available.
You'll see, you know,
later this year or maybe early next year,
you'll see some launches of these projects running.
In fact, we're- we're over time.
So I wanna be respectful of [OVERLAPPING] your time and everyone's time,
just two quick last questions.
The first one because everyone was asking this,
um, any update on,
uh, Crypto 2 on [OVERLAPPING]?
Oh, boy. Oh, my God. That's a- [LAUGHTER] okay, excellent question.
Yeah. So I guess Crypto 1 is- is,
uh, a MOOC on, um, cryptography.
And I promised to have a more advanced MOOC,
called Crypto 2. Uh, so that is coming.
Yes, I'm working on it, I promise.
Uh, however, I am also working on a textbook on cryptography.
It's a freely available textbook, available to anybody.
Uh, it's available at cryptobook.us.
Uh, free textbook.
Uh, and as soon as the textbook is finished,
uh, Crypto 2 will launch.
Basically, everything in Crypto 2 is based on what's in the textbook.
So I wanna finish the textbook first and then we will do- uh, then I will do Crypto 2.
So hopefully it's another year away; we're getting there.
Um, but yeah, yeah Crypto 2 is high on my priority list,
so I, um, I hope not to disappoint.
Well, I'm glad- we've certainly got a lot of questions about it.
So I- I guess the last thing that I'll ask as we sign off is:
when you're looking at the horizon of cryptography and computer security,
what are you most excited or scared of,
in the next, let's say, four years?
Oh, my God, that's, uh, there's a lot of topics.
So I think, uh, adversarial machine learning is- is a pretty exciting area.
This is where, effectively, you see- you know,
the community has shown that machine learning
algorithms work fantastically well on random data.
They don't work so well on adversarial data.
And the question is how to make them robust?
This is a big, big area for computer security.
O- obviously quantum computing is something that we all, uh, are interested in.
So quantum computers, as you know,
will break a lot of the Crypto that's deployed today and the question is,
um, what is the best cryptography to move to?
So there are a couple of avenues to explore.
And so that's actually being actively worked on.
Uh, I guess I promised to talk about quantum computing
but we will do that in a future webinar.
Um, and, yeah, those are- I think those are- uh,
and- and I foresee, of course,
the blockchain generates a lot of interesting, uh, cryptographic questions.
So that's another fun- really fun area to work in.
Perfect. Well, I wanna thank you so much, Professor Boneh,
for your time today and the excellent presentation,
and we appreciate everyone who joined us remotely.
And we, uh, look forward to future webinars.
We already have a couple of topics it sounds like?
Yeah. So thank you very much,
everybody, and thanks for joining us.
Bye.
