SPEAKER: This is CS50.
DAVID MALAN: Hello, world.
This is episode two of the CS50 podcast.
My name is David Malan,
and I'm here with--
COLTON OGDEN: Colton Ogden.
You know, in the last couple episodes--
just to get us kicked off here--
we've had this theme of robocalls.
And recently I haven't been receiving
any robocalls-- at least not as much.
How about you?
DAVID MALAN: I'm sorry.
Yeah, I've stopped calling you.
I haven't actually.
After our scintillating segment
on episode 0 of CS50 podcast,
I actually installed an app that
theoretically filters out some calls.
I don't know if it's
actually been working,
but I have gotten zero robocalls
in at least the past week, which
was pretty remarkable.
COLTON OGDEN: What is the app?
DAVID MALAN: It's
something from my carrier.
You just download it from the
app store and it just works.
But apparently iOS-- the operating
system running on iPhones--
has some kind of support
for these third party
apps so that your calls get
temporarily routed to the app.
It then decides yay or nay, this is
spam, and then lets it through or not.
COLTON OGDEN: So this is kind of
similar to that Google service
they provided where you would have a--
you would have Google basically
act as a broker between you
and someone trying to contact you.
DAVID MALAN: I suppose.
This is client side, though.
This is local to your phone, I believe.
So it's your phone
making these judgments
even though the phone
app contacts a server
to find out the latest
blacklist of phone numbers
that are known to be spam callers.
COLTON OGDEN: OK.
So a little more secure.
You're not putting things necessarily
all in the hands of somebody else,
but it's still solving the
problem in kind of the same way.
Kind of a--
DAVID MALAN: Reducing it at least.
COLTON OGDEN: --level of indirection.
DAVID MALAN: Yeah, indeed.
And as I've said before,
I mean, frankly, I
could filter out 90%
plus of my robocalls
by just ignoring anyone who has the
same first six digits as my phone number
because this is, again, a sort of
scheme that people use to trick you
into thinking a neighbor is calling.
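The neighbor-spoofing heuristic David describes can be sketched in a few lines. This is just an illustrative filter, not any carrier's actual implementation, and the phone numbers are invented:

```python
def looks_like_neighbor_spoof(my_number: str, caller: str) -> bool:
    """Flag callers sharing the same first six digits (area code plus
    exchange) of a ten-digit US number -- the 'neighbor' spoofing trick."""
    mine = "".join(c for c in my_number if c.isdigit())[-10:]
    theirs = "".join(c for c in caller if c.isdigit())[-10:]
    return (
        theirs != mine                # not literally your own number
        and theirs[:6] == mine[:6]    # same area code and exchange
    )

print(looks_like_neighbor_spoof("617-495-1000", "(617) 495-9999"))  # True
print(looks_like_neighbor_spoof("617-495-1000", "212-555-0123"))    # False
```

A real filter would also consult a server-side blacklist, as described above, rather than rely on this prefix check alone.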
COLTON OGDEN: Right.
DAVID MALAN: And for
those unfamiliar, robocall
is typically a phone call generated
programmatically these days
by software or by some kind
of computational process
that's actually calling
you automatically
as opposed to it being a human
on the other end of the line.
COLTON OGDEN: Or David
writing a script in front of--
DAVID MALAN: I told
you I'd stop and I did.
COLTON OGDEN: Away from the
robocalls, because we definitely
touched on that a lot in
the last couple of weeks,
one other thing that we have touched a
lot on and is still coming back to haunt us
is news regarding Facebook
and the sort of security faux
pas they've been committing
in the recent past.
DAVID MALAN: That's a
very nice way of putting
it-- the Facebook-- the faux pas.
COLTON OGDEN: Faux pas.
The thing that they
recently did was that--
and there's a couple of things
we have here on our docket--
but one of the things that we'll
start with is the plain text ordeal.
And they'd claimed, supposedly, they had
10,000 or 100,000 plain text passwords,
but it turns out they actually
had a few more than that.
DAVID MALAN: Oh, it was
like 11,000 or 12,000?
COLTON OGDEN: No, it was millions.
DAVID MALAN: Millions of
passwords stored in plain text?
COLTON OGDEN: Millions.
Instagram and Facebook.
DAVID MALAN: Jeez.
That's a lot of passwords.
And you'll recall that
this is apparently
the result, as best people can
glean, of some kind of logging
process or something like that
where it's not an attack per se.
It was just some poor
internal process that
was writing these things in
plain text to log files.
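By way of contrast, the standard practice is to store only a salted, slow hash of each password, so that not even the service itself can recover the plain text. A minimal sketch with Python's standard library (real deployments would use a much higher iteration count or a dedicated scheme like bcrypt or scrypt):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only this, never the raw password, is stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The failure mode in the story above is orthogonal to this: even a service that hashes passwords correctly can leak them if some logging path writes the raw input to disk first.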
But of course, I gather
from the articles
that I've seen, that
this has been happening
maybe since like 2012 in some cases.
And I think the risk there--
because correct me if I'm wrong,
they haven't required all such
users to change their passwords,
at least not yet--
COLTON OGDEN: That's correct.
DAVID MALAN: So they're
notifying folks, but even then,
if this data were sitting on
servers for that many years,
you can't help but wonder how
many humans may have had access.
And even if they didn't use
the information at the time,
maybe they did, maybe they only
accessed one person's account.
It's really hard to know.
So how do you think about
situations like this
where you know your password may have
been exposed but not necessarily?
COLTON OGDEN: I think the safest
route is just to change it.
I mean, it depends on how private
and how valuable that information
is to you.
How much you're willing to--
you know, it's a cost benefit
analysis type of deal.
And I think when it comes
to personal information,
I don't think it's necessarily
ever bad to be too safe, right?
DAVID MALAN: Yeah.
That's fair.
Well, and I think especially if you're
already practicing best practices
and not using, in the first place,
the same password on multiple sites
because I could imagine someone being
reluctant to change their password
if they use it everywhere.
Now, that alone, as we discuss
in CS50, is not a good thing,
but I do think there's some
realities out there where
people might be feeling
some tensions like,
oh, I don't want to do this again.
COLTON OGDEN: And there
are certainly tools
that help mitigate this problem,
too-- password managers, which
I don't know if we've
talked about on the podcast,
but certainly in certain lectures
we talked about it before.
DAVID MALAN: Yeah.
We've encouraged this,
and we've provided
CS50 students here at Harvard, for
instance, with access to such programs
for free.
1Password is a popular option.
LastPass is a popular option.
Now, buyer beware, literally,
because there have certainly
been cases where password
managers have themselves
been flawed which is sort of
tragic if the whole point of this
is to protect you, but
humans make mistakes.
Humans write software,
and so that's inevitable.
But the idea of a password
manager, of course,
is that at least now you can have
longer, harder to guess passwords,
and trust your software to store it for
you rather than your own human brain
and, god forbid, the Post-it
note on your monitor.
COLTON OGDEN: True.
Yeah.
One of the features that most
password managers, including
ones that we've advocated, employ is
the generation of very hard to guess
passwords.
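The kind of generation feature being described can be sketched with Python's `secrets` module, which draws from a cryptographically strong source. This is an illustration of the idea, not any particular manager's code:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a hard-to-guess password, as a password manager might."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())      # e.g. 'k;R2v...' -- different every run
print(len(generate_password())) # 24
```

The point is that the human never has to remember these strings; the manager stores them, protected by the one master password discussed next.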
DAVID MALAN: Yes.
I use that all the time.
I mean, I don't know 99%
of my passwords these days
because I let the software
generate it pseudorandomly,
and then I use that to log in.
But there's, of course,
a risk there, right?
Like most of these programs have
master passwords so to speak,
whereby all of your passwords, however
many dozens or hundreds you have,
are protected by one master password.
And I think the presumption is
that that master password is just
much longer, much harder to guess,
but it's only one really long phrase
that you the human have to remember.
COLTON OGDEN: Right.
It's still a single point of failure.
If that gets compromised, if
someone gets access to that,
they get access to every single password
that you have stored in that database.
DAVID MALAN: Honestly,
or if you just forget it.
I mean, there's
definitely been occasions
where-- not in the case
of my password manager--
some accounts I just don't use for
very long and the memory fades.
So even then, these tools
do encourage you, though,
to store some kind of backup codes, or
recovery codes as they're often called,
literally the kind of thing
you might print on a printout
and then store in a vault or
safe, under your mattress--
just somewhere separate
from the tool itself.
COLTON OGDEN: Right.
Yeah, it's hard to be 100% safe,
but certainly it's a step, I think,
in the right direction.
Definitely a step above Post-it notes.
I think we can agree on that.
DAVID MALAN: Yes.
I don't use the Post-it notes anymore.
COLTON OGDEN: Another thing
off of the heels of that,
and that was off of a topic
we talked about last week,
it turns out Facebook
was actually asking
for people to provide
them not their Facebook
password, but their email password.
Their actual, completely
separate from Facebook,
email password to log in so Facebook
could act as a sort of broker for them
and log in and verify their account.
DAVID MALAN: Indeed.
And I think we already concluded
last episode that that's not really
the best practice.
COLTON OGDEN: Not a good move.
Facebook actually also admitted
that that was a bad move.
DAVID MALAN: But fortunately
nothing came of it and all is well.
COLTON OGDEN: Yeah.
No, actually it turns out that
Facebook "accidentally" imported--
I think it was about 15,000--
might be more than that--
[INAUDIBLE]
DAVID MALAN: Well, next week it's
going to be 15 million, right?
COLTON OGDEN: Sorry, I misspoke.
1.5 million folks' email contacts.
DAVID MALAN: That's a lot.
COLTON OGDEN: And this
was supposedly
an accident as part of their upload
process for this brokered login.
But it's-- are we really that surprised?
DAVID MALAN: I mean, here,
too, humans make mistakes.
Humans write software.
And so even if this weren't
deliberate, these things do happen.
But to be honest, I
will-- so in that case,
the email vector is the
fundamental problem.
Logging into someone's
email account, thereby
giving them access to contacts, whether
it was Gmail or Yahoo or the like--
I mean, that is just-- that
alone should never have happened.
But it's gotten me
thinking-- and you and I
were talking before the
podcast today, too--
about how the permissions
model in software
these days really isn't making
this problem any better.
So for instance, in iOS-- the
operating system for iPhones--
there is in Swift and in
Objective-C a function,
essentially, that you kindly looked
up, which allows an application
to request access to a user's contacts.
And my concern, at least, the
deeper I've dived into this,
is that it's all or nothing.
It's all of the contacts
or none of them.
There are no fine-grained permissions,
which makes me very nervous,
because if you have dozens, hundreds,
thousands of contacts in your address
book and maybe you want to call
someone via WhatsApp, or via Skype,
or you want to use some game
that uses your social network,
you can't, it seems, provide granular
access to those contacts saying,
OK, here's Colton's
name and phone number
and email address because I need you
to know that in order to talk to him.
But instead, it just opens up the
possibility that all of your contacts
get uploaded to that third party.
And honestly, that's the sort of
cat that once it's out of the bag
is not going back in because if they
store those contacts in their database,
you're never getting them back.
COLTON OGDEN: It's kind of like a
drinking from the fire hose analogy.
DAVID MALAN: Yeah.
I mean, that's really worrisome.
I think until consumers and users
start demanding finer grained control
over the data, things might not change.
And so here, too, I'm surprised
that Apple especially, who's
been more mindful perhaps than
some other companies of privacy,
still have this all or
nothing approach, right?
Ideally, I would imagine seeing a
generic, OS-specific prompt allowing
me to browse through my contacts.
I select one or two or more contacts--
you among them, for instance--
and then I would expect the API call to
only pass along that subset of my data
to the application.
But instead, I'm pretty sure, as best
we could tell from the documentation,
it's getting unfettered read
write access to those contacts.
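The contrast between the two models can be made concrete with a toy sketch. Nothing here corresponds to a real OS API; the names and contacts are all hypothetical:

```python
# Toy model: today's all-or-nothing contacts grant versus the granular,
# user-selected subset David is proposing. All data here is invented.

CONTACTS = {
    "colton": {"phone": "555-0100", "email": "colton@example.com"},
    "carter": {"phone": "555-0101", "email": "carter@example.com"},
    "rodrigo": {"phone": "555-0102", "email": "rodrigo@example.com"},
}

def request_contacts_all_or_nothing(granted: bool) -> dict:
    """Current model: the app receives every contact, or none at all."""
    return dict(CONTACTS) if granted else {}

def request_contacts_granular(selected: list[str]) -> dict:
    """Proposed model: only the contacts the user picked are shared."""
    return {name: CONTACTS[name] for name in selected if name in CONTACTS}

print(len(request_contacts_all_or_nothing(True)))   # 3 -- the whole book
print(list(request_contacts_granular(["colton"])))  # ['colton'] -- just one
```

Under the first function, one tap uploads the entire address book; under the second, the app only ever sees the subset the user explicitly chose.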
COLTON OGDEN: And I
mean, my first thought
about that is it's so much
easier just to click yes.
When Facebook or Apple says this
app wants to use your contacts,
you allow permission.
Whereas the model you proposed,
while certainly more secure and more
fine-grained, requires more
effort on the end user's part.
DAVID MALAN: Yeah, absolutely.
And you know, you got me thinking
that there's an opportunity here,
really, for what some
municipalities in the US
have started doing with
governmental elections
where you don't just
necessarily have a ballot,
but you might have access to a one
page cheat sheet for the candidates
where each of the candidates
has been allowed--
as I've seen in Boston, for instance--
to give a one or more sentence
description of maybe their platform
or something about them so that you can
at least understand the implications.
And more importantly, when
there's ballot questions
on the referendum like legal
questions, policy questions,
for the local municipality,
they often will
explain or have a third
party, an advocacy group,
explain both the pros and the
cons in, ideally, some neutral way
so that humans like us
can vote on those issues
and still have some appreciation
for the implications.
But instead, the analog here
would be like asking someone
to vote on a governmental question
and just saying yes or no, that's it.
There is no fine grained
disclosure of the implications.
So it would be nice, too, if Android
and Apple really adopted the habit
or forced companies who wanted to
distribute software on those platforms
to say what does it mean if I click yes.
And maybe have a third
party group or community
of third parties responsible for
writing that language, not necessarily
the companies themselves.
COLTON OGDEN: That's true, yeah, because
on Facebook when you approve apps
you can see what it has
access to, but not necessarily
for what intended purpose.
DAVID MALAN: Yeah, indeed.
And that's fair.
In the web context, companies that
use OAuth, a very popular protocol
for authentication--
GitHub, for instance, among them--
do give you a bit more detail
as to what's being asked.
This company is asking for your email
address and your name and so forth.
But even then,
unfortunately, it tends not
to be fine grained opt
in or out for the user.
If you don't say yes to
everything that's listed there,
you just don't proceed further.
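The way an OAuth-style flow spells out what is being requested can be sketched by building an authorization URL, where each scope names one piece of data. The endpoint, client ID, and scope names below are hypothetical placeholders, not any real provider's values:

```python
from urllib.parse import urlencode

def build_authorization_url(base: str, client_id: str,
                            scopes: list[str]) -> str:
    """Assemble an OAuth-style authorization URL; each scope is one
    piece of data the application is asking for."""
    query = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "scope": " ".join(scopes),  # scopes are space-separated
    })
    return f"{base}?{query}"

url = build_authorization_url(
    "https://provider.example/oauth/authorize",
    "my-app-id",
    ["read:email", "read:profile"],
)
print(url)
```

The all-or-nothing complaint above shows up here too: the list of scopes is fixed by the application, and the user's only choice is typically to accept the whole list or abandon the login.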
COLTON OGDEN: Yeah.
Presumably because they need
all those pieces of information
to give you the service you're
requesting which is tough.
DAVID MALAN: Or want.
I mean, they probably
make a calculated decision
that we don't really need this field,
but it would be useful to have.
We might as well get it when the
user consents in the first place.
COLTON OGDEN: And they
figure maybe most users
aren't going to sort of balk
too hard at having to give up
one extra piece of information.
DAVID MALAN: Yeah.
But that's a slippery slope, certainly.
COLTON OGDEN: Oh, yeah.
Absolutely.
And it kind of ties into the
idea we talked about last week
of you leave one little
piece of trust at the door
and you made the analogy
that you look back
and the breadcrumb trail is just vast.
DAVID MALAN: Oh and I've absolutely
clicked Yes on those things myself.
And even though now I'm much more
mindful, 2019, of what I click
and what apps I allow access to my
contacts, the cat's out of the bag
and somewhere out there is
a lot of that data already.
COLTON OGDEN: Yeah.
It's a tough trade off, certainly.
I do think a lot of it does
come down to accessibility.
I think it's preying on how
much easier it is for people just
to click a button versus actually
considering the problem that they're
getting into.
DAVID MALAN: But to be fair,
not necessarily preying.
If the only API call I have is access
contacts or not, I as a developer,
I'm going to use that API call.
COLTON OGDEN: Yeah.
True.
I guess away from that
topic, away from Facebook--
because we've been
picking on Facebook a lot.
In the last few weeks, we've been
picking on Facebook consistently-- week
by week by week.
And it may just be a
rough period for them.
Away from Facebook, there was
another thing, interestingly,
that I thought was actually a
good move on Microsoft's part.
So Microsoft recently turned down--
let me just confirm 100%
which institution it was.
OK.
So it was California.
In California, there was
a law enforcement agency
who asked Microsoft to install
essentially facial recognition
software in their cars such that they
would be able to identify, presumably,
prospective criminals
or existing criminals.
And they rejected it outright because
it turns out these algorithms had not
been trained on very good data sets.
DAVID MALAN: Yeah.
This is an issue in computer
science more generally,
especially if a data set is
demographically skewed
toward male faces or white faces.
The data sets you might
be using to actually train
your model in a machine
learning sense might
be people who look like you because
they might be people who work with you,
for instance.
And so, frankly, to
their credit, I think
Microsoft seems to be
acknowledging this proactively
and mindfully so that
the software is not
deployed in a way where it might
mistake one person for another
until that data set is trained further.
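One standard way to surface this kind of skew is to compute a model's accuracy separately per demographic group rather than in aggregate. The numbers below are invented purely to illustrate the computation:

```python
# Per-group accuracy on a toy set of predictions. A model that looks
# fine on average (50% here) can be much worse for one group.

predictions = [
    # (group, predicted_correctly)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(results):
    """Return {group: fraction of correct predictions for that group}."""
    totals, correct = {}, {}
    for group, ok in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_group(predictions))  # {'group_a': 0.75, 'group_b': 0.25}
```

A gap like the one above is exactly what a proactive vendor would want to close before letting such software make identifications in the field.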
With that said, I do think this
is still worrisome unto itself.
Even without bias, I'm
not sure I just want
every car on the street that
drives by me and you and everyone
to be able to detect who that person is.
That seems to be a very slippery slope
as well in terms of people's privacy
and so forth.
But I do think these are the
right questions for companies
now to be asking.
And Google's been mired
in a bit of this lately
with their artificial
intelligence panel.
They wanted to have a panel of
independent individuals weighing
in yay or nay on what
kinds of initiatives
Google should pursue when it
comes to artificial intelligence.
Sounds like Microsoft is exercising
here some of that same judgment.
But you know, it's one thing, I
think, for these big companies
to have the luxury of saying
no to certain business.
Frankly, I can't help but worry
that as AI and as facial recognition
becomes more accessible, as it
becomes more API based so that anyone
can use it, I wonder and worry
that we'll lose the ability
to say yay or nay in
quite these same ways
if it's just so accessible to
anyone who wants to deploy it.
COLTON OGDEN: Yeah.
A lot of this technology
is very frightening.
And Microsoft is definitely being
very considerate in their judgment
and thinking very far ahead.
One of the things sort
of related to this--
not necessarily completely related
to this incident in particular--
is the deep faking.
Are you familiar with this where--
DAVID MALAN: The video.
COLTON OGDEN: Yeah.
People can be faked.
Algorithmically, machine
learning algorithms
can train themselves
to generate video that
looks like somebody speaking, when in
reality it's their face being
programmatically adjusted
to speak somebody else's words.
DAVID MALAN: And it certainly
works even more easily with words
alone, when you don't have
to mimic their movements.
COLTON OGDEN: Yeah,
their gesticulations.
There was a thought that I had
recently about things like CCTV.
You know, law enforcement
relies a lot on CCTV footage.
DAVID MALAN: Closed circuit
television for those unfamiliar.
COLTON OGDEN: Right.
So if they see a suspect, or
if there's a crime scene--
let's say it's at some
restaurant or some hotel--
and they have CCTV video footage of
the same person sort of at the scene
at the same time as maybe
the same events took place.
That helps them pin down
that that might be a suspect.
Now--
DAVID MALAN: That's like every
Law and Order episode in fact.
COLTON OGDEN: Right.
Exactly.
But the thing with
deep faking is that it
could be all too real
that these sorts of videos
get edited to insert people who
weren't actually at the scene.
DAVID MALAN: Yeah, absolutely.
And I think right now most
people, ourselves included,
can notice something's
a little off here.
But honestly, the software
is only going to get better.
The hardware is only
going to get faster.
So I agree.
I think this is becoming
more and more real.
And it literally is a
computational way of putting words
in someone else's mouth.
One of the first proof of concepts
I think a few years ago now
was having President Obama say
things that he hadn't actually said.
And it was striking.
And it wasn't quite right.
You could tell it
seemed a little robotic.
But it was the beginning
of something quite scary.
And I at least most recently
saw the graduate students
who had published work on
dancing-- transposing
one human's dance movements onto another
even though that second person wasn't
actually dancing.
They appeared to be.
COLTON OGDEN: Yeah.
It's very frightening.
It makes me wonder how
we're going to actually get
better at detecting the real versus
the fake when it comes to this,
or whether we're going
to have to sort of leave
the realm of video footage as evidence
altogether if it gets too realistic.
DAVID MALAN: Indeed.
For future CS50 podcasts I
don't even need to be here.
I'll just say astute things digitally.
COLTON OGDEN: I'm going to have
to get the David J Malan voice
algorithm implemented.
DAVID MALAN: Well, fun fact, on
Mac OS and probably on Windows,
there are commands
where you can actually
have your computer say things already.
Now, they tend to be fairly
synthesized robotic voices,
but that's a fun little
program to play with.
In Mac OS, if you have
a terminal window open,
you can type the command
say, S-A-Y, and then a phrase
and indeed, it should say that.
COLTON OGDEN: Have you ever
aliased anybody's computer
to say something at a particular time?
DAVID MALAN: I don't know
what you're talking about.
But if I were to, I might-- when they
leave their keyboard unattended--
create what's called an alias in Linux
or Mac OS whereby one command actually
gets substituted for another, and so a
funny thing might be, every time
the person types L-S, to have
preemptively aliased L-S to say--
where the computer then says
something mildly funny--
and then proceeds to show the
directory listing for them.
That would certainly be
theoretically possible.
COLTON OGDEN: Away from Microsoft.
So here's something actually completely
on the surface unrelated to what
we might have been talking about,
at least in the tech realm.
But there's been this
interesting study in Scotland.
So in Scotland, as of 2008, they started
to employ a checklist for surgeries,
and it was a World Health
Organization sponsored checklist.
And it was basically just
a set of steps before,
during, and after incisions, which just
basically is a bunch of sanity checks
done with the anesthesiologist,
marking incisions,
where they're supposed
to be made, having
the nurse ask questions of the
patient as they finish their surgery.
And startlingly, it looks
like post-surgical deaths,
from the beginning of that in 2008
up until now, fell by one third--
33%-- well, roughly 33%.
DAVID MALAN: Really
just making sure people
were doing what they
should have been doing,
but providing a protocol,
if you will, for such.
COLTON OGDEN: Yeah.
DAVID MALAN: Yeah.
I mean I think--
I'm no expert-- but I believe this is
what the airline industry has actually
done for some time.
No matter how many times the pilot
and co-pilot have flown a plane,
my understanding is that
they're indeed supposed
to take out the flight checklist
and actually check off verbally
or physically every little step that
should be checked before taking off.
COLTON OGDEN: I certainly hope so.
Fingers crossed.
DAVID MALAN: Well, you would
hope so, but there, too, right?
If someone's been flying
for five, 10, 20, 30 years,
perhaps you might assume that
they know what they're doing.
But you don't want to miss
that little thing, certainly.
So I think the real
takeaway for me is that this
seems a wonderful adoption
of a technique that's
being used in other industries.
And even us, in our much
safer realm of an office,
use programs like Asana or GitHub or
Trello or a bunch of other websites
or web based tools that provide, really,
checklists or kanban boards
or project boards, so to speak,
where it's really just lists to help
you keep track of things you
and other people are working on.
COLTON OGDEN: Right.
And I think it applies very well to
things like software development where
you have the typical paradigm
of software development,
at least as it's taught in
university a lot of the time,
is start with a project spec
up top, work your way down
until you have individually
solvable tiny pieces.
Those seem to fit well into the
model of tasks and the to do list.
So it seems like in technology this
is a very appropriate way of modeling
how you build things.
DAVID MALAN: Yeah, absolutely.
And the interesting thing there
I think is that there really
are different mental models.
Like I for instance, I'm a fan
of the checklist model, something
like Asana.com which we've
used for a few years.
Some of our TFs are now at the company.
And it's just kind of
the way my mind operates.
It's how I would operate if I
were still using paper pencil.
But I know a lot of other people
prefer more project boards
where you have everything
in column format,
and as you work on something
you move things from left column
to middle column to right
column and so forth.
And I tend to prefer the
sort of top down approach,
but other people seem to
prefer the side to side.
So it's an interesting
software design question.
COLTON OGDEN: Maybe it's people
that are more visual-spatial
learners-- or not learners, but
I guess workers and developers--
maybe that just resonates better
for them versus a checklist model.
Everybody's brain works
a little bit differently,
but there's certainly
a breadth of options.
DAVID MALAN: Well, and the project
boards really allow for more states,
so to speak, right?
Like something like a checklist
in Asana, or even in Notes,
the sticky program in Mac OS
where you can check things off.
That's on or off--
like done or not done--
whereas the boards allow you to
proceed to an intermediate state,
another intermediate state, and
then finally the final state.
So I do see a use case there, certainly.
So sometimes just checking
things on and off makes
perfect sense for protocols like
checking that everything's gone well
with surgeries, checking that everything
is ready to go with an airplane.
But for projects, you might actually
want to say, well, we've begun,
but we're blocked on this issue.
This is in progress, but
there are these bugs still.
So I think that mental model
makes sense in other use cases.
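The distinction between the two mental models can be sketched directly: a checklist item is binary, while a board task moves through intermediate states. This is an illustrative model, not the data model of Asana, Trello, or GitHub:

```python
from enum import Enum

class BoardState(Enum):
    """Columns on a hypothetical project board."""
    TODO = "to do"
    IN_PROGRESS = "in progress"
    BLOCKED = "blocked"
    DONE = "done"

class Task:
    def __init__(self, title: str):
        self.title = title
        self.checked = False          # checklist model: on or off
        self.state = BoardState.TODO  # board model: one of several columns

task = Task("fix login bug")
task.state = BoardState.IN_PROGRESS   # begun, but not yet done --
print(task.checked, task.state.value) # a state the checkbox can't express
```

The checkbox can only say done or not done; the board can additionally say "begun but blocked on this issue," which is the use case described above.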
COLTON OGDEN: Sure.
Yeah, no, it's definitely
cool to see that something
that simple was able to reduce--
that's an amazing thing--
33%-- one third--
with something as serious as deaths.
DAVID MALAN: Yeah,
it's a little worrisome
that they weren't having
checklists beforehand.
COLTON OGDEN: Yeah.
Makes you wonder a little bit.
DAVID MALAN: But when n
is large, so to speak,
whether it's patients or
data sets or the like,
you really do start to
notice those patterns.
And so when you turn
knobs, so to speak, and try
interventions and really do
experiments, you can really
see the impact of those changes.
COLTON OGDEN: Indeed.
So earlier we talked
about Microsoft, so let's--
the theme of this podcast
is that we don't always
have the most optimistic
things to talk about,
so why don't we turn the knob in
sort of the opposite direction.
DAVID MALAN: Well, no one
writes articles that say,
everything was well this week.
COLTON OGDEN: Yeah.
Not necessarily.
Microsoft did well with their
facial recognition decision,
but it turns out that they
had a security vulnerability.
So they have a service called Microsoft
Tiles, which my understanding is
no longer in use, but was
temporarily in a state of flux.
So you might be able to speak
to the low-level details
a little bit better than I can,
but my understanding of this
is they had a CNAME record-- a
canonical name record-- which
allows them to essentially point one
domain, one subdomain, at another domain
and make it seem-- it's basically
like the alias in the terminal.
DAVID MALAN: Yeah, so you don't
have to hard code an IP address.
It can resolve, so to speak,
to another domain name.
COLTON OGDEN: Exactly.
And they had an Azure domain that was
CNAMEd to some other Azure domain that
was actually serving
the Tiles information--
the Tiles being those sort of square
widgets that were really famous with
Windows 8, where you can
see services, icons,
and do different things in emails--
and it turns out that they were
vulnerable to a subdomain takeover
attack because the subdomain they
were CNAMEd to was actually--
they lost ownership of it.
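A subdomain takeover of this kind can be modeled with a toy resolver over a table of CNAME records. The domain names below are invented; the point is only to show what a dangling CNAME looks like:

```python
# Toy DNS table: tiles.example.com is CNAMEd to a cloud hostname whose
# registration has lapsed. Whoever re-registers that target now controls
# what the subdomain serves. All names here are made up.

RECORDS = {
    "tiles.example.com": ("CNAME", "old-service.cloud.example.net"),
    # "old-service.cloud.example.net" has no record: it expired,
    # so anyone could claim it.
}

def resolve(name: str) -> str:
    """Follow CNAME records until a name with no further record."""
    seen = set()
    while name in RECORDS:
        if name in seen:
            raise ValueError("CNAME loop")
        seen.add(name)
        _rtype, target = RECORDS[name]
        name = target
    return name  # dangling if no one you trust owns this final name

print(resolve("tiles.example.com"))  # old-service.cloud.example.net
```

Visitors to the subdomain still see a trusted name in the address bar, which is exactly what makes the resulting phishing potential so serious.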
DAVID MALAN: Yeah, no, I mean this is a
risk anytime you start playing with DNS
and you rely on a third party to
implement some service of yours.
And this happens all the
time, frankly, with a lot
of Cloud based services,
some of which we
ourselves have used, whereby you
might want to use your own domain--
for instance CS50.io-- and you might
want to use a subdomain therein--
like foo.CS50.io-- but have foo.CS50.io
resolve to some third party service
where they have white-labeled
their product for you.
So you go to foo.CS50.io and you see
CS50's version of that application.
And if you go to bar--
well, if you go to bar.example.com, you
see someone else's use
of that same service.
So DNS allows for
this, but of course you
have to trust that the underlying
host is not going to change ownership,
become malicious, or so forth.
And so generally speaking, putting third
parties within your domain or subdomain
is not the safest practice, as
it sounds like they gleaned.
COLTON OGDEN: Yeah.
They were able to--
the people that wrote the article--
this was golem.de which
is a German website--
German newsgroup-- they actually
bought the domain and were able to push
arbitrary content to these Windows
Tiles services which is pretty--
I mean, you can imagine
if a bad actor got
involved the types of
phishing attacks they'd
be capable of orchestrating with this.
DAVID MALAN: Yeah, absolutely.
And I don't doubt that there
are folks out there who
have little scripts running just
checking when certain domain
names expire, for instance, so that
they can snatch them up and actually
repurpose them.
And this has happened even
with source code libraries
out there where someone
has turned over the keys,
so to speak, to a popular
open source library,
and that second person has maybe
injected some advertising into it,
or even some malicious code.
And if you have all of these other
unsuspecting folks on the internet
depending on that third
party library, they
might not even realize the next time
they do an update of the library
that someone else has taken it over
and its intent has now changed.
So I mean, we talk about
this a lot in CS50,
the whole system of trust on which
computing these days is built.
There are a lot of threats that you
might underappreciate until it actually
happens to you.
COLTON OGDEN: Trust was the
theme of the last episode.
DAVID MALAN: Indeed, and it
seems we can't get away from it.
But of course, this is
where all the articles
are being written each week because
there's a lot of threats out there.
COLTON OGDEN: Yeah.
No, it's great that
people are-- and this
was a good actor it looks like
by the sound of the article.
They bought the domain.
They didn't orchestrate any attacks.
They made it clear.
They actually contacted
Microsoft about it.
Microsoft did not respond, though
they did delete the record.
So it's nice to know that
people are looking out
for us with good intentions.
Maybe not in proportion
to the bad actors--
it's hard to say, I would imagine.
DAVID MALAN: Absolutely.
COLTON OGDEN: But it's good to see.
And I guess we'll end the
podcast on a bit of lighter news.
DAVID MALAN: I don't know.
I think this was the
best article this week.
COLTON OGDEN: It's a good article.
But we do a lot of tech.
I'm into games.
There's been some cool game
stuff going on recently.
So one of the things that you actually
noticed before stream was actually
that Infocom, a company that was famous
for producing a lot of text adventures
in the '80s, they actually open
sourced a lot of their classic titles.
DAVID MALAN: Yeah.
Or at least, I think to clarify, a
lot of the source code is available
and someone open sourced it.
I think it remains to be seen
just how long it will stay alive.
These games, to be fair,
are decades old now
and it's more of a historical interest,
I think, than a commercial interest
that people might have.
But yeah, I remember as a kid in
the 1980s growing up with-- we did
have graphics and monitors, but not all
games used graphics on those monitors.
And so Hitchhiker's Guide
to the Galaxy, which
folks might know by author
Douglas Adams is a wonderful book
and a series about the same.
There was a text-based adventure
game whereby you type in commands
and navigate a text-based world
where you only know where you are,
what you're looking at, and where
you can go based on the feedback you're
getting textually from the program.
And there was another one called
Zork, which was very similar in spirit,
and it had a map not unlike
the two-dimensional worlds
you talk about in your games class,
where you can go up, down, left, and
right, so to speak, virtually-- but you
can only do so by typing commands like
walk right or walk left or open door.
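The command-and-feedback loop David describes can be sketched in a few lines of Python (a toy illustration, not Infocom's actual ZIL; the rooms and commands here are made up):

```python
# A minimal Zork-style world: rooms are nodes, exits are labeled edges,
# and the only feedback the player gets is text.
rooms = {
    "field": {"description": "You are in an open field.",
              "exits": {"east": "house"}},
    "house": {"description": "You are inside a small house.",
              "exits": {"west": "field"}},
}

def play(commands, start="field"):
    """Run a list of typed commands; return the textual responses."""
    location = start
    output = []
    for command in commands:
        exits = rooms[location]["exits"]
        if command in exits:
            # Move to the connected room and describe it.
            location = exits[command]
            output.append(rooms[location]["description"])
        else:
            output.append("You can't go that way.")
    return output
```

Here `play(["east"])` moves from the field into the house and returns that room's description, while an unknown direction just earns a refusal, much as the originals did.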
And the fun thing, I think,
about it back in my day
was it really did leave
things to the imagination.
And oh my god was it probably
easier to make these games
because you can still
have the storyline,
you can still create the world, but
you don't have to render any of that.
You can leave it to folks' mind's eyes.
So it's fascinating to look at
some of these things written
in a language called ZIL--
Zork Implementation Language-- which
seems to actually be Lisp-based,
Lisp being a functional language.
Though it's actually-- I was reading--
I was going down the
rabbit hole and reading
some of the history-- it's actually
based on another language, in fact.
But it's a fascinating way, if
you poke around the repositories
online, of representing
two-dimensional, non-graphical worlds.
So if you Google Infocom
open source games,
odds are it will lead you to the GitHub
repository or any number of articles
about the same.
COLTON OGDEN: Yeah.
GitHub is amazing.
I mean, there are a
lot of other games on GitHub
that are not officially
released by the company,
as people have reverse engineered
games like Pokemon, for example.
Amazing to read through a lot of these--
DAVID MALAN: Pokemon.
That's your generation.
COLTON OGDEN: Yeah.
No, it's good stuff, though.
We actually have a Pokemon
p-set in the games course, too.
Yeah.
No, it's very cool
just how nice GitHub is
in terms of making a lot of
these things accessible--
things I would say you kind of
have to learn through trial and error
a lot of the time.
A lot of companies, especially in the
'80s and '90s, didn't really
release their game code.
Only in the 2000s and 2010s
did game programming really
start to become accessible.
DAVID MALAN: That's interesting.
Yeah, no, I mean, it's
quite the thing to pick up.
So hopefully that will
engender all the more interest
among aspiring programmers to actually
contribute to those kinds of things.
COLTON OGDEN: In more recent
news, the PS5 was also announced.
DAVID MALAN: The PS5.
COLTON OGDEN: Which is pretty cool.
DAVID MALAN: PlayStation 5.
COLTON OGDEN: Yeah, PlayStation 5.
And they're actually going to have
built-in ray tracing as part of their GPU,
which is pretty fascinating, where
basically the camera shoots a ray of--
not light, but a ray to basically
look for the nearest object in space
and calculate that for every
pixel on your screen.
And they're going to
be in 8K resolution,
so that's a lot of ray tracing.
But they're going to be--
it's going to be pretty interesting.
I don't think I've seen a GPU yet
that has built-in ray tracing,
so that might be the first generation.
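The per-pixel idea described above can be sketched as a tiny ray caster in Python (a toy with a single hypothetical sphere; real-time GPU ray tracing is vastly more elaborate, but the shoot-a-ray-per-pixel loop is the same):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray never touches the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection
    return t if t > 0 else None

def render(width, height):
    """Cast one ray per pixel; '*' where it hits the sphere, '.' elsewhere."""
    camera = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel to a direction through an image plane at z = -1.
            x = 2 * (i + 0.5) / width - 1
            y = 1 - 2 * (j + 0.5) / height
            direction = (x, y, -1.0)
            t = hit_sphere(camera, direction, sphere_center, sphere_radius)
            row += "*" if t is not None else "."
        rows.append(row)
    return rows
```

Calling `render(9, 9)` yields a small ASCII "image" with a disc of `*` characters where rays hit the sphere: the same test-every-pixel loop, just at postage-stamp resolution instead of 8K.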
DAVID MALAN: That
sounds very complicated.
I'm going to play Zork this weekend.
COLTON OGDEN: [LAUGHS]
But no, it's cool.
It's a nice little dichotomy there.
But I think that's all the
topics we have this week.
I mean, we covered the
gamut, really, with a lot
of the things we talked about.
DAVID MALAN: From the sad to the happy.
So we'll be curious to see what
comes up again this coming week.
COLTON OGDEN: From the old to the new.
DAVID MALAN: There we go, too.
COLTON OGDEN: So good.
So thanks everybody for tuning in.
DAVID MALAN: Yeah, indeed.
This was episode two of the CS50
podcast with David Malan and--
COLTON OGDEN: And Colton Ogden.
DAVID MALAN: Thanks for tuning
in and we'll talk to you soon.
COLTON OGDEN: Bye bye.
