Good evening everyone and welcome to
the Marian Miner Cook Athenaeum.
My name is Bruno Youn and I'm one of
the Athenaeum fellows this year.
I'll start this introduction with a conversation
I had last Friday.
I overheard a student being frustrated with VMock.
Now for those who don't know,
VMock is a resume-reading program used
and highly recommended by CMC's Career Services,
and it gives your resume a score
based on how good it is, essentially.
It's often helpful but it is notorious for docking points
where it shouldn't, as it did to this student.
I took a moment to empathize with her frustration
and she was struggling for words for a bit
but she eventually came up with, it's so AI.
That's an interesting use of AI as an adjective to describe
an automated resume reader
that does not always understand context.
Its work needs to be checked by humans.
Hardly Skynet material.
The same could be said of AI in a more general sense
and here to argue along these lines tonight is Gary Smith,
Fletcher Jones Professor of Economics at Pomona College.
He received his Ph.D. in Economics from Yale University.
He was an assistant professor there for seven years.
And he's won two teaching awards and written or co-authored
more than 80 academic papers and 13 books.
Most recently the AI Delusion,
which is for sale in the lobby as you walk out.
And he has published statistical and financial research
in the New York Times, the Wall Street Journal,
CNBC, Scientific American and a host of other
widely read and well-respected publications.
By the way, now is the time to adjust your seat
if you haven't done so already.
As a reminder of the rules,
I ask that you treat the Athenaeum
like the movie theater in two ways.
One, audio and visual recording
are strictly prohibited, as always.
Two, please take this opportunity
to silence and put away your cell phones.
Just like at the movie theater,
using them greatly detracts from the experience.
Now, unlike the movie theater,
you get to ask questions at the end.
Now, without further ado,
please join me in welcoming Gary Smith.
(audience applauding)
Thank you.
Thank you.
As you may know I went to Harvey Mudd College,
a long long time ago.
And I was on the debate team at CMC for four years.
I also did water polo and swimming at CMC for four years.
And so I have a lot of fond memories of this place,
and it's nice to be back.
AI you all know is artificial intelligence.
Are we getting an echo in here,
or is that just me hearing the echo?
We're okay on the sound?
Okay.
So AI is artificial intelligence.
And the idea that is circulating is that computers
are as smart as humans, or even smarter than humans.
And that's what the AI Delusion is about
is that that is a delusion.
Now I'm not saying computers are stupid, okay.
Although sometimes they do stupid things.
I joined LinkedIn for some stupid reason and
I get these things, people want to connect with me
all the time and I, okay blah blah blah blah blah.
And then one time I got invited to connect with myself.
Now that's computer stupidity,
that's the human's little bug in the software,
and that's not what I'm talking about.
And I'm not talking about useless.
Computers are very useful, I use them every day of my life.
80 research papers, 13 books, every single one of them
I couldn't have done without a computer.
And I'm not talking about word processing,
I'm talking about mathematical calculations,
I'm talking about statistical calculations.
Talking about Monte Carlo simulations.
Some of these things literally I could not do in my lifetime
if I was doing it with pencil and paper.
Okay, computers are extremely useful.
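To give a flavor of the kind of calculation he means, here is a minimal Monte Carlo sketch; this is nothing from his actual research, just a toy illustration. It estimates pi by scattering random points in the unit square and counting how many fall inside the quarter circle, hundreds of thousands of arithmetic steps that a computer finishes in a fraction of a second.

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of unit square = pi / 4
    return 4.0 * inside / n_samples

pi_estimate = monte_carlo_pi(100_000)
```

With 100,000 samples the estimate typically lands within a few hundredths of pi; doing the same by hand would take years.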
What I'm talking about is the idea that computers
are intelligent in any meaningful sense of the word.
That computers have anything resembling human intelligence.
Now part of the problem I think is,
big word here, anthropomorphization.
Which is the tendency of humans
to attribute human-like qualities to animals,
to trees, to gingerbread cookies, and to other things.
And so we all know the childhood story The Three Little Pigs
and we think nothing of the fact that these pigs
are walking around on two legs, they're wearing clothes,
they're carrying bricks and lumber and straw.
They build houses, they have different work ethics.
They meet a wolf and they outwit the wolf
and they win in the end.
And we think it's fine, we think it makes perfect sense.
Well we shouldn't do this to computers,
because it makes them very angry.
(audience laughing)
Seriously, we see movies with R2-D2, C-3PO
and it seems perfectly natural that they walk around
and they talk like humans, they think like humans,
they plan, they have emotions, they have feelings.
And it seems perfectly normal to us.
Then we go a little bit farther.
Computers are so smart,
maybe they're gonna come to their senses
and realize the one threat to their existence is us.
And so they'll have to enslave us or exterminate us.
Well that's total bullshit, okay?
Computers don't know what humans are.
They don't know what computers are.
They don't know what the real world is,
they don't know what survival is.
They don't know what they would do to plan to survive
to get rid of humans, they know none of that, okay?
The real danger is not that computers are smarter than us,
but that we think computers are smarter than us.
And so we trust them to make important decisions
they shouldn't be making.
Scrabble, everyone knows the game Scrabble, right?
You got these little tiles, they got letters on them,
they got numbers on them.
You put the tiles together to spell words
which are in the dictionary
and you get points based on that.
Different points for different letters,
you get double word, triple word, stuff like that.
The greatest Scrabble player of all time, Nigel Richards.
The most amazing thing he did
was memorizing 386,000 words
in the French Scrabble dictionary.
Nine weeks later he won the French Scrabble Championship.
Even though he did not understand a single word of French.
He knew bonjour, and he knew the numbers
for reporting his score, but beyond that
he did not understand a single word of French.
They said he played French Scrabble
just as well as he played English.
He was quick, decisive, and triumphant.
He played like a computer.
And I would say a computer is like Nigel Richards.
They can put letters together and they can spell check
whether they're in a dictionary,
but they have absolutely no idea what the words mean.
Example of this.
There's a CS professor at Stanford named Terry Winograd,
and he's got these things called Winograd schemas.
I can't cut that tree down with that axe; it is too thick.
What does it refer to?
The tree, right?
Because we're humans, we know what a tree is.
We know what an axe is, we know what cut down means
and so we know it must refer to the tree.
What about this one:
I can't cut that tree down with that axe; it is too small.
Well now it refers to the axe.
Computers can't tell the difference.
Computers don't know what it refers to in these two
different versions of the sentence,
and that's called the Winograd Schema Challenge.
When you have a sentence with a word like it,
what does it refer to?
They have a competition called the Winograd Schema Challenge
and there's a $25,000 prize for any algorithm
that can get 90% of these questions right.
What does it refer to?
In the last competition the highest score was 58%.
And the lowest was 34%.
Because computers don't know what words mean, okay.
Like Nigel Richards they can put letters together
and they can check a dictionary
to see if the letters are in the dictionary,
but they don't know what the words mean.
Now Oren Etzioni who's a professor of computer science
at the University of Washington
and head of the Allen Institute for Artificial Intelligence
quipped, how could computers take over the world
if they don't know what it refers to in a sentence?
And that goes back to my point,
they don't know what humans are,
they don't know what computers are,
they don't know what the world is.
They don't know what survival is.
Here's an example from Roger Schank.
He is one of the pioneers of artificial intelligence.
And in the early days what they were trying to do
was build computers that would think like humans think.
And it's very very difficult
and they couldn't make much progress there.
And so the field took a little detour,
which was: let's do things that are useful,
like spell check and search.
Okay?
So here's an ad,
IBM claiming that Watson can understand, reason, and learn.
Okay?
Schank responded:
It would have made me laugh if it had not made me so angry.
Watson is a fraud.
I'm not saying that it can't crunch words
and there may well be value in that.
But the ads are fraudulent.
Bob Dylan, Nobel Laureate, the first rock and roll singer to win one.
Watson concluded that the enduring themes in his work
are that time passes and love fades.
Now if any of you are familiar with Bob Dylan
that's not what he was writing about.
He was writing protest songs about civil rights
and about the war in Vietnam.
And so take a passage like this,
come gather 'round people wherever you roam,
and admit that the waters around you have grown
and accept it that soon you'll be drenched to the bone.
Now we get the gist, right,
and we could argue about the details and stuff,
but we get the point.
A computer would have absolutely no idea
what these words mean.
Nowhere in Dylan's work are the words Vietnam
or civil rights mentioned.
But people knew that's what he was writing about.
A computer would have no idea.
A computer could spell check these words
or count the number of times the words are used,
but would have no idea what this passage means.
Here's an example from Doug Hofstadter.
When he was 35 he wrote the book Gödel, Escher, Bach.
He won a Pulitzer Prize and a National Book Award
and he was set up for life.
He's been at Indiana University now
for several years; he has appointments
in six different departments,
although he seldom visits any of them.
He doesn't teach any classes.
He has his own little house where he works
with graduate students trying to build computers
that mimic the human brain.
Okay, and as I said, the industry passed him by.
They went off to do things useful
and they stopped trying to mimic the human brain.
So here's an example he comes up with.
In their house, everything comes in pairs.
There's his car and her car, his towels and her towels,
his library and hers.
What does that mean?
Well we read it and we know what it means.
There's two people, apparently a male and a female.
They share a house, but everything else is separate.
Okay.
Now you give this to a state of the art translation program
like Google Translate and what does it do?
It looks at the sentences one by one
and looks at the words one by one.
It picks out the nouns and the adjectives,
finds the equivalent in another language
and then puts it in some grammatically correct order.
So what Hofstadter did was take this one,
translate it to French, Google Translate,
and then go back into English,
and see what you end up with.
The first sentence is absolutely perfect,
and Google Translate often comes up
with perfect translations.
It went to French and back with no change.
The second sentence, there is his car and his car,
his towels and towels, his library and his.
Now part of it is that the masculine and feminine thing
got all mixed up, which confused Google Translate completely.
But the bigger point, which Hofstadter makes is,
Google Translate does not make any attempt to understand
what the person is saying.
It has no idea what these passages mean,
because it has no idea what these words mean.
All it can do is take little parts of it, granular things,
and put them in another language, okay.
And sometimes the results are perfect,
and sometimes they're ridiculous.
I invite you to look at the longer passages
in German and Chinese; they're long,
so I'm not doing them here, but what Google Translate
came up with is absolutely preposterous.
Here's a nice analogy.
You got Roger Schank and Hofstadter
and the other pioneers of AI,
and they're down here on the ground
and they're trying to figure out how to get up in the air.
Maybe all the way to the moon.
And it's really really hard.
Really really hard.
So what people did is give up on that.
Instead they said let's climb a tree.
Let's spell check, and do search of the internet
and stuff like that that's useful and makes money.
And so they climbed the tree.
And it is useful.
Maybe there's some fruit up here,
maybe you're safe from wild animals up there.
But what happens when you get to the top of the tree?
How you gonna get to the moon from there?
And you're not.
And so the argument is, Schank and Hofstadter and others,
is what you got to do is start over again,
and try and do what they were trying to do originally.
Which is mimic the way the human brain works.
Be able to think, to understand the world,
understand what words mean.
Have common sense and wisdom, have critical thinking.
Here's a little thing I drew.
What is that?
We know immediately, right,
because we see the skeletal essence.
We see this rectangle, we see these circles,
we see that line, we see this text,
and we know immediately how they fit together.
These could be pies or bowling balls or Frisbees,
but they're probably wheels, right,
'cause they're sitting below the little rectangle.
So it's probably a wagon.
There's probably two wheels on the other side
even though we can't see them.
There's probably a cavity in the middle.
We wouldn't be surprised at all if we looked inside
and saw some kittens or toys or rocks.
If a grownup and a child came along,
we wouldn't be surprised if the child got in the wagon
and the grownup pulled it.
If the grownup got in the wagon and the child pulled,
we'd laugh, okay.
Because we know what a wagon is.
We know, this looks pretty primitive,
pretty homemade, right?
But it's in pretty good condition,
so it's been well taken care of.
Or it's brand new.
If there was a person standing there we might think
that he or she owned the wagon.
We might think we could buy it for 50 bucks
or a hundred bucks or something like that.
If the wagon was on a hill
we might think that if we got in it would roll down hill,
that'd be dangerous.
We have all these thoughts because we know the world,
we understand the world.
We know this is a wagon and we have thoughts
associated with wagons.
What about a computer?
What computers do is they look at individual pixels
and they've been trained on lots of different pictures
where they take the pixels and they map them in various ways
and then when they see something new they take these pixels
and they map them and try and find something
that matches pretty well.
So I used a deep neural network at Cornell
to try and guess what this is.
It was 98% sure it was a business.
And I think the rectangle with the script
kind of confused it and it kind of ignored
the two pies and the handle.
I also gave it to Wolfram's Deep Neural Network.
It said it was a badminton racket.
Now humans know better.
We're not gonna play badminton with that thing
because we know what it is, and we weren't trained
on a million pictures of wagons, right?
We saw the skeletal essence, we saw the rectangle
and the circles and the handle and we know what it is.
This is the basis for those CAPTCHAs.
You take letters and computers train on letters
and if they see letters like what they've been trained on
they know what it is.
You take them and put them in a different font
or you twist them around a little bit
or make them different sizes,
and they no longer can figure out what it is.
We've got these ones now with the little three-by-four grids
and you click on the boxes that have a car in them.
And if the car is partially obscured by a tree or a building
or something like that, computers don't recognize it
'cause it wasn't anything they trained on.
Okay, and I'm not saying that image recognition software
won't ever be better, it's always getting better
but the point is it makes no effort,
no effort whatsoever to understand what it sees.
It's just mapping pixels.
The same way Nigel Richards mapped letters.
This is Carnegie Mellon University.
A paper was written; this was one of the authors,
Mahmood Sharif. They trained a computer on pictures of him
without the glasses, labeled Mahmood Sharif,
and on pictures of a celebrity, labeled Carson Daly.
The computer program did its pixel matching,
and every time you showed it one of those pictures
it was 100% correct in identifying who it was.
Then you put on these little funny glasses
and you start mixing around with the colors up there
and all of a sudden the computer is 100% sure
that that's Carson Daly.
Because computers don't know what faces are,
they don't know what glasses are.
We look at this, we see the glasses,
and we look beyond the glasses.
We look at the face.
We look at the skeletal essence.
We look at the nose, the mouth, the ears, the hair,
and we know that those are not the same person.
Here's another author.
Exact same experiment.
Perfect recognition when the pictures are the same ones
they've been trained on.
Then you throw in some glasses and all of a sudden
it thinks this is Milla Jovovich, okay.
Because again, computers don't understand what they're doing
they're just mapping pixels.
Making something out of nothing.
We're driving down the road, we come to an intersection,
we look over there, see if there's a stop sign
or a traffic light.
We see the familiar shape, the familiar colors,
the familiar letters and we know it's a stop sign.
And we know there are consequences if we don't stop, okay.
We might get hit by cars going the other direction.
Computers trained on stop signs recognize them pretty well.
Except you disturb the picture a little bit.
Put a little peace sign down here and all of a sudden
it doesn't know it's a stop sign anymore.
Because it hasn't been trained to see pixels,
to see peace signs on stop signs.
There's a recent paper where they took things
like the stop sign,
they changed one little pixel.
Forget about the peace sign: they changed one little pixel,
something we wouldn't even notice,
and the computer no longer knew it was a stop sign
74% of the time.
Change five pixels and you can fool it 87% of the time.
Now one of these articles was co-authored by a guy at Google
and they were called adversarial attacks.
With the obvious implication that people that want to do
bad things can go around putting peace signs on stop signs
and self-driving cars could be chaotic.
Also makes something out of nothing.
What's that?
Black and yellow lines, right?
Carnegie Mellon Deep Neural Network
said it was a school bus.
It saw the black and yellow and somehow
in its pixel recognition it came up with school bus.
Now we know what a school bus is, computers don't.
We know school buses have wheels, they have windows,
they have doors, they're shaped like that.
We know that's not a school bus.
This one's even crazier, what's that?
Nothing, right, modern art, maybe.
A state-of-the-art deep neural network said
that's a cheetah.
Because computers don't know what a cheetah is,
they've just been trained on pixels and words
and try and match the two.
We know it's not a cheetah because cheetahs have four legs,
they got a big tail.
They got a neck, they got a head, they got ears,
they got eyes, they got a mouth.
We know that's not a cheetah, computers don't know it
because they don't know anything about the world.
Data mining and knowledge discovery.
So Wired's a great magazine, okay.
This is not great journalism.
But it's very commonplace these days.
And that's part of the motivation for this book.
Correlation supersedes causation, science can advance
without coherent models, unified theories,
or any mechanistic explanation at all.
All you need to do is get a bunch of data
and find some patterns.
Who needs theory?
'Kay?
The Economist, another great journal.
Tone down the theorizing.
Puritans, creating models before testing them.
The new breed ignores the whiteboard,
chucking the numbers together,
letting computers spot the patterns.
Okay, and I'm gonna argue that's dangerous for two reasons.
One of which is computers don't know what they found
because they don't know anything about the real world.
And the other thing is you can always find patterns
even in random data.
Okay, so I put together a model to predict stock prices,
okay, I teach finance and so make some money,
get some kind of AI model here.
And so this mutual fund prospectus said
computer algorithms, well that sounds good.
Complex, well that sounds good.
Computerized system, that sounds good.
Eliminating any subjectivity of the manager.
Just turn it over to the computer
and let it pick stocks, okay?
Well I said, okay I'll try that.
So I found 50 possible variables predicting the S&P 500.
I estimated models with one to five variables.
There's more than two million of them,
but again with computers I can do that real quickly, right?
And I came up with this model, which
stock prices depend on C, M, A, L, and R.
And it's pretty good, it was like a 60% correlation
between actual and predicted.
Kind of missed that here.
It was pretty good.
What were those mystery variables?
Well that's what data mining comes up with.
You ransack a whole bunch of data
you're gonna find correlations
that make no sense whatsoever.
And a computer won't know.
A computer doesn't know what a stock is,
doesn't know what a stock price is.
Doesn't know what a temperature is.
It knows how to spell the word,
but it doesn't know what it is.
It doesn't know what these places are.
It has no idea what determines stock prices.
It has no idea whether this relationship makes sense or not.
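The ransacking just described can be reproduced in miniature. The sketch below is my own illustration, not his actual model, and every number in it is pure noise: yet searching 50 random candidate predictors against a random "stock price" series still turns up a respectable-looking correlation. Combining several such predictors, as his five-variable models do, pushes the fit even higher.

```python
import random

def correlation(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def best_spurious_correlation(n_obs=50, n_vars=50, seed=1):
    """Generate a random 'stock price' series and n_vars random
    candidate predictors; return the largest |correlation| found."""
    rng = random.Random(seed)
    prices = [rng.gauss(0, 1) for _ in range(n_obs)]
    best = 0.0
    for _ in range(n_vars):
        candidate = [rng.gauss(0, 1) for _ in range(n_obs)]
        best = max(best, abs(correlation(candidate, prices)))
    return best

best_r = best_spurious_correlation()
```

Even though nothing here has anything to do with anything, the best of 50 random predictors will correlate noticeably with the random target, and the computer has no way to know the relationship is meaningless.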
Okay, I'm gonna do some serious stuff now.
Get some real models that fit well.
This time go with a hundred variables, okay?
.88 correlation, this is one to five variables
out of a hundred.
AI.
Really really really good, okay.
Another thing you could use AI for is to identify
what causes heart attacks.
And so I got some data on 1000 heart attack victims.
I got 100 household expenditure categories.
And then who needs theory?
Let the AI algorithm loose, and find the patterns.
And it found that these heart attack victims
spent more on fish, men's footwear,
and less on pork, cheese, and cleaning products, okay?
And you could think to yourself, okay after the fact
I'll make up a theory, knowledge discovery.
Fish, maybe that's healthy.
Footwear, maybe these people run around a lot.
Pork and cheese must be bad for you.
Cleaning products too.
Okay, pretty good.
Using Facebook to predict burglaries.
Here's a town where they identified
the 100 most popular nouns,
50 most popular adjectives, 50 most popular adverbs.
Then looked on Facebook for 10 weeks
how often these words showed up
compared to the number of burglaries committed the next day.
And so which words are most useful
for predicting burglaries.
The two most helpful words were
day and most.
Okay.
And maybe you laugh or maybe you think of a reason.
You know, who needs theory, you don't need a theory,
but maybe you can think one up, okay.
.96 correlation.
Pretty good model, huh?
I could probably sell it.
Was this knowledge discovery or noise discovery?
Once we got past the temperatures,
all three of those models were fit to completely random data.
The fish, the day and most,
the footwear, it's just random data.
'Cause you can always find patterns in random data.
And AI is really good at that, computers are really good
at finding patterns among random data.
And that's what they did here.
Well, is that enough?
How well do these models do with new data?
Well here's my stock market model.
And the correlation, that's the negative sign up there;
it was negatively related to stock prices.
Here's the one on the burglaries.
The correlation's essentially zero.
'Kay?
Well maybe the fact that you can estimate a model
with training data and then test it with validation data,
maybe you can get around that problem, right?
Ransack the data, find a model,
then test it with fresh data.
If it doesn't work throw it out and try again.
So say you got 200 observations.
Take 100 of them, estimate a model,
find some variables that work, test it,
ah it doesn't work, do it again.
Works, doesn't work, works, doesn't work.
Works, works.
Okay, it's just data mining again at a larger scale.
Instead of data mining 100 observations
you data mine 200 observations.
And again you will find random things that work.
For my two-word model I estimated all possible
two-word combinations for predicting burglaries.
I had 70 observations in-sample, the 10 weeks,
and then out-of-sample data.
And up here are a lot of two word combinations
that worked well in-sample and out-of-sample.
Remember, but these are totally fake data.
These aren't real words, these aren't real word counts.
They aren't real burglary data, but you can find something
that fits in-sample and out-of-sample.
For example, thing and kid,
.92 correlation in-sample, .93 correlation out-of-sample.
AI would say, we've done it.
But of course all they've done is proven
that you can find patterns in random data.
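The estimate-then-validate loop he describes can be sketched directly. This is an illustration with simulated data and arbitrary sizes, not the actual burglary numbers: keep generating purely random predictors until one happens to fit both the in-sample half and the out-of-sample half.

```python
import random

def correlation(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def find_spurious_survivor(n_candidates=5000, n_train=100, n_test=100,
                           threshold=0.15, seed=2):
    """Try random predictors against a random target until one 'works'
    both in-sample and out-of-sample (same sign, both past the
    threshold).  Returns (trial number, r_in, r_out), or None."""
    rng = random.Random(seed)
    target = [rng.gauss(0, 1) for _ in range(n_train + n_test)]
    train_y, test_y = target[:n_train], target[n_train:]
    for trial in range(1, n_candidates + 1):
        x = [rng.gauss(0, 1) for _ in range(n_train + n_test)]
        r_in = correlation(x[:n_train], train_y)
        r_out = correlation(x[n_train:], test_y)
        if (abs(r_in) > threshold and abs(r_out) > threshold
                and r_in * r_out > 0):
            return trial, r_in, r_out
    return None

result = find_spurious_survivor()
```

Given enough candidates, a "survivor" that passes both halves almost always turns up even though every predictor is noise; validation done this way is just data mining at a larger scale.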
And that's the thing about data mining.
We think that patterns are unusual and therefore meaningful.
In fact, patterns are inevitable and therefore meaningless.
And the bigger the data the more likely it is
that we'll find meaningless patterns, okay.
The more data you have to ransack, the more likely it is
you'll find coincidental transitory correlations.
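That last claim is easy to check with a small sketch (again purely simulated data, arbitrary sizes): the best spurious correlation grows as you ransack more random variables, because a search over noise simply gets more chances to get lucky.

```python
import random

def correlation(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def max_abs_corr(n_vars, n_obs=50, seed=3):
    """Best |correlation| between a fixed random target and
    n_vars random candidate predictors."""
    rng = random.Random(seed)
    target = [rng.gauss(0, 1) for _ in range(n_obs)]
    best = 0.0
    for _ in range(n_vars):
        candidate = [rng.gauss(0, 1) for _ in range(n_obs)]
        best = max(best, abs(correlation(candidate, target)))
    return best

# Same seed, so the small search is a prefix of the big one:
# ransacking more variables can only raise the best spurious fit.
few_vars = max_abs_corr(10)
many_vars = max_abs_corr(1000)
```

The best correlation found among 1000 random variables is reliably larger than the best among 10, with no meaning behind either.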
It gets even worse when you do something
like a deep neural network, which puts it inside a black box
where nobody knows what the computer algorithm is doing.
It comes up and says, like that mutual fund prospectus,
you ought to buy this stock.
Why, well I don't know, it's inside the black box.
You ought to arrest this person.
Why, I don't know, it's inside the black box.
You ought to deny this job application.
Why, I don't know, it's inside the black box.
And what they're doing is finding coincidental,
temporary correlations which may or may not make sense,
and we have no way of judging.
And the computer has no way of judging
because the computer doesn't know what anything means.
It doesn't know what words mean.
It doesn't know anything about the real world.
All it can do is find patterns.
So here are some examples.
Google Flu.
Check my time here.
Good.
Inside a black box, they didn't want anyone to know
what the words were, for the sake of privacy or something.
And they found these Google search terms
that predicted flus with 97.5% accuracy.
That's in-sample.
Like the stock market model that was 88% accurate
or the burglaries that were 95% accurate.
Now let's go outside sample.
They overestimated in 100 of the next 108 weeks,
by an average of nearly 100%.
So they abandoned Google Flu, okay, which is good.
Next one, software for evaluating job applicants.
Searched a bunch of stuff
and they found that good programmers
visited a particular Japanese manga site.
The chief scientist said,
obviously it's not a causal relationship.
Well why in the heck are they using it?
They found a temporary correlation and they went with it.
The company's algorithm looks at dozens of variables,
constantly changes the variables
as correlations come and go.
Why are they constantly changing the variables?
Because they're finding temporary coincidental correlations
that vanish with fresh data.
So they got to do it over again,
they find temporary coincidental correlations
that vanish and they got to do it over again.
If they had variables that were actually good predictors
of whether someone was a programmer
they wouldn't have to change them every few weeks.
How well does it work in practice?
Customers really really hate the product.
There's almost no customers
who've had a positive experience, okay.
I would also point out this is discriminatory, right?
Only certain people visit this Japanese manga site
and other people undoubtedly don't.
Data mining analysis of phone usage
said people are a good credit risk
if they use an Android phone.
They don't answer incoming calls.
Have outgoing calls that are not answered.
Do not keep their phones fully charged.
Now you might think I'm making this up, but I'm not.
I mean, this is a real program, a real algorithm
for deciding whether I should loan you money.
Okay, based on these silly things,
some data mined AI program.
Now you could probably make up theories for this
after the fact.
Why it makes sense.
Except, actually, these are people
who are a bad credit risk, so it's the other way around.
So you could probably make up theories for that too.
The major point is it's just coincidental correlation,
so why do we believe them?
Because we think computers are smarter than us.
They're gonna base car insurance rates
not on your driving record or anything else,
but on the words you use on Facebook.
Again, I'm not making this up, okay.
They started off with things they thought
sort of made sense, like do you make lists.
Well maybe that's got something to do with something.
Do you set specific times to meet,
instead of saying tonight.
Maybe that's something.
Do you use words like always, never and maybe, perhaps.
I don't know which are good and which are bad,
but they thought it might make a difference.
And then do you like Michael Jordan or Leonard Cohen.
Well now we're getting a little far away
from what's going on here.
And they wandered totally off into data mining hell.
Our analysis is based on thousands of different combinations
of likes, words, and phrases.
It's constantly changing.
Well there's the giveaway right there,
why is it constantly changing?
'Cause it doesn't work, right?
They find a temporary correlation.
It doesn't work.
Find another temporary correlation, it doesn't work.
Our calculations reflect how humans behave
as opposed to fixed assumptions
about what makes a safe driver.
Fixed assumptions like how many accidents you have
or things like that, how many tickets you get.
Instead we'll go with Facebook words.
Why do people believe this stuff?
Because they think computers are smarter than them.
Now in this particular case, the day before they were gonna
launch, Facebook stopped them with a lawsuit,
saying some provision of the Facebook contract
says you cannot use Facebook words to price insurance,
either to approve applications or to set insurance rates.
Now Facebook was not being altruistic,
they have their own patented algorithm for doing that.
M'kay.
So we should expect that pretty soon.
Algorithmic criminology.
It's spreading across the country.
Somebody gets arrested.
How much bail should be set?
Turn to an AI program.
Somebody gets convicted, how long should they go to jail.
Turn to an AI program.
Somebody applies for parole.
Turn to an AI program.
Here's what this guy says.
The approach is black box, for which no apologies are made.
If I could use sunspots, shoe size,
or the size of the wristband I would.
If I give the algorithm enough predictors to get it started
it finds things you wouldn't anticipate.
What are things you wouldn't anticipate?
Things that make no sense, like wristband sizes, right?
Why do people believe this nonsense?
Because they think computers are smarter than them.
Well it doesn't work very well,
but it's still being used all over the country.
89.5% accuracy, I can tell whether you're a criminal
by looking at your face.
Letting an AI program loose on your face.
These are all Chinese males.
There's three criminals.
This is from their article.
And there's three non-criminals.
Now you and I might notice a few differences.
These guys all seem to have suits on.
These guys all seem to be smiling.
So maybe this AI algorithm is some kind of smile detector
or something like that.
There was a movie, Minority Report,
about these psychics, Precogs,
who could visualize murders before they happened.
And so the PreCrime police would go out and arrest somebody
before they committed a murder.
Well that's got the usual time travel problems, you know.
If you arrest them before they commit the murder
how do you see the murder, which never happens,
and so that's kind of a problem.
But this doesn't have that problem:
these people may have committed crimes
that they were never arrested for.
But now we've got 'em
'cause we got our facial recognition software.
And so we can arrest them.
A blogger wrote, what if they just placed the people
that looked like criminals into an internment camp?
What harm would that do?
They would just have to stay there
and go through an intense rehabilitation program.
Even if some of them are innocent,
how could that adversely affect them in the long run?
Why do we believe this nonsense?
Because we think computers are smarter than us.
We think important decisions
should be turned over to computers.
Even more scary.
Letting robots fight out wars.
Give them some kind of vague instructions
and let 'em fight it out and see what happens.
Okay, enough said on that one.
Now one of the big topics,
talking to students here tonight is,
are you all gonna get jobs
or are computers gonna take your jobs?
Well the thing you're learning,
or you know already from college is critical thinking, okay.
Computers do not have any critical thinking skills.
They have no common sense, they have no wisdom,
they don't understand the world in any real sense.
They're not gonna take away any kind of job
that requires critical thinking skills.
So here's a list from this guy of what a critical thinker does.
Judges well the credibility of sources.
Can computers do that? No.
Identifies reasons, assumptions, conclusions.
Can computers do that?
No.
Asks appropriate clarifying questions? Nope.
Judges the quality of an argument.
They don't understand the argument,
how can they judge the quality?
Develops and defends a reasonable position.
Computers can't come up with theories
because they don't understand the world they live in,
that we live in.
Formulates plausible hypotheses.
I don't think so.
Draw conclusions, I don't think so.
Because again, they don't understand the world
that we live in.
'Kay.
And that's why the punchline again is the real danger today
is not that computers are smarter than us,
and they're going to eliminate us or enslave us,
the real danger today is that we think computers
are smarter than us and we're gonna let them make
important decisions that they shouldn't be making.
Questions?
(audience applauding)
Hello?
Yeah.
All right, we now have time for questions.
Please raise your hand and one of us
will bring a mic to you.
Please say your name and stand up.
Hi, my name is Yao, I'm from CMC, I'm a junior.
And I have a question for you. You said
the problem is not that AI is smarter than us,
it's that we think it is smarter than us,
and so we let it make important decisions.
So what kinds of decisions do you think are not important?
Well things like spell checking, internet searches,
tightening bolts, those kinds of things.
Very narrowly defined tasks computers are really good at,
okay, but any kind of general thinking
or taking some kind of thinking here
and applying it to some other thing.
Or making, anything that involves critical thinking
computers absolutely cannot do
'cause they don't understand what words mean, okay.
What about things like scheduling,
and personal-assistant kinds of tasks and...
You mean like shopping for you or?
Maybe, like making schedules for you.
Well, they wouldn't know what the words mean
so you'd have to write out very very carefully, you know
it's like when one spouse goes shopping
the other one's got to send a very detailed list
or you come back with the wrong size or the wrong brand
or the wrong something, so you'd have to be very detailed
'cause you can't just say go buy some orange juice
'cause they don't know what orange juice means.
You'd have to write it out and you'd have to say
what size and what company and stuff like that.
Hi, thanks for the talk.
My name's Jafar, I'm from CMC.
I'm just curious what you think
the relationship between AI and systems of power is,
because it seems like, right now,
AI is plugged into these dopamine systems
run by companies like Facebook
and social media systems, right.
And targeted advertising.
Yep.
Could you speak on that,
'cause I feel like that's the biggest thing.
What did Zuckerberg say about people
who turn their personal information over to Facebook?
A bunch of dumb Fs, is that what he said?
That's like not realistic now 'cause everybody--
Well I don't, I'm not on Facebook.
I guess the--
Well I mean I don't know, we're different generations.
I don't understand why people go on Facebook
to share what they're having for lunch or dinner
and why people read about that and care about that.
It seems very egotistical to think that other people care
about what you're eating or what you're reading
or what you're doing, it's kind of...
I won't use bad words here, but (laughs).
What's the scariest thing about like
the way people use AI now, what like?
Well first of all, all these kinds of decisions.
I mean, send somebody to jail for seven years
because the AI algorithm says they ought to go
for seven years, and it's inside a black box
so you have absolutely no idea
why the computer program said that.
You don't know if they put in false information,
you don't know if they're looking at the wristband size,
you have no way to challenge that
and yet it's happening today.
Or you're getting turned down for a job,
you're getting turned down for a loan
because of a word you used on Facebook.
It's preposterous.
But the most dangerous thing,
which I talked about with somebody before, maybe,
is the government surveillance.
I mean this whole idea of identifying criminals
by looking at their faces and stuff like that
and these countries, the U.S. is not quite there yet,
but these countries where cameras are everywhere
and survey everything you do,
you have no privacy whatsoever.
I think that's terrifying.
Thank you.
What's that province in China where
they have the Muslims?
Xinjiang.
Yeah, and there's like how many millions of them
are in prison right now?
Don't know off the top of my head, anyone want to?
You know, but there's lot of them.
And so I go over and start talking to you
and the camera sees it and they think we're subversive
it's next thing you know we're both in jail.
I mean that's crazy, it's absolutely crazy.
That is a real scary thing.
Even scarier than you not getting a job, okay.
The government thinks we're conspiring
because we look like we're talking to each other
and they don't like our faces or our clothes or whatever.
They're monitoring our email, monitoring our phones,
and better safe than sorry:
if in doubt, throw 'em in jail.
I can speak.
I'm Brian Williams I'm a sophomore at CMC.
It was a great talk by the way, really fascinating.
I'm just curious what you think
are some of the most dangerous applications
of this false trust that we have in AI.
Like some of the biggest examples that you can think of.
I thought I just said 'em, but.
Oh.
Well the fact that we turn these important decisions like
fighting wars.
But in like historically, like since--
Well I'd say the most dangerous one right now
is the whole algorithmic criminology.
The fact they're deciding how long you should stay in jail
based on some algorithm that you have no idea
what inputs went in there,
and you have no idea what they're looking at.
If they're looking at your wristband size,
or if we're looking at your past record or what's going on.
That is, I mean that you're taking away
your life and liberty, okay.
And the fact that these people are willing to say,
well if a computer thinks you're a criminal
by looking at your face,
what harm is there if we put you in jail?
I mean, if that becomes widespread, God forbid.
So what would you suggest
that lawmakers should do?
Well I think part of it is,
one of the points of this book is of course is education,
to try and educate the public
so they're not so overawed by AI.
That just because a computer program says this
doesn't mean it's true.
I mean, that's part of the thing.
And the other thing, which is happening in the industry
is people are trying to go back to the roots.
The Roger Schanks, the Doug Hofstadters,
trying to build computer AI programs
that actually think the way humans think.
That actually understand the world, know what words mean,
know what consequences are, have cause and effect,
things like that, and it's really really hard,
but people are trying.
That guy Oren Etzioni up at the Allen Institute,
I mentioned him before, about how can computers
take over the world when they don't know what 'it' refers to.
He's trying to build computer programs
that have common sense.
And that is really really hard to have common sense.
And so you have these questions like
is it okay to drink sulfuric acid
if I put orange juice in it?
We know the answer, a computer doesn't know the answer.
The computer doesn't know what the two things are,
it doesn't know what putting them together does.
Is it okay to jump off a building
30 feet high?
I don't know, computers don't know.
Another tack is people are studying how babies learn.
How do baby brains learn.
And seeing if they can somehow modify that
to get computers to learn the way babies learn.
When babies see a picture, they don't have to see
a million wagons to know it's a wagon, right.
And so instead of doing pixel mappings
somehow figure out what Hofstadter calls
the skeletal essence of something.
Recognize the skeletal essence.
The rectangle, the circles, the handle,
and put them together and know it's a wagon
the way babies learn that that's a wagon.
And that's hard too, it's really really hard.
That's why the profession went off
and did something profitable.
Manfred.
My name is Manfred Keil.
Gary and I used an AI program last summer
to predict that West Germany would win the World Cup,
so that was pretty bad.
From the book--
Hey, I got second place in that contest.
Yeah, from the book
I was hoping you would also say something about
the Hillary Clinton campaign and how they used it
because I thought that was pretty telling.
Yep, so the introductory chapter is about
politics where AI is being used.
And so the campaign before the last one,
when Hillary Clinton ran, she was the overwhelming favorite.
She had the name, she had the power,
she had the establishment, she had the money.
And then along came this guy
with an unhelpful name, Barack Obama, and he won.
And part of his secret was he had this huge database
of pretty much every voter in the country
and they'd isolated what kind of things appeal to you
and to you and to you and they had micro targeted appeals.
And I mean part of it was obviously his charisma
and his eloquence, but he also had this huge database.
And he went out and he won.
Next time around when Hillary ran she said I'm not gonna,
I'm gonna be a Barack Obama, I'm gonna have a big database
and I'm gonna do micro targeting.
In fact hired people from the Obama campaign
to work on her campaign.
And they built this secret computer program called Ada,
after a female mathematician from two centuries ago,
and hardly anyone knew it existed, okay it was a big secret
because they didn't want to make her seem
scripted and stuff like that.
And so she had this AI program
and it was telling the campaign how to spend
virtually every dollar they spent on television money,
where to go for campaign appearances,
and what issues to push.
And Ada failed.
Because Ada missed things that you can't quantify.
When Bernie Sanders gave a speech
tens of thousands of people showed up.
When Trump gave a speech
tens of thousands of people showed up.
When Hillary Clinton gave a speech
a hundred people showed up and sat there quietly.
And you can't put that in a computer, okay.
And so the computer was just going by things
it could measure.
And it said here's these Rust Belt states,
the reliable Democratic states in the Midwest,
Wisconsin, Michigan, Minnesota,
they're gonna go Democratic, forget about those.
Let's go campaign in Arizona so we can have a landslide.
Stupidity, right, absolute stupidity.
Because they couldn't measure enthusiasm.
And it wasn't 'til very late in the campaign they realized
that there might be a little bit of a problem there.
Then they decided to go out and campaign for rural voters,
again late in the campaign.
They assigned one person.
It was somebody from Brooklyn.
Which didn't make a big impact out in the Midwest.
And this campaign, it was missing the idea
that emotions matter,
it didn't even bother
to collect polls in Wisconsin, Michigan, Minnesota.
People on the ground were begging,
we've got to go in and do something
or we're gonna lose these states.
But Ada said no, those are Democratic states,
you're gonna win 'em.
The computer said, here's what you ought to say,
Hillary Clinton.
I'm not perfect, but Trump's worse.
Okay.
Bill Clinton, the greatest campaigner
any of us have ever seen:
when he ran for president and won, what was his campaign?
It's the economy, stupid.
What people care about is their jobs.
And that's what Hillary Clinton should have been doing.
And if she'd listened to Bernie Sanders or Donald Trump
she would have known the issues that resonated with voters.
But Ada didn't know any of that and so they ended up losing
and so she was failed by big data and AI.
Yeah.
Hi professor, I'm Joseph--
Yeah, we've talked a lot before.
I'm wondering, you've spoken to a lot of the
downsides of AI, but what do you think are the benefits,
specifically in financial markets.
You know, there's Renaissance Technologies,
a very successful hedge fund that,
you know, uses AI to comb through
a 10-Q upon its being released and--
Yep, yep.
And act on it, so.
Yep.
If AI can't understand words,
how can it seemingly understand
the management guidance given--
And so computers are really good at crunching numbers.
And the issue, the main issue I have
is do you follow the scientific method,
which is start with a theory and test it with the data
or do you start with the data
and discover a theory which might be a coincidence.
And the thing about Renaissance Technologies
is they start with theory,
we don't know exactly what they do.
'Cause you got to sign a non-disclosure agreement
that not only says I'm not gonna tell you
what we do inside our company,
I'm not even gonna tell you I work for the company, okay.
But some stuff has leaked out. I got a former student who
ran a fund of funds and was heavily involved in the industry,
and some of the stuff they do, it makes total sense.
They got a bunch of mathematicians,
but they're looking at things that make sense.
Like here's a market that's closed for some holiday
in one country, and open in another country.
And so a little wedge might open up
in between the prices being equal or not equal.
Or here's a thinly traded stock
and there's only one person buying
and there's a whole lot of people selling.
If I can predict when you come in to buy,
then I can take the other side of the market
and take money from you.
And so they're not doing random stuff like
the weather in Curtin, Australia or something like that,
they're looking at things that make sense
and then they use the data to test the models.
And that's great, I love that, I mean that is
the scientific method, theory first, data later, okay.
And where AI goes astray is when it does data mining
which is look at the data, find a pattern, enough's enough.
What did you say before?
Up is up? You said something; no, you didn't say up is up.
One of my colleagues says up is up.
But let the numbers speak for themselves.
The numbers speak for themselves.
Who needs theory, okay?
Now I said up is up, I have a colleague,
I think he's here somewhere, Jay Cordes.
I'm writing the next book, it's called
Ten Commandments of Data Science.
And he worked in the industry for 15 years
doing all sorts of stuff and he's got all sorts
of great stories.
One of them is, you all know The Office, right?
He lived The Office, okay, he lived The Office.
The crazy stuff going on in there.
But one of his managers,
one of his favorite phrases was, up is up.
You got something going along in revenue,
you happen to do something, the revenue jumps.
Jay says, I don't know why it jumped.
And the manager says, I don't care, up is up.
Whatever you did, do it again.
Then the revenue goes down.
What happened?
Well it was a coincidence, it was a blip.
And that's the problem with data mining
where you just look for patterns
and assume they're meaningful when in fact
there's so much noise in the data a lot of what you see
is just coincidental transitory blips.
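The "coincidental transitory blips" point is easy to demonstrate with nothing but random numbers. A minimal sketch (all the data here is generated noise; the sizes are arbitrary): search a hundred meaningless predictors against a meaningless target, and one of them will correlate strongly by chance alone.

```python
import random
import statistics

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 20                                            # e.g. 20 quarters of revenue
target = [random.gauss(0, 1) for _ in range(n)]   # pure noise

# "Mine" 100 equally meaningless predictors and keep the best fit.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n)], target))
    for _ in range(100)
)
print(f"best |correlation| found in pure noise: {best:.2f}")
```

The winning predictor means nothing, yet data mining would report it as a discovery; theory first, data later, avoids exactly this.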
We have plenty of time for questions.
Please wait until one of us brings the mic to you.
Hi, Andi, and...
Okay hi my name is Andi and--
Hi.
Thank you for speaking today.
I was just wondering what the implications
about this are for self-driving cars.
Do you think that driving is too,
or requires too much critical thinking for computers
to be able to handle it right now?
It's an open question, I mean right now
the self-driving cars you're supposed to have a driver there
with the hands on the wheel watching the road.
Okay, and if you don't then that's not,
you're violating the rules of the game, okay.
Well that's not a self-driving car, right?
That's you with your hands on the wheel.
And there a lot of things that right now
the algorithms can't handle.
Like one of them apparently is if there's something
stopped in front of you, you crash into it,
because it's been programmed to deal with things
that are moving, when they slow down and speed up,
and when something's stopped
it doesn't know what to do about that.
The other thing is those adversarial attacks, okay.
So say you have self-driving trucks
criss-crossing the country,
and you got a bunch of angry truck drivers
and they start gimmicking stop signs
and stuff like that to cause crashes.
What are you gonna do about that?
And so right now it's an open question
whether they're gonna surmount all those things,
but I wouldn't sit in the back seat
of a car that's self-driving.
On the other hand as Jay says,
you look at a lot of people on the road
and they drive even worse than that, right?
They're drunk, they're stupid, they're texting,
and they're worse than a self-driving car
that doesn't know what to do.
Hi, my name is AJ Moore, I'm a freshman at CMC.
I know that you said artificial intelligence
isn't capable of critical thinking and stuff,
so I'm wondering what your personal opinion is
when it comes to artificial intelligence
being able to create art?
Any kinds of art, 'cause I think of the formulaic-ness
of certain like pop hits, like lyric-wise.
Yeah.
And I don't know
if you can even call that art.
Yep (laughs).
But
whether it be dance or lyrics or poetry or painting
or any of that, I'm just wondering
if you think it's possible or, yeah.
Well, part of the problem is the definition
of what is art, okay.
And some of the things that pass for art
like a piece of canvas with white paint on it.
Computer could do that.
The examples I've seen have been kind of weird.
There was a gal down at UCSD who was inventing new colors
and she came up with these colors, like Stanky Bean,
the computer came up with the names, and it's just godawful.
And then there's another one that came up with lyrics.
I don't remember the lyrics, but they showed the computer
a picture, and it was able to identify
a Christmas tree in there, and it came up with some lyrics
to fit sort of a Christmas song, but it was absolutely,
now maybe somebody would think it's modern music
or something like that, but
it's Christmas I'm happy, it's Christmas (laughs) it's just.
I'm skeptical, okay, unless you have a very broad definition
of what art and music is, so.
Hi, thank you for your talk, my name is Celeste
and I'm a senior at CMC and I have a question about
(speaker muffled by mic cutting out)
Hello, okay, cool.
Hi.
And so it's less about the regressions
that are run and the kind of predictive analysis,
and more about AI chat bots and virtual assistants.
Because the reality is that more people every day
are turning to voice search and talking to their Alexas
and their Google Homes, and where do you think that ends?
Because the software that exists now from Google,
it's not as big of a failure as Google Flu was but it is,
it works, but it's still pretty rough and it's definitely,
it's gonna progress and where do you think
that is going to either fizzle out or--
I think in terms of what AI can do,
that's probably in the near horizon,
being able to master words:
turning sound waves into words
and then associating what those words mean, okay.
And one of the scary things is there was that incident where
it was some television commercial
for Burger King or something?
And the commercial said something about go buy a hamburger
and the little Google assistant or the Amazon assistant
perked up and wanted to go buy some hamburgers or something.
So that was kind of weird, but I think that's
relatively easy, and to take sound waves
and translate them into words and if it's been
pre-programmed to say turn on the TV means this,
it means push that button, that's pretty easy.
Okay, uh, hello.
Hi.
I'm Ben Bracker from Harvey Mudd College.
I've been reading some books actually on uses of algorithms.
Essentially my question is, you show how algorithms
can spit out absolute gibberish and I think large companies
would be smart enough to realize this,
especially with the example you were giving where
it's up is up and then it crashes.
I think companies have been using algorithms long enough
to know that that can happen when they use random inputs.
So I was wondering if you could talk a bit about like,
why they're, kind of why companies still persist
in using these algorithms.
Do you think it's because perhaps when they do
inputs that are relevant they do find genuine correlations?
Do you think that it could also be
perhaps because there's a self feedback loop,
a self-creating feedback loop such as one example I read
was where with the algorithms that tend to predict
criminal intent, when people rely on these algorithms
they sort of like self-validate
and so it seems like the algorithm's correct.
Do you think it could also be
in addition the fact that companies want to hide
their decisions behind complex systems
so as to almost scare away investigators?
What do you think are some of the explanations
as to why they're being used?
Is Jay here, Jay Cordes?
Yeah, I'll speak for him.
He was in the industry 15 years
and I think the right answer is
that managers don't understand AI
and they think because it's a computer
you should trust it, you should believe it.
And in fact they don't self-correct,
they go on making mistakes time after time after time
and they always attribute it to, well something changed.
There was a change in the environment,
a change in the parameter.
Let's just keep going and it'll work
because computers are smarter than us.
Jay do you want to chime in on this?
Oh get a microphone.
Thanks.
I would just add that a lot of times what they actually do
is design and conduct experiments.
So they're not really doing this data mining stuff
as much as maybe you think they would,
'cause that kind of makes the headlines.
A lot of people are actual data scientists,
they come up with an idea,
they run a randomized AB experiment, you know--
That would be Jay, that would be Jay.
Yeah, so they might not know exactly why
like page A works better than page B,
but it causes the revenue to go up.
And they did it with a randomized experiment,
they didn't do this like data munching stuff.
But then there are the other people who are called
data clowns.
Yeah, there are clowns.
So yeah.
And what the clowns do is if it works, use it, up is up.
Yeah, that was a big problem I ran into at work.
There would be a lot of times where
someone would do something and they would see the result.
And one of the craziest things that you haven't
mentioned yet is regression to the mean, it's everywhere.
And so for example, it was at an internet company
where there were underperforming domains that we had to optimize.
They were handed over to a friend of mine and they said,
what can you do with these? We should tinker with the layout,
we should put in the keywords, do something.
Okay, so he works on it, the next day revenue's up like 20%.
Fantastic, he's a hero.
Every time they give him these domains,
every week revenue goes up 20%.
Well it turns out that one week he forgot to do it,
revenue went up 20%.
And so they come to him and they're like,
well this is great, we want to do this with other people.
He's like, oh actually I never got around to it,
and they're like well whatever you did, worked.
You know, and it's like it doesn't even occur to them,
and this is this regression to the mean concept,
which Gary actually came in and helped with;
he actually came to our company and helped kind of teach us
about this concept, which is everywhere.
Even when you run real experiments you see kind of
the ones that win the experiment don't tend to do as well,
and it's everywhere.
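Jay's story is regression to the mean in its purest form, and a few lines of simulation show it. This is a hypothetical setup, not his company's actual data: give every domain the same underlying revenue, flag the worst measured performers one week, and watch them "improve" the next week with no intervention at all.

```python
import random

random.seed(1)
true_mean = 100.0           # every domain has the same underlying revenue

def observe():
    # Weekly revenue = true mean plus a lot of noise.
    return true_mean + random.gauss(0, 20)

# Week 1: measure 1000 domains, flag the worst 10% as "underperforming".
week1 = [observe() for _ in range(1000)]
cutoff = sorted(week1)[100]
flagged = [i for i, r in enumerate(week1) if r <= cutoff]

# Week 2: nobody touches them, yet the flagged group "improves".
week2 = {i: observe() for i in flagged}
before = sum(week1[i] for i in flagged) / len(flagged)
after = sum(week2[i] for i in flagged) / len(flagged)
print(f"flagged domains: {before:.1f} -> {after:.1f} with no intervention")
```

Whatever you "did" to the flagged domains looks like a 20% win, because an extreme measurement is mostly noise and the next measurement drifts back toward the mean.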
You got another book on this that you got to read.
This is, what is it, it's--
What the Luck?
What the Luck?
What the Luck.
You guys gotta read that.
It's huge.
Thanks Jay.
Hey, how's it going?
Oh, it's working, nice.
I'm Elliott, I'm a senior.
Thank you for your talk.
My question is actually a bit of a flip on your statement
about the problem.
And it's off of Celeste's question.
So I worked for a chat bot company my sophomore summer.
And I'm like not a programmer or like a natural
language processing person or like anything like that.
And the necessary complexity of a chat bot
was something that I found I could build
after three months.
Not because it was capable of human conversation
but because it was capable of conversations
that humans would partake in.
And that was just super interesting to see.
Because like it obviously wasn't AI,
like it was very simple, like essentially
sales oriented feedback, like loops, that would
provide a more customized experience for the buyer
to essentially, like purchase goods that were like
more in line with their other aesthetic choices.
And I guess my concern isn't that we're gonna think,
like that we think that computers are smarter than us,
but that we're becoming as dumb as computers.
And I'm just curious, like besides educating ourselves
through things like your book, like what we could,
what we can do to resist that, I guess would be the right--
Yeah, I'm not sure about that.
I got an interesting sidebar though,
there's the Turing test, which is
I'm here talking to two things, one of them is a computer
and one of them's a person and they're in a room
and I can't see which is which
and I'm trying to guess which is which.
And they figured out to win the Turing test as a computer
what you have to do is act more human-like,
which is make mistakes.
And Turing himself said at the start,
if you start asking math questions
and this one never makes a mistake, I know it's a computer.
So in order to win the Turing test
you have to put in grammatically incorrect sentences,
you got to answer questions incorrectly,
you got to get mad and angry,
you got to insult the other person,
you got to act more like a human in order to pass the test.
And these Turing tests, they're interesting,
amusing, but they got nothing to do with anything
as far as, you know, real world goes.
Siri's kind of cute that way.
By the way,
you guys get jokes on Siri?
Those are almost invariably scripted.
They've actually hired, both Google and Apple,
hired gag writers from The Onion and Saturday Night Live
and stuff like that to write the little jokes
and so you write in a question,
it gives a really funny response
and you think Siri's really smart
and it was actually the gag writer who was really smart
who put in that funny response.
Hi.
Yeah.
Back here.
My name is Sasha, I'm a senior here at CMC.
Thank you so much for coming and speaking with us about AI.
And from your presentation, some of the things that
AI is not able to answer, like not recognizing a wagon
and instead saying it's a badminton racket,
or seeing an image of basically nothing
and calling it a cheetah,
these are kinds of things that we learn as we develop
as children, you know, when we were really small.
We don't know that that's a wagon at first,
but we come to learn it.
Right.
I'm just sort of wondering
developers of AI who are trying to make
AI smarter than humans, what are sort of some roadblocks
in their way to making sure that AI
doesn't become as smart as humans and sort of like
recognize that it's actually a wagon, not a badminton racket
or, you know, sort of teach the AI so that--
Yep.
Are there any roadblocks if you can think of, or?
Well first, computers are getting
better and better at this stuff.
I mean, the language translation stuff
is much much better than it was five years ago.
And the image recognition stuff is much much better.
But my point is, it's not actually seeing things.
You know, in terms of the image stuff,
it's not actually seeing the wagon
and the wheels and the handle,
it's seeing the little pixels.
And so the question is how do you get the computer
to see things the way humans see it.
And nobody knows because we have very little knowledge
about how the brain works.
How do those neurons, how do they see something
and recognize it, we really don't know.
And the whole thing behind Hofstadter and Schank
and people like that is trying to figure out
how the brain works so that we can mimic the brain.
And we're a long long way from being able to do that.
Now one of Hofstadter's things is, his latest book,
is Analogy as the Fuel and Fire of Thinking,
is that we somehow draw analogies to things.
And so with that wagon, we draw analogies to other things
that look like that, without mapping pixels.
We see the little box and the wheels, whatever,
and we draw by analogy that's probably a wagon
and then we add it to our memory bank,
that's what wagons kind of look like.
All sorts of variations and stuff like that.
Hofstadter says, it's a long long way before,
many many years before computers will be able
to think like humans, but that is the big challenge, yeah.
My name's Andrew, a CMC senior as well.
And so sort of in that vein,
what do you think it would take for us to go
from, like, the tree to the moon?
Like, is it a back-to-the-ground start,
or could we do it, and if so,
what would that entail?
Yeah, I don't know.
That's what Hofstadter argues and Schank argues,
you got to go back to the ground and start over again
with the projects that they have been working on
for 40 years and haven't made much progress on.
You like that answer?
Yeah, go ahead.
What's your take on climate change?
And how it has to do with AI.
Like is there any--
Why would we need AI for that?
Well because clearly, like,
thinking on a planetary level requires massive,
you know, amounts of data if you're gonna have
a renewable electric grid, right?
Like how can AI be used in that and where--
Yeah, I don't know.
No?
I don't know the answer to that.
Seems to be more to do with human behavior
than with AI, but.
Hi, thank you for your talk, I'm Caroline.
I'm a senior at CMC.
I was kind of wondering, so you're kind of saying that
we trust these AIs, like, completely, or,
and I was wondering if like--
Well too often, too often we do.
Not everybody does, I mean Schank gets really mad
when they say, Watson can think.
Sure, yeah, okay.
So I was kind of wondering what your take on was like
on the team effort between humans and AI systems.
Like sometimes we're not able to analyze big data sets
like AI can and so like I'm kind of wondering like
do you think that we should just dismiss them
as completely, or at least very often, erroneous,
or do you think it should be
a team effort between humans and AI?
I think the end of the book is: this is why
we need human judgment more than ever,
and so there's definitely got to be a human part of it
but there's also got to be a computer part
in terms of analyzing these big data.
I just can't crunch the numbers myself;
the stuff I do I couldn't crunch without a computer.
On the other hand, it's got to be
a scientific method type thing,
like running experiments to compare stuff.
Like that thing I did with the heart attacks,
searching through a hundred random things
and looking for correlations.
It's worthless because you know you're gonna find something.
I found it in random data.
So it proves nothing at all.
If I can find it in random data then I can find it anywhere.
Okay, and so I got to use common sense.
Does this actually make sense?
And if it makes sense then do a controlled experiment.
Like for example,
aspirin.
Aspirin prevents clotting of blood, scientists knew that.
Blood clots lead to heart attacks and strokes,
scientists knew that.
So maybe aspirin, taking aspirin would prevent
heart attacks and strokes.
So they set up a controlled experiment
where they had 11,000 doctors take aspirin every other day,
11,000 doctors take a placebo every other day,
at the end of five years they decided aspirin did in fact
significantly reduce the incidence
of strokes and heart attacks.
And there you got the human brain is telling you
what experiment to run, okay,
and then you've got the conclusion,
which is very different from ransacking data
and coming up with buying a lot of footwear
is good or bad for you.
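(Smith's point about ransacking data, that searching a hundred random variables for correlations is guaranteed to turn something up, can be sketched in a few lines. This is an editor's illustration, not code from the talk; the sample size of 50 and the 0.28 significance cutoff are my assumptions.)

```python
import random
import math

random.seed(1)
n = 50    # observations per variable
k = 100   # number of candidate "predictors," all pure noise

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One random "outcome" (say, heart attacks) and 100 random predictors.
outcome = [random.gauss(0, 1) for _ in range(n)]
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

# With n = 50, |r| > 0.28 is roughly "significant" at the 5% level,
# so about 5 of the 100 noise predictors should clear the bar anyway.
hits = [r for p in predictors if abs(r := pearson(outcome, p)) > 0.28]
print(f"{len(hits)} of {k} random predictors look 'significant'")
```

Because every variable here is random noise, any "discovery" the search makes is spurious by construction, which is exactly why a correlation found by ransacking proves nothing until it survives a controlled experiment like the aspirin study.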
Time to study?
Go ahead.
Hi, my name's Andrea, I'm a junior.
My question is if we were to like hypothetically
be able to program computers to think like us
and then potentially be as smart as us,
then do you think there would be a danger?
Or do you think that that danger is made up by humans
because of the whole, what you were saying
in the first few slides about us giving them human--
I don't know, I do know it's a long way off.
Okay.
You and I'll both be gone
before that ever happens, but.
I mean, Hofstadter, who I really really respect a lot
says it's not theoretically impossible for computers
to have emotions, to tell jokes,
to have feelings, to think, to understand the world.
But it's many many years in the future.
And so I don't know what's gonna happen
many many years in the future.
Time to do homework?
Should we call it, yeah?
Anyone have any more questions?
All right, perfect, so it seems that these
are all the questions that we have for tonight.
Please join me once again in thanking Professor Gary Smith.
(audience applauding)
Thank you.
