(dong ringing)
- So welcome back everyone.
That was a perfectly lovely lunch.
And we're all rushing back
because this is the fireside chat
that everyone really wants to listen to,
although it's not much of a fireside.
It's a flower side.
Yeah we don't want to do that.
I don't think we want to do that,
but I'd like to welcome
our speakers here today.
Jason Matheny is going to be interviewing
General Jack Shanahan.
I introduced Jason as the
President of Georgetown.
For those of you who are
confused, it was just me
who was confused.
He's not, but I know
Jason from when he was
Associate Deputy DNI
and Director of IARPA,
back in the day when Jason
was really driving technology,
technology development, innovation
for all of the intelligence agencies
and did just a fabulous job,
but since then, he's been
on to some bigger and better
things, in terms of
being the founding father
of the CSET, which is, I need
to read more about the CSET,
because everybody's quoting it.
You put out some amazing publications.
So, I've got to read more.
He's also a member of
the National Security
Commission on AI.
And then with us today,
is General Jack Shanahan,
who also is in a leadership role on AI
at the Department of Defense.
And so with that, I'm gonna let you
introduce General Shanahan.
Thank you.
- Great, thanks Ellen.
I'm gonna go straight into the questions.
We have an hour with General Shanahan.
I've got about 15 questions
that I wanna cover,
and then I wanna leave about 20 minutes
for open Q&A.
But first I just want to
express my deep appreciation.
I know you've got a whole
organization to run.
So, for you to take an
hour out of your day,
plus like 45 minutes each way for the commute,
if you're lucky, you're on
one of the Lime scooters
today I'm assuming.
- [General] Of course.
How'd you know that?
- Yeah, I'm an Intel guy.
- [General] I just left
it out on the courtyard
if that's okay.
- That's good.
I'm sure it's being
vandalized as we speak,
but we really appreciate
you spending the time.
So, first just as background,
can you tell us what the
JAIC is and what you do?
- Yes, first of all let me
thank you for all you've done.
People like me, latecomers to this, treat artificial intelligence as breaking overnight news, and you've been in this for a long time.
So, the work that you've
done in places like IARPA,
your work here at the CSET,
the White House Select Committee on AI,
and now of course the National
Security Commission on AI.
So, you've been setting the stage for AI
for an awful long time for the government.
So, thanks Jason for everything you're doing.
In terms of the Joint
AI Center, what is it?
Why does it exist?
For those of you who have
never heard of this project,
it won't be hard to google it.
It's Project Maven.
I was the director of that for two years, under the Undersecretary of Defense for Intelligence.
And Maven, in the
simplest way I can put it,
was a way to use artificial intelligence
specifically computer
vision machine learning,
to allow analysts to get
through volumes and volumes
of full motion video
in a much quicker way.
And it was a pathfinder project,
and it was designed to bring
in commercial technology,
find a way to get it into the department,
at speed and at scale.
Two things that we had not
been able to solve in the
Department of Defense up to that point
broadly across the board.
And we showed that it
was successful enough.
It still has a long way to go
I think to show real return on investment
but it was successful enough where
the Deputy Secretary of Defense decided
we needed an organization for the entire
Department of Defense,
not just intelligence,
whose entire mission would be
to accelerate DoD's adoption
and integration of AI to
achieve mission impact at scale.
So, the organization was officially formed
just over a year ago.
There wasn't much at the beginning,
because as is often the
case in the Pentagon,
you don't have any money,
you don't have any people.
You have a piece of paper that says
you are now an organization
that's supposed to be up
to speed very quickly.
So, the intent of the organization
was fielding and delivery
of AI capabilities
across the Department of Defense.
That was our charter,
and we can talk more
about the rest of the
implications of that decision.
- And how does that differ
then from other organizations
that are responsible for pieces
of AI within the department
like DARPA, SCO, the labs
within the different services?
- To distill it to its very essence, in a very simplistic way
of talking about this,
we are AI now.
The other organizations
you mentioned are AI next.
I think that's a helpful
breakdown of who's doing what.
It's more complicated than that, but the idea is we're focusing on taking existing capabilities and getting them into the field.
And another reason we stood up, the JAIC,
the Joint AI Center, was to
get across the technology
valley of death.
As Jason knows very well,
having worked on these projects
himself in IARPA.
Incredible technologies come out of these research labs, but the challenge is really getting them put in at scale
across any organization,
whether it's the intelligence community
or the Department of Defense.
It's not a technology problem,
it's not a people problem.
It's an engineering problem about scaling
into weapon systems or into other systems,
that in the case of
artificial intelligence,
were developed at a time before AI,
and it's very hard to do.
So that was what we're really focused on
from the very beginning.
Now the relationship we have with those organizations is such that we're working
very hard day in and day out
to find a couple of projects
that are advanced enough
in their technology development,
an acronym that won't make a
lot of sense to most of you,
but TRL, Technology Readiness Level,
say six, seven, eight or above, that's close enough to be able to transition.
Can we pull that out of DARPA?
Can we pull it out of the
Strategic Capabilities Office?
Or can we pull it out of
the Defense Innovation Unit
on the west coast who we
work very closely with,
to begin to get it to the rest
of the Department of Defense.
It really is about finding
a way to get across
that technology valley of death.
- So, starting a new
organization within the Pentagon
that's focused on the technology priority,
that's listed in the National Security Strategy
and the National Defense
Strategy can't be easy.
So, can you tell us what you've learned
from standing up such an organization
within the Pentagon?
What have been the biggest challenges?
What have been the biggest surprises?
- Yeah well it's the Big Bang Theory.
You have to create
something out of nothing,
which in the Department of
Defense is never a trivial task.
We've had to create this
organization from the beginning,
and as I said, a year ago we had maybe 10 people.
I wasn't even officially
confirmed into the position yet.
I had really no money to speak of.
We didn't have a place to work as a team.
And a year into this,
we now have almost 120 people
when we have contractors included.
We have a very healthy budget
for the next fiscal year,
which begins just in a week.
We have an entire facility, a floor of a building over in Crystal City.
So, it's a real organization.
I do get frustrated
sometimes having worked
on Project Maven for two years,
that we're not moving fast enough.
This is such an imperative for the country
that I know we need to move faster.
And yet when I stand back
and look at the forest
instead of the branches on the trees,
I realized we've actually
accomplished a lot in the past
year just getting the organization going,
but when you asked about some
of the biggest challenges,
I guess I would put it in a way that
we're trying to build a startup culture
and a startup organization,
as part of the institutional bureaucracy.
That's not easy to do.
We have to do both.
If I'm going to get funding
over the next five years,
I have to learn to speak the
language of the institution
and advocate for that money.
I don't have angel investors that I go to.
I have the Congress who
demands accountability
and I have organizations in the department
that ask me to prove why we need the money we're asking for,
but on the other hand, to really embrace AI, and I know Jack Clark and Roz Shaw will be up here after us talking about how fast commercial industry is capable of moving in this world, we have to get that startup culture right.
So, it's a fine balance
and sometimes, it's hard to get it right
and I really worry day in day out,
that the process begins to dominate product. I can't let that happen,
which is why DIU is so effective.
It's tight, it's small, it's agile.
Can we take that same approach
as we scale across the entire department?
- For the challenges, how
many of them come down to data
and data policies?
How much of it comes down
to access to compute,
access to talent, the
other kinds of resources
that we heard today?
- We spent two years building what the AI fielding pipeline looks like, and it's not any different
for the Department of Defense
than it is for any company trying
to embrace artificial intelligence.
And what we learned is what all those organizations and companies learned, and what you learned in places like IARPA: 80% of our time and resources are spent on the enabling functions.
Not on the algorithm itself.
Commercial companies do this for a living. They're very good at putting out algorithms; the most talented people in the country have been working on these, from startups to the biggest telecom, data, and cloud companies in the world.
What we're faced with, as you were suggesting, is data management being one of our core challenges: how do you find, clean, curate, and label that data, train an algorithm against it, then take that model and integrate it into a system never designed for AI.
So, I call those the bookends. In the middle you have your model training, but on either end of that you have all the other enabler functions, which turned out to be, I won't say more difficult than we expected, because we didn't know what to expect, but it matches exactly what people like Andrew Ng say: here is what's going to happen to you when you try to embark on an AI journey.
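The bookends described here, data wrangling on one end and system integration on the other, with model training in the middle, can be sketched as a toy pipeline. Every function name and threshold below is hypothetical, invented for illustration; it is not the JAIC's actual tooling.

```python
# Minimal sketch of the "bookends" idea: wrangling on one end,
# training in the middle, integration on the other.

def clean(records):
    """Wrangling bookend: drop records with missing sensor values."""
    return [r for r in records if r.get("value") is not None]

def label(records, threshold=100.0):
    """Attach a supervised label: 1 if the reading signals a fault."""
    return [{**r, "label": int(r["value"] > threshold)} for r in records]

def train(records):
    """Toy 'model': learn a decision boundary from labeled data."""
    faults = [r["value"] for r in records if r["label"] == 1]
    normals = [r["value"] for r in records if r["label"] == 0]
    # Midpoint between the two classes serves as the learned boundary.
    return (min(faults) + max(normals)) / 2

def integrate(model_threshold, reading):
    """Integration bookend: wrap the model for an existing system."""
    return "REPLACE" if reading > model_threshold else "OK"

raw = [{"value": 80.0}, {"value": None}, {"value": 120.0}, {"value": 90.0}]
model = train(label(clean(raw)))
print(integrate(model, 115.0))  # prints "REPLACE"
```

The point of the sketch is proportion: the model itself is one short function, while finding, cleaning, and labeling data and wiring the result into a legacy system make up most of the code, matching the 80% figure quoted above.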
So, not giving up in frustration, which is very easy to do, is why we stood up the JAIC, because too many people were stopping projects. They didn't know how to get through the data management problem. They didn't get through the integration and sustainment, the long-term continuous integration and continuous development of algorithms, which is a whole story by itself.
- And can you tell me about
some of the applications
that you're prioritizing
and how you picked
those as priorities to start with?
- You know, again some of
this goes back to people
like Andrew and we had advice
from a lot of different
luminaries in AI and academia
and commercial industry.
Andrew Moore in his previous role at CMU,
and a whole bunch of others,
whose message was consistent.
Start small with a
manageable, bounded problem.
Show that you can get success out of that
and then figure out how to scale.
And that was the approach
we took with Maven.
I mean, with Maven we didn't start with an artificial intelligence answer and then go out in search of a problem.
Our problem was bounded.
It was too much video for
an Intel analyst to look at.
It was excruciatingly
painful for an analyst
to sit there and look at
video for 12-13 hours a day.
So, we knew there had to be a solution.
It was in the form of computer vision from commercial industry. But in terms of the JAIC,
we started with two problems
that already were underway.
In one case, it's predictive maintenance on an H-60 helicopter.
Special Operations Command, and a particular unit under it, had had some prescience here, putting sensors on the helicopter and keeping the data that was being collected from those sensors, and then were able to join what was already underway in artificial intelligence and machine learning to do something with that data.
It was a particular problem
with the helicopters.
In certain conditions, you could get sand into the engines, and the very high temperature would potentially fuse the rotor blades.
It's a very bad situation.
Could you predict when the likelihood of that would exceed a threshold at which you might have to replace the engine?
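The thresholding question posed here can be illustrated with a hedged sketch. The temperatures, the danger limit, the trigger probability, and the simple frequency estimate below are all invented for illustration; the actual JAIC/CMU model is certainly more sophisticated.

```python
from collections import deque

DANGER_TEMP_C = 800.0   # hypothetical overheat limit
REPLACE_PROB = 0.3      # hypothetical maintenance trigger

def exceedance_probability(window):
    """Fraction of recent readings above the danger limit."""
    return sum(t > DANGER_TEMP_C for t in window) / len(window)

def monitor(readings, window_size=5):
    """Yield (reading, probability, flag) as telemetry streams in."""
    window = deque(maxlen=window_size)
    for t in readings:
        window.append(t)
        p = exceedance_probability(window)
        yield t, p, p >= REPLACE_PROB

telemetry = [650, 700, 820, 830, 700, 900]
for temp, prob, flag in monitor(telemetry):
    print(f"{temp:6.1f}C  p={prob:.2f}  {'REPLACE' if flag else 'ok'}")
```

The design choice worth noting is the sliding window: the flag is raised not on a single hot reading but when the estimated likelihood over recent history crosses the maintenance threshold, which is the shape of the question asked above.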
So, we embarked on that with
Special Operations Command
with the Army AI Task Force and with Carnegie Mellon, who are actually the ones developing the model for that.
And that has been underway
now for really the past year.
So, we've actually delivered
capabilities on that one.
We'll move to Army, Navy, and Air Force
versions of that helicopter.
And the other one which
was a natural fit for us
because Maven had started it,
is humanitarian assistance
and disaster relief.
Again two specific parts of that.
We didn't make them up.
They were being asked of us by the people on the front line.
In this case, we were looking at how you detect fire line perimeters and show a probability of where that is in real time, using full motion video. In this case, semantic segmentation is the actual artificial intelligence and machine learning piece that we're doing.
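A semantic segmentation model of this kind outputs a per-pixel mask. Extracting a perimeter from such a mask can be sketched in a few lines; this is a hypothetical, stdlib-only illustration, far simpler than a real full-motion-video pipeline.

```python
def perimeter(mask):
    """Return fire cells that touch a non-fire or out-of-frame neighbor."""
    rows, cols = len(mask), len(mask[0])
    edge = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue  # not a fire pixel
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                # A neighbor outside the frame or outside the fire
                # makes this cell part of the perimeter.
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    edge.add((r, c))
                    break
    return edge

# Toy 3x4 segmentation mask: 1 = fire pixel, 0 = background.
fire = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
]
print(sorted(perimeter(fire)))
```

On this toy mask, every fire cell except the fully surrounded interior cell (1, 2) is part of the perimeter; a real system would run this kind of boundary extraction on model output frame by frame.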
And then another one is flood assessment.
Flood damage assessment,
road obstructions,
building damage assessment.
Those had been underway in various places
but we took what was already
existing in Project Maven
and repurposed it for us in the JAIC, because it's in the continental United States.
It's not an intelligence operation.
It's defense support to civil authorities.
And the other really, really
strong benefit of that one
is there are a lot of people,
who have some reservations
about working with the
Department of Defense on AI.
Everybody wants to work with
the Department of Defense
on HADR, Humanitarian
Assistance Disaster Relief.
It's a very good one
for people to get into,
from the biggest companies to
NGOs to other international
partners like Singapore.
Well, I believe we have a Singapore representative in the audience today, and we're talking with them about some collaboration opportunities on this very subject, on HADR.
Then we've just started,
we're a couple of months
into cyber defense, whose results have yet to be determined.
And then we'll get more
into war fighting operations
over the course of the next year.
So, those are the first real three
and then the fourth one going bigger.
- Looking further out, say 10 years,
where do you think the
most important applications
of AI will be for national security?
- Every aspect of the
Department of Defense
can be improved somehow through
artificial intelligence.
The hard part, the challenge for us, is that of all the lessons we've learned, the one that I like to repeat is the importance of problem framing.
Okay, it's not earth-shattering to anybody who's done AI in commercial industry,
but spending the amount of time upfront
to really understand what AI
can and cannot do for you,
and do you even have the data to embark on that project?
And if you don't you might
choose a different answer.
It might be that some rules-based algorithm is just fine.
So, once you understand what AI can do,
then you jump into it.
I'd say right now in the near term,
the biggest return on investment
is what you would expect.
It's the sort of prosaic things, the quotidian: getting through business processes faster, right?
It's spreadsheet analysis.
It's taking volumes of data
and finding a signal deep into that noise.
Those are what happens every day.
In my view as we get
further and further out,
the reason for Project Maven: we were actually called the Algorithmic Warfare Cross-Functional Team. That's the official name given by the former Deputy Secretary of Defense, because he envisioned a world of algorithm against algorithm.
And if I look out into that future,
I think the biggest benefits
will be from something
like AI enabled mission command.
How do I bring operations and
intelligence closer together
where the products of one
are available instantly
to the other, so we no
longer have operations
and intelligence and
sensors and platforms.
We have all of that combined,
and we have humans and machines teaming in such a way that
people get to sit back and
think more about the problem
and make better and maybe
even faster decisions.
So, it's the idea of
if I look out 10 years,
I think of human-machine teaming,
I think of moving from perception
to reasoning to context
to, dare I say, knowledge,
which is a little bit hard at this point
to imagine how quickly we get there.
But then AI is infused in
everything the department is doing
and we just don't even realize
it's part of the fabric anymore.
There's a reluctance to
understand AI in the first place
and yet people have it
embedded in their personal
electronic devices.
If we get it right in the
Department of Defense,
it's ubiquitous, we just
don't even know it's there,
but it's a long way
between where we are today
and collecting really messy, dirty data, and wrangling it, munging it,
all the way to that little bit
more of a visionary future.
- In a few of the discussions this morning
the concept of AI safety,
assurance and reliability came up.
Can you tell us what the JAIC
is doing on those topics?
- As I say in a lot of settings,
when I'm asked about this part of it,
I've been in uniform for over 35 years
and I'm spending more time thinking
seriously about the safe,
lawful, and ethical use of AI
than I have at any other
point in my history.
And really it's the product of where I am
in the organization as the director of it.
We take it very seriously.
Everything we do comes with the context of: are we gonna use this AI in a way that follows the precepts the department has always had, which is, is it robust?
Is it reliable?
Do we have some element
of transparency to it?
Is it going to be used safely, lawfully,
in an ethical manner?
So, within the JAIC itself,
I have a strategic engagement
and policy team.
They are chartered with
the ethics piece of this.
So, we're working very closely
with the Defense Innovation
Board, who are just about putting the finishing touches on their AI principles for defense, which they'll deliver as a recommendation to the Secretary of Defense.
And then we're working very closely with academia and industry; a lot of big companies in industry now are putting out their own AI principles, and we're trying to absorb what works for the Department of Defense. Looking in the near term, we're bringing in someone, I'd say an ethicist, though that got a little misreported last time I said it, and it was my own fault. It'll be someone who's a technical standard/ethicist, so that as we develop the models and the algorithms together, they can look at that to make sure the process is abiding by our rules, our rules of the road as we say,
but I'm also interested, down the road, in getting some help from the outside on sort of those deeper philosophical questions.
I don't focus on them day-to-day, because of my charter to field now,
but it's clear, we have to
think carefully about this.
- So, it seems like AI, maybe even compared to a lot of other emerging technologies, has a lot of potential for misreporting in the press, and often for hype cycles in investment. What do you think the public might be getting most wrong about AI?
- This is not my original thought
but when you look back on the history
of disruptive technologies anywhere,
but let's say in the
Department of Defense,
I am firmly in the camp that we tend to be far too optimistic about the short-term advances,
and yet, we underestimate
the long term implications
and advances, and I'd
say that's where we are
right now in AI.
The hype is a little dangerous
because it's uninformed
most of the time, and sometimes it's Hollywood-driven: killer robots, Terminator, Skynet, that worst case scenario.
I can tell you I live this every day
and I don't see that worst case scenario
any time in my immediate future.
So, I think there's just,
we went a little overboard.
You've lived through the cycles of history on AI. It's here to stay, and I am optimistic in the sense that we are underestimating what it can do in the future,
and that the advances probably
will become very rapid
at some point, and everything
from data management
becomes easier and becomes
second nature to just do AI
and we'll be a little surprised by how fast it advances. What I don't know right now is what that timeframe looks like.
Is it two years?
Is it five years?
Is it 25 years?
I think it's just that with the hype, we've oversold the capabilities a little bit,
which leads to a little
pessimism and cynicism.
I do not see an AI winter on the horizon.
Not at all.
I think it's just the opposite.
I see everyday new examples of companies
and institutions figuring
out new ways to use AI.
So, I'm very optimistic, I'm
bullish about the future.
- Are there any topics
related to AI that you think
receive insufficient attention,
that you think we should
be discussing more either
in public or in the context
of national security?
- The only one I would suggest
is just an overall campaign
of education and training, so
people understand what it is
and what it's not.
I believe that it's as
important to understand
the limitations of AI,
as it is to understand
the potential and the strengths of it.
As the National Security Commission on AI
does its interim report
and then its long-term
recommendations, I'm
putting a lot of weight
on your shoulders Jason,
because I think you can come
with some recommendations
for the entire society.
This is well beyond DoD.
It's well beyond the government.
It's really our entire society.
Do we embrace AI as our future, with all its implications, good and potentially bad? Job losses, the ethical concerns about how it could be used: do we understand it?
Do we at least all start from a point of common understanding, or do we jump in with a vastly misinformed, ill-informed, or hyped understanding of what AI is all about?
So, I think if I were to pick one, it would be this idea of: can we get a sort of better understanding of what AI really is.
- Is there an American approach to AI
that you think distinguishes
either our approach or the
approach of democratic nations
or NATO allies from
those of other nations?
Is there something characteristic
about our approach?
And what are the advantages
and disadvantages
of that approach relative to others?
- Yeah, rather than say American, I would stick with the sort of democratic, open-society approach versus one that is based on authoritarian, repressive regimes. I think that's the distinction to be made.
I will say I believe we're in
a contest for the character
of the international
order in the digital age.
And it's very important
that we get this right.
Everything we do is very
open and transparent.
We talk about ethics not
just on the research side.
I talk about ethics and I'm responsible
for fielding capabilities.
I do not see that same
approach in Russia and China.
I just don't see it.
There are suggestions that the Chinese and the Russians are interested in ethics.
I do not see that coming out
of the government institutions.
I see just the opposite.
It's fielding very fast.
I don't know what kind
of rigor and discipline
they put behind their
tests and evaluation.
What sets us apart is our ability to do, or our focus on, real rigor in test and evaluation, validation and verification before we field a capability that could have lives at stake.
So, it's that idea of trust, transparency, collaboration, openness about what we're trying to do.
Now the Department of Defense, to be fair, has been criticized
for not being open and transparent
on certain things in the past.
But in this area, I think we're
actually doing fairly well,
trying to say what we're
doing and why we're doing it.
I do not see that same approach coming out
of China or Russia.
- So, thinking about countries
that might deploy AI systems
before they're fully tested,
has the Department been thinking about
how to defeat those systems
before they become
unsafe in a battlefield?
- Well you might be surprised
I won't fully answer that question.
I will say that you know
much better than I do,
but I have a new appreciation
for the fragility
and brittleness of the systems
that are in place today.
Whatever country we're talking about
'cos they're all based generally
off the same open-source
models and frameworks and standards.
So, they're not fully
hardened yet as we say.
We know we have to protect
the most important components
to the AI pipeline beginning with data.
We must protect our data.
We must protect our supply chains.
We must protect the models themselves.
So, I would say we're taking an approach that acknowledges all of those, like we've done with every technology. It's a little harder with this one, because it is so open source and it's a commercial technology.
It's not a military technology.
On the other hand,
we're equally interested
in where vulnerabilities are
that we could take advantage of.
I will say there's no difference. I think electronic warfare is a pretty good analogy: action-reaction, counteraction, counter-counteraction. Or now cyber, it's kind of the same way: offensive cyber, defensive cyber. So, I think that's a useful analogy.
It's just AI might move
a lot faster than that.
- So, the JAIC recently
posted a large number
of job openings, covering a pretty broad
range of disciplines,
of technical specialties
as well as policy specialties.
Can you tell us a little bit about
those openings especially
for folks in the room
who will be looking for jobs,
either soon after the end of this meeting
or later in the year?
- We're very close maybe
within a day or two
of AI.mil being available.
We do own the domain AI.mil.
We will have our links to
the recruitment page on there
should anybody be interested.
But as I said at the beginning, as we formed, it was not just to do product delivery. It was to be a center of excellence for AI.
So, we have job openings in everything from strategic engagement, policy, test and assessment, foreign engagement, and industry engagement, but our core shortfalls right now, just like everybody else's, our glaring shortfalls, are in the core data science and artificial intelligence expertise, and then product development and product delivery.
What I really need is anybody who has commercial experience and government experience; bringing both of those in is the best possible answer.
But we're also bringing people in, like our chief technology officer, who spent 25 years in Silicon Valley.
He's serving because he wants to give
something back to the public sector.
When he comes to the table,
he brings just a completely
different viewpoint,
and on day one, he made an impact on the organization
because he came with a commercial view
of how fast we need to be moving.
And what do product development and product delivery look like in Silicon Valley or Austin or Boston or New York, versus what they look like in the Department of Defense?
And so that's where, if I were to say what I most need: AI/ML expertise and product development expertise.
- So, about a quarter of the
audience is from industry.
What would you say is
the best way for industry
to engage with the department
and why should they
engage with the Department
on AI projects?
What's the comparative gain for them?
- Yeah we need to do better at emphasizing
the importance of collaboration
between government,
industry, and academia.
I call it that triangle.
In artificial intelligence,
it's probably more important than anything else I've seen.
In Project Maven, we were
very aggressive in going out
and seeking commercial solutions.
Even the smallest startups,
but it took a lot of
very hands-on engagement
because these small companies
and even some of the bigger companies,
were not working with
the Department of Defense
on artificial intelligence.
They didn't know how to find the Department of Defense or what we were looking for,
and they didn't know where to begin
when it came to doing
contracting and acquisition
with the Department of Defense.
We had a lot of work to do there.
We took the same approach with the JAIC: we want to work with every possible partner.
My challenge right now is
I don't need pitches.
What I need to do is say here
is our well-defined problem.
Do you have a solution for that?
So, we're doing more industry days.
Last year we did a
Maven JAIC industry day,
which was very broad.
The feedback, the criticism we had, was: could you be more specific?
So, for each of our lines of effort,
we will have more narrowly
focused industry days,
where we'll solicit from industry
and we'll have white papers,
and we'll see if those
white papers progress
to follow-on discussions
and potentially to contract.
But we're very interested
in working with everybody
from the smallest companies
up to the biggest companies
in the world.
- So, half of this audience are students.
What should they be thinking about?
As they complete their studies here,
what advice do you have for those
who might want to work
at the intersection of AI
and national security?
- As short as we are in this country on AI, ML, and various forms of artificial intelligence expertise, we do have a lot of it.
We have a lot of technical
experts in this country.
We have a lot of very bright students
coming out of universities,
places like Georgetown
that are at the top of the game in policy
and national security.
What I need, if I can find it, is a combination of people who can do both:
To bridge the gap between
the tech community
and the policy national
security community,
because what we end up with today, and I'm as guilty of it as anybody, so I'm not throwing the brick without some justification, is we have very senior people who just don't understand the artificial intelligence ecosystem and the technology.
We're trying to make
decisions, big decisions,
about where the department is going.
Same in terms of
developing future policies
in national security.
I will plagiarize and
paraphrase pretty roughly,
I think it was Sir William Francis Butler
who said the nation that
separates its scholars
from its warriors, will have
its thinking done by cowards
and its fighting done by fools.
There is this idea of the tech world
having a better understanding of what
you're actually developing
and how it could be used
for good or for bad.
And then on the other side, for the people that are making these big security and policy decisions: do you really understand what this technology can do as a force for good or as a force for bad?
It's a technology.
It could be used in either way.
So, that's what I'm looking for.
So, for somebody to come in, I need attitude and aptitude, number one and number two.
Either way, you can reverse the order
but beyond that somebody
that can bridge that gap
between the policy world
and the tech world.
And I think the panel
after me can probably talk
some more about what
that should look like.
- Great and are there
specialized fellowships
within the department for
folks who are pretty sure
that they want to spend some
time within the department?
- There are; most of those are run in other parts of the department.
What we're trying to do
is put together a plan
to have some fellowships just within the JAIC.
I did have two college
interns with us this summer.
Mostly by serendipity,
one came from Cornell.
He was a freshman going into his sophomore year, and I had my personal mic-drop moment when he left for the summer. He had an office call with me, and unsolicited he said the JAIC is a cool place to work, and I was done at that point.
(all laughing)
I wasn't expecting that
to be honest with you.
But it is a cool place to work.
We are trying to move fast
and get the right kind
of people in the door,
and we do have opportunities for people,
but it's gonna take us a little while,
because it's again starting
a new organization.
- So, I have one more
question before I turn it over
to the audience.
You have been I think unusually open
and candid and clear in the
way that you've talked about
the JAIC and I think you've
earned a lot of goodwill
from that, both within the
national security community
and from academia and industry.
What has been, what was
the reason for that?
I mean you could have opted
for a different strategy,
one that had a little bit
more of a bunker mentality.
And second, what have
been both the benefits
and the costs of pursuing that strategy?
- My ultimate rationale has been that
in this technology, emerging
technology disruptive world
that we're in today, many
of the fastest advances
are coming from the commercial industry.
It's not that they're
not happening in DoD.
Research is as good as it's
ever been in the department,
but in this place of
AI cloud technologies,
it's happening so fast
that if we're not open,
if we're not transparent,
if we don't at least talk
about what we're trying to do,
then people will make assumptions
on our behalf, and
sometimes those assumptions
will be the worst case, like the department
going straight to killer robots.
What I'm trying to do is
reset the conversation
a little bit and say AI
can be used for good.
We have a lot of people in harm's way.
Humans are very fallible.
In combat, they are particularly so.
If the friction and chaos of
war are going to make people
make bad decisions occasionally,
can we use artificial intelligence
to make better decisions,
to make more informed judgments
about what might be happening,
to reduce the potential
for civilian casualties
or collateral damage?
I'm an optimist.
I believe you can.
It will not eliminate it, never.
It's war.
Bad things are going to happen,
but there's a deterrent
effect of not wanting
to fight in the first place,
but if we have to fight,
these capabilities
can be used in the best possible way.
So, my rationale has been we
ought to be talking about it
and I don't want to shy away from it.
I'm not gonna go straight
to lethal autonomous weapons
systems, but I do want to
say we will use artificial
intelligence in our weapon systems,
for the reasons I just stated.
It's to give us a competitive advantage.
It's to save lives and help deter war
from happening in the first place.
Us, our allies and our partners.
And if we don't do that,
I think we're worse off for it.
And I'm not sure everybody agrees with me.
I will be honest.
Some people think full speed ahead.
I don't care what
commercial industry thinks.
We're just gonna go do this.
This is too important to get it wrong.
AI is critical for national security.
Table stakes are very high right now.
I think we ought to do everything
we can to win and not lose.
- So, I think we have
some microphones set up.
And while we wait for
folks to approach it,
one really quick question.
Based on what you said about
80% of the investment really
being on this enabling infrastructure,
it seems like a really good move to set up
the JAIC within the CIO office.
Was that strategic?
I mean was the understanding
from the start a lot
of this is going to be
about connecting pipes
and getting databases to
work with one another?
- Yeah very intentionally
by the Defense Department's
Chief Information Officer, Dana Deasy.
A man who came from industry
and knew from the moment
he walked in the door the
way the department needs
to move is towards digital modernization.
He says digital modernization.
I say that synonymous with
war fighting modernization.
As we move from a hardware centric world
to a software driven age,
from industrial environment
to information age,
we have to do a couple
of things simultaneously.
One is AI, cyber, cloud and C3,
command control communications.
It's those four pillars
of digital modernization
converging in such a way
that will make the difference
I believe between success and failure
in a future fight.
And so by putting it under there,
he's the one responsible for
that digital modernization.
So, rather than have it as an
appendage somewhere else,
which is an open discussion
a few years down the road,
to be honest with you.
If he were here he would tell you,
we'll see where it should
be five years from now,
but in terms of getting
the department moving
with speed, with alacrity
toward digital modernization
it's the right place to be.
- [Jason] First question.
- [Scott] Hi good afternoon general.
This is Scott Massioni
with Federal News Network.
I'm a reporter.
I understand that you're
teaming up with GSA
for some of their
centers of excellence.
So, I was just wondering if you could tell
us a little bit about the
rationale behind that.
And then secondly, what
some of the expectations are
considering you're still
a fairly new organization?
- This is just inked, as we say.
So, I don't have a lot of details beyond
that we wanted a partnership with GSA
because of their centers
of excellence concept,
including considering an
AI Center of Excellence,
but they have been very
forward-leaning, coming to us saying,
we know what you're going through,
we know what you're trying to achieve,
let us help you.
On the contracting and
acquisition side of it,
it's been incredibly helpful
because in addition to trying
to hire the right people,
our other big challenge is
contracting and acquisition
moving at the speed of agile methodology.
So, that is a big part of it
and they've offered to help us pretty much
in every single line of effort,
everything from intelligent
business automation,
what some would call robotic
process automation,
we've got a start on that,
to some other help in
contracting and acquisition,
and to our JCF, the
Joint Common Foundation,
which is that platform that
will be common to the JAIC,
a place for
data, tools, libraries,
and a DevSecOps environment.
Basically platform as a service.
So, they're offering to help
us in each one of those areas
and they have a chief of
their technology section,
who has been so supportive
of understanding
where the JAIC is trying to go
and has immediately jumped in
to offer the full support of GSA.
So, I'm very appreciative
of what they've done.
- [Scott] Thank you.
- Thank You.
- [Michael] Hello, I'm Michael Clare
with the Arms Control Association.
Jason thank you for your good questions
and General Shanahan,
thank you for your candor.
So, we in the arms control community
are concerned about the impact
of AI on nuclear stability,
and we could see on one hand that
having automated intelligence
could help with sifting
through incoming information
at a very high speed in a nuclear crisis
that it could be an advantage.
It could be a stabilizing force.
We also worry that the opposite is true
that machines and artificial intelligence
could be jammed.
It could provide misleading information.
It could accelerate the pace of events
beyond human capacity to follow,
and lead to unintended
escalation in a crisis
and lead to unintended
nuclear weapons use.
So, I ask what your thoughts are on this.
- A couple of different thoughts.
First of all you will
find no stronger proponent
of sort of integration of AI capabilities
writ large into the Department of Defense.
But there's one area where I pause,
and it has to do with
nuclear command and control.
If there are those in
the audience that read
some recent articles in War on the Rocks,
the title of one of
those was very provocative:
does America need a dead hand fueled by AI?
When I read that, my immediate
answer is no, we do not.
This is one area where you
have to be very careful,
knowing what we're doing.
With the immaturity of the technology today,
the idea is: give us a lot of time
to test and evaluate.
This is the ultimate human
decision that needs to be made,
which is in the area of
nuclear command and control.
Now there are aspects
of information support
to the entire command and control process,
where of course it's just like
the intelligence enterprise
getting through full motion video,
getting through indications and warning.
That's a different component,
but what you're getting at
is a much bigger, more
complex question of AI
and strategic deterrence.
There are people who are focusing on that,
largely in academia.
Day to day in my world
what I focus more on
is the shorter term
implications through test
and evaluation, validation
and verification.
Are we confident that what
we're gonna put in place
is robust, is resilient, is reliable.
When I say that we started
with lower consequence missions
the reason we did it that
way is to understand,
what we're getting ourselves into.
Nothing I'm talking about right now,
nothing is taking humans
completely out of the loop.
Full motion video analysts
are still having to look at screens;
they're just doing it a
little bit different way
now that they have
assistance from computer
vision technologies.
So, I realize we have
to think very carefully
on the longer-term implications
when it comes to big questions,
like nuclear strategic
deterrence and arms races.
And I'll be honest with you
and just maybe a little
bit of a digression.
I don't like the term
and I do not use the term
arms control when it comes to AI.
I think that's unhelpful when it comes
to artificial intelligence.
It's largely a commercial technology
that can be used for commercial reasons
and military reasons.
I'm much more interested, at
least at a starting point,
of international norms
and rules of behavior,
which I think is extremely important
to have those discussions.
- [Michael] Thank you very much.
- Thank you.
- [Jason] Thanks Mike.
- [Dee] Thank you, my name's Dee Young.
I just wonder, maybe we
should turn things around
to improve government productivity.
I think the internet started
from the Defense Department.
(mumbles) have some in the
National Institutes of Health,
and they have some organizations
who identify waste and fraud
and abuse in the government sector,
especially military or defense.
So, I just thought there is
some kind of revolving door
that attracts government
personnel to the private sector,
in order to build
their private sectors.
That's why maybe they
have to turn things around
to improve the integrity of government,
and appreciate those who can
contribute with merit,
rather than letting them be humiliated
so that they have to go outside.
So, I just wonder if you can
turn things around like this,
and maybe distinguish a little bit
between right and wrong,
integrity rather than corruption,
so that people who
have that kind of integrity
get that kind of appreciation for
contribution to the country
and to the society; that
will then turn things around.
And I told the DoD before--
- Miss, sorry we should get
to the end of your question.
- [Dee] My question is that DoD before,
do you have a candidate for one,
I think that should be
a good starting point,
why do you have to abolish that?
- Let me adjust the question
and maybe one application
of AI would be to address
cases of fraud and abuse.
You've talked about how we can apply AI
to a lot of back-office functions,
like contracting and finance.
Do you have any wins from that?
I mean are there cases
where we've been able
to identify others, duplicate services
or there's overcharging
on a particular service.
- Yeah there are, these kind
of initiatives are going on
individually in different
services and components.
I think I heard you mention
National Institutes of Health.
They're doing some
remarkable work over there
on using AI to get through--
- [Jason] Proposal reviews.
- Yeah proposal reviews.
It's really interesting work
and the department,
small scale though it is,
we just had one success story working
with the chief management office,
which was going through
50- to 60-year-old DoD forms
to find offensive terms.
Probably not offensive at the time,
but in retrospect: somebody
received the records
from a deceased spouse
and found some check boxes
in there with terms that are
not acceptable today.
Using in this case just
expert systems and content
filtering, writing a little
bit of our own code to go through,
we came up with a solution
in about 100 hours of work
that is saving 13,000 hours of going
through those forms manually.
It's a small one,
but it's real and it's
tangible; it makes a difference.
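The form-scrubbing effort he describes can be pictured as a small rule-based content filter. This is a minimal sketch only, assuming a simple flagged-term list; the terms and sample form below are hypothetical stand-ins, not the actual DoD forms or terms.

```python
import re

# Hypothetical flagged-term list; the real effort used a curated set
# drawn from the forms themselves.
FLAGGED_TERMS = ["housewife", "crippled"]

def flag_terms(form_text, terms=FLAGGED_TERMS):
    """Return (term, line_number) pairs for every flagged term found."""
    hits = []
    for lineno, line in enumerate(form_text.splitlines(), start=1):
        for term in terms:
            # Whole-word, case-insensitive match, so "Housewife" in a
            # checkbox label is caught but substrings are not.
            if re.search(r"\b" + re.escape(term) + r"\b", line, re.IGNORECASE):
                hits.append((term, lineno))
    return hits

# Hypothetical form text standing in for a decades-old DoD form.
form = "Occupation of spouse:\n[ ] Employed  [ ] Housewife\n[ ] Crippled or disabled"
print(flag_terms(form))  # each flagged term with the line it appears on
```

Even this simple whole-word scan, run across thousands of digitized forms, is the kind of automation that turns 13,000 manual review hours into a batch job.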
So, I think there are
cases like this all over,
every government organization
and I know you used the word integrity.
I didn't quite get the context
but I will tell you integrity
is at the basis of everything
I do every day, and I think
we could use artificial
intelligence to help find cases,
where there might have
been unnecessary bias
or prejudice or statistical anomalies.
There is a whole question
of how you train the models
in the first place that could have biases
associated with them,
but I think there are opportunities
to do what you suggest.
I'll leave it at that.
- [Jason] Thank you.
Next question.
- [Caroline] Sir, Caroline Pestel,
Office of Net Assessment, OSD.
You spoke about the importance at the JAIC
of having individuals that
are able to understand
government service as
well as the tech industry.
In your 35 years of service,
what do you feel prepared
you to lead an organization
at the intersection
of national security
and technology, and what would you advise
other senior leaders or
individuals in government
in order to educate or inform themselves
to be able to operate in the
space that you're describing?
- I wish I was a lot better
prepared than I really was.
The fact that I ran Project
Maven and had three stars,
somebody thought that
was sufficient to put me
in charge of the JAIC.
And I worked very very hard
to understand the tech world.
I can't even come close
to what Jason understands
about this world.
But it requires an open mind,
a receptivity to what's different
about the world that I grew up in.
I started off in the F-4 Phantom.
Most people don't even know
that airplane ever existed.
That's a long time ago; to go from an F-4
to artificial intelligence
is quite a journey.
I've had to reinvent my swing
a few times, you might say,
to get to this point.
But what's important
right now, in this position,
and I'm specifically talking
about a position of leadership
like either Project Maven or the JAIC,
is somebody that can
balance the institution,
knows how to get things
done in the bureaucracy,
but also protects and gives top cover
to the real disruptors.
You need disruptors right now.
You need people who are
willing to really challenge
the norms of doing business the old way.
And it's not going to be
easy for them to do so.
So, if there's anything that
I would be most proud of
in my time at Maven in addition
to the technology piece,
in addition to the culture change,
it's protecting the people
that most understand
how to make a difference in the future,
because there's a thirst for it.
The people that are coming
into the government today,
uniformed or not, are all seeing
the world unfold before them
It's not foreign to them.
What they need is top cover to do it
and some bottom-up innovation,
and then you just got to
charge at the institution
and there's going to be a lot of setbacks,
and that's part of the
reason I exist is to work
the institutional piece of it.
But for people coming up behind,
I'm very confident that
two generations from now
we ought to be looking at,
I'll use the military
term here, an O-3 or an O-4
that is being earmarked for
positions of future leadership,
and watch them very carefully,
rather than trying to retrain
us at a 30-plus year point;
it's very hard to do.
We have our own biases built in.
Find somebody that understands this
but also has the credibility
to work within the
organization, to get things moving.
It's one thing to be a disruptor
where nothing ever happens
because you just break
glass and nobody's there
to pick it up.
You need both.
I think the really important point
is to push hard,
push against the system
and push against the bureaucracy,
but you need someone to help you do that,
and it's a combination of above and below
that I think is the most effective.
- [Caroline] Thank you.
- [Ala] Hello thank you
for your talk today.
My name is Ala Latifah Tumbi.
I'm from the US Department of State.
I'm also a science fellow.
I work at the Office of Japan Affairs
and I work on science and
technology cooperation
between the US and Japan.
My question today is, working at the JAIC,
I can only imagine the types of
the newest AI technologies
you have the ability to
witness and have insights into,
and our country
benefits a lot from innovation
from foreign talent.
For example you mentioned Carnegie Mellon,
which has a large
share of their students
from foreign countries, in particular China.
Do you think there are some projects on AI
that cannot be worked on
by foreign national students,
and/or by companies which employ a lot
of foreign national students?
I was wondering this in particular,
and also, what about
countries who are known
to be our allies?
Do you think there's instances
where we cannot collaborate
or have these researchers,
foreign nationals working
on this AI technology
as it relates to our
national security interests.
I'm asking this mostly because
I'm working a lot with Japan
which is known as our ally.
And I can imagine there's instances
where we might not always
want to collaborate
on these really important
technologies in AI,
even if there are countries
that are our allies.
- Yeah it's always going to be a balance.
I'll divide the answer up
into sort of two parts.
Those that are allies and partners
that we're extremely interested
in working with on AI.
We're all at sort of these
almost nascent stages
in every country that we've talked to.
And one part of the JAIC
is strategic engagement and policy.
I believe we've had conversations
with 45 plus countries.
Now just in a very basic level.
Here's our DoD AI strategy.
Do you have an equivalent strategy?
Where are you going?
What are some projects
that we can work on?
So, the idea of having partnerships
is one of our inherent strengths
in the democratic Western societies.
That's important.
Now we have to balance that
against national security.
On the academic institution piece:
Some of the best talent
in the world in this area
comes from outside the United States.
The United States still leads the world
in artificial intelligence
research and commercial
production and delivery,
but there's such tremendous
talent coming from countries
all across the globe.
They come in and study
at our institutions.
Can we keep them?
Can we keep them and help
work together on projects?
But I do have to balance
the national security
piece of this.
The initial projects we started
with Maven and with the JAIC
were unclassified for the reason
that we were working with some companies
that didn't have people that
had security clearances,
and the data was not classified data.
It was a project that was unclassified.
That was a good starting point
but as we get into more war
fighting oriented projects,
that will become a little
bit different question.
So, it is a balance
between national security
and global cooperation.
In AI, so much of this
is already being done
in open source and commercial ways,
that it's very hard to
clamp down and restrict it.
It could be a boomerang effect
that it actually hurts
us more than it helps us.
So, I'm interested in talent
coming in from the outside
and then can we keep the
talent here and work together,
but I do have concerns
that if somebody comes in,
takes intellectual property,
brings it back to somewhere like China
and then uses it against us,
I'm not interested in that scenario.
So, we got to figure out
how to do both effectively,
and that's not easy.
- [Ala] Thank you.
- [Zoey] Good Afternoon, General Shanahan.
Thank you very much.
I'm Zoey Stanley Lachman from the RSIS
Military Transformations
Program in Singapore.
- Yes.
- [Zoey] I had two questions
if you wouldn't mind,
both related to data.
The first question is
we know that there are
significant stovepipes
in the data,
not only between services,
but also between platforms.
What are the kinds of
efforts that the JAIC
could be involved in to
reduce those stovepipes?
And the second question is
also related to alliances.
I'm wondering if we imagine a
world several years from now,
where we see some of these
data management issues
that you've described, as
a burden sharing problem,
and our allies can help structure
and vet some of the data.
How do we get there and what
are the biggest barriers
we would face?
Thanks.
- On the second part of your
question, I agree with you.
That's part of what a
collaboration might look like
is how do we share data with each other?
How do we help each other through
the data management process
and through the JAIC,
we are in initial
discussions with Singapore
on some HADR (humanitarian assistance and disaster relief) projects
that we might work on together.
So, when I say that if we were to get
into a partnership with any other country,
I think we would look at
the entire delivery pipeline
rather than just one narrow aspect,
where can we help each other out.
And that's a good one to discuss.
On the first part of your question,
which was a data
management question, right
and specifically what again?
- [Zoey] How to reduce
some of the stovepipes
between the data--
- Yeah that's very good question.
Again one of the reasons the JAIC exists
and I use the bumper sticker
of centralized direction,
common foundation,
decentralized development and experimentation.
And I use that for a reason
because I don't want it to be seen
as we're a bureaucratic entity
that's enforcing or forcing ourselves
upon all components and services.
But where we should be the most helpful
is in areas like this, where we
can get policies, procedures
and standards published and promulgated
such that it enforces some of the things
that have to be done.
We can't get to a future
of digital modernization
unless we fix some of these data access,
data quality problems and they're big.
They're not insurmountable,
and we're talking about
mostly machine learning;
reinforcement learning and
some other aspects of AI
may be a little bit different,
but in the cases of some of
the projects we have going
on right now, which are
machine learning projects,
data is central to what
we're trying to do.
So, we do have some work to do
to get that data cleaned up
and have access to it.
When we started the
predictive maintenance,
it seemed like a fairly simple problem.
We immediately ran into, well,
this part of the Army
has data over here,
this part of the Army has data over there.
How do we get those data together?
The Air Force calls it a turtle back.
The Navy calls it a horseshoe.
Can we do some natural language processing
to show that those two terms are the same
and then we're able to work with it.
That's what we've embarked on.
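That kind of cross-service term alignment can start with something far simpler than full natural language processing: a curated synonym table plus fuzzy matching for misspellings. A minimal sketch, assuming a hypothetical canonical vocabulary; the gloss "engine-access panel" is invented for illustration, not the actual part name.

```python
from difflib import get_close_matches

# Hypothetical canonical vocabulary: both service-specific names map to
# one invented gloss. A real table would be curated with maintainers and
# supplemented by statistical NLP over the maintenance logs.
CANONICAL = {
    "turtle back": "engine-access panel",  # Air Force usage
    "horseshoe": "engine-access panel",    # Navy usage
}

def normalize(term):
    """Map a raw maintenance-log term to its canonical name, tolerating typos."""
    key = term.strip().lower()
    if key in CANONICAL:
        return CANONICAL[key]
    # Fuzzy fallback for misspellings common in free-text logs.
    match = get_close_matches(key, CANONICAL.keys(), n=1, cutoff=0.8)
    return CANONICAL[match[0]] if match else key

# The two services' terms resolve to the same part, so their
# maintenance records can be joined.
assert normalize("Turtle Back") == normalize("horseshoe")
```

Once the terms resolve to one canonical name, the stovepiped maintenance datasets can be joined on that name and fed to a single predictive model.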
So, each line of effort
in my mind is going
to spark five, six, seven,
10 different lines of effort
on the enabler functions of this.
And I agree, one of the
things we must be able to do
in our role as the JAIC,
is to have some enforcement
mechanisms to get us
faster towards the future
of digital modernization.
- [Zoey] Great, thank you so much sir.
- [Moderator] We have two more questions.
I think we have time for
both if we keep it short.
- [Sydney] Sydney
Freedberg, Breaking Defense
Deputy Editor and asker of long questions.
But I'll try.
You said very convincingly
that no we are not going to do
the 1983 Matthew Broderick movie thing
and start out with AI controlling
the nukes, which is good.
What is the step by step progression from,
hey here is an AI for
this very limited, narrow,
benign function.
It's checking sand intake in engines,
or it's monitoring fire
breaks, specifically,
as opposed to any of the
100 other things that
you would be seeing in a disaster.
How do you move up from that
to things like ATLAS,
the automated target recognition system
which the Army is working on,
to things that automatically
engage the incoming missiles
to things that may actually
have some automated component
that's looking at a human target,
where human loss of life
is in fact going to happen,
if things go right?
What's the series of steps
and how long does it take
to get there in a way that you think
is ethical and responsible?
- Now let me go back
to what I said earlier
about AI now and AI next.
The position I inhabit
here as the JAIC Director,
I'm focusing on really
getting to the fielding.
So, we're always going to
start with limited, narrow
use cases, as we call them.
Say, can we take some AI capability
and put it into a small quadcopter drone
that will make it easier
to clear out a cave,
if that's what you need to do,
and prove, really prove,
that it works before
we ever get it to scaled production.
On the other side, on the AI next,
this is what we have the research arms
of the department working very hard on,
are those cases of more advanced
uses of artificial intelligence
in future weapon systems.
But throughout, whether
it's AI now or AI next,
they all rest on the foundation of policy,
such as our autonomy in
weapon systems policy.
Those aren't ever done in a vacuum.
If we're going to develop the
systems you're talking about,
they will be based on the
bedrock of policies, authorities,
how we're going to test
it, who gets to approve it
as it goes through the sequence of events.
I'm very comfortable saying our approach
even though it's emerging technology,
even though it unfolds very
quickly before our eyes,
it will still be done in a
deliberate and rigorous way.
So that we know what
we're getting when it's fielded.
So, there may be something
that we're working on now
that all of a sudden produces great value
four years from now,
but throughout between
now and those four years,
we will have a very clear understanding
of what it can do and what it can't do.
And that will be through experimentation.
That'll be through
modeling and simulation.
And that will be in war games.
We've done that with
every piece of technology
we've ever used and I don't expect this is
going to be any different.
So, that's really how I
would answer that Sydney.
- [Jason] Last question.
- [Ada] Good afternoon gentlemen.
Thank you for your time.
General thank you for your time.
So, my name is Ada Latt.
I'm a student at National
Defense University
and I'm writing my thesis on China and AI.
And as you know General,
China released in 2017
their New Generation AI Development Plan,
with multiple targets, with the end goal
of leading the world in AI by 2030.
And my question is this,
on the unclassified level,
can you share with us
what metrics that JAIC
is using to measure success?
- Our success or their success?
- [Ada] Our success.
- Our success.
It's a really good question.
So, this whole idea of metrics
and return on investment,
very difficult for AI.
We're just now laying out
what those metrics need to be,
and you could go to the
grand strategic level,
compare the United States
allies and partners
in our AI development to
where China and Russia are.
Is that a useful exercise or not?
Then you could get down to
what compute do they use,
what frameworks do they use.
What do we use?
Is there a comparative
advantage or disadvantage there?
That's one way, but internal to the JAIC,
I'm down all the way into what metrics
are we going to measure to determine
that we successfully
adopted and integrated
a capability that is an
AI enabled capability.
Ultimately did it get adopted by a user?
Did the user say I want this capability,
I didn't plan for it, originally.
Thanks for helping me fund it.
From this point on,
we'll take the funding
and we'll sustain it
and we'll make it better over time.
So, I will never be able to
determine success in the JAIC.
The success will be
determined by the user.
Are they satisfied with what
they got from us or not?
And there's all these
other measures of success
like international engagement and policy.
That's separate but in terms
of delivering capability,
there is everything from
test and evaluation metrics,
intersection over union, precision and recall,
all these words that mean something,
all the way up to, did we
give it to somebody who said,
we've got it from here,
i.e., it scaled across
the department and it's
no longer on the left side
of the technology valley of death.
So there's a lot we need to look at.
We've just laid out a
couple of pages worth
of useful metrics.
Now I have to distill those
into the most useful series
that tell us are we on track or off track.
That's really what we're
getting to right now.
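The test-and-evaluation metrics he names are concrete and computable. A minimal sketch with illustrative numbers, not JAIC data:

```python
# Intersection over union for two (x1, y1, x2, y2) bounding boxes:
# overlap area divided by the area the boxes jointly cover.
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Precision and recall from raw detection counts.
def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Illustrative numbers: a predicted box overlapping a ground-truth box,
# then batch-level counts for a detector.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
print(precision_recall(80, 20, 40))          # (0.8, about 0.667)
```

Numbers like these sit at the bottom of his stack; the top of the stack is the adoption question, whether a user takes the capability and funds it themselves.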
- [Ada] Thank you.
- Sir, thank you first for
spending this time with us.
Second, I've expressed my
gratitude to you before,
but never with a loud speaker.
So, thank you for not only
being one of the best transition
partners that we had for IARPA,
but also leading on values.
I think you really embody
the kind of humility
and candor and thoughtfulness
that we want to see in
our national leaders.
So, please join me in
thanking General Shanahan.
- Thanks Jason, thank you very much.
Thanks for the opportunity.
(audience applauding)
