(swoosh)
(upbeat music)
- Today we're gonna be having a panel.
Here are our panelists today.
The bios were made available
to you at registration
and also through your
confirmation materials,
so I won't go through those today.
But I wanted to just
tell you a little bit
about how we're gonna
moderate today's panel.
Each of the panelists
is gonna have an opportunity to come up.
They have five minutes of opening remarks,
where they're going to talk
a little bit about our topic today,
which is demonstrating a business case
for artificial intelligence.
At the completion of those
opening remarks,
we're going to be using the questions
that you submitted,
and thank you for all of you
that submitted questions.
You made my job so easy.
We'll be submitting those
questions to the panelists,
and giving them a chance to really expand
on the things that were
most important to you.
So that's how we will be
conducting today's panel.
Are we ready to start?
- Sure.
- Okay, great.
And then this.
- We're also the panel that gets to follow
Jensen's keynote this morning,
so this is going to be
a tough one to follow.
So good afternoon,
my name's Troy Mahr,
I'm Architecture Manager
for Data Analytics and Insights
Team at Rockwell Automation.
Rockwell's the world's
largest company dedicated
to industrial automation and information.
We've been talking about
the Connected Enterprise
for the last couple of years.
It deals with concepts such as
leveraging the power of data
to connect the shop floor
with the executive suite.
In my area, we're blending
a lot of these concepts,
corporate analytical
data, cloud computing,
edge analytics, and IoT,
to showcase how to pilot and
productionize specific use cases.
What I want to highlight
today is how we deal with
Rockwell's four electronics
assembly manufacturing plants
producing printed circuit boards
on 23 surface-mount technology lines.
These are high-volume operations
with each plant generating
approximately 3 billion
data points per year,
from 250,000 chip placements per hour.
It starts with the bare panel
that has solder paste applied to it.
Chips are then picked from
reels by vacuum nozzles.
Each surface-mount machine
has approximately 400 nozzles
performing this operation.
Problems come from
improper solder application
and nozzle misalignment or clogging.
Oftentimes, problems are
only found at the very end,
at electrical tests.
This can lead to product
waste, quality issues,
returns, and possibly
customer dissatisfaction.
The goal is to use machine
learning to identify patterns
that predict failures before they appear,
as early in the process as possible.
The first use case is
solder paste inspection.
We've targeted hidden
solder joint failures,
which are difficult to
troubleshoot and repair,
and are typically discovered downstream
at the end of
the assembly process,
at the electrical test station.
We trained the supervised
machine learning model
using historical parametric
solder paste inspection data.
The model is approximately 90% effective
at predicting electrical test failure.
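As an illustrative sketch only, a supervised failure-prediction model of this kind could be trained roughly as follows. The feature names, data, and failure rule below are synthetic stand-ins, not Rockwell's actual solder paste inspection schema:

```python
# Hedged sketch: train a supervised classifier on historical solder paste
# inspection (SPI) parametrics to predict downstream electrical-test failure.
# All features and labels here are synthetic, illustrative stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Columns stand in for SPI parametrics, e.g. paste volume, height, area,
# and x/y pad offsets (hypothetical names, standardized units).
X = rng.normal(size=(n, 5))
# Synthetic failure rule: extreme paste volume plus a large offset fails.
y = ((np.abs(X[:, 0]) + np.abs(X[:, 3])) > 2.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {acc:.2f}")
```

A model like this can score each inspected board in near real time, which is what enables the operator notification to pull and wash a board before any further placement work.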
This enables us to provide
near real-time feedback
to line operators and allows
notification to pull the board
and wash the solder paste off
prior to any further work,
recycling the board for re-use.
Benefits include higher
defect detection rate,
reduced errors and waste,
improved quality control.
The diagram shows the
technical architecture
including edge analytics,
like Jensen referred to this morning,
and cloud model training.
The second use case is our
pick and place process,
where we have machines that
attempt to self-correct
any alignment issues,
and alerts typically occur
when the machine goes out of alignment,
where it reaches a point where
it cannot correct itself.
This is essentially an alert
where production has
already been impacted,
or we have an outage.
The desired state, however,
is where you have actionable insight.
We created a first generation,
with alerts based on business logic
that notify the operator
when the process has started
to go out of control.
We've since created a
second generation now,
where we have machine learning
in place to identify patterns
to predict failures before they appear,
as early as possible in the process.
The diagram shows the
technical architecture,
where we have edge
analytics, cloud components,
and mobile notification
for operator action.
This is based on
hierarchical temporal memory
with unsupervised learning.
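The production system is described as hierarchical temporal memory (HTM) with unsupervised learning; as a much simpler stand-in that shows the same alert-before-outage idea, here is a rolling z-score drift detector. All names, units, and thresholds are illustrative, and this is not HTM itself:

```python
# Simplified stand-in for streaming anomaly alerting on pick-and-place data.
# The actual system described uses hierarchical temporal memory (HTM); this
# sketch substitutes a rolling z-score so the alert-before-outage idea is
# concrete and runnable with only the standard library.
import math
from collections import deque

class DriftAlert:
    def __init__(self, window=50, threshold=3.0, min_samples=10):
        self.buf = deque(maxlen=window)   # recent placement-offset readings
        self.threshold = threshold        # z-score that triggers an alert
        self.min_samples = min_samples    # warm-up before alerting

    def update(self, offset_um):
        """Feed one placement-offset reading; return True if it looks
        anomalous relative to the recent window (possible drift)."""
        anomalous = False
        if len(self.buf) >= self.min_samples:
            mean = sum(self.buf) / len(self.buf)
            var = sum((x - mean) ** 2 for x in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9  # avoid divide-by-zero
            anomalous = abs(offset_um - mean) / std > self.threshold
        self.buf.append(offset_um)
        return anomalous
```

In this sketch, a mobile notification to the operator would fire on the first True, before the machine reaches the point where it can no longer self-correct.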
So the solution we put in place is based
on the business case,
tying requirements to value:
reduced downtime, product
waste, and product returns,
with improved quality and line efficiency.
To do this we explored options.
We looked at the data,
we understood it, wrangled it,
looked at options to
improve the data flow.
We then evaluated and selected technology
looking at device level
and edge analytics.
We looked at the technical
stack that we could deploy,
including how we looked at the edge
all the way up through the cloud,
including the management plane,
and what level of data we needed to retain.
Then finally, how do
we make it actionable,
including business
value for mobile alerts?
Finally, we deployed it and refined it.
We went through multiple generations.
In closing, research shows that
AI/ML is most often deployed
to automate repetitive or manual tasks,
augmenting complex tasks,
improving customer
experience, or reducing costs.
The two use cases today are a case in point:
eliminating waste.
Going forward, research expects
the focus to shift to new revenue,
not just items impacting the bottom line.
We already have a number of business units
actively looking into this,
sometimes based on new
machine learning capabilities
built into our basic Enterprise
analytic capabilities.
Technology is keeping pace,
the data is there,
now it's a matter of what we do with it.
Thank you.
(applause)
- Thank you.
My name is Saad Sirohey.
I lead the advanced applications
and digital business
for the CT business at GE Healthcare.
I'm here representing GE Healthcare,
which is an honor for me.
GE Healthcare, of course,
is a company that produces everything
from diagnostic imaging equipment,
such as MRI machines, all the
way down to monitoring systems.
And what I'm going to
be talking about, here,
is really looking at how,
when we look at deep learning and AI,
how is that
going to impact our business?
And how is it going to provide
value to our customers?
When we look at the entire spectrum of
where our diagnostic imaging equipment
can actually have a
benefit to our customers,
it's the whole spectrum.
If you look at a patient's journey through
the diagnostic realm,
it's from order entry all the way
to the final report that is generated.
And so it's all about
the fusion of these
different data components
that are being generated.
Whether it's the information
in the medical history record
that would trigger what
type of exam or protocol
that should be done.
Whether it's the imaging
information that would be needed
to make sure that the patient
is properly positioned
in a scanner.
Whether it's how do we actually create
the correct scan acquisition protocols
for that particular patient,
for that particular purpose
that they're coming in,
and then start the scanning.
From that point on,
for the purpose the patient is in for,
what is the optimal exam,
or what's the optimal image
that can be reconstructed?
And then, of course,
after the images have been
reconstructed and presented
to the physician, can we add
more value to those images
than just presenting simple images?
So how do we add
information to those images?
And that's where the
automated, or automatic,
analysis of anomalies,
or of automation in the labeling,
can also start coming in.
So this is the spectrum
that GE Healthcare works in,
and when we look at this amount of data
that is being generated,
this is really at every
single aspect we can see
that there is opportunity on
how we can actually leverage
different components of this data
through the deep learning chain
to start having an impact,
and start producing
applications and products
that can benefit our
customers and their patients.
And this really goes
towards how do we deliver
value to the end-patient
in precision healthcare.
So how do we start
looking at that individual
as an individual,
rather than as an average?
As an example, a patient coming in
with a different heart rate,
cardiac output, or blood pressure
may need a different type of exam,
done at a different rate,
than they would on a different day.
So how do we incorporate that information
so that when we do the exam,
that it's the most optimal exam?
And then, of course,
at the efficiency level
of a hospital,
to be able to correctly
identify which patient
needs to go to which scanner
for what purpose.
And so those are areas where we feel
that the deep learning
technology can really have
a huge impact in how we
provide value to our customers.
A couple of examples that I will share,
of how we've actually used
these and incorporated them.
So last year we introduced
the first deep learning image
reconstruction algorithm,
in which the target was,
"How do we address noise in an image
"without impacting any
of the image quality?"
And we successfully
introduced that product,
it's called TrueFidelity.
This really has a benefit of
reducing dose to the patient,
and for those of you who were here before
when Jensen was talking,
that was one of the
things that he mentioned.
Another example is really about
the post-imaging aspect of it.
So how to use deep learning to actually
automatically identify
and label spine segments in an image.
And very recently, almost just yesterday,
we introduced our first x-ray AI
product for the critical care suite,
that automatically detects
pneumothorax in an x-ray image.
Now doing this,
we've actually introduced a whole platform
that really drives us to go to this level.
And this is really how
we collect the data,
the data has to be in the right form.
It's the data that's going
to drive the learning;
it's the data that's
going to be responsible
for producing the model.
And for us, we've named
this the Edison Platform,
and this is really going to drive
how we move into this new age,
with incorporating deep learning
in every aspect of what we're looking at.
That's it for me.
Thank you.
(applause)
- Good afternoon.
My name is Glenn Fung
and I am the director of
machine learning research at
American Family Insurance.
A little bit about my
background, just quickly,
I've been a machine learning
practitioner since the time
that machine learning was
a curiosity for academics.
So I got my PhD in
machine learning in 2003,
and I've been in the industry
doing machine learning since then.
And I had an opportunity
to come to Wisconsin
and start a group of machine
learning researchers,
a very focused,
hard-core machine learning group.
And so right now we are about 12,
and it gets bigger in the
summer, we have interns.
And my mission at American Family has been
to try to convince,
and to share,
what this technology
can do for the business.
So more specifically, my team
does two things in there.
So one is to always be exploring
the new technologies
that are coming out
because this field is
moving so fast, right?
For example, for technologies that
we are using today, the paper
came out six months ago.
And for natural language processing,
the models evolve so fast
that there are models beating
state of the art numbers,
literally every couple of months.
So my team tries to keep up with that,
which is not an easy thing to do nowadays.
And number two, I also lead
external relationships,
so we have relations with
universities like yours,
and we try to engage with
professors and do exploration
that is not only
interesting for the business
(computer chime drowns out words),
but is also interesting
in the machine learning point of view,
so we can get a good synergy
from the collaboration.
We have a big one with the
University of Wisconsin right now
where we committed to starting a center,
a joint UW-Am Fam data science
institute for research,
and I lead that relationship with the UW.
So for me, being in this
field I feel very lucky.
So it's moving very fast and, again,
it changed from being
something very academic
to being something we see
now in our everyday lives,
as you saw in the talk
from the keynote speaker.
So moving on,
I just want to be brief.
So I just wanted to
talk about one use case.
We're really excited about this,
my group and I.
So the motivation is a very simple one,
information retrieval.
So, insurance.
I've been in several fields,
again, so I feel like I'm a
machine learning practitioner.
I worked in healthcare, at
Siemens, for many years,
and then I went to Amazon
and did retail, e-commerce,
and now in insurance,
but I feel that the insurance domain
is one of the hardest ones,
because the rules are complicated,
and the products are not simple,
they change by state,
so it's very complicated.
And finding information
about those is a must.
So you should be able
to retrieve information quickly.
So what are the motivations here?
You see, agents always have
questions about products,
because they want to offer the right things,
depending on eligibility, right?
And traditional systems of information,
like web retrieval or
search by keywords,
sometimes don't work.
They work most of the time,
but sometimes they don't,
and that ends up creating
a lot of extra burden
on call centers, because if
you don't find the answer,
you call a call center and say,
"Hey, can you tell me the
answer to this question?"
And that's a lot of time,
and time is a lot of money
in our call centers, right?
So a couple of metrics there.
A certain percentage of our searches
don't return the desired information;
it's not a big percentage,
but it's a significant one,
and again, that's a lot of calls
that go to the call centers.
So in order to solve this,
the process was: we had to understand
exactly what the problem was,
and talk to the users.
And in our conversations we framed this
not only for the specific
problem of finding things
in these manuals, but also
as a whole we can see this
as a holistic solution
for information retrieval
and for knowledge management
inside the Enterprise.
(mumbles) I'm not going
into the technical details,
but these are more like
our guiding principles.
We wanted to have,
everybody wanted one place
to find all the information that you need
without going through...
If we had something that
could model the knowledge
that we have in the company,
it could be used not only for search,
but also for a chat bot,
for example.
The chat bot can query this knowledge,
in a general way, and can answer people.
And if we can have all the
knowledge in one place,
the systems, the policy centers,
can also query that
representational knowledge.
So when you change this
knowledge in one place,
it can update everything,
so you don't have to have all
that expensive coordination
between different places where
that information is needed.
Context is so important:
why you are asking something,
and what context you are in.
And this is also the part
where the machine learning comes in,
you have to understand
what people are asking,
and not only when they're typing it,
or asking in natural language
understanding and processing,
but also in different interfaces
like if you were talking, it
would be great to have an agent
that is listening to the calls,
kind of an AI agent, that
(mumbles) suggests searches,
even before you type them in.
Again, so, because we
only have five minutes
and I think I'm probably
already a lot over that,
I just want to quickly tell you
what the problem is, right?
Say somebody wants to find
what the maximum age for the
good student discount is.
Then you do the search,
and then, because the
search is not customized
to questions asked the way a human asks,
you don't find any answer.
So they have to type it
again a different way,
and then, finally, they find a way
that retrieves some results,
and they have a link so
they click on that link,
and then it takes them to the
page where the information is,
and then they have to read everything
to try to find their result,
but it's (mumbles) over there.
So what we're proposing is a system
where you ask the question,
and we already did this so
it's going into production,
we have a team of 30
people looking through it,
testing it right now,
getting end user feedback on
how to improve the interface and all that.
So they just type a
question, and the system,
we trained the system on Wikipedia pages,
there's a freely released data set
that has questions, and answers,
and the context.
So we train this AI system
to understand the semantics of questions
and it automatically goes and
retrieves the answer for you.
So again, it not only does that,
it also tells you where it
found the answer, automatically.
The information retrieval is painless
and it's automatic.
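The deployed system is a trained neural question-answering model; purely to illustrate the "ask once, get the answer and its source" flow being described, here is a toy bag-of-words retriever over a couple of invented manual sections (the section titles and text are hypothetical, not Am Fam's):

```python
# Toy illustration of the retrieval flow above: find the manual section
# most similar to a natural-language question, and return its source.
# The actual system is a neural QA model trained on question/answer/context
# data; this bag-of-words sketch only shows the end-user experience.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, sections):
    """Return (section_title, text) of the best-matching section."""
    q = Counter(question.lower().split())
    return max(sections.items(),
               key=lambda kv: cosine(q, Counter(kv[1].lower().split())))

# Hypothetical manual sections, invented for this sketch.
sections = {
    "Good Student Discount": "the maximum age for the good student discount is 24",
    "Roof Coverage": "hail damage to roofs is covered under section 12",
}
title, text = answer("what is the maximum age for the good student discount",
                     sections)
print(title)  # → Good Student Discount
```

The point of the real system is the same: the user asks once, in natural language, and gets both the answer and where it was found, instead of retyping keyword queries and reading whole pages.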
Thank you.
(applause)
- Most of my insurance
is with American Family,
so now I know where to go.
(laughs)
So, this is exciting.
Thank you so much for all your comments.
I think some of the things
that we heard are certainly
that there is a use case
as it pertains to reducing
errors, reducing costs,
and making things easier for the customer.
Did I miss any key things?
Is there another use case that
your company's looking at?
And I'll just start with Glenn.
Anything that your company's looking at
that you didn't talk about today
in terms of just the primary benefit
to the company?
- You mean in general, from AI?
- Yeah.
Just in general.
- Oh, we have many,
many applications that
I've been working on.
Some of them are already in production,
but some of them are very exploratory,
and we are training all these models.
For example, insurance
is a classical place
for doing analytics.
So since the beginning, we
use models to assess risk,
and all that, so on top of all that
customer interactions are very important,
and this is one example of that,
where we are interacting
not only with the customers,
but with the agents
in order to make processes more efficient.
But we're also using all these technologies
that take voice transcription
and can examine thousands of calls
and bring to the managers the calls
that we think need more attention.
That's another use case.
We're also using computer vision
to assess damage on roofs.
So nowadays, somebody
has to get a ladder
and go up on a roof, and
it takes a lot of time.
So when a storm hits a neighborhood,
you would need resources
to go out and explore that,
so we're now using drones
that you can launch from the ground;
they go over there and
examine all the damage
and automatically can say,
"Yes, this is the damage."
And the applications,
especially in insurance,
are many, many.
- Thank you.
Troy, can you talk a little bit about
what data was used to
justify the business case
for your project?
'Cause I know that
Rockwell's doing quite a bit,
but what data did you
use to be able to say
that it was going to provide
a benefit for the company?
- Sure, so for this specific use case,
which was production
oriented, we looked at waste.
We looked at, obviously,
the production costs
that went into it;
a lot of it came down to
production costs.
So what was the impact of a line outage?
That was really what drove it,
what was the impact of
potentially shipping bad product?
So a lot of it went to
the true financials of it,
but also, in the end, it came down to
what was the impact of a line being down
for a certain amount of time?
That was really, you know, foundational
to that business case.
- Saad, I want to ask
you the same question,
'cause you're a totally
different industry.
What was the data that you used
to justify the expense of looking into AI?
- I think it was really about
what's the additional benefit
that we can actually produce
and reduce the amount of
intervention that needs to happen
through human resources.
So for instance, the examples
that I was talking about,
deep learning image reconstruction,
we had done it the traditional
machine learning way,
what Jensen was basically talking about,
with the complexities of writing code.
I think we've converted that complexity
into the model learning it,
and it was basically showing the case,
having the management agree to it,
and moving fast to actually
prove that it could be done.
So those were the real
components: how do we reduce
the cost of development
and improve the speed to market?
- And then in terms of that specific case,
for the first project that you took on,
what was the overall
length of that project,
and then did it meet your expectations,
or did it not meet your expectations?
- That project, yes, it met,
and I think exceeded,
the expectations.
Even our customers' feedback
was that it exceeded
the expectations.
It was literally, from start
to end, it was within a year.
- So this would
allow you to then move on
and take on additional projects?
- Yes, and I think that was
the litmus test of how we move fast,
and it also allows us
to fail fast as well.
- Correct.
One of the things that, this
is another audience question,
is what excites you the most
about the power of AI and ML,
and its potential impact?
So I was hoping that maybe you could each
briefly answer that.
What is the thing, today
when we listened to Jensen,
it was about visioning
despite all of the things
that might be barriers,
so he's visioning about
what the future is.
He was asked about what's next,
and I think that when he said
what was next in his mind
people laughed, but he
was just being honest.
So like when you think about,
now that you know the technology,
what excites you the
most about the potential
that it can have?
Maybe if you could start first, Glenn.
- Well, for me, (mumbles)
again, I feel lucky to be
in this field right now.
And I have interacted with many professors
and I ask them, "How do you keep up?"
Every month there is a
relevant paper that comes out,
and it's hard for everyone.
And this in turn is exciting:
those guys have to
get new ideas, put them out there,
and do it super quick,
so it's a lot of pressure
on the PhD students.
But in the frame of that, the technology
now is giving us a lot of resources.
And I have been in the
field for a long time,
and that has not always been the case.
Now we have the right infrastructure,
and the necessary data, to take
advantage of these models.
And I think that's what is exciting for me,
is that I see a lot of
adoption, and I think (mumbles)
starting to ride that wave.
And that's just going to
get bigger and bigger.
And more concretely, just quickly,
we are making
really good strides
in teaching machines
to understand language,
human language; it's
amazing what it's doing.
It's like, I have a presentation
that I show people,
where we build this model
customized for insurance,
and we play a kind of Mad
Libs with the models.
So we put in a phrase and we
want the models to complete it,
and if we use, for example,
models trained by Google,
they are great, but they
don't know insurance.
So we say, "The
customer called because,"
and they say, "Oh, they wanted
to exchange their toy."
And that doesn't make any sense for us,
but if I put in my model, it would
complete it and say,
"The customer called because
they need a rental car,
"because they had an accident."
And those are the kinds of things
that are impressive
even for practitioners.
And when these models are
ready and we test them, we go, "Wow!"
We surprise ourselves,
so that's very exciting for me.
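The real comparison uses large pretrained language models fine-tuned on insurance text; as a toy illustration of why domain data changes the completions, here is a tiny bigram model trained on a few invented insurance sentences (the corpus and outputs are hypothetical):

```python
# Toy illustration of the "Mad Libs" test above: a model trained on domain
# text completes a prompt the way that domain talks. The real demo uses
# large pretrained language models fine-tuned on insurance data; this bigram
# sketch only shows why the training corpus shapes the completion.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word -> next-word transitions over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def complete(counts, prompt, n=3):
    """Greedily extend the prompt with the most likely next words."""
    words = prompt.lower().split()
    for _ in range(n):
        nxt = counts.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

# Invented "insurance domain" corpus for this sketch.
insurance_corpus = [
    "the customer called because they need a rental car",
    "the customer called because they had an accident",
    "they need a rental car because they had an accident",
]
model = train_bigrams(insurance_corpus)
print(complete(model, "the customer called because", n=4))
```

A general-purpose model trained on general text would extend the same prompt with general-purpose phrases, which is exactly the contrast the Mad Libs demo shows.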
- How about you Troy?
- I'm in what's considered corporate IT,
and we're now at the point where we've got
both the data sets, but more
importantly the tools, with AI,
that will put the power in the
hands of the business users.
And I think that's really
where we're at the cusp
of really giving them
what they've been asking
for, for a number of years.
And I think that's what's really exciting.
- I think the excitement for me is really
when we look at the entire spectrum
of where AI can start having an impact.
It's then fusing all of that
information together, right,
and really like all the
sensors and everything,
and this can literally change
the way that healthcare
can be delivered to our
customers, at the personal level.
So it's not
a standardized set of
healthcare for everybody,
but personal healthcare
for the individual.
And I think this is where, literally,
this is where AI can really
start making it happen.
Fusion of different resources
and all the data coming together.
- I think in a lot of the
interactions that I have
with companies, there's
a lot of conversation
about wanting to get started,
and how will we get started,
and looking to faculty,
and looking to consultants,
and looking to others
just to be able to say,
"How am I gonna make
use of this technology?"
But I don't know, in some
of those conversations,
that they've identified that
vision for what they're trying,
necessarily, to achieve,
in a way where it's something
that they're passionate about.
I heard that a lot from
Jensen this morning.
I hear that from all of the things
that you've already described.
It's really about
changing, making a change,
that will last a long time,
and to actually transform your industry
and the way in which your
customers interact with you.
So I think that's really exciting.
One of the things I wanted
to turn to, just briefly,
is based on your experience
with these implementations,
how might AI disrupt
the technology market?
So one of the other points of confusion
for a lot of companies we deal with,
is just the plethora of options.
There's a plethora of consultants,
they don't know who to go to.
There's a plethora of
tools, infrastructures,
architectures, and it's a
little bit overwhelming.
How do you see AI disrupting
the technology market?
- I'm going first?
- Yep.
- So I actually see a trend
of companies building teams
to do their own AI.
Again, it makes a lot of sense
to have knowledge in-house
and customize the (mumbles)
for the needs that we have
in our companies.
And that's like
the first step, and it
will disrupt the market,
because the things that come
from the vendor side
will have to be really, really
interesting and good in order to be worth
incorporating,
instead of experimenting
and building in-house.
And what I see is, that's the first step.
The second step is like
what happened at Amazon.
They formed all these
machine learning groups
that were like consultants,
internal consultants,
but the evolution is
that every business unit
ended up having its own
machine learning team.
And that's because you not
only need the machine learning
and the AI expertise, but
you need business expertise,
and that is very hard
to get from consultants.
So you need, you really need to know
what you need for your
specific use case, all right?
So I think the evolution is
that every main business
unit in big companies
is gonna have its own
machine learning team
inside.
- So, start small,
have a defined scope.
In the two use cases I showed,
which were still based
on one assembly line,
we actually pulled in the
business domain expertise
from our advanced manufacturing
engineering group,
and that's what made it successful.
But it's still a small group,
small defined scope, and
don't fear failing fast.
You really need to be able to
call it if it doesn't work.
And have iterations.
Go through that defined set of scope,
call it successful,
call it a failure if you need to,
and then get to your next iteration.
And recognize that what you're doing now
will be different nine
months, 10 months, from now.
And then recognize that
something's gonna change,
the technology or the
business requirement.
- Saad.
- I echo the sentiment
that you really need
the subject matter experts
to be embedded with the new
deep learning skill sets.
It should be used as a tool
to further what the business
plan is for the business.
And I also agree that you
need to show quick response
and really fail fast.
This is one of those opportunities
in which you can accelerate
the growth by just learning
where you can fail fast.
The other key component of this is that,
I mentioned before also,
but the data and the right type of data
is going to be the key thing.
So thinking has to start shifting.
When we're talking about
what we need to do in a development,
it needs to shift from
thinking about what algorithms I need,
to what is the right data
and what's the right type of information
that is going to be needed
to train those models,
and how we get that right.
So we'll shift from one set of expertise
to another set of expertise
in data curation and stuff.
- Saad, I want to follow
up on that a little bit.
When you got started,
and you needed to start
pulling together these
interdisciplinary teams,
what did you have to do
to upskill your teams
to be able to do this well?
- So two components.
One was we really needed to
inject some data scientists
with deep learning expertise.
So we had to inject that.
So we added some data
scientists in that role,
but having data scientists in like a silo
would not help because they
didn't know what was the problem
that we needed to address.
So they needed to work
together with the expertise.
So that's the way that we've built it,
and we've been growing that expertise
within the organization,
by just making sure that
it's not working in a silo,
all on its own, but
it's actually embedded
with the expertise in the business.
- So Glenn, one of the
questions I would ask too is,
now we're here at a higher
education institution,
and I know you work with universities,
you mentioned in your opening remarks.
What, for all the faculty,
and administrators,
and executive leadership at universities,
what would you say needs to change
in order for graduates
to be better prepared
for the future of work?
- I think
I'm a big proponent of
academic collaborations with industry.
I think that's the way,
because then the process
of assimilating talent
that comes from institutions like yours,
and come to work in machine learning teams
is not a disruptive process.
It's just a continuous process
that starts with internships,
so I mean internships,
projects with the professors,
and then, in the same way we learn,
the companies learn from the experts
what things are out there,
and what the
students are learning.
And in the same way, the professors learn
what the needs of the industry are,
in order to provide a better education
and prepare the students
for joining the workforce
later on in their careers.
- And Troy, you know, for
any of the technologists
that are in the room, if
you had one piece of advice,
they have an opportunity to
upskill their own talents,
start learning things,
what would be the one thing
they could start doing
to make themselves more valuable
to the employers that they serve?
- Recognize there's a business
requirement to it as well.
So it's not just about the
technology, learn the business.
- Learn the business side.
- Yeah.
- Very good.
As you go forward, if you had one thing
that you'd like to be able
to see your company advance
with AI and ML that is
not being looked at today,
but if you just had your vision
of what you would like your company
to be looking at in the
future, what would that be?
Saad, I'd like to start with you.
- I think it's the speed,
it's the risk and speed,
and the ability to actually just go in
and start doing things much faster.
I think that's,
traditionally,
we've been extremely risk averse,
and that's the one area where I feel that
that's an area that we can leverage
deep learning and data analytics,
to push through that barrier
to start taking more risks,
but to get to a solution faster.
That would be one of my...
- Troy.
- Not to repeat that,
but also looking at the top-line
opportunities: revenue.
- For me, there are two main things.
One is that this process of
assimilating machine-learning thinking
is not only for scientists
and data scientists;
it is for the whole company, I think.
So things are happening.
When I joined American Family,
I used to tell people that at Amazon,
I had managers coming
to my team all the time
and telling me,
"Hey, I have a machine
learning problem for you."
And for me, that's the best case scenario.
When people start thinking about
what this technology can do for you,
'cause you are the expert,
you have the problem,
so you can go a step further and
bridge the gap between the
technology and the business case,
that will help (mumbles).
So educating
and getting people to think about
what this technology can
do for you is number one.
And number two, and that's
happening in my company,
I wish, you know, we'd get there
faster, but it's happening,
is being able to
deploy models faster and more easily.
And I think that's a
problem for every company
that I know of.
I don't know any company that says,
"Oh, this is super easy for us."
So there are a lot of
complications in getting the data,
there are privacy concerns,
ownership of the data is always difficult,
security, and being able
to get from the prototypes
that the machine learning
teams build, to production,
is always, you know,
not as easy as it should be.
And I think getting there, for me,
is paramount, very important.
- That was one of the good questions
that came from the audience.
I think this one came from a student,
which was, just to ask,
how is managing
a non-AI project
different from an AI project?
And I know a lot of companies that moved
to Agile software deployment,
they're moving more quickly,
they're doing things on sprints,
why is AI different?
Saad.
Is it different?
- I mean, as far as
the inputs and outputs,
so the requirements
for the project or the product
to be successful, that remains the same.
I don't see anything different
about injecting AI; it's
basically, like I said,
replacing some of the complexities
that we normally had to handle
with some other skill set,
so in the end, it doesn't change
how you manage the project;
the expectation is this
project would go faster.
- It aligns with Agile
methodologies very well.
- Glenn.
- Yeah, I agree.
It has many similarities.
The one difference, I think,
is the dependency on data.
I think that makes them
a little bit different.
And again, when you spend
time building these models,
a lot of the time, people say 80%,
I don't know if that's the right number,
goes into getting the data together,
and not only getting it
together to build your model,
but also being able to make
the data available to the model
when it needs it, right?
So all the data pipeline makes
it a little bit different
from other projects, right,
that you need this constant
data, and then you need,
what to do with the outputs of your model.
And again, the whole infrastructure,
if we draw a big ball around a project
related to machine learning,
where machine learning is the core,
the machine learning part is like
a really small ball in the
middle of all these (mumbles).
So again, like 90% is similar,
but it has this really tricky component
that goes in the middle, and
that is kind of different.
- Data wrangling is really important.
- There was one question
that asked, specifically,
and I know this is probably
more for the small to
medium-sized companies,
are we able to move forward with AI
if we don't have a data
scientist on staff?
What would be your answer?
Glenn.
- I would say no.
(laughs)
For me it's a no.
- It would be hard.
- Yeah, there are many platforms
that promise one button
push: give me the data
and I'll build a model for you.
The problem with that is that
there's always going to be
dangerous things that can happen,
and I don't know if you plan
to touch into that later,
but one important thing,
as far as modeling goes,
I mean this is a hot
topic right now, right,
is that there should always be somebody
who is responsible for
what this model is doing,
and if there is no data scientist,
nobody's responsible.
So one other thing is,
we see it in the (mumbles)
and the different places
where all these models
are coming out being homophobic or racist,
and then we hear the projects saying,
"No, that was the model."
But it wasn't the model;
the responsibility should lie with the
people who built the model, right?
We don't want to have this
technology build a future
like, for example,
Terminator, like Skynet,
where it takes over: "Oh,
that was Skynet who did it."
(laughs)
So no, we have to take responsibility
and (mumbles) aware of what we build.
So if there is no data scientist,
who can be accountable for
building that model, right?
- Troy, did you want to add anything?
- I agree.
I mean it gets hard.
You're hearing the term
citizen data scientist now,
and stuff like that.
Ultimately, somebody
needs to be responsible.
I mean you can go out
and get a consultant,
or something like that, to help you,
but there needs to be
some level of involvement
and some level of knowledge
brought to the process.
- Saad, do you think you can
do it without a data scientist?
- No.
I think the key area
here is really going to be
when we start relying on deep
learning and these networks,
and the data is really critical,
but the data is critical only if it represents
everything that the
system is gonna see.
So it has to be unbiased,
and I think that's where
the data scientists
need to ensure that this
really is an unbiased sample,
and the model is properly trained.
'Cause if you don't do that,
you can go down rabbit holes
and then come up with
answers that are wrong.
- As we close today, I want
to give you an opportunity.
You've heard each other's presentations,
you've heard each other's answers.
If you have one thing that
maybe didn't get touched on
that you think is really, really important
about demonstrating a
business case for AI,
what would that be?
- Glenn, if you want to start.
- Well, I think I agree
with what the fellow panelists have said.
I think connecting the value of the model
that you're gonna build,
'cause a lot of these
things can be really cool,
and having a clear...
For example, for me as a
researcher in machine learning,
I always have to have a cable somewhere
so I don't fly off into
blue sky and do stuff
that nobody cares about.
So being able to connect that research
to actual value from the business
is very important, to me.
So I think that's something for everyone
who is doing data science:
'cause it's so cool
and so exciting, we can get
distracted by all the things
that are going on,
and then not do things that provide value.
So I think there has to be a good balance
between exploration
and providing value to the business.
- Building on that, tie it to value,
but start small and then iterate.
Fail fast, but get started, and get going.
Don't try to do the
whole thing all at once
because through refinement,
it's gonna be a better process.
- I agree with both,
and the only other thing that I would add
is to be able to share the
findings and the failures,
so that there are other instances
where the same kind of
things that you're doing
can actually have an impact in
other areas in the business.
And that's one of the things:
you can't just work in
kind of a black box,
so you need to be able to
share and communicate that.
- I appreciate that.
I also sit as one of the members
on our chartering council,
and we've had a lot of conversations
about this space, and what
companies are willing to share.
And I think one of the things
that we're really trying to do
is make sure that we can share knowledge
to lift everyone else up in the region,
and their ability to apply
AI in a reasonable way,
so that we're creating business value
not just for a single company,
but for the region as a whole.
And so that's one thing
that we have run into:
we have difficulty
in getting companies to share,
because they're just sort
of getting started.
And we really appreciate those companies
that are a little bit
further in their journey,
that can share those
experiences with others.
One of the things that
we've talked a lot about
with the chartering council is creating
a community of practice at MSOE
around the application
of AI, machine learning,
deep learning, and related technologies.
And so for those of you in
the room that registered
for today, we hope that you
will be part of that community
of practice, as we go forward.
We're gonna wrap up this panel.
The reason we didn't do
questions from the audience
is just for efficiency's sake.
We took those questions in advance,
and all of the questions
that were asked today
were asked by our audience members.
If you have any additional questions,
I'm sure that they'd be happy
to answer them afterwards,
but we're going to transition
to the second panel,
which will start at 2:10.
So let's give a round of applause
for all of our panelists.
(applause)
