- [Georgi] Let me introduce our first speaker. He's the CEO of Musashi AI, an Israeli-Japanese partnership and the world's first employment agency for industrial robots, which I think sounds really cool.
And I had a chat with
him earlier about that,
and I'm sure he has a lot
of interesting insights.
So the company develops robotic quality control inspectors for manufacturing clients, as well as the central brain for navigation and control of autonomous mobile robots.
Onn himself has over 15 years of experience in senior management and entrepreneurship.
Prior to Musashi, he was the CEO of Rioglass Solar Systems, which he led from inception to becoming the world's largest developer and supplier of receivers used in solar thermal power plants. And prior to Rioglass, he held various senior management roles at tech companies such as Siemens and Cisco. He was at Amdocs as a delivery manager, and he was also delivering services at British Telecom.
Onn's entrepreneurial
experience also includes
co-founding JT Freebie, a price comparison platform for duty-free goods, and (murmurs), a medical devices company fighting the transmission of infectious diseases in hospitals.
He began his career as
a cyber security analyst
for the Israeli government
after his military service there
in the (murmurs) Forces.
He graduated from Booth (murmurs) in the executive program, has extensive international experience, and obviously an MBA from Booth.
So very happy to have
him here with us today.
Ben Ziomek is part of the full-time 2019 program, so just last year. He's the CTO of Actuate, a New York-based AI security startup that builds computer vision software to turn any camera into a smart camera.
He started his career at Microsoft where
he led teams of engineers
and data scientists
leveraging AI to identify high
potential startups globally,
driving nine figures of cloud revenue.
Ben also has experience working as an AI consultant in Chicago, San Fran and Tel Aviv, and as a VC investing in AI and gaming startups. Also quite an interesting background,
and he will talk a little
bit about his company as well
and what they do, but obviously both of their companies face a lot of ethical issues that they're dealing with, and that will be quite interesting.
So Ben has also recently been recognized on the "2020 Forbes 30 Under 30" list for his work at Actuate, and I'm very happy to have him here as well.
With both of them, I'm hoping for a very fruitful discussion, which I hope you guys can help us with by asking questions in the Q&A. So thank you both for joining, and Onn, if you don't mind,
if you can share a little
bit about your company,
your experience and just get us going.
- Thanks so much Georgi
for the introduction,
and I'd like to take the
opportunity to thank you,
Heather, Jenny and everyone else at Booth
for organizing this event and
obviously for inviting me.
So I'm really humbled by the opportunity
and grateful for the invitation.
Musashi AI is basically the world's first
employment agency for industrial robots.
We are a partnership, an Israeli-Japanese partnership
that undertook quite an ambitious mission
to unlock the workforce
transformation of this new era,
of hyper-connectivity and AI,
which we are actually entering.
We fuse innovative Israeli technology
with Japanese industrial expertise,
and we develop, train and deploy an AI-based robotic workforce. Our partner is the Japanese corporation Musashi Seimitsu, a Honda Motor subsidiary and a global tier-one auto parts manufacturer.
So actually the collaboration we have with them gives us not only immediate access to pilot our robotic workforce on real manufacturing floors, but also direct access to the market, starting with their 33 factories worldwide.
Now, to give you a little bit of background on where we come from, I'll share with you some interesting facts that I was actually familiar with in my previous role as the CEO of Rioglass, but not to the magnitude that things really are.
So here's the thing: about 40% of the global manufacturing workforce is not engaged in production or in material processing. Roughly 20% perform visual quality control inspection of components, which is most common in the automotive, aerospace and some other industries. And another 20% are engaged in material handling, either driving forklifts or pushing carts.
Now, this is pretty incredible. When you think about it, roughly 40% of the labor force in the average manufacturing company is not really engaged in producing goods. And the flip side is that this 40% is engaged in super tedious, monotonous, rigorous jobs with absolutely no satisfaction or sense of self-fulfillment, and the productivity and cost-effectiveness of the employees doing these jobs is questionable.
So we basically decided to concentrate
on this long tail of
underutilized workforce
and develop two products. The first is an AI-based visual quality control inspector, as you mentioned; we are inspecting surface defects in gears, bearings and other manufactured parts, which require 100% quality inspection as a result of the super high cost of non-conformity.
The second is an ADAS, an advanced driver assistance system, and a central navigation and control system for autonomous mobile robots. These are used mostly in logistics and material handling, cleaning, security and many other applications, one of the booming industries of the last couple of years.
Our business model is OpEx based, as opposed to CapEx based; we're not selling capital equipment, we're making our robots available on a pay-per-use model, robots as a service.
And in terms of technology, I can say that we are using edge computing, deep learning and advanced image and video processing to build and train robots that perform tasks faster and more effectively than existing solutions.
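(To make the inspection pipeline Onn describes a little more concrete, here is a minimal illustrative sketch of the decision layer of an automated visual quality-control inspector. Everything here is hypothetical, not Musashi's actual system; in a real deployment the per-region defect scores would come from a deep-learning model running on an edge device.)

```python
# Illustrative decision layer for an automated visual QC inspector.
# In a real system the per-region "defect scores" would come from a
# deep-learning model on an edge device; here they are plain numbers
# so the logic is self-contained. Thresholds are made up.

DEFECT_THRESHOLD = 0.85   # hypothetical model-confidence cut-off
REVIEW_THRESHOLD = 0.60   # below "defect", but worth a human look

def inspect_part(part_id, region_scores):
    """Classify one part given defect scores per surface region."""
    worst = max(region_scores.values())
    if worst >= DEFECT_THRESHOLD:
        verdict = "reject"
    elif worst >= REVIEW_THRESHOLD:
        verdict = "human-review"   # humans and robots side by side
    else:
        verdict = "accept"
    flagged = sorted(r for r, s in region_scores.items()
                     if s >= REVIEW_THRESHOLD)
    return {"part": part_id, "verdict": verdict, "flagged_regions": flagged}

# 100% inspection: every manufactured part passes through the inspector.
result = inspect_part("gear-0042",
                      {"tooth-3": 0.91, "bore": 0.12, "face": 0.07})
```

The point of the split thresholds is the workforce story above: only genuinely ambiguous parts are routed to a human, while the tedious bulk of the stream is handled automatically.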
Just maybe to say something
about our customers,
our customers are naturally
automotive companies,
logistics companies, and
other manufacturing companies.
So this is, in a nutshell, who we are.
- [Georgi] Thanks Onn.
Ben, if you don't mind also sharing with us a little bit about your work, and actually what kind of issues (murmurs) you've been dealing with in the last couple of years.
- Yeah, absolutely.
Thank you so much, Georgi,
And also thank you so much to everybody
who's here for the presentation,
this is a really fantastic showing.
I don't think I've seen such a high conversion rate from sign-ups to attendees in any of the other webinars I've done.
So Booth is absolutely a great community.
In a nutshell, Actuate is a computer vision startup that turns any camera into a smart camera for security, building management and defense applications. That's gotten a little bit broader recently, and the reason for that is that Sonny Tai, who was a classmate of Georgi's in '15,
started the company originally because he grew up in South Africa, where his family was impacted by gun violence, and he always wanted to do something to make it easier for organizations to protect themselves against gun violence, in this case by using AI to automatically detect weapons in security camera feeds.
And so the reason I'm here instead of Sonny is that I got involved in 2018 because, while working as a VC, I thought that maybe I wanted to jump back into the startup space, and I'd spent a lot of time earlier in my career thinking about the privacy aspects of AI. And so when I heard the idea that's now become Actuate, which is using video surveillance and AI analytics, which are very scary, very dystopian concepts, but applying them in a way where you can respect privacy, eliminate bias and be about as compliant as any security camera system can possibly be, it got me really, really excited.
So I was thrilled to be
invited to this talk because
privacy and bias around AI is really
what I've made the focus of my career over
the past few years.
And it's something I'm
extremely passionate about.
So diving into that a little bit: what exactly we do is connect to existing security camera systems and monitor them for threats or patterns of behavior that could be interpreted as problems for building managers.
Critically, we're not
looking at any individuals,
we don't do facial recognition
and we don't synchronize our database
with any type of external personally
identifiable information.
Functionally, we're looking for a weapon as an object or a person as an object, and then we analyze different patterns of behavior of people or groups of people, such as social distancing issues, crowds, or just general people counting and people flow, to help business managers and security teams understand how people are using their space and keep their organizations safe, especially in this new normal we find ourselves in with coronavirus.
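(As a rough illustration of the anonymous-object approach Ben describes, analytics such as people counting and social-distancing checks can operate purely on detector output, class labels and positions, with no identity attached. The detection format and numbers below are invented for this sketch, not Actuate's actual API.)

```python
import math

# Illustrative behaviour analytics over anonymous detections.
# A (hypothetical) detector emits only class labels and box centres
# in metres: no faces, no identities, no external PII is ever joined.

def count_people(detections):
    return sum(1 for d in detections if d["class"] == "person")

def distancing_violations(detections, min_metres=2.0):
    """Index pairs of people closer than the distancing minimum."""
    people = [d["centre"] for d in detections if d["class"] == "person"]
    pairs = []
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            (x1, y1), (x2, y2) = people[i], people[j]
            if math.hypot(x2 - x1, y2 - y1) < min_metres:
                pairs.append((i, j))
    return pairs

frame = [
    {"class": "person",   "centre": (0.0, 0.0)},
    {"class": "person",   "centre": (1.0, 0.5)},  # too close to the first
    {"class": "person",   "centre": (8.0, 8.0)},
    {"class": "backpack", "centre": (2.0, 2.0)},
]
```

Counts, flows and violation pairs are all that leaves the frame; who those people are never enters the system.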
I think the critical thing
on top of this is that,
we're not really providing
brand new capabilities
to the market.
There have been solutions you've been able to buy for 10-plus years where you install specific hardware to detect gunshots, or specific sensors to count people and measure density. The real nuance is that we've come in and said: you've already invested in security camera systems, and by using deep learning and computer vision, and because we're the best in the world at identifying specific classes of objects in the highly complex scenes that you get in security camera footage, we can give you comparable or better accuracy to hardware-based solutions without having to install anything else on-site.
So we're actually one
of the only companies
in the broader security and defense space
that is a hundred percent
a software solution,
because everybody else
wants to come and screw
some new sensor to the ceiling.
And we find those are the key differentiators that we have: we're super accurate, because we're the best at what we do; we're software only, so we're easy to install; and critically, we're really focused on privacy, bias and compliance.
I think we're one of the only AI companies
where you can go on our website
and the first link at
the top is actually our
policy around bias and privacy,
and that's something we're
really proud of at Actuate.
- [Georgi] Thanks Ben, and that's actually very interesting, because you're talking about the privacy issues that you face and their prominent display on your website. But can both of you, maybe starting with Ben, talk a little bit about the time you spent thinking about the potential ethical problems of your AI technology? Again, Ben, you mentioned software only, so in that context you've started talking about it a little, but if you can talk a bit more about the time you spent thinking about the potential ethical problems, that would be lovely.
- Yeah, absolutely, thank you, Georgi.
So Onn and I had a chance to sync up last week, and I think the structure I presented of how we think about ethical problems is that there are really three buckets when it comes to AI and thinking about ethical, privacy, whatever issues.
The first one, is really
about job displacement.
So if you're gonna be
automating somebody's task,
what does that mean for their employment?
And I know Onn's company has done some really great and powerful thinking about this. On our end, while we do think about how we can lower the workload of security staff, it tends to be much more of a cloud computing scenario: while theoretically we're getting rid of low-level tasks, in reality security budgets don't shrink.
Those people go back to doing
what they were hired to do,
which is keeping the building secure
versus going through video footage,
which is a super low value activity.
So luckily, after a lot of conversations, that's not something we feel is a really strong ethical problem in our business. But that leaves the second bucket, which is privacy, because if you're doing facial recognition, if you're identifying individuals in any way, even if you keep that data secure, you're still open to subpoenas, you're still open to government requests, you're still open to lawsuits.
And so we've constructed our
system from the ground up
in a specific way that
doesn't have any private
information in the system whatsoever.
The one possible exception to this, of course, is images of people.
Because if we detect a weapon,
then we're gonna have to show
that image to our customer,
and we're gonna have to store that image
because otherwise the system
doesn't work very well.
But even in this area, what we've found, and what our policy is, is that unless it's a life-safety risk where you really do need a backup of that video footage, we've actually moved in a direction where we don't store images of individuals at all in our UI.
We're just showing you
high level information,
we're showing you flows,
we're showing you alerts,
we're showing you heat
maps and trend lines.
And then if you really
wanna go see that video,
we can give you the timestamps
and you can go back in your own system,
but that's not something
that we are doing.
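(The timestamps-instead-of-footage policy Ben outlines amounts to persisting an alert record that carries no image payload, only enough metadata to locate the moment in the customer's own video system. The schema below is a hypothetical sketch, not Actuate's actual data model.)

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative privacy-minimal alert record: keep only what is needed
# to find the moment in the customer's own video system (camera, time
# window, alert type). No frames, no crops of people, no PII.

@dataclass
class Alert:
    camera_id: str
    alert_type: str           # e.g. "crowding"; weapon alerts are life-safety
    start: datetime
    end: datetime
    # Only life-safety alerts may reference stored footage at all.
    footage_retained: bool = False

alert = Alert(
    camera_id="lobby-cam-2",
    alert_type="crowding",
    start=datetime(2020, 6, 1, 14, 3, tzinfo=timezone.utc),
    end=datetime(2020, 6, 1, 14, 5, tzinfo=timezone.utc),
)
record = asdict(alert)        # what actually gets persisted
```

Because the record holds no image data, a subpoena or breach of this store exposes timestamps and camera names, not pictures of people.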
And similarly, on the bias side, I'm sure that most people here have heard about the issues that systems like Amazon Rekognition have had doing facial recognition on people from different ethnic backgrounds.
And our approach there was also to focus on building a solution that just looks at people as people; we're not actually analyzing what somebody looks like, we're not analyzing what their background is, and so that's allowed us to really sidestep a lot of the bias issues and make sure that we're doing something that is very robust from day one.
But even here, I mean,
you have to be very focused on nuance.
My company had a big internal
debate over the last few weeks
with a lot of the protests
that we've seen nationally,
where we actually came to the
conclusion that we need to
accelerate the process of
not showing specific images,
because even if our system is unbiased
and we show an image of
somebody that was detected
doing something,
the user's reaction to
that could result in bias,
even if the system itself is not biased.
And then secondly, even though we're not analyzing individuals, it turns out that when you're doing things like mask compliance detection, which we're hoping to do with the City of Chicago soon, white people wearing white masks and black people wearing black masks are pretty difficult.
And so it's even those edge cases,
which are seemingly really
minor that we have to make sure
we do really rigorous testing on,
to ensure that our system is unbiased
and completely compliant
through and through.
So that was kind of a long answer,
but I'm thrilled to hear what
Onn has to say on the topic.
- So Georgi, you basically asked whether or not we thought about the ethical problems. And I think we absolutely recognize the potential ethical problems of our business.
After all we are an
employment agency for robots.
And I think when you declare you're
an employment agency for robots,
you can't avoid the criticism
or at least the questions.
I think, reflecting back, it probably all started more than a year ago, when I met here in Israel with the second largest placement agency in Japan. They actually sent a delegation, quite a big one, to Israel to scout for AI technologies. When you think about it, what's a recruitment agency? Why the hell are they interested in AI?
So when I met them, they told me something quite incredible, which probably only the Japanese could think of: they acknowledged the fact that if they do not do something about their business model, then within 10 years from now they'll become obsolete. So this was like a eureka moment for us, and that's how we started developing this idea and this concept of an employment agency for robots.
And I really think that the
introduction of smart robotics
can and will at the end of the day,
optimize the utilization of human labor
and increase employee self fulfillment.
It's clear that robots can now undertake the repetitive, rigorous but essential tasks, freeing us human employees to do more complex and engaging work, where we have a distinct advantage over a machine. All this has the clear benefit of increased productivity at lower operating costs.
But then the unavoidable question is
what about the people?
So I think that if you're
working like a robot,
you will most probably be
replaced by one eventually.
And it's not the employee's fault; throughout history, we humans have built artificial corporate structures that have forced us to be something we're not, and to do things we're not passionate about and not supposed to do. With the advancements of technology, however, I think this can now change.
I think employees can
now be engaged in more
self fulfilling and rewarding activities
where they can put into practice
their distinct, extraordinary
human capacities,
by the way, we all have
extraordinary human capacities,
that's why we're human.
We just need to treat people as what they ought to be, and not what they are. This is actually borrowed, by the way; I didn't invent it.
And so, in order to put things into perspective, we are in contact with recruitment and placement agencies, mainly in Japan, though we're gonna take it worldwide, who find our business model very exciting. And the plan is to build reassignment programs for employees, shifting them from the monotonous jobs in which they add no value to jobs where they can really make a difference and leave their mark.
So we actually don't see
ourselves as part of the problem,
but part of the solution.
At the end of the day I think that AI,
could be a great catalyst
for a new world of
opportunities for mankind,
to find meaning and to explore
our unique human capacities.
So that's how we treat the ethics issue.
- [Georgi] Thanks for that, I truly appreciate it. Personally, in my first company out of university some 15 years ago, I was doing some paper binding, and when we automated it, that was one of the best uses of my time. So I fully understand what you're describing here.
In line with that, you said there are some programs to reassign people in Japan right now. Do you have any examples of the reassignment of these people, or any stories about it? I think one of the issues people face is that when they get displaced by automation, they usually get retrained for jobs that are far more likely to be automated soon, and there's the stress of moving on from one job to another, which is the danger.
- So I think maybe the underlying question here is: will robots actually replace us at the end of the day? Because that's what people are afraid of. And you asked what kind of new jobs these employees can do? This is a very, very interesting question, actually. And I wanna give you maybe a little bit of background, or a new angle to look at things, if I may.
So we're all familiar with the Stanford-Binet IQ test. This invention, which has served for years as the standard process to measure intelligence, is in my mind at least responsible for one of the most unfortunate and biased concepts in modern education. And I say that because the IQ test assumes not only that intelligence is constant, something you're born with and cannot develop over time, but also that it can be measured with a single test.
Now, studies made over the years have shown, however, that under certain circumstances people can actually improve their IQ score. And I think this raises some disturbing questions about whether intelligence can be quantified to begin with, and about the way we address intelligence altogether.
Now, standardized tests such as the SAT and GMAT are still widely used, as you know, by the global education system, including by this prestigious academic institution; each of us had to take the GMAT, unfortunately. That was a nightmare, I have to say, from my personal perspective.
So these tests presumably determine people's professional careers and sometimes their entire future. But they measure only a very narrow aspect of our intelligence, the one related to the ability to exercise certain quantitative and verbal thinking, which indeed measures certain aspects of intelligence, but certainly not all of it.
And I think that even psychologists today acknowledge that intelligence cannot be measured by a single test and that it can actually come in various shapes and forms, and that the traditional definition of intellect is limited to a very narrow aspect of the overall human capacity.
Why do I tell you all this?
Because I believe that understanding
that intelligence is not
limited to only quantitative
and verbal capabilities is
important in order to begin understanding
how AI could impact our lives.
Because with the help of AI,
I think that humans will be able to pursue
their natural talents without prejudice
and make a living out of it.
There's a saying that every Jewish mom
would like her son or
daughter to be either
a doctor or a lawyer.
I guarantee you that there's not a single Jewish mom who would wish for her son or daughter to be an artist or a musician. And I ask: why not?
And if you put it in the context of AI, I think that AI can optimize the utilization of human labor by introducing intelligent machines that will replace humans in performing all these rigorous and repetitive jobs that we spoke about, but it will also free people to focus on what they do best, which is to engage in problem solving, in solving complex problems that require the human X factor, as I call it: the intuition, the compassion, the creativity; to engage in art and in communication; and in taking care of each other, educating the young and taking care of the elders.
You asked about the jobs? I think that, all in all, we will need more jobs of love and compassion, more jobs of education, which is where AI can't help, and we will need more social workers and caregivers and elderly companions.
And, we believe, more teachers in our education system who can teach wisdom as opposed to knowledge, because at the end of the day, no one can compete with Google on teaching knowledge.
Actually, when you think about the last 100 years: 100 years ago, 95% of the world's working population was engaged in agriculture. And today, just 100 years later, less than 10% of the world's working population is engaged in agriculture. And you ask yourself, where did the other 85% go? Did they become unemployed? Clearly the answer is no.
And the fact is that every technological revolution has bettered our lives, in terms of health, standards of living and even job design. Jobs have dramatically changed over the years.
So I really think that artificial intelligence can reduce the cost of living, and it will serve as a tool for creativity, meaning it will enable artists as well as scientists, musicians, writers, CEOs and M&A experts to be even more creative and more effective.
In our case, on the manufacturing floors, AI will not replace humans altogether; there's no way AI or robots will replace humans. They will actually work side by side with us humans, as assistive analytical tools, as performance enhancers and cost reducers, leaving the more humane jobs for humans: jobs of planning and designing, which require creativity and innovation, and jobs of monitoring, coordination and complex problem solving, which require intuition, judgment and deduction, which is what we humans are best at, at the end of the day.
So, you ask what new skill set we need to teach those employees who will be replaced by robots, so they're not left behind, and I say: we don't need to teach them anything new. They already know what they are best at. After all, each of us has extraordinary human capacities, and we just need to bring people back to the natural, inherent human qualities that distinguish us from machines; for everything else, there's AI. So that's my perspective on the new, or future-to-be, jobs.
- [Georgi] Thanks Onn, I actually agree with the need for jobs that require empathy; I don't think those will ever be replaced by machines.
So I know my co-organizer here, Heather Wade, has been keeping an eye on the questions coming in, and she's the President of the Alumni Club in London. So Heather, are there any questions that caught your eye that we can put to our panelists?
- [Heather] Yeah, sure, hi everyone.
So a couple of things,
one is, please do continue
to submit questions,
there's a way to do that online.
So please feel free to do that,
and I will keep an eye on those
and pick up on those.
I'm going to pick up on one that was submitted in advance, so thank you very much to the people who did that. It follows on quite nicely from what you were just talking about, Onn, and I'll get both of your views on this, please.
So this one is thinking a bit more about the mass application of AI in daily life. When do you foresee that happening? Or do you think it's already there and we just don't recognize it as AI? And then the second part of that is: do you believe that AI, I love this part, can be programmed to be fully under control? And possibly, as a final part, do you think there are any risks to increasing AI capabilities? I know that's a multi-part question, and I'm gonna start with Ben this time.
- Yeah, that's a big question,
so thank you to whoever submitted it.
Can you remind me of the first piece of it
just so I can hit them all?
- [Heather] Yeah, sure.
So the first bit is around
when do you really see
this mass application of AI?
Or do you think we are
actually already there?
- So I think my favorite joke about AI is that it's not really AI if you can build it. Because the types of capabilities that we take for granted today, around things like translation, or even something as basic as this background that I have that makes me look like I'm at Booth, would have been considered phenomenal, truly artificially intelligent applications in the nineties. And yet today they're considered completely trivial.
And I think this is connected to other concepts, such as ambient computing, where people think, "Oh, someday everything will be smart and there will be an internet of everything." But again, from the perspective of 20 years ago, we're basically already there.
And so I think the question really connects back to Onn's previous answer, which is that AI is only going to feel like it's truly arrived once it really impacts the way that we work on a day-to-day basis. Pretty much all of our leisure time is already being impacted to some extent by intelligent algorithms, from stupid phone games to how you go to the cinema. And it's going to be once we are displaced, or once there's some sort of challenge in the work environment, that people really feel like the age of AI is upon us.
And I think the question of risks also connects very well with Onn's previous answer, because I think on one side there's this idea of a post-scarcity society, of this being the next great wave of automation, like the first and second industrial revolutions and the green revolution, that will take humans away from the mundane tasks, maybe white-collar tasks, that we do now, but mundane tasks all the same, and put us in a situation where people can truly be self-actualized and do what they want with their time. And I think this is clearly the dream; I mean, what's the point of a post-industrial society if that's not what we're pursuing?
I think the real risk is frankly a political one, which is: how do we manage this? This is where things are going, though I do have my own doubts about whether we really have a strong enough view into the future of technology to say that this is inevitable at this point. But we're definitely a lot closer to a post-scarcity world than we ever were, and yet our politics, especially here in the US, is very much rooted in the concept of scarcity.
And you often hear people
saying they don't even know what
they would do if they didn't
have to work for a living.
And I think this is a massive cultural
and societal shift that is going to start,
and it is actually already impacting
the way politics works globally.
And I say political because we're already starting to see problems with how political systems work, given people's fears for the future, even when AI has not truly impacted work environments yet.
And so that I see is the number one risk
going forward even before
we truly have the technology
to start to move people away
from mundane, productive tasks.
What are your thoughts Onn?
- I definitely agree. I think this train has long left the platform. So AI is pretty much everywhere.
We could talk for hours about the reasons why it's not proliferating more heavily, because the technology is obviously already out there and the need is obviously out there, and there are many companies trying to implement AI technologies and develop new ones. So AI as the key driving force of our lives is inevitable.
I think we need to get used to the fact that more and more activities, not only jobs but also the leisure activities you mentioned, are gonna be predominantly controlled by AI. And I think it's our responsibility to treat it, and develop it, responsibly and smartly, to minimize (because we won't be able to avoid) the ethical questions or challenges.
I think that no one is planning to deploy a "Terminator" produced by Cyberdyne or Skynet, for those who were already around when that movie came out, so those ethical questions were already there many years ago.
I think that at the end of the day,
AI will enable us to build a better world
for ourselves.
Because the technology, I mean, AI will make food cheaper and more accessible, food that will be grown by AI; it will enable cheaper housing; and it will enable cheaper and more accessible energy resources.
And as a result, the standard of living will increase, but the cost of living will decrease. And obviously the workplace will change: it will be modernized and humanized by the development of AI technologies, which will ultimately liberate us to practice more humane jobs, as I said before.
Famous futurists say that one of the goals, or primary activities, of humanity in the 21st century is to fight death and create human enhancement technologies that would alter the human body in order to enhance physical or mental capabilities. So we could potentially be stronger, jump higher, think faster, and God knows what else.
I'm personally not that excited, to be honest. I think we don't need to keep chasing the technology train that is driving full steam ahead and try to make humans think and act like machines. We were born human, thank God, and I think we should be thankful for that and embrace it.
So I think it all boils down to where we will take AI. And as you rightfully said, Ben, if you can build it, then probably it's not completely AI. I think the most complex engine on this planet is the human brain, and I presume that no one will ever be able to completely mimic this amazing creation.
So personally I'm not so scared about
the implications of AI, as
long as we treat it smartly,
but I think we need to go back to basics.
We need to keep remembering that technology, at the end of the day, is just an assistive tool for us to live a better life, a calmer, easier and more self-fulfilling one, and to leave all the tedious and mundane tasks to machines.
- [Georgi] Thanks both,
I'm gonna bring it back to something Ben
started talking about, which
is the politics of things.
But before I do, I just want
to thank Onn for mentioning
the "Terminator," because
one of my personal goals
for this chat was about three days ago,
was to mention the "Terminator" randomly,
so you solved that for me.
So Ben,
we had a chat last week
where we discussed this,
it's about the role of the
governments in all this.
So how do you plan to
interact with governments
whose human rights record
is not up to snuff?
So I know one of the audience
members had asked,
do you believe that global politicians
will sustain the same culture
as the USA in relation to AI?
Personally, I am even skeptical
about the US government,
which generally has a
very good track record,
with exceptions here and there.
So what are your thoughts on
the role of the
government in all this?
- I think there's two primary pieces,
when we talk about the role of government
in these new technologies,
one is regulation and the
other is government access.
And this has become very contentious
in the United States
recently with things like
project Maven from Google,
automatically analyzing drone footage.
And so I think these have to
be treated somewhat separately,
and so to start on the regulation side,
I think that generally
governments don't know
what they don't know right now.
The European Commission
does like to overstep
in terms of
IT and tech regulation in general.
But I think the concern
is that people start
to think about AI as a monolithic block
rather than sub-components.
So I've done some writing
on the whole issues around
facial recognition,
and while of course we've
made the conscious decision
not to offer facial
recognition for ethical issues
and so we may be biased.
Our view is that there's actually
a very strong legal argument,
especially within the
context of things like GDPR,
to dramatically limit
organizations' ability
to deploy or at least store
facial recognition data.
And I think that's
something that is reasonable,
and we've already seen it start
with local governments,
but some national governments
will start to adopt it
over the next few years and that's fine.
The risk is that a lot of new technologies
will get looped into the
same regulation forever.
So if you're trying to
regulate computer vision
versus facial recognition,
you're going to have a lot of problems
because nobody knows
exactly what computer vision
can do right now, it's an open book.
It's not something that
we can objectively define
well enough to regulate it.
One really great example here,
is the rise of technology like Deepfakes,
which are AI-generated videos
that can make it seem like a person
is saying something that they never said.
And we've seen these with Obama,
we've seen these with Trump
and there have been calls
to regulate that type of technology.
I think that's really
scary, because right now,
while the technology may be
limited to making silly memes
or maybe politically questionable videos,
this same technology
has the potential
to be a wellspring of innovation
around real time entertainment.
Like imagine school children
being able to create
their own feature film
with the help of AI.
Like this stuff is actually
pretty close to being real,
and if you regulate
that type of technology,
you could cut off this massive
creative and innovative
wellspring of opportunity
without really solving
the core problem of bad
actors getting access
to technology in the first place.
And so that's why, in a nutshell,
I strongly believe that
government regulation
of AI is going to happen
and it's not gonna be a bad thing,
so long as they regulate the application
and not the technology.
And secondly, just to be brief
on selling to governments,
I think there's a big
debate about this broadly
in the technology sector.
And I started my career at Microsoft
and I think I tend to align pretty well
with their ethical view on this,
which is if a technology is being used
by a democratic government
in alignment with its laws
that have been agreed
upon by the citizens,
I really don't know if it's
the place of a private company
to say that we shouldn't be selling
that sort of technology in that place.
Obviously I do think companies
have an ethical burden
to say that if that
government stops using it
in a way that is legal,
or if they start using
it in a way that is no longer
democratically sanctioned,
then you should re-evaluate that
but I think crossing out the sales of AI
to Western Governments
is just a really bad idea
because China's not gonna stop.
And, all of the governments
across the EU, Israel
and the United States do need
access to this technology
in the 21st century,
otherwise they're gonna fall deeply behind
countries without these
sort of ethical qualms.
- [Georgi] Thank you, I
appreciate that answer.
So Heather,
can you...
- [Heather] Yes.
- [Georgi] Carry on with any
questions you might have.
- [Heather] Yes, we have
a question that's come in,
as we've been chatting, which is great.
So let me pick up on this one.
So I'd love to get your
thoughts on how do you fight
against some of the biases
that we have heard about in AI,
particularly when you're
actually sort of coding the
algorithms, how do you
actually do it at that level?
What should people be looking for,
and sort of doing to fight against that?
And maybe if I can start,
I'll start with Onn this time.
- So the question, just to make sure
I understand, is about the ethical
problems of the algorithm?
- [Heather] Yeah, the
potential for bias within AI
coming from actually the
algorithms that are being coded,
and how can you actually do
things when you're coding
to make sure that these
are not coming through
in the (murmurs), for example.
- At the end of the day,
when we designed our products initially,
we designed them in a way that they should
replace a human employee.
Because the goal was
initially to take
employees off those really
tedious and monotonous tasks.
With regards to
our quality control inspector.
So we basically decided to try to build
a system that would mimic 100%
the way an employee
does a human inspection,
obviously that's a super challenging task,
and the reason why AI comes into play here
is because the traditional algorithms,
the rule-based algorithms, can work
when you have
very well-defined criteria for a defect.
So when you have a closed set of defects
that you're looking for,
most of the time
you will be able to detect them
with a rule-based algorithm,
a traditional algorithm.
Now, when you have
one human inspector who is
looking at a defect
in the morning
and determining that a certain defect
is indeed a defect, and in the evening
the same kind of defect looks okay,
then you have a problem,
because the criteria are not very clear.
So when we started
designing our algorithms,
we tried to mimic the neural networks
that we use in our brain
in order to generate
an automatic or autonomous solution.
So obviously we focus only
on the criteria to
distinguish between a defect
and a non-defect.
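The contrast Onn draws between rule-based inspection and a learned criterion can be sketched in a toy example. Everything here is invented for illustration (the scratch-length feature, the thresholds, the labelled data); it is not Musashi AI's actual system:

```python
# Toy contrast between rule-based and learned defect detection.
# All feature names, thresholds and data are invented for illustration.

def rule_based_inspect(scratch_len_mm: float, max_allowed: float = 2.0) -> bool:
    """Works well when the defect criterion is explicitly defined:
    any scratch longer than max_allowed millimetres is a defect."""
    return scratch_len_mm > max_allowed

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """When the criterion is fuzzy (the same part is judged differently
    in the morning and the evening), learn a decision boundary from
    labelled examples instead: here, the midpoint between the largest
    accepted and the smallest rejected scratch length."""
    accepted = [x for x, is_defect in examples if not is_defect]
    rejected = [x for x, is_defect in examples if is_defect]
    return (max(accepted) + min(rejected)) / 2

labelled = [(0.5, False), (1.1, False), (1.8, True), (2.5, True)]
threshold = learn_threshold(labelled)     # 1.45
print(rule_based_inspect(3.0))            # True: violates the explicit rule
print(rule_based_inspect(1.6))            # False under the fixed rule...
print(1.6 > threshold)                    # ...but a defect under the learned one
```

A real system replaces the midpoint heuristic with a trained neural network, but the shift is the same: the criterion comes from labelled examples of human judgment rather than from a hand-written rule.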
So we didn't encounter any, I would say,
ethical issues with that.
Of course we encountered ethical issues
with the overall idea
of displacing employees.
Same, by the way, with
our central system for navigation,
which is meant to
replace a forklift driver.
At the end of the day,
the system basically uses a central brain,
an AI brain,
that basically determines
which task coming
from the ERP needs to be allocated
to which forklift
and when, and optimizes the route.
Here, again, we are trying to mimic
the decision-making process that the shift
manager goes through in order
to determine which employee to send where.
Here, again,
we didn't encounter any
issues with the code per se,
because these were
very, very limited tasks
for a very, very limited timeframe.
That's basically it.
I mean, at the end of the day,
the challenge here was to
mimic the way the brain works,
the way the human brain works, without any
undesired side-effects.
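The allocation idea Onn describes can be sketched as a greedy assignment, roughly how a shift manager might think. This is an invented toy model (one-dimensional positions, the `allocate` function and its data are hypothetical), not the real system's optimizer:

```python
# Toy sketch of a "central brain" allocator: send each incoming ERP task
# to the nearest available forklift, the way a shift manager might.
# Invented example; a production system would optimize routes much more richly.

def allocate(tasks: dict[str, float], forklifts: dict[str, float]) -> dict[str, str]:
    """tasks: task id -> pickup position (1-D for simplicity).
    forklifts: forklift id -> current position.
    Returns task id -> forklift id, greedily minimizing travel distance."""
    positions = dict(forklifts)
    plan = {}
    for task, pos in tasks.items():
        # Pick the forklift currently closest to the pickup point.
        best = min(positions, key=lambda f: abs(positions[f] - pos))
        plan[task] = best
        positions[best] = pos  # the chosen forklift moves to the pickup point
    return plan

plan = allocate({"t1": 2.0, "t2": 9.0}, {"f1": 0.0, "f2": 10.0})
print(plan)  # {'t1': 'f1', 't2': 'f2'}
```

The greedy rule is deliberately simple; the point is only that the dispatcher's judgment ("which employee to send where") becomes an explicit, repeatable decision procedure.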
- [Heather] Great, Thanks Onn.
Ben, can I pass over to
you for that question?
- Yeah, absolutely.
So I think
we're a little bit closer
to a lot of these debates
around bias than industrial
applications are,
and this is actually a
really contentious question,
especially over the last few weeks.
Because generally, if you
look across the industry,
including the top AI
research labs in the Valley,
you'll find two perspectives,
which are very, very polarized
because they're so similar.
And this is one of
those things where it's like,
I hate my neighbor more
than I hate this other guy,
because we're so similar.
And the two perspectives really are that,
bias can sneak in,
in the construction of the model because
the data set is poorly constructed.
And we can come up with some edge cases
of where the actual structure of the model
might be biased, with like specific
cultural cues and what it's looking at,
but generally those are edge cases.
The perspective that a lot
of people have is that,
if there's bias in the
data, then of course
the model will be biased,
and that's something that needs
to be fixed on the data scale.
And this, I think is a very common view.
But what's been happening recently,
especially over the last few
weeks is you've seen even
researchers within the same lab,
disagree where one side is
taking that perspective,
and the other side is saying, well,
that might be technically correct
at a coding and training level,
but it's actually completely wrong because
you would have only ended
up in that situation
if you completely ignored bias concerns
in the entire design of your research
and development process up to that point.
And so that group really
sees the challenges
with bias around like human collaboration,
around management of projects,
around not baking these concerns in
early enough in the development process,
so that you'll have a very clear image
of the limitations of your product
and be very specific that if there
is any bias there,
you already know about it
before you embark on
the training exercise.
And so, I do think both of
these perspectives have merit.
I mean, really, they're not
saying that much different.
It just gets very, very
contentious because
some people feel like baking this
in from the very, very early stages,
especially when you're
building a prototype
will only slow down development
when at the end of the day,
you're arriving in the same place.
And so I think that's kind
of a very vague answer
because this debate has really blown up
over the last few weeks,
and there's not one
generally accepted position
on where bias can creep in,
in the AI development process.
- [Georgi] Thank you, Ben.
Onn, I have a quick question for you.
It's not directly related
to the Musashi AI work
you guys are doing, but
because of your background,
I think it's probably
gonna be interesting.
So, the question is,
AI has been making many
strides in healthcare,
do you believe we should rely
so heavily on an intelligence
that poses numerous risks to our security
and bias to our society?
And now, I mentioned at the
beginning that you
just started as an
entrepreneur in the tech industry,
so I thought it might be
an interesting question
for you to ponder on.
- Well, I think
the simple answer is that
there are not enough doctors
to, for example,
look at X-rays or MRIs or CT scans,
and there are plenty of
companies that are
using AI for medical imaging,
for analyzing medical images.
I think that we can definitely take
a huge advantage of AI
technologies in order to,
I wouldn't say replace doctors,
because again there's this challenge
that Ben mentioned about biases,
that's the biggest challenge
to train a real AI model,
a real productive AI model.
But I think that,
we want to have medical care
that is more accessible,
one third,
or maybe more, of the world's population
has no access to CT scans or MRI tests.
And I think, if we find a way to
modernize or democratize
medical healthcare through AI,
we can bring healthcare
to the entire world,
instead of only the
developed countries benefiting from it.
So on the power of AI in healthcare,
healthcare is probably one of the
industries with the most
potential to benefit
from artificial intelligence.
- [Georgi] Actually, I
went for an X-ray today,
and that was exactly the
thought that crossed my mind,
it went very inefficiently
and I think a machine
would have gotten me in
and out extremely quickly.
So yeah, it was something I
thought about literally a few hours ago.
Thank you.
So I am mindful of the time and
we have about seven minutes,
I wanted to finish up with kind of a more
general question.
And I know we discussed
some part of this,
and we talked a lot about the future,
but can you share your thoughts on
how you see the world
with AI in the future?
- Well, this is a tricky question because
the truth is that we have
no idea about the future.
Absolutely no idea.
In fact, no one has a clue what the world
will look like 10 years from now,
and yet we're trying to
educate our children for it.
I think at the end of the day,
the future of mankind in a world
of abundant AI technologies
is really dependent on the
utilization of our human
capacities for innovation and creativity.
This actually reminds
me of a famous TED Talk,
I think it's actually
the most famous TED Talk,
that was given by Sir Ken Robinson.
And he spoke about a
little six-year-old girl
who was in a drawing lesson,
and the teacher said that this little girl
had a hard time paying attention in class.
And in this drawing lesson she did,
and the teacher went over to her and said,
"What are you drawing?"
And the girl said, "I'm
drawing a picture of God."
And the teacher said,
"But no one knows what God looks like."
And the girl said, "They
would in a minute."
So I think that,
and Picasso actually said
that all children are born
artists and the challenge is
to remain an artist as we grow up.
So I really think that we
have indeed shifted a bit
from our natural human
capacities over the years.
And I think that creativity and innovation
and compassion are human capabilities,
which are super unique
and my contention is that
they would never be able
to be artificially built,
and we need to reallocate the
extraordinary human capacities
that we have for innovation and creativity
and intuition to jobs
where humans have added
value over machines.
I think that no human should
be working eight hours a day,
visually inspecting components,
transmission gears, or anything else,
same as no human should be pushing carts
or driving forklifts
all day on a production
floor or in a logistics warehouse.
There's nothing rewarding in these jobs,
these employees are
definitely not feeling content
or self fulfilled or challenged.
Their employers are not
utilizing these employees'
real capacities to the fullest.
So at the end of the day, everyone loses.
So as I said before, I think the future
is where we get back to our very basics.
So we engage and enjoy the primal things
that we were created to do as humans.
And I really, really
think we are on the verge
of a new era for mankind,
with AI entering almost
every aspect of our lives
and making things cheaper
and more accessible.
I think there's a chance
that people will not work
just to make a living, we
won't be spending eight,
12 or 18 hours a day at work.
We will have time for
leisure, more time to spend
with our loved ones,
more time to work out and enjoy life,
and all these good things will come
with smart and responsible adoption
of AI technologies.
- [Georgi] Thanks Onn.
Ben, your kind of final thoughts
on the future with (murmurs)
what is your (murmurs)
- So I think Onn took the
great high level view there,
and I'll just agree that,
I think over the longterm,
what is the point of all
of our capitalist systems
and capital accumulation,
if it is not to give people
better and easier lives?
That's the point.
And I think too often
we lose sight of that
on a short term basis.
But to kind of get a
lot more concrete here
while I think that's the longterm goal,
I do think that we can say
what we know is gonna happen
in the next few years with
AI and what we don't know.
And the big distinction
there is that with current
technologies around deep
learning, convolutional
neural networks, and the big data
sets you use to train them,
I think we're at the point
now where you can pretty much
build a system that can
answer any objective question.
And over the next five to 10 years,
we will have general
purpose expert systems
that can easily outperform
humans at a much,
much lower cost at any
objectively definable task.
And the big thing is, right now
there are no technologies
out there that can extend
that to the subjective.
And this is really critical because while
the vast majority of
problems, for example
in industrial monitoring, might be obvious
(you might not be able to describe them,
but they're obvious), at a certain point
somebody has to decide how
to react to something novel.
And that's the sort of technology that
there just isn't really
a development path for.
As a simple example, in our world
we constantly get asked if we can identify
suspicious people.
And bias problems with
that question aside,
what is a suspicious person?
It's a very subjective judgment,
and I just don't think
that there's technology
even on the horizon that can repeatedly
and accurately make subjective
judgments, even as evaluated
by a single person,
let alone the kind of
consensus that's necessary
for an organization.
So in the short term,
we're gonna see massive
continued improvements
in functional development of AI,
but we're not gonna get any closer to
the Androids or C-3POs
that everybody still
has in their mind, or the HAL 9000s.
So that's kinda my
capstone statement there.
- [Georgi] Thank you, I
appreciate your participation,
both you and Onn have been great panelists.
It was quite interesting
to me and educational
and glad we could have this chat.
I would encourage everyone
who is on this call
to get more involved with
the Alumni Club in London.
If you have any topics that
are of interest to you,
please share them with us,
people like Ben and Onn have been kind
enough with their time and
I'm sure the entire Booth Network will
be very happy to participate.
So yeah, apart from Onn and Ben,
I would like to thank
Jenny and Panka, as well as Heather,
for helping us with this
event and making it happen.
Thanks everybody,
and have a great evening, morning, day,
whatever is in your time zone.
- Thanks so much.
- Fantastic.
Thank you so much Georgi and everybody
who joined the call today.
- Thank you, bye bye.
