Did you know that augmented reality in the
enterprise, well, it's a reality?
It's happening in manufacturing and in a variety
of different areas.
Today on CXOTalk, we're speaking with Jim
Heppelmann, who is the CEO of PTC.
Jim Heppelmann, thank you so much for taking
your time and for being with us today on CXOTalk.
Great.
Thank you, Michael.
I'm happy to be here.
Jim, please tell us about PTC.
PTC is a Boston-based software company.
We're a global company, about 6,500 employees,
a little more than $1 billion of software
sales every year, and a market cap of about
$10 billion trading under the PTC ticker on
NASDAQ, so a pretty big--I like to say large,
but not extra-large--software company.
You've been in business for a long time. Can you summarize your business strategy today?
We really think our company has a special
place at that point where the physical and
digital worlds come together.
I mean 30 years ago, our company pioneered
the field of 3D computer-aided design, and
the whole idea there was to have a digital
model of something that would later become
physical.
Along the way, we expanded into product lifecycle
management, which is really managing the configuration
of those models of those things.
Then, more recently, we really doubled down
on IoT software with our ThingWorx platform
and augmented and virtual reality software
with our Vuforia platform.
We see IoT and AR as really ways of connecting
physical things to digital things and physical
spaces to digital spaces so that we can very
easily and, in a lubricated way, move information
back and forth between these physical and
digital worlds.
What is the extension or evolution of AR from
the previous generations of software that
you had developed?
Imagine that we started as a 3D company.
Then we became a lifecycle management company.
It was the idea of lifecycle management that
brought us into IoT because IoT allowed us
to kind of close the loop during the fielded
part of the lifecycle of a product, for example,
so we'd have a closed loop lifecycle management
capability.
Then when we thought about, now we have data
from things in the field, and we have a 3D
understanding of those things, that's a perfect
application for AR.
AR is a 3D technology and we're a 3D company.
AR, of course, benefits from having information
that's about the physical world that you could
display in the physical world.
That's really what IoT is all about.
It's just been kind of a natural progression
from CAD to PLM to IoT to AR.
Where are the intersections between the industrial
Internet of Things, IoT, and augmented reality,
AR?
There's a huge intersection there.
Here's the way I like to describe it.
AR is IoT for people.
If IoT is about connecting things to the Internet
so that we can monitor, control, and optimize
those things, well then AR is about connecting
people to the Internet so that we can monitor,
control, and optimize the work of people.
If you think about it, in so many environments
whether it's a factory, a plant, an airport,
or a bus terminal, there are things and there
are people.
It would be ideal to optimize each of the things and people and, frankly, optimize the way they work together.
For example, IoT might tell me that a machine
is going to have a problem and I can use that
information to direct a worker where to go
and what to do when they get there using AR.
It's really a powerful complement that I think
is pretty unique to PTC and gives us really
this holistic view of connecting everything,
including the organic things, in the physical
world back to the digital world.
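That handoff, an IoT alert about a machine turning into directions for a worker, can be sketched in a few lines. This is a minimal illustration only; the function and field names are hypothetical, not a real PTC or ThingWorx API.

```python
# Hypothetical sketch: turn a machine-side IoT alert into a person-side
# AR work instruction (where to go, what to do when you get there).
def on_iot_alert(alert):
    steps = {
        "bearing_overheat": "Shut down spindle, then inspect bearing housing",
        "filter_clogged": "Replace inline filter cartridge",
    }
    return {
        "machine_id": alert["machine_id"],
        "navigate_to": alert["location"],  # direct the worker where to go
        # what to do when they get there; fall back to a safe default
        "instruction": steps.get(alert["code"], "Contact maintenance engineering"),
    }

task = on_iot_alert({"machine_id": "press-07",
                     "location": "hall B, bay 3",
                     "code": "bearing_overheat"})
print(task["instruction"])  # the guidance an AR headset would display
```

The point of the sketch is the complement Jim describes: the machine's data drives the person's work, with AR as the delivery channel.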
We tend to think of IoT as not necessarily a mature technology but a very broadly applied one today, especially in consumer devices of every description, but AR less so. Where does the market for AR stand today, and what is the state of the technology?
You're right that it's at least a phase behind IoT, but I think we're at a tipping point. AR, done well, requires very good computer vision.
Of course, in recent years, phones and tablets,
the things we all carry around with us, have
become very good at computer vision.
Suddenly, now with AR, we have a ubiquitous
device that every one of us owns that can
do AR really well.
Now, AR would be even better on smart glasses,
like a HoloLens or any of the other varieties
that are here and that are coming but, in
the meantime, we can all try it out and see
its power and see what it means using the
phone we have in our pocket or our purse.
I think we're at that tipping point where AR is very practical; everybody is seeing what it is and how you might apply it in business, and then they're shocked at the value it can create.
I think our view at PTC is the market is exploding
right now, but it's early.
I want to remind everybody: we're talking with Jim Heppelmann.
He's the CEO of PTC.
Jim, it's relatively early stages for AR,
yet you have a variety of applications that
are in production that are being used by real
customers.
Please share with us some of those types of
AR applications.
Professor Porter and I wrote an article in Harvard Business Review about a year ago. For it, we studied all of the use cases for AR in the typical enterprise.
We documented 103 major use cases that ranged across really every part of the company.
For example, if it were an industrial company,
the engineers have use cases for AR to combine
physical and digital things together in a
design.
The manufacturing organization has innumerable
use cases around work instructions; around
pick and place, maybe in a warehouse; human/machine
interfaces becoming virtual.
You move down into sales and marketing and, of course, everybody loves the hologram catalog: being able to see a product, maybe even configure it, and then see it as a hologram.
Salespeople, by the way, love selling products
that have accompanying AR experiences because
it's a big differentiator for the product.
Then you go out to the customer site and,
of course, we can train the customer with
AR.
We can give the customer a whole new type
of digital experience around a product with
AR.
We can even let the customer do self-service.
If there is a problem, we can step them through
what they would need to do to see if they
could fix the problem before we dispatch a
truck and a service technician.
Many times, they can.
Of course, we could actually jump on a video
call with an AR overlay and coach them through
what we think they ought to do in the moment.
Then you take it downstream from there.
Every service organization can get huge productivity
benefits by understanding exactly what I should
do right now in order to fix the problem that
I'm confronted with.
Then, finally, the enterprise itself is thinking
about training.
It can be completely reinvented.
Training today, for frontline workers, is
really in advance and just in case.
We have an opportunity to turn that on its
ear and make it in the moment, just in time,
and just as needed, so there is a real opportunity
here in every part of an enterprise to reconsider
how we pass digital information on to people
and make those people much more productive,
much more accurate in the work that they do.
We have a question from Twitter.
Arsalan Khan is asking, "Is the data for AR
only machine generated or is it human-generated
as well?
If humans are involved in the collection and
preparation of that data, are there biases?"
I know in AI we think about that, but is this
even a relevant question?
No, it's actually a very good question.
I think that when we're augmenting information
into the physical world, it's a good question
to say, where did it come from?
Well, we have information in databases in
IT systems, which could be part of the picture.
We have information coming from that physical
world: sensors, control systems, and so forth.
That's part of the picture.
Then we can bring in human intelligence and/or
we can bring in artificial intelligence and
then take what we want from that combination
and build an augmentable experience out of
it.
I would say, could there be biases?
Yes.
I don't think that's a major problem but,
for sure, any human or AI biases that you'd
find in other forms of computing could find
their way into AR as well.
Please elaborate on the kinds of data, both human-generated and real-world, that must come together to create an effective AR system for the enterprise.
Let me say first, one kind of data we need is some kind of 3D understanding of the physical world so that we can position information relative to it, particularly if we want to do real AR with computer vision as opposed to what some people call assisted reality, which is also valuable but maybe not mainstream AR.
We need to understand the shape of the physical
world or the physical object that we're interested
in decorating so we know where to place the
decorations.
Then the information that we're going to decorate
into that world, where does it come from?
Well, there are IT systems that know a lot
about things and places, and so they have
very valuable information.
They might know who is the customer, what
kind of service contract we have with this
customer.
Then there is data from the physical world.
This would be data being sensed from the actual
physical object or the physical space that
we're in.
What temperature is it?
Is this machine in front of me working or
not working?
Is it too hot to touch or is it cold?
There's lots of useful information that I
can combine with IT data.
Then again, I can have a human join that conversation
and become a contributor of AR content.
Sort of, join the conversation.
Maybe become part of the augmentable content
and feed voice and video into it.
Then, of course, I can use AI to process any
and all of that in the background, so it's
really this idea of, there are many sources
of data.
In theory, we can build a webpage from many
sources of data, so I say, think of AR as
like 3D webpages.
The issue with the Web, as it relates to AR,
is that the Web is built on the fundamental
premise of a 2D page, an HTML page.
You put information on a page and you could
collect that information from many places.
Then you take that page and you render it
on a flat piece of glass.
But I don't want to render it on a flat piece
of glass.
I want to render it on the real world, which
is 3D.
So I replace the page notion with a shape notion: I gather data and, instead of putting it on a page, I put it on a shape and store that on the server.
Then when I download that shape, I take the
data on the shape and transpose it onto the
physical world in the same place.
It's a very powerful, I think, simple to understand
concept if you think of it as 3D Web technology.
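The page-versus-shape idea can be made concrete with a small sketch. All the names here are illustrative, not the Vuforia API: annotations are anchored to 3D coordinates on a shape, then transposed into world space once computer vision has found the physical object; for simplicity the object's pose is reduced to a pure translation.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    text: str
    anchor: tuple  # (x, y, z) in the shape's local coordinate frame

@dataclass
class ARExperience:
    shape_id: str  # reference to a 3D model of the thing
    annotations: list = field(default_factory=list)

    def decorate(self, text, anchor):
        # Instead of placing content at 2D page coordinates,
        # anchor it to a 3D position on the shape.
        self.annotations.append(Annotation(text, anchor))

def transpose_to_world(experience, object_pose):
    """Place each annotation in world space once the object's pose is
    known (simplified here to a translation)."""
    ox, oy, oz = object_pose
    return [(a.text, (a.anchor[0] + ox, a.anchor[1] + oy, a.anchor[2] + oz))
            for a in experience.annotations]

exp = ARExperience("pump-3d-model")
exp.decorate("Check valve here", (0.1, 0.5, 0.0))
# The recognized pump sits two meters in front of the viewer:
placed = transpose_to_world(exp, (0.0, 0.0, 2.0))
print(placed)
```

The design choice mirrors the analogy: the "page" stored on the server becomes a shape plus decorations, and rendering means re-placing those decorations on the real object instead of on a flat piece of glass.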
The tools that are needed to create these
technologies as well as the data, where precisely
is PTC playing in terms of all of these components?
We've tried to pull together a suite under
our Vuforia brand that really has pretty much
everything an enterprise would need to tackle
most of those use cases.
For example, you need a 3D shape.
Well, I mentioned 30 years ago, we pioneered
the idea of modeling things in 3D.
Of course, the things you're interested in
might be modeled in somebody else's 3D, but
that's okay; we'll just use that instead.
Then, as it relates to spaces, for example, where do you come up with a 3D model of a space?
Well, the best thing to do today is to use
a 360-degree camera.
For example, we have a nice partnership with
a company called Matterport that captures
virtual tours of homes.
Using a 360-degree camera, they very quickly create a 3D model of a home that you can walk through to decide whether you might want to buy it, saving yourself the drive until you've done the virtual tour.
We can use that same model to bring together
a 3D model of a space and that's typically
a factory, a plant, or something like that.
Now we have 3D descriptions of things and places
and then PTC has a whole suite of technology
to allow you to, if you want, develop experiences
against that, author them as more like a technical
publications author to capture work being
done in that space, and then replay it for
a new worker in that space, or to have a video
call and to bring somebody else into that
space with you and let them show you, in that
space, what they think you ought to do on
a phone.
We have a whole suite, again, for software
developers, for technical authors, for frontline
workers to capture and transfer their expertise
and then to just do ad hoc collaboration using
AR through video calls.
You alluded to collaboration.
Tell us about that and the collaborative design.
Where does the state of the art stand with
respect to that?
Many of us have a scenario where we're trying
to do something.
I'm trying to bake bread or I'm trying to
change the oil in my car or whatever I'm trying
to do.
Trying to figure out how to download the new
operating system onto my laptop.
It's not going right and I need help.
What do I do?
Now, I can call somebody with any kind of video call: FaceTime, for example, on my iPhone.
While talking to that person, I could turn
the phone around and show them what I'm looking
at.
What AR brings to the table is you're having
that same type of conversation, but they can
see your environment and we're automatically
generating a 3D model so the remote user can
say, "See this thing here?" and they mark
some object in the environment.
They're not really marking on the screen.
The marks are going through the screen and
being anchored against the object in the background,
so they can, while talking, also draw in your
environment and you can see it.
Again, if I were baking bread, they might
say, "Take this cup of flour over here and
bring it over and pour it in the bowl.
Then go get this tablespoon of yeast over
here and put it in the bowl.
Now go get that cup of water."
[Laughter]
I don't really know how to make bread.
I'm just making it up.
"Then stir it around," and I would see all
those instructions in the real world.
I could just say, "Wow, it's crystal clear
what to do.
Thank you for bringing the power of AR into
this video call."
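The anchoring Jim describes, marks going "through the screen" and sticking to objects, can be illustrated with deliberately toy geometry: a pinhole camera at the origin, with the marked object approximated as a plane at a known depth. The function name and simplifications are mine, not any vendor's API.

```python
def anchor_mark(screen_xy, depth, focal_length=1.0):
    """Ray-cast a normalized 2D screen point onto the plane z = depth,
    returning a world-anchored 3D position for the drawn mark."""
    sx, sy = screen_xy
    scale = depth / focal_length  # stretch the ray until it reaches the plane
    return (sx * scale, sy * scale, depth)

# The remote expert taps slightly right of center; the object is 2 m away.
mark = anchor_mark((0.25, 0.0), depth=2.0)
print(mark)  # (0.5, 0.0, 2.0): the mark now lives in the caller's 3D scene
```

Because the mark is stored as a 3D point rather than a screen position, it stays attached to the object even as the caller's camera moves, which is exactly what distinguishes AR markup from drawing on a video frame.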
We've got a couple of questions from Twitter.
The first one relates very much to this topic
you were just describing.
Chris Petersen says, "Where do you see AR
going in terms of telepresence and then, beyond
that, will we see people operating machines
or cars long distance using AR and VR systems,
sort of similar to the way the military operates
drones?"
I would say absolutely to both of those ideas.
They're both very good ideas and things we
show people here at PTC in our labs.
Telepresence means rather than seeing what
you see through my phone, let me become a
hologram and stand next to you.
I see you and you see me.
In particular, if you're wearing, say, HoloLens
type technology, you can really do that in
a powerful way.
I think there's a whole new form of telepresence,
which is, literally, project yourself into
some space or someplace and join the action.
Then the idea of remote control would be to put yourself inside a machine, a car, that's a thousand miles away, look out the windshield, and decide whether you should take a right or a left because you've placed yourself in that environment.
There are some other examples that are less
intuitive but very powerful.
For example, if I had a VR model of a factory, I could program, through VR, where I want the robot that's carrying the parts around to drive.
If I went into the factory, I could do the
same thing with AR by programming points on
the floor that I want the robot to follow.
There are so many applications.
It's just a treasure-trove, if you will, of
opportunities to bring real productivity to
people.
Many studies say people can be made 30% to
50% more productive.
Part of that would include you don't have
to go there.
Just think of all the time you'll save.
Typically, 60% to 90% fewer mistakes made,
and that all translates into big ROIs, big
value propositions for businesses.
To some extent, it sounds like something that has existed in games for a while, this notion of virtual worlds.
Yeah, I mean it is.
We're really talking about the mirror world
where you create a virtual world to understand
what's happening in the mirror image physical
world.
Games typically don't have a physical counterpart,
so you're in a make-believe world.
You can do whatever you want.
But I want to put you or help you immerse
yourself in a virtual world that's a mirror
image of the physical world or go into the
physical world and bring all the information
from the mirror imaged digital world into
that physical world and see it in that environment
as if it's part of the environment.
The power of AR is it allows you to see and
process information without thinking about
it.
If there is an exit sign above a door in a
room, you don't even think about that, but
you know it's there and you know that's the
way out if there's a fire.
That's the power of AR.
You don't distract people and say, "Disengage
from the real world and stare at this phone,
tablet, or laptop for a while."
You say, "Anything that's interesting in that
phone, laptop or, frankly, the cloud behind
it, I'll just decorate as sights and sounds
into the physical world."
Maybe one more point on that.
I like to think about a HoloLens, though this applies as well to a phone or a tablet: when you put it on your head, bits and bytes coming down from the cloud turn into sounds and sights.
You can see data.
Likewise, when you generate sounds with your
mouth or sights with your hands, that gets
interpreted and converted back into bits and
bytes going up to the cloud.
This only works with people who are old enough
but, if you're old enough, you know what a
modem is, which is something that converts
analog signals to digital signals and digital
back to analog.
Really, a HoloLens or, frankly, any kind of
AR device is a modem.
It's converting data into things you can see
and hear and maybe even feel using your God-given
analog senses.
It's really a way to connect people to the
Internet.
You can both provide information to people
and get information back.
It's not conceptually different from putting a Raspberry Pi with a sensor pack on a machine and getting data from it.
It's really the same exact concept but applied
to people.
Let's go on to the subject of user experience,
customer experience.
Sal Rasa asks this question, "How can AR affect
customer experience?" but he is particularly
interested in healthcare and connecting patients,
families, and caregivers.
Let me back up to 50,000 feet and talk about
user experience.
One day, I was sitting in my kitchen looking
around.
I realized that there's a digital display
and some buttons and dials on my oven.
There's another one on my cookstove.
My refrigerator is trying to tell me what
temperature it is.
Then I have buttons to adjust it.
My freezer does the same thing.
I realize my coffeemaker has a digital interface,
my microwave has a digital interface, and
my refrigerator has a digital interface.
All of those things are trying to talk to
me, but they're all trying to do it using
crude, proprietary, primitive techniques that
I don't really understand that well and hate
to have to learn.
All of us have the blinking 12:00 on the microwave because we don't even know how to reset the clock.
The point, though, is we can virtualize all
that.
We can not only virtualize it but combine
it.
Maybe next time I go into my kitchen, there could be something like a stadium display, like you'd see at a basketball game, showing the scoreboard and so forth.
I could have all those things projecting information
up to that central scoreboard where it's all
aggregated together and, by the way, that
scoreboard is virtual.
I only see it when I look at it with my smart
glasses or point my phone up there.
The idea here is, we can completely change
the way that people perceive things, places,
data, control systems, and so forth.
We can virtualize it all and turn it into holograms and sights and sounds, which, frankly, Alexa and Siri do at some very small level.
But with AR, what we're really saying is,
don't just stimulate people's sense of hearing.
Stimulate their eyesight because eyesight
is so much more powerful.
You can still do hearing as well, by the way.
But really bring people's eyesight into the
game so they don't just hear data; they see
it.
Anyway, I think it's a fundamental rethink
of products coming down the road because of
our ability to virtualize the entire interaction
model with them.
Everyone, again, we're talking with Jim Heppelmann.
He's the CEO of PTC.
Jim, you've spoken about a variety of different
use cases.
Can you describe what you think is the most common use case in the enterprise today?
The most common use case, let's say the low-hanging fruit, is to pass guidance and instructional content on to what we'd call frontline workers to help them do their job and not make mistakes while they're doing it. Rather than publishing information in PDF, which gets printed, with workers flipping through pages trying to understand what it means and how to apply it. Whether it's a PDF paper document or, frankly, a PDF on my phone, it doesn't really matter: I've got to interpret that 2D information, map it into the 3D real world, and try to understand how to do that.
It's the problem we all have, by the way, when looking at the GPS navigation system in our car and then looking out through the windshield and saying, "How do I reconcile those two, because they don't really look that much the same?"
Anyway, we can pass that information on and
put it right where it needs to be.
While doing the work, the information about
how to do the work shows up.
By the way, there are certainly medical applications,
assisted surgery and so forth, for this same
concept.
Really, the low hanging fruit is companies
that have what we call frontline workers,
meaning they don't work behind a desk, behind
a computer.
They're out there in the real world.
They're in a factory.
They're at the customer site.
They're installing something, repairing something,
what have you, and they need information.
We can send that right down to them perfectly
in context.
We have another question from Twitter.
Actually, this is again from Chris Petersen
who is asking, "Are there standards and protocols
in place for this type of AR or is development
mostly ad hoc at this point?"
I don't think there are real standards. There are some organizations thinking about that but, I'd say, to the extent there are standards, they are not really controlling the game right now, in any case.
Today, either software engineers are authoring applications that contain AR, or people are using AR tools to author content.
For example, one of the things PTC produces
is basically a 3D Web publishing tool.
It uses Web technology but, again, instead
of creating a page, you're decorating a shape.
Instead of rendering on a flat browser, you're
turning the camera on and passing the content
through onto the real world, but it's fundamentally
Web technology using all the Web standards
that you'd typically think about.
For authentication, for security, for all
of that stuff, it's just 3D content passing
through that.
I think that there are not necessarily standards
for all the tech.
I'll tell you there surely are not standards
for the look and feel.
That to me is a bit of a challenge, which
is, everybody's notion of how AR should look
is different.
You might move from one AR experience to another
and say, "Wow, that's very different."
It's kind of like before we had standardized
look and feel on Windows or Macintosh or what
have you, there was just the wild west.
I think, in AR, we're in the wild west as
it relates to the best techniques for decorating
the world in a sort of familiar way.
We have a question from Arsalan Khan on Twitter
specifically on training.
What kind of training do workers need to adopt
these technologies that for many of them will
be so very different from what they're used
to?
It's funny.
I think you don't need much training to use
AR because it's so natural.
If you can see and hear in the world around
you, AR is just enhancing your ability to
see and hear.
The training is: put on or pick up a device and follow the instructions you see and hear.
I think AR doesn't need much training but
AR completely blows up the classic model of
training.
Again, the classic model of training puts you in an unnatural environment, a classroom or something like that, and passes information to you that's not very much in the context of the real world and that, by the way, you probably don't ever need to know. We're going to train you just in case, just in case you ever run into that situation.
Then we hope, when and if that happens, you
remember all this stuff.
I always joke with some of my colleagues here
at PTC.
I’m an engineer, of course.
I say, "How much calculus did you take?"
The answer is, "A lot."
I say, "How much do you remember?"
They say, "Not very much."
I say, "Well, the good news is, you never
needed it anyway," for most people.
Some people really do need it.
Again, that was all just in case.
We trained everybody in calculus just in case
somewhere later in their career it would prove
useful.
AR says you don't have to do that.
Just, in the moment, where somebody is, tell
them what to do.
Step them through it.
Maybe even, ad hoc collaboration, help them
through it.
Don't try to load their brain up with all
these ideas that they may or may not ever
need.
Just give them highly relevant content in
the moment.
Maybe just one last example on that and I
like to share this thought with people just
to get them thinking.
Let's imagine I wanted to play chess against
the best chess player in the world, Garry
Kasparov.
Now, for various kinds of old-dog-new-trick reasons, I don't know how to play chess, so I would have a very difficult time beating anybody, much less the best in the world.
There's a computer, of course, Deep Blue from
IBM, that can beat Garry Kasparov.
What if I put on a HoloLens that was connected
to Deep Blue, artificial intelligence, and
I sat down across the table from Garry Kasparov
and all the HoloLens did is say, "Take this
part that's blinking and follow the arrow
and move it to the square that's blinking"?
Garry would think really hard.
He'd make a move.
Then I'd just make a move.
Then he'd think hard and make a move, and
I'd make a move.
I'd win every time, most of the time.
I don't exactly know the track record of Deep
Blue.
Just think of all the training I didn't bother
to do to become as good at chess as Garry
is.
I am now as good at chess as he is, but only through the power of AR bringing the digital world into the game, with me just actuating the intent of Deep Blue, I mean, in a chess game at the local Starbucks.
It's an amazing idea that I think will forever
transform the way we think about training
both in the academic world, but for sure in
the business world.
What about the equipment?
I know that one of the obstacles or resistance
forces to adopting AI, say in manufacturing
environments, has been the size of the headsets
and the fact that the headsets have to be
connected to a computer.
More and more, the headsets are wireless,
of course.
Nonetheless, headsets are a problem.
Now, what I would tell you is that AR really
runs well on phones and tablets, but there
is one problem and that is it ties up your
hands.
If you're a frontline worker trying to assemble
something, install something, or repair something,
you actually need your hands available for
tools, parts, and so forth.
Now you need a hands-free or a head-mounted
device.
I think probably the best of those is the
HoloLens and particularly the second generation
HoloLens, but there are others.
What I would say is, I expect we're going
to see a dramatic leapfrogging exercise now
on the hardware front because Microsoft just
leapfrogged with the second generation HoloLens.
It's highly rumored that Apple is going to
come out with something.
When they do, it's going to be a huge breakthrough
because it'd be consumer grade, affordable,
easy to use, smart glasses.
Meanwhile, Facebook and Oculus are rumored
to be working on not VR but this time AR headsets
and Magic Leap has their AR headset out there.
I think that we're in a position now where
there'll be rapid progression, a couple of
new devices every year, and maybe even every
quarter, until we get, in the next year or
two or three, to the point where we have a
pretty good device.
I look at it and say, my job is to keep the
software moving ahead of the hardware and
I'm hoping the hardware catches up because
what would really burst the dam open for enterprise
use would be really good, affordable, head-mounted
hardware, something that was of this sort
of form factor.
If I'm going to put on glasses that do analog correction, wouldn't it be great if the very same glasses also did digital correction, so that the light waves coming in were enhanced by both analog and digital technology? Then when I looked out at the world, I'd see everything clearly, but it would be a combination of physical and digital information.
You just made a very interesting point.
You said that your job is to keep the software
moving ahead of the hardware.
That raises the question: where is the software going, not in the next five or ten years, but in the next two or three years?
Let me just add a point to that.
First of all, we think the software needs
to be agnostic or independent of the hardware
so that customers can bring their software
and content from one hardware device to another
as they leapfrog and pass each other.
I think it's a big mistake for anybody to
build on a software stack that comes with
the hardware stack because I think you're
going to find yourself painted into a corner
when better hardware comes out.
Yeah, if you think of where the software is headed, we're trying to make a nice combination of both object-based and spatial AR, and make that really easy so that, in a room, I can do AR, but if I approach an object, I can switch to a deeper understanding of that object.
We're trying to bring more interaction into
the model.
For example, the new HoloLens allows you to
touch holograms.
Now if your control panel for a machine is
actually a hologram with 3D buttons on it,
I can touch buttons and turn them on and off.
Now I've created a really powerful, virtual
HMI.
We're trying to leverage those new capabilities in the HoloLens.
There are telepresence ideas that we talked
about based on one of the caller's questions
earlier.
There is just a target-rich environment right now on the software side: getting data from many sources, connecting IoT systems, 3D systems, and AR systems, better and better user interfaces.
One of our goals at PTC is to try to say AR
shouldn't be for developers.
Frankly, enterprises are never going to make
it work if AR experiences are coded in Unity
by software engineers.
That to me is a dead-end road.
We need to make it so that authors can publish information in AR instead of in PDF, Word, PowerPoint, or what have you.
We need to make it so that experts can capture
and pass on their expertise without needing
authors or coders, and that anybody can jump
on a video call right now with anybody else
and provide AR-based collaborative guidance,
let's say.
To me, it's such a target-rich environment, and PTC is spending a tremendous amount of money on this, by the way, because we think it's a really great opportunity for us to ultimately own, hopefully, the concept of enterprise AR, because we have a whole suite that tackles the wide range of problems an enterprise would want to tackle, with a cohesive set of technology that works together.
Jim, we're going to run out of time shortly,
but I want to talk about the industrial Internet
of Things because that's also important here.
I know you did touch on it earlier, but maybe
give us the quick overview and, again, let's
talk about that intersection between IoT and
augmented reality.
Imagine that there's a machine in front of
me, there's me, and the machine is connected
to the Internet through some kind of IoT gateway,
an edge agent, and is talking to the cloud.
Then I'm maybe wearing a HoloLens or I have
an AR device and I'm connected to the cloud.
Up in the cloud, it's got information coming
and going from both of us.
The first thing the cloud can do is tell me
about that machine.
Which machine is it?
What's it been doing?
Does it have any current or developing problems?
If so, what should I do about it, and so forth?
The cloud might be telling the machine what
it should do next, but it's also guiding me
on what I should do next and what I should
know.
It's giving me the ability to visually and,
thanks to logic and even artificial intelligence,
really understand what's going on.
It also gives me a new model of interacting
with that machine.
That machine doesn't need a screen.
It doesn't need buttons, dials, keyboards,
none of that stuff because now I can talk
to it.
Again, when I speak, it's not that I'm speaking
to the machine.
I'm speaking to the HoloLens, and the HoloLens
is taking my speech, converting it to bits
and bytes, sending it up to the cloud, where
it's turned into machine commands that come
down to the machine in front of me.
I can basically carry on a conversation with
my hands and my mouth and my eyes and my ears
with a machine just like I'm doing with you
right now.
I basically am on the same playing field now
as that machine.
Anything the cloud can do for that machine,
it can do for me too, which really lifts my
capabilities as a human because I can finally
enjoy the benefits of the cloud the way machines
have over the last decades.
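The round trip Jim describes, with speech going up to the cloud, machine commands coming down, and telemetry flowing back as human-readable status, can be sketched roughly as follows. The class names, the command table, and the telemetry fields here are invented for illustration; this is not the ThingWorx or Vuforia API, just a minimal model of the cloud acting as a broker between the human's AR device and the machine.

```python
class Machine:
    """Stand-in for an IoT-connected machine reporting through an edge gateway."""

    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.state = "idle"
        self.telemetry = {"temperature_c": 41.0, "vibration_mm_s": 0.3}

    def execute(self, command):
        # The machine only ever receives commands from the cloud, never
        # directly from the person standing in front of it.
        self.state = command
        return f"{self.machine_id} -> {command}"


class Cloud:
    """Routes speech from the AR headset to machine commands, and turns
    machine telemetry into readable status for the AR display."""

    # Hypothetical mapping from recognized speech to a machine command.
    SPEECH_TO_COMMAND = {
        "start the machine": "running",
        "stop the machine": "stopped",
    }

    def __init__(self, machine):
        self.machine = machine

    def handle_speech(self, utterance):
        # Speech goes up from the headset; a command comes down to the machine.
        command = self.SPEECH_TO_COMMAND.get(utterance.lower())
        if command is None:
            return "Sorry, I didn't understand that."
        return self.machine.execute(command)

    def describe_machine(self):
        # Telemetry goes up from the machine; status is rendered to the human.
        t = self.machine.telemetry
        return (f"{self.machine.machine_id} is {self.machine.state}; "
                f"temperature {t['temperature_c']} C, "
                f"vibration {t['vibration_mm_s']} mm/s")


pump = Machine("pump-07")
cloud = Cloud(pump)
print(cloud.handle_speech("Start the machine"))
print(cloud.describe_machine())
```

The key design point the sketch tries to show is the indirection Jim mentions next: the human and the machine never talk to each other directly, yet the interaction feels direct because the cloud mediates both sides.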
In simplistic terms then, the AR side digitizes
me and the IoT side digitizes that machine.
Then your system brings the two together so
that I can manipulate that machine directly.
It's funny.
It feels direct but, in fact, it goes through
many layers of indirection.
Essentially, yes: I can manipulate that machine
directly because of the combination of IoT
on the machine side and AR on the human side.
AR is IoT for people.
If you think of it that way, now we're both
like machines, more or less.
I don't want to go too far with that, but
we're both connected in the way that you think
of a machine being connected and we're both
passing information back and forth to the
cloud and, through that, to each other, to
other machines and things around us, and so
forth.
I think we're really talking about changing
the way humans interact with the world and
that's because bits and bytes become sounds
and sights.
We know how to process sounds and sights without
even thinking about it, particularly if they're
part of the environment as opposed to put
on some flat device that we have to study
while looking away from the environment.
Just maybe to give you a thought, sometimes
I say, "Do this exercise," if you're in a
big room, an auditorium, or a shopping mall.
Close your eyes.
Open them for one second, look around, and
close them again.
Think about how much information you just
ingested.
You know where you are -- I'm in a shopping
mall.
I even know where I am -- I'm outside the
favorite department store.
The bathrooms are down there.
There are not many people here today.
Oh, that's because it's a sunny day outside.
I can see through the windows.
All that information came to me without thinking
about it because it was just there and I used
my sort of natural, again, mother nature-given
right to process information visually and
with my hearing and integrate it all together
without studying it.
Anything you put on a phone, you have to study.
If you put words on a phone, well, you can
only read text at about three words per second,
which, in digital terms, is glacially slow.
But if those words become pictures that are
part of the environment--oh, my God--it's
so fast.
It's a very powerful concept.
To what extent is the user interface becoming
a core competency of PTC and your investment?
I would say, think of us as creating tools
to help people author content, and that content
kind of becomes the user interface.
I think what we need to do is to be able to
pass on to our customers the right kind of
style guides, tips and tricks, and techniques,
if you will, to use our software to produce
AR experiences that are really sexy and easy
to understand.
I think we need to have that expertise, but
I don't think I'm in the business of selling
that expertise.
I'm in the business of selling tools and passing
on that expertise so people know how to get
really great value out of the software tools
that I would sell them.
You're selling them the capability and it's
up to them then to decide how to use that
for their particular use case.
I'm not selling an owner's manual for a refrigerator.
I'm selling software that would allow you
to develop an owner's manual, an AR owner's
manual for a refrigerator.
To make that experience 3D and in context
rather than 2D and on paper.
Let's finish up by talking about deployment.
People are listening to this.
They say, "Yeah, this sounds really good,
really great."
How do they start?
What should they do, people in the enterprise?
One thing I'd recommend is, go to PTC.com
and download the Harvard Business Review article
that Professor Porter and I wrote because
it's a completely non-commercial piece around
the strategy and power of AR for enterprises.
That'll give you a lot to think about.
Generally, you want to start with use cases.
Where could you make workers, particularly
frontline workers, much more productive and
how would you apply AR to that?
There are different types of AR I've talked
about in the course of this discussion.
Which type is most important to you?
The different types have different startup
costs, if you will.
Some of it, like the video call idea, takes
you five minutes to get going.
You download an app, you make a call, and
you're going.
There's no implementation, per se.
On the other hand, if you're trying to develop
your own apps using computer vision engines
and toolkits, you're going to start by hiring
developers.
That's a long path to productive use.
That's not to say that's not viable, particularly
for sales and marketing apps, but I think
it's really about picking the high-value use
cases, understanding the requirements to tackle
those use cases, and then getting started.
Finally, what is your vision for where PTC
as a company, as an organization, is going
over the next number of years?
Yeah.
Again, we're trying to bring these physical
and digital worlds together.
We've identified that IoT and AR are two of
the most critical technologies for crossing
that physical/digital boundary.
IoT is mostly bringing data from physical
to digital and AR is mostly bringing data
from digital to physical.
That creates a loop that allows us to keep
going back and forth between these mirror-image
worlds.
We want to be the leading software company
in terms of connecting these worlds together
in particular with our IoT platform ThingWorx
and our AR platform called Vuforia.
That's a great combination.
They're both leading in their respective fields.
There's a tremendous amount of business growth
and success out there ahead of us, so we're
excited to go after it.
Jim Heppelmann, CEO of PTC, thank you again
for taking your time to be with us today.
Thank you, Michael.
It was great.
You've been watching CXOTalk.
Thank you so much for watching.
Before you go, subscribe on YouTube and subscribe
to our newsletter.
Hit the subscribe button at the top of our
website.
Thanks so much, everybody.
We have more shows coming up and we will see
you soon.
Have a great day.
Bye-bye.
