>> From the SiliconANGLE Media office
in Boston, Massachusetts, it's theCUBE.
Now here's your host, Stu Miniman.
>> Hi, I'm Stu Miniman
and welcome to theCUBE's
Boston-area studio.
This is a CUBE Conversation.
Happy to welcome to the
program first time guest
Benjamin Nye, CEO of Turbonomic,
a Boston-based company.
Ben, thanks so much for joining us.
>> Stu, thanks for having me.
>> Alright Ben, so as we say,
we are fortunate to live
in interesting times
in our industry.
Distributed architectures are
what we're all working on,
but at the same time,
there's a lot of consolidation going on.
You know, just to put this in context.
Just in recent past, IBM
spent 34 billion dollars
to buy Red Hat.
And the reason I bring that up
is a lot of people talk about
you know, it's a hybrid multi-cloud world.
What's going on?
The thing I've been saying
for a couple of years is that
as users, there are two things to watch.
They care about their data an awful lot.
That's what drives businesses.
And what really drives the data?
It's their applications.
>> Perfect.
>> And that's where Turbonomic sits.
Workload automation is where you are.
And that's really the
important piece of multi-cloud.
Maybe give our audience a
little bit of context as to
why IBM buying Red Hat fits into
the general premise of
why Turbonomic exists.
>> Super.
So the IBM Red Hat combination
I think is really all about
managing workloads.
Turbonomic has always been
about managing workloads
and actually Red Hat was
an investor, is an investor
in Turbonomic, particularly
for OpenStack,
but more importantly OpenShift now.
When you think about the
plethora of workloads,
we're gonna have ten times
the number of workloads
relative to VMs and so forth
when you look at
microservices and containers.
So when you think about that combination,
it's really, it's an
important move for IBM
and their opportunity to
play in hybrid and multi-cloud.
They just announced the
IBM multi-cloud manager,
and then they said wait a minute,
we gotta get this thing to scale.
Obviously OpenShift and Red Hat bring scale.
8.9 million developers in their community
and the opportunity to
manage those workloads
across on-prem and off in a
cloud-native format is critical.
So relate that to Turbo.
Turbo is really about
managing any workload
in any environment anywhere at all times.
And so we make workloads smart,
which is self-managing, anywhere, in real time,
which allows the workloads
themselves to care
for their own performance assurance,
policy adherence, and cost effectiveness.
And when you can do that,
then they can run anywhere.
That's what we do.
>> Yeah, Ben, bring us
inside of customers.
When people hear
applications and multi-cloud,
there was the original thing.
Oh well, I'm gonna be able
to burst to the cloud.
I'm gonna be moving things all the time.
Applications usually
have data behind them.
There's gravity, it's
not easy to move them.
But I wanna be able to
have that flexibility of
if I choose a platform,
if I move things around,
I think back to the storage world.
Migration was one of the
toughest things out there
and something that I spent
the most time and energy
to constantly deal with.
What do you see today when it
comes to those applications?
How do they think about them?
Do they build them one
place and they're static?
Is it a little bit more modular now
when you go to microservices?
What do you see and hear?
>> Great, so we have
over 2,100 accounts today
including 20% of the Fortune 500,
so a pretty good sample set
to be able to describe this.
What I find is that CIOs today,
and I meet with many of them,
want either born in the
cloud, migrate to the cloud,
or run my infrastructure as cloud.
And what they mean is
they want, they're seeking
greater agility and elasticity
than they've ever had.
And workloads thrive in that environment.
So as we decompose the applications
and decompose the
infrastructure and open it up,
there's now more places to
run those different workloads
and they seek the flexibility
to be able to create
applications much more quickly,
set up environments a lot faster,
and then they're more than
happy to pay for what they use.
But they get tired, candidly, of the waste
of the traditional legacy environments.
And so there's a constant evolution for
how do I take those
workloads and distribute them
to the proper location for them to run
most performantly, most cost effectively,
and obviously with all the
compliance requirements
of security and data today.
>> Yeah, I'm wondering if you could help
connect the dots for us.
In the industry, we talk a lot
about digital transformation.
>> Yeah.
>> If we said two or three years ago
was a lot of buzz around this,
when I talk to end users
today, it's reality.
Absolutely, it's not just,
oh I need to be mobile
and online and everything.
What do you hear
and how do my workloads
fit into that discussion?
>> So it's an awesome subject.
When you think about what's
going on in the industry today,
it's the largest and fastest
re-platforming of IT ever.
Okay, so when you think about for example
at the end of 2017,
take away dollars and focus on workloads.
There were 220 million workloads.
80% were still on-prem.
For all the growth in the cloud,
it was still principally
an on-prem market.
When you look now forward,
the differential growth rates,
63% average growth across
the cloud vendors, alright,
in the IaaS market.
And I'm principally
focused on AWS and Azure.
And only 3% growth rate
in the on-premise market.
Down from five years ago
and continuing a decline
because of the expense,
fragility, and poor performance
that customers are receiving.
So the re-platforming is going on
and customers' number one question is,
can you help me run my
workloads in each of these
three environments?
So to your point, we're not
yet where people are bursting
these workloads in between
one environment and another.
My belief is that will come.
But in today's world,
you basically re-platform those workloads.
You put them in a certain environment,
but now you gotta make
sure that you run them well
performantly and cost effectively
in those environments.
And that's the digital transformation.
>> Okay.
So Ben, I think back to my career.
If I turn back the clock even two decades,
intelligence, automation,
things we were talking about,
it's different today.
When I talk to the
people building software,
re-platforming, doing these things today,
machine learning and AI,
whatever favorite buzzword
you have in that space is really driving
significant changes into
this automation space.
I think back to early days of Turbonomic.
I think about kinda the
virtualization environments
and the like.
How does automation intelligence,
how is it different today than it was say,
when the company was founded?
>> Wow.
Well so for one, we've
had to expand to this
hybrid and multi-cloud world, right?
So we've taken our data
model, which is AIOps,
and driven it out to include Azure and AWS.
But you'd ask: why?
Why is that important?
And ultimately, when
people talk about AIOps,
what they really mean,
whether it's on-prem or off,
is resource-aware applications.
I can no longer affect
performance by manually
running around and doing
the care and feeding
and taking these actions.
It's just wasteful.
And in the days where
people got around that
by over-provisioning on-prem,
sometimes as much as 70 or 80%
if you look at the resource actually used,
it was far too expensive.
Now take that to the
cloud, to the public cloud,
which is a variable cost environment
and I pay for that
over-provisioning every second
of the rest of my life
and it's just prohibitive.
So if I want to leverage
the elasticity and agility
of the cloud,
I have to do it in a smarter measure
and that requires analytics.
And that's what Turbonomic provides.
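[To put rough numbers on Ben's over-provisioning point, here's a back-of-the-envelope sketch in Python. The hourly rates and sizes are illustrative assumptions only, not real cloud prices or Turbonomic output.]

```python
# Toy comparison: allocation-based sizing vs. consumption-based sizing.
# Rates are hypothetical hourly on-demand prices, not real cloud quotes.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate):
    """Monthly bill for an instance billed every hour it exists."""
    return hourly_rate * HOURS_PER_MONTH

# A team provisions for peak "allocated" demand (a big, mostly idle instance)...
allocated = monthly_cost(0.80)
# ...but actual consumption would fit in a quarter of that capacity.
consumed = monthly_cost(0.20)

waste = allocated - consumed
print(f"allocated: ${allocated:.2f}/mo, right-sized: ${consumed:.2f}/mo")
print(f"waste: ${waste:.2f}/mo ({waste / allocated:.0%} of spend)")
```

On-prem, that 75% of idle capacity was a one-time capital cost; in a variable-cost public cloud it recurs every hour, which is exactly why the analytics matter.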
>> Yeah and actually I
really like the term AIOps.
I wonder if you can put a
little bit of a point on that
because there are many admins
and architects out there
that they hear automation
and AI and say, oh my gosh,
am I gonna be put out of a job?
I'm doing a lot of these things.
Most people we know in IT,
they're probably doing way
more than they'd like to
and not necessarily
being as smart with it.
So how does the technology
plus the people,
how does that dynamic change?
>> So what's fascinating is
if you think about the role of tech,
it was to remove some of the
labor intensity in business.
But when you then looked inside of IT,
it's the most labor intensive
business you can find, right?
So the whole idea was
let's not have people
doing low value things.
Let's do them high value.
So today when we virtualize
an on-premise estate,
we know that we can share it.
Run two workloads side by side,
but when a workload spikes
or there's a noisy neighbor,
we congest the physical infrastructure.
What happens then is that it gets so bad
that the application SLA breaks.
Alerts go off and we take
super expensive engineers
to go troubleshoot and
hopefully find root cause.
And then do a non-disruptive action
to move a workload from
one host to another.
Imagine if you could do
that through pure analytics
and software.
And that's what our AIOps does.
What we're allowing is
the workloads themselves
will pick the resources
that are least congested
on which to run.
And when they do that, rather
than waiting for it to break
and then having people try and fix it,
we just let it take that action on its own
and trigger a vMotion and put
it into a much happier state.
That's how we can assure performance.
We'll also check all the
compliance and policies
that govern those workloads
before we make a move
so you can always know that
you're in keeping with your
affinity/anti-affinity
rules, your HA/DR policies,
your data sovereignty,
all these myriad regulations.
Oh and by the way, it'll be
a lot more cost effective.
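[The placement logic Ben describes, move a workload to the least-congested resource that still passes its policy checks, can be sketched in a few lines. This is a toy model for illustration, not Turbonomic's actual analytics; all names and thresholds are invented.]

```python
# Toy placement: pick the least-congested compliant host, instead of waiting
# for an SLA breach and troubleshooting after the fact.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_used: float   # fraction of CPU in use, 0.0-1.0
    mem_used: float   # fraction of memory in use, 0.0-1.0
    zone: str = "us-east"

    @property
    def congestion(self):
        # The most constrained resource dominates the score.
        return max(self.cpu_used, self.mem_used)

def compliant(host, required_zone):
    # Stand-in for affinity/anti-affinity, HA/DR, and data-sovereignty checks.
    return host.zone == required_zone

def place_workload(hosts, required_zone):
    candidates = [h for h in hosts if compliant(h, required_zone)]
    if not candidates:
        return None  # no compliant destination; take no action
    return min(candidates, key=lambda h: h.congestion)

hosts = [
    Host("host-a", cpu_used=0.85, mem_used=0.60),
    Host("host-b", cpu_used=0.40, mem_used=0.55),
    Host("host-c", cpu_used=0.20, mem_used=0.30, zone="eu-west"),
]
target = place_workload(hosts, "us-east")
print("move workload to", target.name)  # host-b: compliant and least congested
```

Note the ordering: compliance filters first, then congestion decides, which mirrors the "check the policy before you make the move" point above.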
>> Alright, Ben, you mentioned vMotion.
So people that know virtualization,
this was kind of magic
when we first saw it
to be able to give me
mobility with my workloads.
Help modernize us with Kubernetes.
Where does that fit in your environment?
How does it fit the multi-cloud
world? As far as I see,
Kubernetes does not
break the laws of physics
and allow me to do vMotion
across multiple clouds.
So where does Kubernetes
fit in your environment?
And maybe you can give us a little bit of
compare and contrast of kinda
the virtualization world
and Kubernetes, where that fits.
>> Sure, so we look at
containers or the pods,
a grouping of containers, as
just another form of liquidity
that allows workloads to move, alright?
And so again we're
decomposing applications
down to the level of microservices.
And now the question you
have to ask yourself is
when demand increases on an application,
or indeed on a container,
am I to scale up that
container or should I clone it
and effectively scale it out?
And that seems like a simple question,
but when you're looking at
it at huge amounts of scale,
hundreds of containers or
pods per workload or per VM,
now the question is, okay,
whichever way I choose,
it can't be right unless
I've also factored
the imposition I'm putting on the VM
in which that container and or pod sits.
Because if I'm adding memory in one,
I have to add it to the other
'cause I'm stressing the
VM differentially, right?
Or should I actually clone the VM as well
and run that separately?
And then there's another
layer, the IaaS layer.
Where should that VM run?
In the same host and cluster
and data center if it's on-prem,
or in the same availability
zone and region
if it's off-prem?
Those questions all the way down the stack
are what need to be answered.
And no one else has an answer for that.
So what we do is we
instrument a Kubernetes
or an OpenShift, or even on
the other side a Cloud Foundry,
and we actually make the scheduler live
and what we call autonomic.
Able to interrelate the demand
all the way down through
the various levels of the
stack to assure performance,
check the policy, and make
sure it's cost effective.
And that's what we're doing.
So we actually allow the interrelationship
between the containers
and their schedulers
all the way down through the virtual layer
and into the physical layer.
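[Ben's scale-up versus scale-out question can be sketched as a tiny decision function. This is a simplified illustration of the cascading-layers idea, not Turbonomic's scheduler; the memory figures are made up.]

```python
# Toy decision for the scale-up vs. scale-out question:
# resizing a container only helps if the underlying VM has headroom,
# otherwise the action must cascade to the VM (or IaaS) layer too.

def scale_decision(container_demand_mb, container_limit_mb, vm_free_mb):
    needed = container_demand_mb - container_limit_mb
    if needed <= 0:
        return "no action"           # demand fits the current limit
    if vm_free_mb >= needed:
        return "scale up container"  # the VM can absorb a bigger limit
    # The VM itself is the bottleneck: clone the pod onto another VM
    # (or resize/clone the VM, one layer further down the stack).
    return "scale out to another VM"

print(scale_decision(900, 1024, 2048))   # no action
print(scale_decision(1500, 1024, 2048))  # scale up container
print(scale_decision(1500, 1024, 256))   # scale out to another VM
```

The point of the example is the middle check: a container-level answer is only "right" once the imposition on the VM underneath has been factored in.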
>> Yeah, that's impressive.
You really just did a
good job of explaining
all of those pieces.
One of the challenges
when I talk to users,
they're having a real
hard time keeping up.
(laughing)
They say, I've just started to figure
out my cloud environment.
Oh wait, I need to do
things with containers.
Oh wait, I hear about
the server-less thing.
What are some of the big
challenges you're hearing
from customers?
Who do they turn to to help
them stay on top of the things
that are important for their business?
>> So I think finding the
sources of information now
in the information age when
everything has gone to software
or virtual or cloud has become harder.
You don't get it all from the
same one or two monolithic
vendors, strategic vendors.
I think they have to come
to theCUBE as an example
of where to find this information.
That's why we're here.
But I think in thinking about this,
there's some interesting data points.
First on the skills gap, okay,
Accenture did a poll of their customer base
and found that only 14% of their customers
thought they had the
requisite skills on staff
to warrant their moves to the cloud.
Think about that number, so 86% don't.
And here's another one.
When you get this wrong,
there's some fascinating data that says
80% of customers receive a cloud bill
north of three times what
they expected to spend.
Now just think about that.
Now I don't know which
number's bigger frankly, Stu.
Is it the 80% or the three times?
But there's the conversation.
Hey, boss, I just spent
the entire annual budget
in a little over a quarter.
You still wanna get that cup of coffee?
(laughing)
So the costs of being wrong
are enormously expensive.
And then imagine if I'm
not governing the policies
and my workloads wind up in a country
that they're not meant
to per data sovereignty.
And then we get breached.
We have a significant problem there
from a compliance standpoint.
And the beauty is software
can manage all this
and automation can help
alleviate the constraint
of the skills gap that's going on.
>> Yeah, you're totally right.
I think back to five years
ago, I was at AWS re:Invent.
And they had a tool that started
to monitor a little bit of
are you actually using the
stuff that you're paying for?
And there were customers
walking out and saying,
I can save 60 to 70%
over what I was doing.
Thank you Amazon for
helping to point that out.
When I lived on the data center side
and vendors that sold stuff,
I couldn't imagine if your
sales rep came and said,
hey, we deployed this stuff
and we know you spent millions of dollars.
It seems like we over-provisioned you by
two to three x what you expected.
You'd be fired.
So it's like Wall Street
treats Amazon a little bit differently
than they do everybody else.
So on the one hand, we're making progress.
There's lots of software
companies like yourself.
There's lots of companies helping people
to optimize their cost on there.
But still, this seems like
there's a long way to go to get
multi-cloud and the cost
of what's going on there
under control.
Remember the early days?
They said cloud was supposed
to be simple and cheap
and turned out to be neither of those.
So Ben, I want to give
you the opportunity.
What do you see both as an
industry and for Turbonomic,
what's the next kinda
six to 12 months bring?
>> Good, can I hit your cloud point first?
It's just, when you think of Amazon,
just to see how this has changed.
If I go and provision a
workload in Amazon EC2 alone,
there's 1.7 million different combinations
from which I can choose across
all the availability zones,
all the regions, and all the services.
There are 17 families in the
compute service alone,
as just one example.
So Amazon looks
at Turbonomic and says,
you're almost a customer
control plane for us.
You're gonna understand
the demand on the workload,
and then you can help the customer,
advise the customer which
service, which instance types,
all the way down through
not just compute and memory,
but down into network and storage
are the ones that we should do.
And the reason we can do
this so cost effectively
is we're doing it on a
basis of a consumption plan,
not an allocation plan.
And Amazon as a retailer in their origin,
has cut prices 62 times,
so they're very interested in using us
as a means of making their
customers more cost effective
so that they're indeed
paying for what they use,
but not paying for what they don't use.
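[Sizing on a consumption plan rather than an allocation plan, as described above, can be sketched like this. The instance family names, sizes, and prices below are invented for illustration; they are not real AWS or Azure catalog entries.]

```python
# Toy right-sizing on a consumption basis: pick the cheapest instance type
# whose capacity covers what the workload actually uses (plus headroom),
# not what was originally allocated. All names and prices are hypothetical.

INSTANCE_TYPES = [
    # (name, vCPUs, memory GiB, hypothetical $/hour)
    ("small",  2,   8, 0.10),
    ("medium", 4,  16, 0.20),
    ("large",  8,  32, 0.40),
    ("xlarge", 16, 64, 0.80),
]

def right_size(used_vcpus, used_mem_gib, headroom=1.2):
    """Cheapest type covering observed consumption plus a safety margin."""
    need_cpu = used_vcpus * headroom
    need_mem = used_mem_gib * headroom
    fits = [t for t in INSTANCE_TYPES
            if t[1] >= need_cpu and t[2] >= need_mem]
    return min(fits, key=lambda t: t[3]) if fits else None

# Allocated an "xlarge", but measured consumption says "medium" is plenty:
choice = right_size(used_vcpus=2.5, used_mem_gib=10)
print(choice[0])  # medium
```

Scaling the same idea across every service, zone, and region is what turns a handful of rows into the millions of possible combinations mentioned above.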
They've recognized us as giving us
the migration tools competency,
as well as the third party
cloud management competencies
that frankly are very
rare in the marketplace.
And recognize that those
are because production apps
are now running at
Amazon like never before.
Azure, Microsoft Azure, is not to
be missed on this one, right?
So they've said we too
wanna make sure that we have
cost effective operations.
And what they've described is
when a customer moves to Azure,
that's an Azure customer.
But then they need to make
sure that they're growing
inside of Azure,
and there's a magic number
of 5,000 dollars a month.
If they exceed that, then
they're Azure for life, okay?
The problem becomes if
they pause and they say,
wow this is expensive or
this isn't quite right.
Now they just lost a year of growth.
And so there's a whole opportunity with Azure,
and they actually resell
our assessment products
for migration planning
as well as the optimization thereafter.
And the whole idea is to make sure again
customers are only
paying for what they use.
So both of these platforms in the cloud
are super aggressive with one another,
but also relative to the
on-prem legacy environments
to make sure that the workloads
are coming into their arena.
And if you look at the value of that,
in round numbers it's about
three to 6,000 dollars a year
per workload.
We have three million smart
workloads that we manage today
at Turbonomic.
Think what that's worth
in the realm of the prize
at the public cloud vendors
and it's a really interesting thing.
And we'll help the customers get there
most cost effectively as they can.
>> Alright, so back to looking forward.
Would love to hear your thoughts
on just what customers need broadly
and then some of the areas
that we should look for
Turbonomic in the future.
>> Okay, so I think you're
gonna continue to see
customers look for outlets for
this decomposed application
as we've described it.
So microservices,
containers, and VMs running
in multiple different environments.
We believe that the next one,
so today in market we have the SDDC,
the software-defined data
center, and virtualization.
We have IaaS and PaaS in the
public and hybrid cloud worlds.
The next one we believe will
be as applications at the edge
become less pedestrian, more strategic
and more operationally intensive,
then you're talking about
Amazon Prime delivery
or your driverless cars or
things along those lines.
You're going to see that the
edge really is gonna require
the cell tower to become the
next generation data center.
You're gonna see compute memory
and storage and networking
on the cell tower
because I need to process
and I can't take the latency
of going back to the core,
be it cloud core or on premise core.
And so you'll do both, but
you'll need that edge processing.
Okay, what we look at is if
that's the modern data center,
and you have processing
needs there that are critical
for those applications
that are yet to be born,
then our belief is you're
gonna need workload
automation software because
you can't put people
on every single cell tower in America
or the rest of the world.
So, this is sort of a
confirming trend to us
that we know we're in the right direction.
Always focus on the workloads,
not the infrastructure.
If you make the application
workloads perform,
then the business will run well
regardless of where they perform.
And in some environments
like a modern day cell tower,
they're just not gonna be the opportunity
to put people in manual response
to a break fix problem set at the edge.
So that's kinda where we
see these things headed.
>> Alright, well Ben Nye,
pleasure to catch up with you.
Thanks so much for giving us the update
on where the industry is
and Turbonomic specifically.
And thank you so much for watching.
Be sure to check out theCUBE.net
for all of our coverage.
Of course we're at all the big cloud shows
including AWS re:Invent
and KubeCon in Seattle
later this year.
So thank you so much
for watching theCUBE.
(gentle music)
