HPE Ezmeral ML Ops supports deploying models in their native runtime engines
as scalable, containerized endpoints.
Once a model is built, data scientists can check the model into the model registry
and create a deployment cluster to serve this model for inferencing.
Logged in as a Developer,
on the Model Management screen,
click the 'Register New Model' button to open the Register/Update Model screen.
Start with a short name for the model.
Add a brief description of the model if desired.
For tracking model versions,
we enter 1 for a new model, or a higher number for an updated version.
Click the 'Browse' button and navigate to the correct directory
to set the 'Path to Model Repo'.
In this example, we use the models we previously
uploaded in the model repository demo.
Set the 'Path to Scoring Script' by clicking the 'Browse' button and
navigating to the optional wrapper script used to score the model.
In this example, we use the code we previously uploaded to the model repository.
The 'Trained on Environment' field is optional, but we can enter a
description of the training environment used to create this model.
When we have finished entering information,
we click the Submit button to finish adding the new model.
From here we can create a deployment cluster for the model.
Enter a unique name for the new deployment cluster.
The model field is pre-populated with the name of the model
we selected to deploy from the Model Management screen.
We are also able to update any of the fields below as we have in previous demos,
but for this example we will use the defaults and deploy!
With this cluster deployed, we can get the service endpoint
needed to call the REST API.
Click on the Deployment cluster 'Income_Prediction'
and locate the service endpoint shown under the Model Serving LoadBalancer.
This URL can be copied and pasted into a curl command
or an external client, such as Postman,
to post a REST API call after updating the Model name and version.
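As a sketch, such a call could look like the following. The gateway host, port, URL pattern, and payload schema are hypothetical placeholders, not values from this demo; the real endpoint comes from the Model Serving LoadBalancer entry, and the payload depends on the model's scoring script.

```shell
#!/bin/sh
# Hypothetical service endpoint copied from the Model Serving LoadBalancer;
# substitute your own host, port, model name, and version.
ENDPOINT="https://gateway.example.com:32700/Income_Prediction/1/predict"

# Illustrative request body; the actual schema is defined by the
# model's scoring script.
PAYLOAD='{"use_scoring": true, "scoring_args": {"age": 39, "hours_per_week": 40}}'

# Print the curl command rather than executing it, since a real call
# requires a live deployment cluster (and typically an auth token).
printf 'curl -s -X POST %s -H "Content-Type: application/json" -d %s\n' \
       "$ENDPOINT" "$PAYLOAD"
```

The same endpoint and payload can be pasted into an external client such as Postman instead of curl.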
HPE Ezmeral ML Ops simplifies the processes around model management
and model deployment, with a model registry within a project-specific repository
and model serving via deployment clusters.
As evident in these demos, HPE Ezmeral ML Ops provides a container-based platform
that enables enterprises to operationalize machine learning workflows
and deploy machine learning at enterprise scale.
