In this document we walk through an end-to-end example showing the lifecycle of a composite solution in Acumos. We will not cover every possible way of doing things; this is simply a cookbook recipe to help get people started.
In particular, we cover on-boarding two simple Python scikit-learn models using web on-boarding, then moving to the Design Studio and composing a solution from those models. After that we show how to deploy the composite solution to Azure, and finally present a web site you can use to communicate with it.
Note that there are really two different user experiences represented here. The first is the experience of the data scientist, who creates the model and onboards it to Acumos. The second is the model user, who wants to deploy the model as a microservice somewhere and use it.
The example presented in this document is called "face privacy filter" because it demonstrates obscuring human faces in images. The solution joins together a single standalone model for detecting faces in an image with a second model for obscuring regions in an image.
Here are the prerequisites for going through this example. Note that these are not strictly all required to use Acumos itself—there are alternatives at various points—but these are what you will need to duplicate the steps in this particular demo. There are slightly different requirements for the Model Onboarder vs. the Model User; of course, in this demo, we will be wearing both hats. Here is the software you will need to have installed on your local system:
- Model Onboarder: Web browser, Python (3.6+), Acumos Client Library, Protocol Buffers (Protobuf), ML packages of choice, git client
- Model User: Web browser
This example uses a Python virtual environment. This avoids conflicts with packages already installed on your machine and removes the need for admin privileges to install software in protected locations.
The model on-boarder will first need to install a few things: Python version 3.6+ and the Protocol Buffers compiler. You should read the guidance at https://pypi.org/project/acumos/, which explains how to install this software. Know which ML packages you are using; it is up to you to install these on your local system. Acumos supports Java and R as well as Python, each with its own client library and requirements, but this document and example are Python-specific.
The model user needs to have access to a deployment environment for running their composite solution. This example will show how to deploy to the cloud, specifically Microsoft Azure. Users will use a web browser to communicate with the deployed solution.
The very first step is to ensure you have Python version 3.6 or later. You might install it from a system package, use pyenv, etc.; there are simply too many possibilities to cover here. Check the version and ensure you get output something like this:
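If your interpreter is installed as "python3" (the usual name on Linux and macOS), the check looks like this; the exact patch version in the output will vary:

```shell
# Confirm a Python 3.6+ interpreter is on your PATH
python3 --version
```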
Next you will create a new directory holding a virtual environment, where all the packages for this demonstration will be installed. This example uses the name "virtenv36" below, a reminder that it is a virtual environment with Python version 3.6. Type the following commands:
An example session from a Linux system appears below. Please note the exact paths shown in the command output may differ on your system.
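A minimal sketch of those commands on Linux or macOS, assuming your interpreter is named "python3"; the working-directory name "acumos-demo" is arbitrary, any name works:

```shell
# Make a working directory and a Python virtual environment inside it
mkdir acumos-demo && cd acumos-demo
python3 -m venv virtenv36
# Activate the environment; the prompt usually changes to show "(virtenv36)"
. virtenv36/bin/activate
```

While the environment is active, "pip install" puts packages inside virtenv36 rather than in system locations.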
Clone the Model Source
The next step is to obtain the Python source code and supporting data for the face-detection and face-pixelation models from the Acumos git repository using git. Type this command:
An example session from a Linux system appears below.
Create Model Artifacts
Next you will install the Python prerequisites for these models, and use the source to create model artifacts that can be on-boarded into Acumos. Type the following commands:
You should see two new directories with similar contents:
Prepare Model Bundles to On-Board
In this step you will create zip archives for use in the Web On-Boarding feature of Acumos. These bundles contain the model artifacts you created in the previous step. It is important that these zip archives contain no directories or subdirectories, just the three required files. Type these commands to create the two zip archives:
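One way to build such a flat archive, shown here for the "detect" model with Python's own zipfile module so no separate zip utility is needed. The "touch" lines create placeholder artifact files so this sketch is self-contained; with the real artifacts you would skip them. The key point is zipping bare filenames from inside the directory, which keeps directory paths out of the archive (repeat for the pixelate directory):

```shell
# Placeholder artifacts (replace with your real dumped files)
mkdir -p face_privacy_filter_detect
touch face_privacy_filter_detect/metadata.json \
      face_privacy_filter_detect/model.proto \
      face_privacy_filter_detect/model.zip

cd face_privacy_filter_detect
# Passing bare filenames keeps the archive flat (no directory entries)
python3 -m zipfile -c ../face_privacy_filter_detect.zip \
    metadata.json model.proto model.zip
cd ..
```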
When you are done you should have two new zip archives. The file sizes may differ from the example here:
On-Board the Face Detect Model
Now go to the Acumos instance where you wish to on-board this newly created model. Log in to Acumos, select ON-BOARDING MODEL from the left tabs, and fill in the requested data. We have a scikit-learn toolkit model:
We have already downloaded the client library, so we just check the "Installation of the toolkit library is completed" box. Next, click "Upload Model Bundle" and drag and drop the zip archive created earlier from the dumped model files:
Click "Done"; you will then be able to click the "On-Board Model" button, which kicks off the on-boarding process. Be patient; this can take quite a long time! You will see a small animation near the top of the page showing progress. When on-boarding completes successfully, you should see something like this:
On-Board the Face Pixelation Model
When the previous on-boarding step completes for the "detect" model, perform the same steps for the "pixelate" model.
View Your Models
Congratulations! Now your models are in the catalog as Unpublished (i.e., private) models. You can click the “View Models” link shown by the final green circle with the checkmark to go directly to your models on the "My Models" page. Please note these models have different names, but the screen truncates the names after the first 16 characters so they appear to be the same:
Click on your new model (face_privacy_filter_detect) to go to that model’s specific page. There’s not a lot of information there right now, but if you want, you can click on “Manage My Model” and do things like Publish your model (to Company or Public), add information, or share your model with other users.
Set Model Category and Toolkit Type in Metadata
To work around a current limitation in the Design Studio, you must add some metadata to your newly on-boarded models. For each one, click on the model to visit its model page, click the Manage My Model button at the top, then click "Publish to Company Marketplace" in the left navigation bar. This will allow you to edit the metadata.
- Revise the model name to make it short; e.g., "detect" instead of "face_privacy_filter_detect" and "pixelate" instead of "face_privacy_filter_pixelate". Click on Model Name, wait for a popup to appear, enter the new name, click Update. Do this for both models.
- Set a model category and a toolkit type. Click on "Model Category", wait for a popup to appear, select a Category such as Classification on the left, select toolkit type Scikit-Learn on the right, click Done. Do this for both models.
We won't do anything further in this example for now. This is the end of the on-boarding section, and we now switch hats to become a Model User. You cannot see someone else's models unless that person publishes them to the company marketplace or shares them with you, so we will continue where we left off, as the same user who on-boarded the models.
Create a Composite Solution
Now that the models are available in the Acumos platform, you will create a composite solution and deploy it.
Compose the Solution in Design Studio
Open the Design Studio by clicking Design Studio in the left navigation bar. Wait for the screen to populate. Expand the Classification section under Models on the left side and look for the model names you entered in the previous step. This screen shows the model "detect" in blue among the list:
Drag the "detect" model and drop it onto the canvas. Find the "pixelate" model and similarly drag and drop that model onto the canvas. Create a data path between the two models from the output port (right-hand side) of the "detect" model to the input port (left-hand side) of the "pixelate" model. The result should look like the following:
Click the Save button at the top to save the composite solution, giving it a meaningful name and version number. This example uses "detect-pixelate" and "1.0". Then click the Validate button at the top and ensure the Design Studio shows no errors in the bar at the bottom. The newly saved and validated solution name should appear in the "My Solutions" tab at the lower right.
Deploy the Composite Solution to Azure
From the Design Studio click on the Deploy button at the top, and in the drop-down click on Azure. This will take you directly to the manage-my-model page with the Export / Deploy to Cloud section active. You should see the following screen:
Enter the appropriate IDs, secret keys and other details for your Azure account. Deployment will create a new virtual machine (VM), and several Docker containers will run on that VM, each listening on a different port.
After the deployment finishes, you will receive a notification with the IP address of the deployed VM, as shown in the following screenshot:
Next you will gather the port number information of the constituent models, connector and (if used) the proto viewer ("probe"). The following table lists example details. The BluePrintContainer is the manager of the data flows in this composite solution:
Obtain the VM username and password and log in to the account with ssh. At present the platform does not accept these credentials from the user, so this step requires administrator assistance.
After you are logged in to the new VM issue this command:
A sample session appears below. The external port numbers that you need are shown within the lines in the form "0.0.0.0:EXTERNAL_PORT_NUMBER->internal_port_number". For example, the face-detect model listens on port 8558, and the face-pixelate model listens on port 8559.
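As a concrete illustration, the external port can be pulled out of such a field with sed. The mapping string below is a sample in the "docker ps" format, not live output from your VM:

```shell
# Sample port-mapping field as printed by `docker ps`
mapping='0.0.0.0:8558->3330/tcp'
# Keep only the digits between "0.0.0.0:" and "->" (the external port)
external_port=$(printf '%s\n' "$mapping" | sed -n 's/^0\.0\.0\.0:\([0-9][0-9]*\)->.*/\1/p')
echo "external port: $external_port"
```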
Test the Composite Solution
You will use a web site that packs an image into a Protocol Buffers message, POSTs the message to the remote site, reads the result back, and displays some of those results. The web site is available in the web-demo subdirectory of the face-privacy-filter git repository that you cloned above. Change to that subdirectory and start a web server listening on port 8000 by typing this command, using Python version 3 (which might be called just "python" on your system):
If port 8000 is in use, pick any other large number. Then open a web browser and browse to this URL and port:
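A sketch of serving the demo page, assuming you start from the directory into which you cloned the repository. The server normally runs in the foreground until you press Ctrl+C; it is backgrounded here only so the commands can be pasted as one unit (stop it later with "kill $SERVER_PID"):

```shell
cd face-privacy-filter/web-demo
# Serve the current directory over HTTP on port 8000
python3 -m http.server 8000 &
SERVER_PID=$!
# Now browse to http://localhost:8000
```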
To use the site you will enter a URL in the form's "Transform URL" field with the remote host address and port information where the models and connector are deployed.
Test the Detect Model
First test the detect model directly. Using the VM IP address and model port number you obtained above, modify the URL shown below appropriately. Also select the appropriate protobuf method:
URL: http://22.214.171.124:8558/detect (use your own VM's IP address here, and you may have to adjust the port as well!)
Protobuf Method: detect (input: Image, output: RegionDetectionSet)
Be careful to select the correct protobuf method. Then click one of the images at the bottom left to trigger the test (this example starts with "pexels source"). You should see the following:
Test the Pixelate Model
The test harness does not currently support standalone testing of the pixelate model. You can use command-line tools as follows:
- Follow the directions above and post an image to the detect model
- After the region detection set is shown, click the Download Encoded Response button at the bottom of the page and save the file
- Use the curl command to post to the pixelate model, with the IP address and port you obtained above:
- Get the pixelate protocol buffer definition file and the package name (e.g., "GdyAUbAFvUrXNzjeDPijvbkPzYzHAEBm")
- Use the protoc command to decode the result using that file and package name; protoc reads the encoded message on standard input:
- protoc --decode=HipTviKTkIkcmyuMCIAIDkeOOQQYyJne.Image model.pixelate.proto
Test the Model Combination via the Model Connector
To test the combined models, enter details using the VM IP address and the model connector's port you obtained above, plus the specified protobuf method:
Protobuf Method: detect (input: Image, output: Image)
Be careful to select the correct protobuf method. Then click one of the images at the bottom to trigger the test (this example starts with "flickr source"). You should see the following: