Overview

In this document we walk through an end-to-end example showing the life cycle of a composite solution in Acumos.  We’re not going to cover every possible way of doing things; this is just a cookbook recipe to help get people started.

In particular, we are going to cover onboarding two simple Python scikit-learn models using web onboarding, then going to the Design Studio and composing a solution from those models. After that we show how to deploy the composite solution to Azure, and finally we present a web site you can use to communicate with it.

Note that there are really two different user experiences represented here. The first is the experience of the data scientist, who creates the model and onboards it to Acumos. The second is the model user, who wants to deploy the model as a microservice somewhere and use it.

The example presented in this document is called "face privacy filter" because it demonstrates obscuring human faces in images.  The solution joins together a single standalone model for detecting faces in an image with a second model for obscuring regions in an image.

On-Board Models

Prerequisites

Here are the prerequisites for going through this example. Note that these are not strictly all required to use Acumos itself—there are alternatives at various points—but these are what you will need to duplicate the steps in this particular demo. There are slightly different requirements for the Model Onboarder vs. the Model User; of course, in this demo, we will be wearing both hats. Here is the software you will need to have installed on your local system:

  • Model Onboarder: Web browser, Python (3.6+), Acumos Client Library, Protocol Buffers (Protobuf), ML packages of choice, git client
  • Model User: Web Browser

Virtual Environment

This example uses a Python virtual environment.  This avoids conflicts with packages already installed on your machine and eliminates the need for admin privileges to install software in protected locations.

Model Onboarder

The model onboarder will first need to install a few things: Python version 3.6+ and the Protocol Buffers compiler.  You should read the advice at https://pypi.org/project/acumos/, which gives good information on how to install this software. You should know which ML packages you are using, and of course it will be up to you to install these on your local system. Acumos supports Java and R as well as Python, and these have their own client libraries and requirements, but this document/example is Python-specific.

Model User

The model user needs to have access to a deployment environment for running their composite solution. This example will show how to deploy to the cloud, specifically Microsoft Azure. Users will use a web browser to communicate with the deployed solution.

Install Prerequisites

The very first step is to ensure you have Python version 3.6 or later.  You might need to install it using a system package, you might use pyenv to install it, etc.; there are simply too many possibilities to cover here.  Ensure you see output something like this:

$ python --version
Python 3.6.4

Next you will create a virtual environment in a new directory; this is where all the packages for this demonstration will be installed.  This example uses the name "virtenv36", a reminder that it's a virtual environment with Python version 3.6. Type the following commands:

virtualenv --python=python3.6 ~/virtenv36
source ~/virtenv36/bin/activate
pip install acumos

An example session from a Linux system appears below. Please note the exact paths shown in the command output may differ on your system.

$ virtualenv --python=python3.6 ~/virtenv36
Running virtualenv with interpreter /usr/local/bin/python3.6
Using base prefix '/usr'
New python executable in /home/user/virtenv36/bin/python3.6
Also creating executable in /home/user/virtenv36/bin/python
Installing setuptools, pip, wheel...done.
$ source ~/virtenv36/bin/activate
(virtenv36) $ pip install acumos
Collecting acumos
   [..much download progress output omitted..]
Installing collected packages: typing, dill, appdirs, filelock, six, protobuf, numpy, chardet, urllib3, certifi, idna, requests, acumos
Successfully installed acumos-0.7.0 appdirs-1.4.3 certifi-2018.4.16 chardet-3.0.4 dill-0.2.7.1 filelock-3.0.4 idna-2.7 numpy-1.14.5 protobuf-3.6.0 requests-2.19.1 six-1.11.0 typing-3.6.4 urllib3-1.23

Clone the Model Source

The next step is to use git to obtain the Python source code and supporting data for the face-detection and face-pixelation models from the Acumos Gerrit server.  Type this command:

git clone https://gerrit.acumos.org/r/face-privacy-filter

An example session from a Linux system appears below.

$ git clone https://gerrit.acumos.org/r/face-privacy-filter
Cloning into 'face-privacy-filter'...
remote: Total 470 (delta 0), reused 470 (delta 0)
Receiving objects: 100% (470/470), 10.19 MiB | 7.40 MiB/s, done.
Resolving deltas: 100% (230/230), done.
$ ls face-privacy-filter/
INFO.yaml          README.rst              requirements.txt    tox.ini
INSTALL.txt        docs/                   setup.py            web_demo/
LICENSE.txt        face_privacy_filter/    testing/

Create Model Artifacts

Next you will install the Python prerequisites for these models, and use the source to create model artifacts that can be on-boarded into Acumos.  Type the following commands:

cd face-privacy-filter
pip install -r requirements.txt
python face_privacy_filter/filter_image.py -d . -f detect
python face_privacy_filter/filter_image.py -d . -f pixelate

You should see two new directories with similar contents:

(virtenv36) $ ls face_privacy_filter_detect/
metadata.json    model.proto    model.zip
(virtenv36) $ ls face_privacy_filter_pixelate/
metadata.json    model.proto    model.zip
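
For background, the filter_image.py script relies on the Acumos Python client library to produce these artifacts.  Below is a minimal, hedged sketch of that pattern using a toy scikit-learn model; it is not the actual filter_image.py code, and the model, function, and names here are illustrative only:

# Hedged sketch only -- NOT the actual filter_image.py. Shows the
# general pattern for dumping a scikit-learn model with the Acumos
# Python client library; all names here are illustrative.
from sklearn.linear_model import LogisticRegression
from acumos.modeling import Model, List
from acumos.session import AcumosSession

clf = LogisticRegression()
clf.fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])

def classify(value: List[float]) -> List[int]:
    """Classifies each input value."""
    return [int(p) for p in clf.predict([[v] for v in value])]

model = Model(classify=classify)
session = AcumosSession()
# Writes metadata.json, model.proto, and model.zip into ./toy_model
session.dump(model, 'toy_model', '.')

The dump call is what produces the metadata.json, model.proto, and model.zip files shown above.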

Prepare Model Bundles to On-Board

In this step you will create zip archives for use in the Web On-Boarding feature of Acumos.  These bundles contain the model artifacts you created in the previous step.  It's important that these zip archives contain no directories or subdirectories, just the three required files.  Type these commands to create the two zip archives:

cd face_privacy_filter_detect
zip ../detect.zip metadata.json model.proto model.zip 
cd ..
cd face_privacy_filter_pixelate
zip ../pixelate.zip metadata.json model.proto model.zip
cd ..
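
If the zip utility is not available, the same bundles can be built with Python's standard zipfile module.  A minimal sketch, run from the face-privacy-filter directory:

# Minimal sketch: build the two on-boarding bundles with the standard
# library. The arcname is the bare file name, so the archives contain
# no directory entries (as required by Web On-Boarding).
import os
import zipfile

for name in ('detect', 'pixelate'):
    srcdir = 'face_privacy_filter_%s' % name
    with zipfile.ZipFile('%s.zip' % name, 'w', zipfile.ZIP_DEFLATED) as zf:
        for fname in ('metadata.json', 'model.proto', 'model.zip'):
            zf.write(os.path.join(srcdir, fname), arcname=fname)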

When you are done you should have two new zip archives. The file sizes may differ from the example here:

(virtenv36) $ ls -l *.zip
-rw-r--r--  1 user  staff  134683 Jul 10 11:40 detect.zip
-rw-r--r--  1 user  staff   35522 Jul 10 11:41 pixelate.zip

On-Board the Face Detect Model

Now go to the Acumos instance where you wish to on-board this newly created model. Log in to Acumos, select ON-BOARDING MODEL from the tabs on the left, and fill in the requested data. We have a scikit-learn toolkit model:

We’ve already installed the client library, so we just check the “Installation of the toolkit library is completed” box. Next, we click on “Upload Model Bundle” and drag and drop the detect.zip archive we created earlier:

Click “Done”; then you will be able to click the “On-Board Model” button, which kicks off the on-boarding process. Be patient—this can take quite a long time! You will see a little animation near the top of the page showing progress. When on-boarding completes successfully, you should see something like this:

On-Board the Face Pixelation Model

When the previous on-boarding step completes for the "detect" model, perform the same steps for the "pixelate" model.

View Your Models

Congratulations! Now your models are in the catalog as Unpublished (i.e., private) models. You can click the “View Models” link shown by the final green circle with the checkmark to go directly to your models on the "My Models" page.  Please note these models have different names, but the screen truncates the names after the first 16 characters so they appear to be the same:

Click on your new model (face_privacy_filter_detect) to go to that model’s specific page. There’s not a lot of information there right now, but if you want, you can click on “Manage My Model” and do things like Publish your model (to Company or Public), add information, or share your model with other users.

Set Model Category and Toolkit Type in Metadata

To work around a current limitation in the Design Studio, you must add some metadata to your newly on-boarded models.  For each one, click on the model to visit its model page, click on the Manage My Model button at the top, then click on "Publish to Company Marketplace" in the left navigation bar.  This will allow you to edit the metadata.

  1. Revise the model name to make it short; e.g., "detect" instead of "face_privacy_filter_detect" and "pixelate" instead of "face_privacy_filter_pixelate". Click on Model Name, wait for a popup to appear, enter the new name, click Update. Do this for both models.
  2. Set a model category and a toolkit type. Click on "Model Category", wait for a popup to appear, select a Category such as Classification on the left, select toolkit type Scikit-Learn on the right, click Done.  Do this for both models.

We won’t do anything more with the models right now in this example. We’re at the end of the on-boarding section, and we are now switching hats to become a Model User. You won’t be able to see someone else’s models unless that person publishes them to the company marketplace or shares them with you, so we will continue where we left off, as the same user who on-boarded the models.

Create a Composite Solution

Now that the models are available in the Acumos platform, you will create a composite solution and deploy it.

Compose the Solution in Design Studio

Open the Design Studio by clicking its entry in the left navigation bar.  Wait for the screen to populate.  Expand the Classification section under Models on the left side and look for the model names you entered in the previous step.  This screen shows the model "detect" in blue among the list:

Drag the "detect" model and drop it onto the canvas.  Find the "pixelate" model and similarly drag and drop that model onto the canvas.  Create a data path between the two models from the output port (right-hand side) of the "detect" model to the input port (left-hand side) of the "pixelate" model.  The result should look like the following:

Click the Save button at the top to save the composite solution, giving it a meaningful name and version number. This example uses "detect-pixelate" and "1.0".  Then click on the Validate button at the top and ensure the Design Studio shows no errors in the bar at the bottom. The newly saved and validated model name should appear in the "My Solutions" tab on the lower right.

Deploy the Composite Solution to Azure

From the Design Studio click on the Deploy button at the top, and in the drop-down click on Azure. This will take you directly to the manage-my-model page with the Export / Deploy to Cloud section active.  You should see the following screen:

Enter the appropriate IDs, secret keys, and other details for your Azure account. Deployment will create a new virtual machine (VM), and several Docker containers will be running on that VM, all listening on different ports.

After the deployment finishes you will receive a notification with the IP address of the deployed VM, as shown in the following screenshot:



Next you will gather the port numbers of the constituent models, the connector, and (if used) the proto viewer ("probe"). The following table lists example details. The BluePrintContainer is the manager of the data flows in this composite solution:

Container Name        IP              Port
BluePrintContainer    40.76.26.211    8555
detect                40.76.26.211    8558
pixelate              40.76.26.211    8557

Obtain the VM username and password and log in to the account with ssh. At present the deployment process does not accept credentials supplied by the user, so this step requires admin assistance.

After you are logged in to the new VM, issue this command:

docker ps

A sample session appears below.  The external port numbers that you need are shown within the lines in the form "0.0.0.0:EXTERNAL_PORT_NUMBER->internal_port_number".  For example, the face-detect model listens on port 8558, and the face-pixelate model listens on port 8557.

$ docker ps
CONTAINER ID        IMAGE                                                               COMMAND                  CREATED             STATUS              PORTS                    NAMES
86dbdcea3e18        cognitae6reg.azurecr.io/samples/cognita-e6e1531232959938_5:1.0.13   "/bin/sh -c 'java ..."   45 hours ago        Up 45 hours         0.0.0.0:8555->8555/tcp   BluePrintContainer
11b2e24c4cd2        cognitae6reg.azurecr.io/samples/cognita-e6e1531232959938_4:1.5.3    "/bin/sh -c 'redis..."   45 hours ago        Up 45 hours         0.0.0.0:5006->5006/tcp   Probe
59ffd3adda75        cognitae6reg.azurecr.io/samples/cognita-e6e1531232959938_3:latest   "nginx -g 'daemon ..."   45 hours ago        Up 45 hours         0.0.0.0:8559->80/tcp     Nginx
d52204149837        cognitae6reg.azurecr.io/samples/cognita-e6e1531232959938_2:1        "python runner.py"       45 hours ago        Up 45 hours         0.0.0.0:8558->3330/tcp   fpf_detect_0331
32695f149b6c        cognitae6reg.azurecr.io/samples/cognita-e6e1531232959938_1:1        "python runner.py"       45 hours ago        Up 45 hours         0.0.0.0:8557->3330/tcp   fpf_pixelate_0331

Test the Composite Solution

You will use a web site with features that pack an image into a Protocol Buffer message, POST the message to the remote site, read the result back, and display some of those results.  The web site is available within the face-privacy-filter git repository that you cloned above.  Change to the web_demo subdirectory and start the web server to listen on port 8000 by typing this command using Python version 3, which might be called just "python" on your system:

python3 simple-cors-http-server-python3.py 8000

If port 8000 is in use, pick another unused port number.  Then open a web browser and browse to this URL and port:

http://localhost:8000/face-privacy.html
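
For reference, a server of this kind needs only a few lines of Python.  The following is a hedged sketch of the general pattern; it is not the actual contents of simple-cors-http-server-python3.py, which may differ:

# Hedged sketch of a CORS-enabled static file server, similar in
# spirit to simple-cors-http-server-python3.py (details may differ).
import sys
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Add a permissive CORS header to every response served
        self.send_header('Access-Control-Allow-Origin', '*')
        super().end_headers()

port = int(sys.argv[1]) if len(sys.argv) > 1 else 8000
HTTPServer(('', port), CORSRequestHandler).serve_forever()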

To use the site you will enter a URL in the form's "Transform URL" field with the remote host address and port information where the models and connector are deployed.

Test the Detect Model

First test the detect model directly. Using the VM IP address and model port number you obtained above, modify the URL shown below appropriately. Also select the appropriate protobuf method:

URL: http://40.76.26.211:8558/detect          (use the new IP address here, and you may have to adjust the port also!)

Protobuf Method: detect (input: Image, output: RegionDetectionSet)

Be careful to select the correct protobuf method.  Then click one of the images at the bottom left to trigger the test (this example starts with "pexels source"). You should see the following:
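
If you prefer to exercise the detect endpoint from code instead of the web page, the sketch below shows the general pattern with the requests library.  The model_pb2 module is assumed to be generated from the detect model's model.proto (protoc --python_out=. model.proto), and the mime_type and image_binary field names are assumptions; check the message definitions in your own model.proto:

# Hedged sketch: POST an image to the detect endpoint as a serialized
# protobuf message. model_pb2 and its field names are assumptions;
# generate the module from your model.proto and verify the fields.
import requests
import model_pb2  # assumed: protoc --python_out=. model.proto

msg = model_pb2.Image()
msg.mime_type = 'image/jpeg'                       # assumed field name
msg.image_binary = open('face.jpg', 'rb').read()   # assumed field name

resp = requests.post('http://40.76.26.211:8558/detect',
                     data=msg.SerializeToString(),
                     headers={'Content-Type': 'text/plain'})
resp.raise_for_status()
# The response body is a serialized RegionDetectionSet message
open('regions.bin', 'wb').write(resp.content)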

Test the Pixelate Model

The test harness does not currently support standalone testing of the pixelate model.  You can use command-line tools as follows:

  1. Follow the directions above and post an image to the detect model
  2. After the region detection set is shown, click on the Download Encoded Response button at the bottom of the page and save the file
  3. Use the curl command to post to the pixelate model, with the IP address and port you obtained above (a Python alternative appears after this list):
    1. curl --data-binary @protobuf_regions.out.bin "http://40.76.26.211:8557/pixelate" -H "Content-Type: text/plain" > out.bin
  4. Get the pixelate protocol buffer definition file and note the package name (e.g., "GdyAUbAFvUrXNzjeDPijvbkPzYzHAEBm")
  5. Use the protoc command to decode the result using that file and package name, reading the binary response from standard input:
    1. protoc --decode=GdyAUbAFvUrXNzjeDPijvbkPzYzHAEBm.Image model.pixelate.proto < out.bin
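
Step 3 above can also be done from Python with the requests library; a minimal sketch equivalent to the curl command:

# Minimal sketch: the curl step above, done with Python's requests.
import requests

regions = open('protobuf_regions.out.bin', 'rb').read()
resp = requests.post('http://40.76.26.211:8557/pixelate',
                     data=regions,
                     headers={'Content-Type': 'text/plain'})
resp.raise_for_status()
open('out.bin', 'wb').write(resp.content)  # serialized Image message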

Test the Model Combination via the Model Connector

To test the combined models enter details using the IP address and port you obtained above, plus the specified protobuf method:

URL: http://40.76.26.211:8555/detect

Protobuf Method: detect (input: Image, output: Image)

Be careful to select the correct protobuf method.  Then click one of the images at the bottom to trigger the test (this example starts with "flickr source"). You should see the following:


