Usage – Transform, Manage, and Prepare Data

You might now be asking yourself how you can use the results of the AutoML linear regression job that used Spearman correlation as its primary metric. The answer is provided in Figure 5.46. The Best Model Summary section includes the name of the algorithm that was found to be most relevant. In this case the best model is VotingEnsemble, which had a Spearman correlation value of 0.10798. That value indicates a very weak correlation between the meditation brain wave reading values and the other scenario values in the modeled dataset. If you select the Models tab, you will notice that the VotingEnsemble algorithm is at the top of the list, which also identifies it as the most relevant model. You can also see an overview of all the algorithm models, their associated values, and the opportunity to explore other results. Selecting the VotingEnsemble algorithm link and then the Metrics tab results in the output illustrated in Figure 5.48.
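To make that metric more concrete, consider the following Python snippet, which computes a Spearman correlation coefficient between two series. The numbers are made up for illustration and are not taken from the brainjammer dataset.

# Minimal sketch with made-up numbers: computing a Spearman correlation
# coefficient between two series, the same kind of metric AutoML reports.
from scipy.stats import spearmanr

# Hypothetical values standing in for a brain wave reading and another column
meditation_reading = [4.2, 3.9, 5.1, 4.8, 4.0, 5.3, 3.7, 4.5]
other_value = [0.9, 1.4, 1.1, 0.8, 1.3, 1.2, 1.0, 1.5]

# spearmanr returns the correlation coefficient and a p-value
coefficient, p_value = spearmanr(meditation_reading, other_value)
print(f"Spearman correlation: {coefficient:.5f} (p-value: {p_value:.3f})")

# A coefficient near 0, like the 0.10798 reported for VotingEnsemble,
# indicates a very weak monotonic relationship between the two series.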

To create the AML model from the VotingEnsemble algorithm job, select the + Create Model menu option, which walks you through a wizard. Once the model is created, notice the Deploy drop-down shown in Figure 5.48. The Deploy drop-down provides two options, Deploy to Real-time Endpoint and Deploy to Web Service, both of which result in the provisioning of an HTTPS-accessible endpoint that can be used to invoke the model. After successful deployment, you can send data in JSON format to the model, and the result is a prediction of a possible future value based on the input provided. The response is fast because the model is already trained; it simply parses the data, processes it using the modeled algorithm, and returns a result for consumption. Other options to predict and score your data using the deployed model endpoint include the Azure CLI or any client that can send a request to a REST API, such as curl, as shown here:

curl --request POST "$ENDPOINT_URL" \
   --data @endpoint/online/model/brainwaves.json
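As an illustration of the REST API option, the following Python snippet sends the same kind of request using the requests library. The endpoint URL, key, and payload file path are placeholders that you would replace with the values from your own deployment.

# Minimal sketch (placeholder values): invoking the deployed AML endpoint
# from Python instead of curl.
import json
import requests

endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<your-endpoint-key>"  # placeholder; copy it from the endpoint details in AML studio

# Load the JSON payload describing the brain wave readings to score
with open("endpoint/online/model/brainwaves.json", "r") as f:
    payload = json.load(f)

response = requests.post(
    endpoint_url,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the predicted value(s) returned by the model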

FIGURE 5.48 Azure Machine Learning—VotingEnsemble algorithm

Another option for consuming an AML model is from a table hosted in a dedicated SQL pool running in Azure Synapse Analytics. Hover over a table and click the ellipsis (…) to render the menu options, as shown in Figure 5.49.

FIGURE 5.49 Azure Machine Learning—usage prediction with a model workspace

The wizard walks you through the steps to configure the prediction feature for the chosen model. The model is the one you created earlier in the AML workspace; in this scenario it is accessed through the AML linked service you created in Exercise 5.15, rather than through the published endpoint or web service. The detailed process for this configuration, execution, and result interpretation is outside the scope of this book. However, one snippet of code is interesting, and you might get a question about or see a reference to the PREDICT SQL command. Review the following SQL statement:

SELECT *
FROM PREDICT (MODEL = (SELECT [model] FROM [Model] WHERE [ID] = '<MODEL_ID>'),
              DATA = [brainjammer].[BRAINJAMMERAML],
              RUNTIME = ONNX) WITH ([variable_out1] [real])

You use the PREDICT clause to generate a predicted value or score using an existing AML model. The MODEL argument retrieves the model that will be used to predict or score the value. The ID is the name you gave the model when you created it via the + Create Model menu option discussed earlier. Therefore, you would replace <MODEL_ID> with the name you provided. The DATA argument specifies the data to be used with the model to make the scoring or predictive calculations. The RUNTIME argument is set to ONNX, which is currently the only option, and identifies the AML engine used to perform the calculations. One step in the wizard to configure the Predict with a Model feature requires mapping the input and output. As you can see in Figure 5.50, the Model Output and Output Type fields are required.

FIGURE 5.50 Azure Machine Learning Usage—predict with a model

variable_out1 is the name provided when you configure the output mappings and is queryable after the model scoring or prediction algorithm has run successfully.
