
06.3 Cognitive Services (Mobile)



Step 1: Setting up Cognitive Services - Custom Vision

You need to follow the same steps described in Setting up Cognitive Services - Custom Vision to create a Custom Vision service and train it using your own custom images.

Step 2: Configure Mobile Applications

The mobile applications can classify images in the same way the webmvc application does:

  • If the application is configured in Mock mode (a settings option), the mobile application uses an offline model generated in the customvision.ai portal (see the sketch after this list)
  • If the application is configured in Online mode, it uses Custom Vision through the microservice
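
A rough sketch of that switch in shared code follows; every name here (the UseMocks flag and both service types) is an illustrative assumption rather than code taken from the repository:

    // Illustrative only: pick the offline (on-device model) classifier or the
    // online client that calls Custom Vision through the microservice.
    public static class ImageClassificationServiceFactory
    {
        public static IImageClassificationService Create(bool useMocks) =>
            useMocks
                ? (IImageClassificationService)new OfflineImageClassificationService()
                : new OnlineImageClassificationService();
    }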

In the customvision.ai portal, once the model is trained, go to the Performance tab and click the Export button to download the offline model needed by each application.


Android

In the right blade, click the TF icon, and then the Export / Download button.


The downloaded archive contains two files, which need to be copied to the folder eShopOnContainers.Droid / Assets / ModelsAI.


iOS

In the right blade, click the iOS icon, and then the Export / Download button.


The downloaded file must be renamed to customvision.mlmodel and then copied to the folder eShopOnContainers.iOS / Resources / ModelsAI.


NOTE: The iOS application needs at least iOS 11 in order to use Core ML services.

UWP

In the right blade, click the ONNX icon, and then the Export / Download button.


The downloaded file must be copied to the folder eShopOnContainers.Windows / Assets / ModelsAI.


NOTE: The UWP application needs Windows 10 update 1803 or greater in order to use Windows Machine Learning services.

Step 3: Backend

The mobile applications can be configured to run offline (working with mockups, which is ideal for quick development) or online. In online mode, you need to provide three endpoints:

  • identity.api -> authentication services
  • mobileshoppingapigw -> shopping services
  • mobilemarketingapigw -> marketing and location services

So you need to run the backend on a host that is reachable from the mobile apps. You can run the containers on your local machine and open your firewall to grant access to the ports where the services are running (on Windows, this access should already have been granted if you ran the script add-firewall-rules-for-sts-auth-thru-docker.ps1):

  • identity.api -> 5105
  • mobileshoppingapigw -> 5200
  • mobilemarketingapigw -> 5201

In this case, you need to update the .env file and provide the correct external IP address, for example:

ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP=192.168.0.1
ESHOP_AZURE_STORAGE_CATALOG_URL=http://192.168.0.1:5200/api/v1/c/catalog/items/[0]/pic/

Then you can run the services in the background by executing the following commands:

docker-compose up -d mobileshoppingapigw
docker-compose up -d mobilemarketingapigw

The third option, if you don't want to fiddle with firewall rules, is to install ngrok. This tool is available for Windows, OS X, and Linux, and you can download it from the ngrok homepage. There is a template for using the tool in ..., which can be run from the console by executing the following command:

.\ngrok.exe start -config .\eshopai.mobile.ngrok.yml -all
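
For reference, the template defines one tunnel per backend endpoint. A minimal sketch of such an ngrok configuration, assuming ngrok v2 syntax and with illustrative tunnel names (check the template in the repository for the exact contents):

    tunnels:
      identity:
        proto: http
        addr: 5105
      shopping:
        proto: http
        addr: 5200
      marketing:
        proto: http
        addr: 5201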

After executing the last command, you should see a screen listing the public forwarding URL that ngrok assigns to each tunnel.

Step 4: Execute the mobile application

Note for iOS: You need to update the eShopOnContainers.iOS / Info.plist file in order to allow the web view control to load the login page. Look up the key NSExceptionDomains and change its dictionary subkey to the domain used by your identity server. By default it is set up to use ngrok.io, so if you use ngrok you don't need to change anything. Otherwise, you need to change this value.
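
For orientation, a minimal sketch of what that Info.plist entry might look like; the exact surrounding keys in the repository may differ:

    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSExceptionDomains</key>
        <dict>
            <!-- Replace ngrok.io with the domain of your identity server -->
            <key>ngrok.io</key>
            <dict>
                <key>NSExceptionAllowsInsecureHTTPLoads</key>
                <true/>
                <key>NSIncludesSubdomains</key>
                <true/>
            </dict>
        </dict>
    </dict>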


Once the application is built, you can run it on a physical device or in an emulator. Either way, in order to use the endpoints you configured in the last step, you need to go to Settings and set up the endpoints there.


Once the endpoints are configured, you should be able to log in; after a few moments, you can browse products, purchase some items, or review orders, for example.

NOTE: Sometimes there is a problem when trying to load the login page in the web view control. Check the identity.api service logs for any error trace. In some cases, you need to update the table Microsoft.eShopOnContainers.Service.IdentityDb.ClientRedirectUris and provide the correct address for the mobile application, for example:

    ClientId    RedirectUri
    2           http://192.168.0.1:5105/xamarincallback
    2           http://es_iden.ngrok.io/xamarincallback
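
If you prefer to apply that change with a SQL statement, a rough sketch might look like the following; the column names assume the standard IdentityServer4 Entity Framework schema, so verify them against your database first:

    -- Assumes IdentityServer4's default EF schema for this table
    INSERT INTO ClientRedirectUris (RedirectUri, ClientId)
    VALUES ('http://192.168.0.1:5105/xamarincallback', 2);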

Code Walkthrough

    // Shared abstraction for on-device image classification.
    public interface IImageClassifier
    {
        // Loads the exported model. Implementations call this lazily
        // before the first classification if needed.
        Task Init();

        // Runs the model over the raw image bytes and returns the
        // matching classifications.
        Task<IReadOnlyList<ImageClassification>> ClassifyImage(byte[] image);
    }

The IImageClassifier interface is implemented on each target platform, because each target evaluates the model in a different way: the UWP application uses a model in ONNX format with Windows Machine Learning, Android uses a TensorFlow model, and iOS uses a model in Core ML format. The implementations follow a general pattern: input transformation (image pre-processing), model inference (image probabilities), and output transformation (image classification). For example, the UWP implementation looks like this:

    [assembly: Xamarin.Forms.Dependency(typeof(eShopOnContainers.Windows.AI.ImageClassifier))]
    namespace eShopOnContainers.Windows.AI
    {
        public class ImageClassifier : IImageClassifier
        {
            OnnxModel onnxmodel;

            private bool IsInitialized => onnxmodel != null;

            public async Task<IReadOnlyList<ImageClassification>> ClassifyImage(byte[] image)
            {
                // Lazily load the ONNX model on first use.
                if (!IsInitialized)
                    await Init();

                // Input transformation: pre-process the raw bytes into the model's input format.
                var input = await OnnxModelInput.CreateFrom(image);

                // Model inference through Windows Machine Learning.
                var results = await onnxmodel.EvaluateAsync(input);

                // Output transformation: keep only labels with probability above 0.85.
                return results.Loss
                    .Where(label => label.Value > 0.85)
                    .Select(label => new ImageClassification(label.Key, label.Value))
                    .ToList();
            }

            public async Task Init()
            {
                onnxmodel = await OnnxModel.CreateOnnxModel();
            }
        }
    }

Each platform provides its implementation of IImageClassifier and registers it with Xamarin.Forms dependency injection, as in the assembly-level Dependency attribute at the top of the listing above.
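
Shared code can then resolve the platform-specific implementation through the Xamarin.Forms DependencyService. A minimal sketch of that usage follows; the page class, the way the image bytes are obtained, and the ImageClassification property names (Tag, Probability) are illustrative assumptions:

    using System.Diagnostics;
    using System.Threading.Tasks;
    using Xamarin.Forms;

    public class ClassifierDemoPage : ContentPage
    {
        // Hypothetical handler: classify a photo and log the matching tags.
        private async Task ClassifyAsync(byte[] imageBytes)
        {
            // Resolve the platform implementation registered via the
            // assembly-level Dependency attribute shown above.
            var classifier = DependencyService.Get<IImageClassifier>();

            var classifications = await classifier.ClassifyImage(imageBytes);

            foreach (var classification in classifications)
            {
                // Property names are assumptions based on the constructor above.
                Debug.WriteLine($"{classification.Tag}: {classification.Probability:P0}");
            }
        }
    }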
