Cognitive Services is a tool that lets you use machine learning algorithms without any experience in creating and training models. Today we'll create a simple Xamarin app that uses it.
During the lab you'll learn:
- How to create a Cognitive Services resource in the Azure cloud.
- How to make a request to the Azure server.
- How to handle the response from the Azure server.
During the lab you'll need:
- Visual Studio Community or higher with Xamarin installed
- At least one Android emulator installed
- An active Azure subscription. If you don't have one, see the available options here (in Russian)
The lab consists of the following steps:
- Loading the basic Xamarin app from a zip archive.
- Adding some settings to the app.
- Creating a Computer Vision service in the Azure cloud.
- Adding a request method to the app.
- Handling the response from the Azure server.
Estimated time to finish this lab: 60 minutes
- Download the zip archive
- Unzip the project
- Open the project in Visual Studio (follow the screenshots below)
- Now you can deploy your app
- Explore the result
- We need to add permissions to read and write external storage, add a click action to the button, and add one package from NuGet.
- Open the NuGet package manager and search for Microsoft.Azure.CognitiveServices.Vision.ComputerVision
- Click Install
- Now let's add permissions.
- Double-click Properties in Solution Explorer and go to the Android Manifest tab in the window that opens.
- Scroll down and find the Required permissions field.
- Type storage in the search box and tick both options that appear (READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE). An equivalent code-based approach is sketched right below.
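- (Optional) If you prefer declaring permissions in code, Xamarin.Android also supports assembly-level attributes that generate the same manifest entries at build time. A minimal sketch, assuming you place it in a .cs file after the using directives; this is an alternative, not an extra step:
using Android.App;

// These produce the same <uses-permission> entries as ticking the boxes in the manifest editor.
[assembly: UsesPermission(Android.Manifest.Permission.ReadExternalStorage)]
[assembly: UsesPermission(Android.Manifest.Permission.WriteExternalStorage)]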
- Now open MainActivity.cs and do the following steps. Depending on the template, you may also need a few using directives at the top of the file; a likely set is shown right below.
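- A likely set of using directives for the code added in this lab (an assumption: keep only the ones your template is missing; Newtonsoft.Json.Linq should arrive as a dependency of the NuGet package installed above):
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Android;
using Android.App;
using Android.Content;
using Android.OS;
using Android.Support.V4.App;
using Android.Widget;
using Newtonsoft.Json.Linq;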
- Add a permission request to the OnCreate method.
ActivityCompat.RequestPermissions(this, new String[] { Manifest.Permission.ReadExternalStorage, Manifest.Permission.WriteExternalStorage }, 1);
- Add a click action to the button.
btn.Click += delegate {
    // Let the user pick an image from storage via the system chooser.
    var imageIntent = new Intent();
    imageIntent.SetType("image/*");
    imageIntent.SetAction(Intent.ActionGetContent);
    StartActivityForResult(Intent.CreateChooser(imageIntent, "Select photo"), 0);
};
- Override the OnActivityResult method in the activity.
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    if (resultCode == Result.Ok)
    {
        // The selected image will be handled here in a later step.
    }
}
- If you did everything right, you'll see something like this after deploying the app.
(screenshot: home screen)
(screenshot: after clicking the button)
- Go to the Azure portal, then follow the screenshots below.
(screenshots: click Portal, then click Create a resource)
- Now click the Create button and fill in all the fields. It doesn't matter which location you pick; it only affects the resulting endpoint URL.
- After a while you'll see something like this. Click Go to resource.
- Now you're on your Computer Vision service page.
- First of all, you need to return to this page and click Keys.
After clicking it you'll see something like this:
- Copy the first key and put it in a constant declared before the OnCreate method (replace the value below with your own key).
private const string key = "8cb6b523ef3c4ece877682e826561853";
- Now we need to create a couple more fields before the OnCreate method. The region in urlBase (eastus below) has to match the location you chose when creating the resource, and the HttpClient is declared static so it can be reused for every request with the key header set once.
private const string urlBase = "https://eastus.api.cognitive.microsoft.com/vision/v2.0/analyze/";
private static readonly HttpClient client = new HttpClient {
DefaultRequestHeaders = { { "Ocp-Apim-Subscription-Key", key } }
};
- And make some changes in the OnActivityResult method: add these two lines inside the if block (the full method is shown after them for reference).
string path = ActualPath.GetActualPathFromFile(data.Data, this);
Analyze(path);
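- For reference, OnActivityResult should now look roughly like this (the _ = discard is optional; it only makes the fire-and-forget call explicit and silences the unawaited-call warning):
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    if (resultCode == Result.Ok)
    {
        // Resolve a file-system path from the content URI returned by the chooser.
        string path = ActualPath.GetActualPathFromFile(data.Data, this);
        // Kick off the analysis; the result is shown in the TextView when it completes.
        _ = Analyze(path);
    }
}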
- Now we need to create a new async method for making HTTP requests to the cloud.
async Task Analyze(string imageFilePath)
{
}
- Inside it, we need to create a few variables.
HttpResponseMessage response;
string requestParameters = "visualFeatures=Description";
string uri = urlBase + "?" + requestParameters;
byte[] byteData = GetImageAsByteArray(imageFilePath);
- As you can see, the GetImageAsByteArray method doesn't exist yet, so let's create it.
static byte[] GetImageAsByteArray(string imageFilePath)
{
// Open a read-only file stream for the specified file.
using (FileStream fileStream =
new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
{
// Read the file's contents into a byte array.
BinaryReader binaryReader = new BinaryReader(fileStream);
return binaryReader.ReadBytes((int)fileStream.Length);
}
}
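- As a side note, on Mono/Xamarin the same thing can be written in one line with File.ReadAllBytes; a sketch with identical behavior, assuming the image comfortably fits in memory:
static byte[] GetImageAsByteArray(string imageFilePath)
{
    // Reads the whole file into a byte array in a single call.
    return System.IO.File.ReadAllBytes(imageFilePath);
}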
- Now we need to set the content type header and make the request in the async method.
using (ByteArrayContent content = new ByteArrayContent(byteData))
{
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
response = await client.PostAsync(uri, content);
}
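- Optionally, you can check the status code before parsing the body, so a failed call doesn't get treated as an analysis result. A small sketch (the exact error format returned by the service isn't covered in this lab):
if (!response.IsSuccessStatusCode)
{
    // Log the raw error body and stop; only successful responses contain the analysis JSON.
    string error = await response.Content.ReadAsStringAsync();
    System.Diagnostics.Debug.WriteLine(error);
    return;
}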
- Finally, we need to get the data from the response.
string contentString = await response.Content.ReadAsStringAsync();
JToken resp = JToken.Parse(contentString);
TextView textView = (TextView)FindViewById(Resource.Id.text);
textView.Text = resp.ToString();
- If you run it now, you'll see the raw JSON response, something like this.
{
"description": {
"tags": [
"dog",
"indoor",
"small",
"brown",
"animal",
"mammal",
"sitting",
"laying",
"looking",
"white",
"lying",
"little",
"wearing",
"feet",
"sleeping",
"blanket",
"bed",
"leather",
"head"
],
"captions": [
{
"text": "a small brown and white dog lying on a blanket",
"confidence": 0.76464505730561938
}
]
},
"requestId": "40eb4b52-8746-4fa4-844f-53ddcb3249de",
"metadata": {
"width": 1960,
"height": 4032,
"format": "Jpeg"
}
}
- To show only the caption instead of the raw JSON, we need to change a few lines.
JToken resp = JToken.Parse(contentString);
string tmp = resp["description"]["captions"][0]["text"].ToString();
TextView textView = (TextView)FindViewById(Resource.Id.text);
textView.Text = tmp;
- Full async Task method:
async Task Analyze(string imageFilePath)
{
HttpResponseMessage response;
string requestParameters = "visualFeatures=Description";
string uri = urlBase + "?" + requestParameters;
byte[] byteData = GetImageAsByteArray(imageFilePath);
using (ByteArrayContent content = new ByteArrayContent(byteData))
{
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
response = await client.PostAsync(uri, content);
}
string contentString = await response.Content.ReadAsStringAsync();
JToken resp = JToken.Parse(contentString);
string tmp = resp["description"]["captions"][0]["text"].ToString();
TextView textView = (TextView)FindViewById(Resource.Id.text);
textView.Text = tmp;
}
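- One more optional hardening step: for some images the captions array can be empty, and indexing [0] would then throw. A defensive variant of the parsing lines (a sketch for that edge case, not something this lab ran into):
JToken resp = JToken.Parse(contentString);
// Use the null-conditional indexer and check the array before taking the first caption.
JArray captions = (JArray)resp["description"]?["captions"];
string tmp = (captions != null && captions.Count > 0)
    ? (string)captions[0]["text"]
    : "No caption returned";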
This lab covered the basic usage of Cognitive Services in a Xamarin app. I'd be glad to hear your feedback and error reports via email: kon3gor@outlook.com