Image classification allows our Xamarin applications to recognize objects in photographs. The ability to take a photo and recognize its contents is becoming more and more common. We can observe it in banking applications when making a mobile deposit, in photo applications when adding filters, and in hot dog applications to determine whether our food is a hot dog. Thanks to the Azure Custom Vision service, we don't need to learn complex machine learning algorithms to implement image classification.

In this article, we will look at how to implement image classification using the Azure Custom Vision service, TensorFlow Lite (an open source machine learning platform), and Xamarin.Android.

Note: For Xamarin.iOS, we can also use the Azure Custom Vision service with CoreML, but we'll save that for another article.

Image Classification Libraries
We will use the Azure Custom Vision service and TensorFlow Lite to implement our image classification.

1. Azure Custom Vision Service
The Azure Custom Vision service simplifies creating and training a machine learning model: no experience with Artificial Intelligence (AI) or Machine Learning (ML) is required. Using the Custom Vision service web portal, we can do the following without writing any AI/ML code:
- Upload training images
- Tag the object(s) in each image
- Repeat (the model gets better with more training data)
- That's it - Custom Vision takes care of the rest!
2. TensorFlow Lite
TensorFlow Lite is an open source machine learning platform that allows us to use TensorFlow on IoT and mobile devices. Both TensorFlow Lite and TensorFlow are available as open source on GitHub.

Implementing image classification using Azure + Xamarin.Android
A complete sample image classification application is available on GitHub.

1. Model training
Using the Custom Vision service web portal, we first train an image classification model.

1. On the Custom Vision service web portal, click New Project
2. In the Create new project window, set the following parameters:
- Name: XamarinImageClassification
- Description: Identify Objects in Images
- Resource: [Create a new resource]
- Project Type: Classification
- Classification Types: Multilabel (Multiple tags per image)
- Domains: General (compact)
- Export Capabilities: Basic platforms
3. In the Create new project window, click Create project
4. In the XamarinImageClassification window, click Add images
5. Select the images containing the object to identify
6. In the Image upload window, add a tag
Note: in this example, we are working with mushroom images
7. In the Image upload window, click Upload
Note: keep uploading images until you have at least 5 images for each tag
8. In the XamarinImageClassification window, in the upper-right corner, click the Train Model button (green gears icon)
9. In the Choose Training Type window, select Quick Training
10. In the Choose Training Type window, select Train

2. Exporting a trained model from the Azure Custom Vision service
Now that we have trained our model, let's export it for use in our mobile application. This will allow us to use the model without an Internet connection, which ensures the best privacy for users, because their photos never leave the mobile device.

To export our model, let's do the following:
1. In the XamarinImageClassification window, at the top of the page, select the Performance tab
2. On the Performance tab, click the Export button (arrow pointing down)
3. In the Choose your platform window, select TensorFlow
4. In the Choose your platform drop-down list, select TensorFlow Lite
5. In the Choose your platform window, select Download

3. Import TensorFlow Lite into our Xamarin.Android app
1. Install the appropriate NuGet package in our Xamarin.Android project
Note: This NuGet package is an open source project created by the Xamarin team at Microsoft. It contains C# bindings for the original TensorFlow Lite library, allowing it to be used in our Xamarin.Android application.
2. Unzip the exported model that we downloaded from the Custom Vision service web portal
Note: inside the zip file are labels.txt and model.tflite:
- labels.txt contains the image tags created during training on the Custom Vision website
- model.tflite is the machine learning model that we use to make our predictions
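As an illustration, labels.txt is a plain text file with one tag per line, in the order the model outputs its probabilities. Assuming a few hypothetical mushroom tags (yours will match whatever tags you created in the portal), it might look like this:

```text
Amanita
Boletus
Chanterelle
```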
3. In Visual Studio, in the Xamarin.Android project, right-click the Assets folder
4. In the pop-up menu, select Add → Existing Item...
5. In the Add Existing Item menu, select both recently unzipped files
6. In Visual Studio, in Xamarin.Android → Assets, right-click labels.txt
7. In the pop-up menu, select Properties
8. In the Properties window, select Build Action → Android Asset
9. In Visual Studio, in Xamarin.Android → Assets, right-click model.tflite
10. In the pop-up menu, select Properties
11. In the Properties window, select Build Action → Android Asset

4. Implementing the image classification code for Xamarin.Android
Now that we have imported the model, it's time to write some code. As a reminder, a fully finished sample image classification application is available on GitHub.

In the Xamarin.Android project, add ImageClassificationModel.cs and TensorflowClassifier.cs:

ImageClassificationModel.cs
public class ImageClassificationModel
{
    public ImageClassificationModel(string tagName, float probability)
    {
        TagName = tagName;
        Probability = probability;
    }

    public float Probability { get; }
    public string TagName { get; }
}
TensorflowClassifier.cs
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Android.App;
using Android.Graphics;
using Java.IO;
using Java.Nio;
using Java.Nio.Channels;
public class TensorflowClassifier
{
    // Models exported from Custom Vision expect 32-bit floats (4 bytes)
    // and 3 color channels (RGB) per pixel
    const int FloatSize = 4;
    const int PixelSize = 3;

    public List<ImageClassificationModel> Classify(byte[] image)
    {
        var mappedByteBuffer = GetModelAsMappedByteBuffer();
        var interpreter = new Xamarin.TensorFlow.Lite.Interpreter(mappedByteBuffer);

        // Read the expected input dimensions from the model itself
        var tensor = interpreter.GetInputTensor(0);
        var shape = tensor.Shape();
        var width = shape[1];
        var height = shape[2];

        var byteBuffer = GetPhotoAsByteBuffer(image, width, height);

        // Load the tags exported alongside the model
        var streamReader = new StreamReader(Application.Context.Assets.Open("labels.txt"));
        var labels = streamReader.ReadToEnd().Split('\n').Select(s => s.Trim()).Where(s => !string.IsNullOrEmpty(s)).ToList();

        // The model outputs one probability per label
        var outputLocations = new float[1][] { new float[labels.Count] };
        var outputs = Java.Lang.Object.FromArray(outputLocations);
        interpreter.Run(byteBuffer, outputs);

        var classificationResult = outputs.ToArray<float[]>();

        var classificationModelList = new List<ImageClassificationModel>();
        for (var i = 0; i < labels.Count; i++)
        {
            var label = labels[i];
            classificationModelList.Add(new ImageClassificationModel(label, classificationResult[0][i]));
        }

        return classificationModelList;
    }

    // Memory-map model.tflite from the app's assets
    private MappedByteBuffer GetModelAsMappedByteBuffer()
    {
        var assetDescriptor = Application.Context.Assets.OpenFd("model.tflite");
        var inputStream = new FileInputStream(assetDescriptor.FileDescriptor);
        var mappedByteBuffer = inputStream.Channel.Map(FileChannel.MapMode.ReadOnly, assetDescriptor.StartOffset, assetDescriptor.DeclaredLength);

        return mappedByteBuffer;
    }

    // Resize the photo to the model's input size and convert each pixel
    // into three floats (red, green, blue)
    private ByteBuffer GetPhotoAsByteBuffer(byte[] image, int width, int height)
    {
        var bitmap = BitmapFactory.DecodeByteArray(image, 0, image.Length);
        var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, width, height, true);

        var modelInputSize = FloatSize * height * width * PixelSize;
        var byteBuffer = ByteBuffer.AllocateDirect(modelInputSize);
        byteBuffer.Order(ByteOrder.NativeOrder());

        var pixels = new int[width * height];
        resizedBitmap.GetPixels(pixels, 0, resizedBitmap.Width, 0, 0, resizedBitmap.Width, resizedBitmap.Height);

        var pixel = 0;
        for (var i = 0; i < width; i++)
        {
            for (var j = 0; j < height; j++)
            {
                var pixelVal = pixels[pixel++];

                // Each pixel is packed as ARGB; shift and mask to extract
                // the red, green and blue channels
                byteBuffer.PutFloat(pixelVal >> 16 & 0xFF);
                byteBuffer.PutFloat(pixelVal >> 8 & 0xFF);
                byteBuffer.PutFloat(pixelVal & 0xFF);
            }
        }

        bitmap.Recycle();

        return byteBuffer;
    }
}
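To tie it together, here is a minimal usage sketch. The photo bytes, the file path, and the 0.5f confidence threshold are illustrative assumptions; in a real app the image bytes would typically come from the device camera or gallery:

```csharp
// Hypothetical source of image bytes; replace with camera/gallery output
byte[] photo = File.ReadAllBytes("/path/to/photo.jpg");

var classifier = new TensorflowClassifier();
List<ImageClassificationModel> results = classifier.Classify(photo);

// Pick the most probable tag, or keep only confident predictions
var best = results.OrderByDescending(x => x.Probability).First();
var confident = results.Where(x => x.Probability > 0.5f).ToList();

System.Diagnostics.Debug.WriteLine($"{best.TagName}: {best.Probability:P1}");
```

Because we trained a Multilabel project, several tags can exceed the threshold at once, so filtering with Where rather than taking only the single best result may be the more useful pattern.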
That's all! Now we can pass an image to TensorflowClassifier.Classify to get a list of ImageClassificationModel results.

Materials for further study
Check out the links below: