Judging by this year’s I/O conference, Google is going all in on AI, and it’s helping developers access some of these capabilities with its ML Kit set of APIs.
ML Kit is a new suite of cross-platform APIs from Google enabling app developers to use machine learning for things such as text recognition, face detection, barcode scanning, and even identifying objects and landmarks.
From the ML Kit documentation page:
“We want the entire device experience to be smarter, not just the OS, so we’re bringing the power of Google’s machine learning to app developers with the launch of ML Kit, a new set of cross-platform APIs available through Firebase.
ML Kit offers developers on-device APIs for text recognition, face detection, image labelling and more. So mobile developers building apps like Lose It!, a nutrition tracker, can easily deploy our text recognition model to scan nutritional information and ML Kit’s custom model APIs to automatically classify over 200 different foods with your phone’s camera.”
Many of these abilities can run offline, but they are more limited than when connected to Google’s cloud. For example, the on-device version of the API can detect that a dog is in a photo, while the cloud-connected version can recognise the specific breed.
Google says any data sent to its cloud is deleted after processing.
ML Kit simplifies what used to be a complicated process and makes AI more accessible. Rather than having to learn complex machine learning libraries such as TensorFlow, gather enough data to train a model, and then optimise that model to run on a mobile device, developers can access many common features with an API call through Google Firebase.
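To give a sense of how little code is involved, here is a rough sketch of on-device text recognition using the Firebase ML Vision API that ML Kit launched with. This is an illustrative Android/Kotlin fragment, not a complete project: it assumes an app already configured with Firebase and a `Bitmap` of the image to scan.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: recognise text in a Bitmap using ML Kit's on-device API.
// Assumes the surrounding app has Firebase set up.
fun recognizeText(bitmap: Bitmap) {
    // Wrap the Bitmap in ML Kit's image type
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // Get the on-device (offline-capable) text recogniser
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    // Recognition runs asynchronously and reports back via listeners
    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text contains all recognised text;
            // result.textBlocks exposes blocks, lines, and elements
            println(result.text)
        }
        .addOnFailureListener { e ->
            println("Text recognition failed: ${e.message}")
        }
}
```

Swapping `onDeviceTextRecognizer` for the cloud-backed recogniser follows the same pattern, which is what makes the on-device/cloud trade-off described above largely a one-line change.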
Developers wanting to get started with ML Kit can find it in the Firebase console.
What are your thoughts on Google’s ML Kit? Let us know in the comments.