Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit
Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into your apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.
A new ML Kit SDK, fully focused on on-device ML
ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real time, translate text, and more.
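To give a flavor of the API surface, here is a minimal Kotlin sketch using the barcode scanning API. It assumes the barcode scanning dependency is on your classpath, and bitmap is a placeholder for an image you already hold; this is an illustration, not the only way to call the API.

import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

fun scanBarcodes(bitmap: Bitmap) {
    // Wrap the image; rotation is 0 for an upright bitmap
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
    val scanner = BarcodeScanning.getClient()
    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                Log.d("MLKit", "Found barcode: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Scanning failed", e) }
}

The other Vision APIs follow the same asynchronous pattern: wrap an image in InputImage, get a client, and process it with success/failure listeners.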
The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.
With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:
- It’s fast, unlocking real-time use cases - since processing happens on the device, there is no network latency. This means we can run inference on a stream of images/video, or multiple times a second on text strings.
- Works offline - you can rely on our APIs even when the network is spotty or your app’s end user is in an area without connectivity.
- Privacy is retained - since all processing is performed locally, there is no need to send sensitive user data over the network to a server.
Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.
All ML Kit resources can now be found on our new website, where we have made it much easier to access sample apps, API reference docs, and our community channels, which are there to help you if you have questions.
What does this mean if I already use ML Kit today?
If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend that you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment, and AutoML Vision Edge remain available through Firebase Machine Learning.
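In most cases, the migration starts with swapping Gradle dependencies. A rough sketch using face detection as the example (the old artifact's version number is illustrative; see the Migration guide for the exact artifacts for the APIs you use):

// Before: ML Kit for Firebase (requires a Firebase project)
implementation 'com.google.firebase:firebase-ml-vision:24.0.3' // version illustrative

// After: standalone ML Kit SDK (no Firebase project required)
implementation 'com.google.mlkit:face-detection:16.0.0'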
Shrink your app footprint with Google Play Services
Apart from making ML Kit easier to use, developers also asked if we could ship ML Kit through Google Play services, resulting in a smaller app footprint and models that can be reused between apps. Besides Barcode scanning and Text recognition, we have now added Face detection/contour (model size: 20MB) to the list of APIs that support this functionality.
// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'
Jetpack Lifecycle / CameraX support
Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user or system. This makes CameraX integration easier. With this release, we also recommend that developers adopt CameraX in their apps, due to its ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.
// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle(
    /* lifecycleOwner= */ this, cameraSelector, previewUseCase, analysisUseCase)
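To wire the two together in practice, a common pattern is an ImageAnalysis.Analyzer that forwards each camera frame to the detector. Here is a minimal sketch under the same assumptions as above; TextAnalyzer is our own hypothetical class name, and error handling is trimmed:

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognizer

@ExperimentalGetImage
class TextAnalyzer(private val recognizer: TextRecognizer) : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        // Wrap the camera frame together with its rotation metadata
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        recognizer.process(image)
            // Always close the frame so CameraX can deliver the next one
            .addOnCompleteListener { imageProxy.close() }
    }
}

You would then attach it with analysisUseCase.setAnalyzer(executor, TextAnalyzer(recognizer)) before the bindToLifecycle call shown above.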
For an overview of all recent changes, check out the release notes for the new SDK.
Codelab of the day - ML Kit x CameraX
To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on StackOverflow and tag them with [google-mlkit]. Our team will monitor this tag.
Early access program
Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:
- Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
- Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.
If you are interested, head over to our early access page for details.
Tomorrow - Support for custom models
ML Kit's turnkey solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you have typically had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models for a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
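As a rough sketch of what this looks like, assuming you have a TensorFlow Lite image classification model bundled in your app's assets (the "flowers.tflite" file name and the thresholds here are illustrative):

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Point ML Kit at the bundled TensorFlow Lite model
val localModel = LocalModel.Builder()
    .setAssetFilePath("flowers.tflite")
    .build()

// Configure the Image Labeling API to use the custom model
// instead of the default Google model
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.7f)
    .setMaxResultCount(5)
    .build()

val labeler = ImageLabeling.getClient(options)

From there, labeler.process(image) returns labels produced by your own model, through the same API surface as the default one.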
Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.