1. Convert the Keras model to a TFLite model
The FaceNet Keras model is available in the nyoki-mtl/keras-facenet repo. After downloading the .h5 model, we’ll use the tf.lite.TFLiteConverter API to convert our Keras model to a TFLite model.
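A minimal sketch of the conversion, assuming TensorFlow 2.x (the filenames are placeholders):

```python
import tensorflow as tf

# Load the downloaded Keras FaceNet model (filename assumed).
model = tf.keras.models.load_model('facenet_keras.h5')

# Convert it to the TFLite format using the TF 2.x converter API.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flat buffer to disk for bundling with the Android app.
with open('facenet.tflite', 'wb') as f:
    f.write(tflite_model)
```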
2. Setting up a Preview and ImageAnalyser using CameraX
To implement a live camera feed, we use CameraX. I have used the code available in the official docs. Next, we create a FrameAnalyser class which implements the ImageAnalysis.Analyzer interface, which will help us retrieve camera frames and run inference on them.
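A skeleton of such an analyzer, assuming a CameraX release where ImageAnalysis.Analyzer declares a single analyze(ImageProxy) method (older alphas also passed a rotation parameter):

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

class FrameAnalyser : ImageAnalysis.Analyzer {

    override fun analyze(image: ImageProxy) {
        // Convert the frame to a Bitmap, detect faces and run FaceNet
        // on each crop here (see the following snippets).

        // Always close the frame so CameraX can deliver the next one.
        image.close()
    }
}
```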
All our classification code will come in the analyze method. First, using Firebase MLKit, we’ll get bounding boxes for all faces present in the camera frame (a Bitmap object). We’ll create a FirebaseVisionFaceDetector which runs the face detection model on a FirebaseVisionImage object.
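A sketch of that detection step with the Firebase MLKit vision API (the detectFaces wrapper and its callback are ours, not from the original code):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Detect faces in a camera frame and hand their bounding boxes to a callback.
fun detectFaces(frame: Bitmap, onFaces: (List<Rect>) -> Unit) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)
        .build()
    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
    val image = FirebaseVisionImage.fromBitmap(frame)
    detector.detectInImage(image)
        .addOnSuccessListener { faces ->
            // Each FirebaseVisionFace exposes its bounding box as a Rect.
            onFaces(faces.map { it.boundingBox })
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```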
3. Producing Face Embeddings using FaceNet and Comparing them
First, we’ll produce face embeddings using our FaceNet model. Before that, we’ll create a helper class for handling the FaceNet model. This helper class will:
- Crop the given camera frame using the bounding box (as a Rect) which we got from Firebase MLKit.
- Transform this cropped image from a Bitmap to a ByteBuffer with normalized pixel values.
- Finally, feed the ByteBuffer to our FaceNet model using the Interpreter class provided by the TF Lite Android library.
In the snippet below, see the getFaceEmbedding() method, which encapsulates all the above steps.
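A minimal sketch of such a helper, assuming a 160×160-input FaceNet model and pixel values standardized to [-1, 1] (the class name, constructor and preprocessing details are assumptions):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer
import java.nio.ByteOrder

// The Interpreter is assumed to be built from the converted facenet.tflite.
class FaceNetModel(private val interpreter: Interpreter) {

    private val inputSize = 160      // FaceNet input resolution (assumed)
    private val embeddingDim = 128   // Size of the output embedding

    // Crop the frame with the MLKit bounding box, preprocess the crop and
    // run the interpreter to get a 128-dimensional embedding.
    // (A production version should clamp the box to the frame bounds.)
    fun getFaceEmbedding(frame: Bitmap, box: Rect): FloatArray {
        val face = Bitmap.createBitmap(frame, box.left, box.top, box.width(), box.height())
        val input = convertBitmapToBuffer(Bitmap.createScaledBitmap(face, inputSize, inputSize, true))
        val output = Array(1) { FloatArray(embeddingDim) }
        interpreter.run(input, output)
        return output[0]
    }

    // Convert a Bitmap to a ByteBuffer of floats normalized to [-1, 1].
    private fun convertBitmapToBuffer(bitmap: Bitmap): ByteBuffer {
        val buffer = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3)
            .order(ByteOrder.nativeOrder())
        val pixels = IntArray(inputSize * inputSize)
        bitmap.getPixels(pixels, 0, inputSize, 0, 0, inputSize, inputSize)
        for (pixel in pixels) {
            buffer.putFloat(((pixel shr 16 and 0xFF) - 127.5f) / 127.5f)  // R
            buffer.putFloat(((pixel shr 8 and 0xFF) - 127.5f) / 127.5f)   // G
            buffer.putFloat(((pixel and 0xFF) - 127.5f) / 127.5f)         // B
        }
        buffer.rewind()
        return buffer
    }
}
```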
Now, we have a class that returns the 128-dimensional embedding for any face present in the given image. We come back to FrameAnalyser’s analyze() method. Using the helper class we just created, we’ll produce face embeddings and compare each of them with a set of embeddings that we already have.
Before that, we need to get the set of predefined embeddings, right? These embeddings refer to the people whom we need to recognize. So, the app reads the images folder present in the internal storage of the user’s device. If the user wants to recognize two people, namely Rahul and Neeta, then they need to create two separate sub-directories within the images folder, and place an image of Rahul and of Neeta in the respective sub-directories.
images ->
    rahul -> image_rahul.png
    neeta -> image_neeta.png
Our aim is to read these images and produce a HashMap<String, FloatArray> object where the key (String) will be the subject’s name, like Rahul or Neeta, and the value (FloatArray) will be the corresponding face embedding. You’ll get an idea of the process by studying the code below.
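A sketch of that loading step, assuming the FaceNetModel helper above and one image per sub-directory (the loader function is ours, and file handling is simplified):

```kotlin
import android.graphics.BitmapFactory
import android.graphics.Rect
import java.io.File

// Walk images/<name>/<file> and build a name -> embedding map.
fun loadKnownEmbeddings(imagesDir: File, model: FaceNetModel): HashMap<String, FloatArray> {
    val embeddings = HashMap<String, FloatArray>()
    imagesDir.listFiles()?.filter { it.isDirectory }?.forEach { personDir ->
        personDir.listFiles()?.firstOrNull()?.let { imageFile ->
            val bitmap = BitmapFactory.decodeFile(imageFile.absolutePath)
            // For simplicity, embed the whole image; the real flow would
            // first detect and crop the face with Firebase MLKit.
            val box = Rect(0, 0, bitmap.width, bitmap.height)
            embeddings[personDir.name] = model.getFaceEmbedding(bitmap, box)
        }
    }
    return embeddings
}
```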
We’ll compare the embeddings using the cosine similarity metric, which returns a similarity score in the interval [-1, 1].
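The metric itself is the standard formula; a small sketch (the function name is ours):

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embeddings: dot(a, b) / (|a| * |b|).
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f
    var normA = 0f
    var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}
```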
The predictions array is then supplied to the BoundingBoxOverlay class (in BoundingBoxOverlay.kt), which draws the bounding boxes and also displays the labels.
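A minimal sketch of such an overlay view, assuming predictions pairs each label with its box (the Prediction type, constructor and styling are assumptions):

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Rect
import android.util.AttributeSet
import android.view.View

// Hypothetical holder for one face: its box and the predicted name.
data class Prediction(val box: Rect, val label: String)

class BoundingBoxOverlay(context: Context, attrs: AttributeSet) : View(context, attrs) {

    var predictions: List<Prediction> = emptyList()

    private val boxPaint = Paint().apply {
        color = Color.GREEN
        style = Paint.Style.STROKE
        strokeWidth = 4f
    }
    private val textPaint = Paint().apply {
        color = Color.GREEN
        textSize = 48f
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Draw each face's box and its predicted label just above it.
        for (p in predictions) {
            canvas.drawRect(p.box, boxPaint)
            canvas.drawText(p.label, p.box.left.toFloat(), p.box.top.toFloat() - 8f, textPaint)
        }
    }
}
```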
The Results
Using the app, I have tried to recognize the faces of Jeff Bezos and Elon Musk. Also, I had stored their images in my internal storage as described above.
The End
I hope you liked the story. I have included an APK in the GitHub repo so that you can try the app on your device. Thanks for reading!