An Explanation of YOLO v3 in a nutshell with Keras Implementation
Jun 15 · 5 min read
About the YOLO v3 Algorithm
“You Only Look Once” (YOLO) is an object detection algorithm known for its high accuracy while still being able to run in real time thanks to its detection speed. Unlike earlier versions, the third version offers an efficient tradeoff between speed and accuracy simply by changing the size of the model, with no retraining required.
Before we start to implement object detection with YOLO v3, we need to download the pre-trained model weights. Downloading them may take a while, so you can prepare your coffee while waiting. YOLO v3 is written in the DarkNet framework, an open-source neural network framework in C, which felt quite intimidating to me at first.
But thankfully, this code is strongly inspired by experiencor’s keras-yolo3 project for running the YOLO v3 model with Keras. Throughout this implementation, I am going to run everything on Google Colab. Besides, we are going to use this cute dogs image for object detection.
So let’s get our hands dirty!
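Since the whole walkthrough runs on Google Colab, the pre-trained weights can be fetched directly from a notebook cell. The URL below is the one published on the official YOLO site; treat it as an assumption and adjust it if the file has moved.

```python
# Fetch the pre-trained DarkNet weights for YOLO v3 (Colab/notebook cell).
# The URL is the one published on the official YOLO site; adjust if it has moved.
!wget https://pjreddie.com/media/files/yolov3.weights
```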
Step 1:
Jumping into the very first step, the following are the necessary libraries and dependencies.
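As a rough sketch, the imports assumed throughout this walkthrough look like the following; depending on your setup, they may live under tensorflow.keras instead of keras.

```python
# Libraries and dependencies assumed throughout this walkthrough.
import struct
import numpy as np
from keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                          ZeroPadding2D, UpSampling2D, add, concatenate)
from keras.models import Model, load_model
from keras.preprocessing.image import load_img, img_to_array
```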
Step 2:
Next, the WeightReader class is used to parse the “yolov3.weights” file and load the model weights into memory.
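The full class is a little long to discuss line by line, but a condensed sketch of how such a WeightReader can be written, following the pattern of experiencor’s keras-yolo3, is shown below. The layer names 'conv_N' and 'bnorm_N' are assumptions that have to match the names given to the layers in make_yolov3_model.

```python
class WeightReader:
    """Parses the binary 'yolov3.weights' file and copies the values
    into the corresponding Keras layers (condensed sketch)."""
    def __init__(self, weight_file):
        with open(weight_file, 'rb') as w_f:
            major, = struct.unpack('i', w_f.read(4))
            minor, = struct.unpack('i', w_f.read(4))
            revision, = struct.unpack('i', w_f.read(4))
            # Newer DarkNet versions store a 64-bit "images seen" counter in the header.
            if (major * 10 + minor) >= 2 and major < 1000 and minor < 1000:
                w_f.read(8)
            else:
                w_f.read(4)
            binary = w_f.read()
        self.offset = 0
        self.all_weights = np.frombuffer(binary, dtype='float32')

    def read_bytes(self, size):
        # Return the next `size` float values and advance the read offset.
        self.offset += size
        return self.all_weights[self.offset - size:self.offset]

    def load_weights(self, model):
        for i in range(106):
            try:
                conv_layer = model.get_layer('conv_' + str(i))
                # Layers 81, 93 and 105 are the detection heads: no batch norm.
                if i not in [81, 93, 105]:
                    norm_layer = model.get_layer('bnorm_' + str(i))
                    size = np.prod(norm_layer.get_weights()[0].shape)
                    beta = self.read_bytes(size)
                    gamma = self.read_bytes(size)
                    mean = self.read_bytes(size)
                    var = self.read_bytes(size)
                    norm_layer.set_weights([gamma, beta, mean, var])
                if len(conv_layer.get_weights()) > 1:
                    # Convolution with a bias term (the detection heads).
                    bias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
                    kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
                    kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
                    kernel = kernel.transpose([2, 3, 1, 0])
                    conv_layer.set_weights([kernel, bias])
                else:
                    kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
                    kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
                    kernel = kernel.transpose([2, 3, 1, 0])
                    conv_layer.set_weights([kernel])
            except ValueError:
                # Not every index corresponds to a convolutional layer.
                pass
```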
Step 3:
YOLO v3 uses a new network to perform feature extraction, one that is undeniably larger than the one in YOLO v2. This network is known as Darknet-53, as the whole network is composed of 53 convolutional layers with shortcut connections (Redmon & Farhadi, 2018).
Therefore, the code below is composed of several components, which are (a rough sketch follows this list):
- _conv_block : a function used to construct a convolutional block
- make_yolov3_model : a function used to create the convolutional layers and stack them together into the whole model
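As a rough sketch of the first component, a _conv_block helper in the spirit of experiencor’s keras-yolo3 might look like the following. It builds a chain of Conv2D, BatchNormalization and LeakyReLU layers from a list of layer specifications and, when skip is enabled, adds the shortcut connection that gives Darknet-53 its residual structure.

```python
def _conv_block(inp, convs, skip=True):
    """Stacks Conv2D (+ optional BatchNorm and LeakyReLU) layers described by
    the dicts in `convs`, adding a residual shortcut when `skip` is True."""
    x = inp
    count = 0
    for conv in convs:
        # Remember the tensor two layers before the end as the shortcut input.
        if count == (len(convs) - 2) and skip:
            skip_connection = x
        count += 1
        if conv['stride'] > 1:
            # DarkNet uses top-left half-padding for strided convolutions.
            x = ZeroPadding2D(((1, 0), (1, 0)))(x)
        x = Conv2D(conv['filter'], conv['kernel'],
                   strides=conv['stride'],
                   padding='valid' if conv['stride'] > 1 else 'same',
                   name='conv_' + str(conv['layer_idx']),
                   use_bias=False if conv['bnorm'] else True)(x)
        if conv['bnorm']:
            x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
        if conv['leaky']:
            x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
    return add([skip_connection, x]) if skip else x
```

make_yolov3_model then simply chains calls to this helper; for example, the opening layers of Darknet-53 would be assembled roughly like this:

```python
# First residual block of Darknet-53, as make_yolov3_model would build it (sketch).
input_image = Input(shape=(None, None, 3))
x = _conv_block(input_image, [
    {'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},
    {'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},
    {'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},
    {'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])
```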
Step 4:
Next, the following code loads the pre-trained weights into the model defined above using the WeightReader, and then saves the resulting Keras model so that this step does not have to be repeated.
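Put together, this step might look roughly like the following, reusing the make_yolov3_model and WeightReader pieces from the previous steps (the file names are assumptions carried over from earlier).

```python
# Define the architecture, load the pre-trained DarkNet weights into it,
# and save it as a regular Keras model so the weight parsing never has to be repeated.
model = make_yolov3_model()
weight_reader = WeightReader('yolov3.weights')
weight_reader.load_weights(model)
model.save('model.h5')
```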
Step 5:
This step involves decoding the prediction output into bounding boxes.
The output of a YOLO v3 prediction is a list of arrays that is hard to interpret directly. As YOLO v3 performs multi-scale detection, the output is decoded at three different scales with shapes (13, 13, 255), (26, 26, 255), and (52, 52, 255), where 255 = 3 anchor boxes × (4 box coordinates + 1 objectness score + 80 class probabilities).
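For example, assuming the common 416 × 416 input size and a placeholder file name dogs.jpg for the image used here, running a prediction returns three arrays with exactly those shapes:

```python
# Load and preprocess the image to the (assumed) 416 x 416 network input size.
image = load_img('dogs.jpg', target_size=(416, 416))
image = img_to_array(image).astype('float32') / 255.0
image = np.expand_dims(image, 0)   # add the batch dimension

model = load_model('model.h5')
yhat = model.predict(image)
print([a.shape for a in yhat])
# [(1, 13, 13, 255), (1, 26, 26, 255), (1, 52, 52, 255)]
```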
- decode_netout : a function used to decode the prediction output into boxes
In a nutshell, this is how the decode_netout function works, as illustrated below:
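As a rough companion to that illustration, a simplified sketch of the decoding logic could look like this. It is not the exact decode_netout from keras-yolo3, but it captures the main idea: for every grid cell and anchor, apply a sigmoid to the x/y offsets, the objectness score and the class scores, scale the anchor width and height by an exponential, and keep only the boxes whose objectness exceeds a threshold.

```python
def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
    """Simplified sketch: turns one output grid of shape
    (grid_h, grid_w, 3 * (5 + nb_class)) into candidate boxes,
    expressed as fractions of the network input size."""
    grid_h, grid_w = netout.shape[:2]
    nb_box = 3
    netout = netout.reshape((grid_h, grid_w, nb_box, -1))

    boxes = []
    for row in range(grid_h):
        for col in range(grid_w):
            for b in range(nb_box):
                objectness = _sigmoid(netout[row, col, b, 4])
                if objectness <= obj_thresh:
                    continue
                # Box centre: sigmoid offset inside the cell, normalised by the grid size.
                x = (col + _sigmoid(netout[row, col, b, 0])) / grid_w
                y = (row + _sigmoid(netout[row, col, b, 1])) / grid_h
                # Box size: anchor dimensions scaled by an exponential, normalised by the net size.
                w = anchors[2 * b] * np.exp(netout[row, col, b, 2]) / net_w
                h = anchors[2 * b + 1] * np.exp(netout[row, col, b, 3]) / net_h
                classes = _sigmoid(netout[row, col, b, 5:])
                boxes.append((x - w / 2, y - h / 2, x + w / 2, y + h / 2,
                              objectness, classes))
    return boxes
```

Each scale is decoded with its own anchor sizes; for instance, the 13 × 13 output is usually paired with the largest standard YOLO v3 anchors:

```python
boxes = decode_netout(yhat[0][0], anchors=[116, 90, 156, 198, 373, 326],
                      obj_thresh=0.6, net_h=416, net_w=416)
```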