YOLO v3 Object Detection with Keras



An Explanation of YOLO v3 in a nutshell with Keras Implementation

Video by YOLO author, Joseph Redmon

About YOLO v3 Algorithm

“You Only Look Once” (YOLO) is an object detection algorithm known for running in real time while maintaining high accuracy. Unlike earlier versions, the third version offers an efficient tradeoff between speed and accuracy simply by changing the size of the model, with no retraining necessary.

Before we start to implement object detection with YOLO v3, we need to download the pre-trained model weights. The download may take a while, so you can prepare your coffee while waiting. YOLO v3 is written in DarkNet, an open-source neural network framework in C, which felt quite intimidating to me at first.

But thankfully, this code is strongly inspired by experiencor’s keras-yolo3 project for running the YOLO v3 model with Keras. Throughout this implementation, I am going to run everything on Google Colab. Besides, we are going to use this image of cute dogs for object detection.


Photo by Alvan Nee on Unsplash

So let’s get our hands dirty!

Step 1:

Jumping into the very first step, the following are the necessary libraries and dependencies.
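The code block for this step did not survive in this copy of the post. A minimal sketch of the imports such an implementation needs, following the layer set used by experiencor’s keras-yolo3 (if you use the Keras bundled with TensorFlow, swap the `keras.` prefixes for `tensorflow.keras.`):

```python
# struct is needed later to parse the binary DarkNet weight file,
# numpy for array handling, and Keras for the model itself.
import struct

import numpy as np
from keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                          ZeroPadding2D, UpSampling2D, add, concatenate)
from keras.models import Model
```

Image loading helpers such as `load_img` / `img_to_array` live in `keras.utils` in recent Keras versions (and in `keras.preprocessing.image` in older ones).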

Step 2:

Next, the WeightReader class is used to parse the “yolov3.weights” file and load the model weights into memory.
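The class itself is missing from this copy. A sketch of how such a parser can look, following the structure of the DarkNet weight file (a short integer header followed by raw float32 values), modeled on experiencor’s implementation:

```python
import struct

import numpy as np

class WeightReader:
    """Parses the binary 'yolov3.weights' file into a flat float32 array."""

    def __init__(self, weight_file):
        with open(weight_file, 'rb') as f:
            # The file starts with a header: major, minor, revision.
            major, minor, revision = struct.unpack('iii', f.read(12))
            # Newer files store the 'images seen' counter as int64, older as int32.
            if major * 10 + minor >= 2 and major < 1000 and minor < 1000:
                f.read(8)
            else:
                f.read(4)
            # Everything that follows is raw float32 weight values.
            binary = f.read()
        self.offset = 0
        self.all_weights = np.frombuffer(binary, dtype='float32')

    def read_bytes(self, size):
        """Return the next `size` float values and advance the cursor."""
        self.offset += size
        return self.all_weights[self.offset - size:self.offset]
```

In the full pipeline the reader is constructed once from the downloaded file and then consumed sequentially while walking the network’s layers in order.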

Step 3:

YOLO v3 uses a new network for feature extraction which is undeniably larger compared to YOLO v2’s. This network is known as Darknet-53, as it comprises 53 convolutional layers with shortcut connections (Redmon & Farhadi, 2018).


YOLO v3 network has 53 convolutional layers (Redmon & Farhadi, 2018)

Therefore, the code below is composed of several components:

  • the _conv_block function, which constructs a block of convolutional layers
  • the make_yolov3_model function, which creates the convolutional layers and stacks them together into the whole network.

Step 4:

Next, the code for this step loads the pre-trained DarkNet weights into the Keras model defined above and saves the converted model for later use.
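The code for this step is also missing from this copy. Given the WeightReader from Step 2 and make_yolov3_model from Step 3, the glue it describes would look roughly like this; note that the `load_weights` method (which walks the 106 layers by name and copies kernels and batch-norm statistics in DarkNet’s order) is part of experiencor’s WeightReader and is not shown in the sketch above:

```python
def convert_darknet_to_keras(weight_path='yolov3.weights', out_path='model.h5'):
    """Build the Keras graph, copy the DarkNet weights into it, and save
    the converted model so later runs can skip this step entirely."""
    model = make_yolov3_model()          # architecture from Step 3
    reader = WeightReader(weight_path)   # binary parser from Step 2
    reader.load_weights(model)           # copy conv/batch-norm weights layer by layer
    model.save(out_path)                 # persist as a regular Keras model file
    return model
```

The function name here is hypothetical; the point is the order of operations: build the graph first, then stream the weights into it, then save.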

Step 5:

This step involves decoding the prediction output into bounding boxes.

The output of a YOLO v3 prediction is a list of arrays that is hard to interpret directly. Since YOLO v3 is a multi-scale detector, the output is decoded at three different scales, with shapes (13, 13, 255), (26, 26, 255), and (52, 52, 255). The 255 channels hold 3 candidate boxes per grid cell, each described by 4 coordinates, 1 objectness score, and 80 class probabilities.

A slice of the YOLO v3 prediction output before it gets decoded

  • The decode_netout function is used to decode the prediction output into bounding boxes.

In a nutshell, the decode_netout function works by scanning every grid cell and every anchor box, keeping only the predictions whose objectness score exceeds the threshold.
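The listing and the illustration that followed it are missing from this copy. Below is a self-contained sketch in the spirit of experiencor’s decode_netout, paired with a minimal BoundBox container; the 3-boxes-per-cell layout and anchor-based size decoding follow the COCO-trained YOLO v3:

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BoundBox:
    """Minimal container for one decoded box in normalised coordinates."""
    def __init__(self, xmin, ymin, xmax, ymax, objness, classes):
        self.xmin, self.ymin, self.xmax, self.ymax = xmin, ymin, xmax, ymax
        self.objness = objness
        self.classes = classes

def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
    """Turn one raw output grid of shape (grid_h, grid_w, 3 * (5 + nb_class))
    into a list of BoundBox objects above the objectness threshold."""
    grid_h, grid_w = netout.shape[:2]
    nb_box = 3
    netout = netout.reshape((grid_h, grid_w, nb_box, -1))

    # Squash centre offsets, objectness, and class scores into (0, 1),
    # then scale class scores by objectness and zero out the weak ones.
    netout[..., :2] = _sigmoid(netout[..., :2])
    netout[..., 4:] = _sigmoid(netout[..., 4:])
    netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
    netout[..., 5:] *= netout[..., 5:] > obj_thresh

    boxes = []
    for row in range(grid_h):
        for col in range(grid_w):
            for b in range(nb_box):
                objectness = netout[row, col, b, 4]
                if objectness <= obj_thresh:
                    continue
                x, y, w, h = netout[row, col, b, :4]
                x = (col + x) / grid_w                     # box centre, 0..1
                y = (row + y) / grid_h
                w = anchors[2 * b] * np.exp(w) / net_w      # anchor-scaled size
                h = anchors[2 * b + 1] * np.exp(h) / net_h
                boxes.append(BoundBox(x - w / 2, y - h / 2,
                                      x + w / 2, y + h / 2,
                                      objectness, netout[row, col, b, 5:]))
    return boxes
```

In the full pipeline this is called once per output scale with the matching anchor set, and the three box lists are concatenated before non-max suppression.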

