Image Processing in Artificial Intelligence
Artificial Intelligence covers a number of different image processing tasks. In this article, I'm covering the difference between Object Detection and Image Segmentation.
In both tasks, we want to find the locations of certain items of interest in an image. For example, we could have a set of security camera pictures, and on each picture we want to identify the locations of all the humans.
There are two methods that can be generally used for this: Object Detection and Image Segmentation.
Object Detection — Predicting Bounding Boxes
When we talk about Object Detection, we generally talk about bounding boxes. This means that our model will identify a rectangle around each human in our pictures.
Bounding boxes are generally defined by the position of the top left corner (2 coordinates) and a width and height (in number of pixels).
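To make the format concrete, here is a minimal sketch in Python showing a box given as top-left corner plus size, and its conversion to corner coordinates. The helper name and values are illustrative, not from any particular library.

```python
# A bounding box as (x, y, w, h): top-left corner plus width and height in pixels.
# Illustrative helper, not tied to any specific detection library.
def box_to_corners(x, y, w, h):
    """Convert (top-left x, top-left y, width, height) to (x_min, y_min, x_max, y_max)."""
    return x, y, x + w, y + h

# A box whose top-left corner is at pixel (40, 60), 100 px wide and 200 px tall.
print(box_to_corners(40, 60, 100, 200))  # (40, 60, 140, 260)
```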
How to understand Object Detection methods?
The logic of object detection with bounding boxes can be understood if we go back to the task: identify all humans in a picture.
A first intuition could be to cut the image into small parts and apply image classification to each sub-image to decide whether it shows a human or not. Classifying a single image is an easier task than full Object Detection, so early approaches took this step-by-step route.
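As a rough sketch of this step-by-step idea, here is a naive sliding-window loop. It assumes a hypothetical `classify_patch` function that returns the probability that a patch contains a human; the window size and stride are arbitrary illustrative values.

```python
import numpy as np

def sliding_window_detect(image, classify_patch, patch=64, stride=32, threshold=0.5):
    """Naive detection: slide a fixed-size window over the image and
    keep the windows that the (hypothetical) classifier flags as 'human'."""
    boxes = []
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            score = classify_patch(image[y:y + patch, x:x + patch])
            if score > threshold:
                boxes.append((x, y, patch, patch))  # (x, y, width, height)
    return boxes
```

This brute-force scan is slow, because the classifier runs once per window position; that inefficiency is exactly what later models avoid.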
More recently, the YOLO model (You Only Look Once) has been a great step forward for this problem. Its developers built a Neural Network that predicts all the bounding boxes in a single pass over the image!
Current best models for Object Detection
- YOLO
- Faster RCNN
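As a hedged illustration of how such a detector is typically used, here is a minimal sketch with torchvision's pretrained Faster R-CNN (it assumes torch, torchvision, and Pillow are installed; the exact `weights` argument may differ between torchvision versions, and the image file name is only an example).

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained Faster R-CNN (trained on COCO); weights argument may vary by torchvision version.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")  # hypothetical example file
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Each prediction contains boxes, labels and scores.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score > 0.8 and label.item() == 1:  # COCO class 1 is 'person'
        print(box.tolist(), float(score))
```

Note that the returned boxes use corner coordinates (x_min, y_min, x_max, y_max) rather than the corner-plus-size format described above.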
Image Segmentation — Predicting Masks
The logical alternative to scanning the image step by step is to move away from drawing boxes and instead annotate the image pixel by pixel.
If you do this, you get a more detailed result, which is essentially a transformation of the input image.
How to understand Image Segmentation methods?
The idea is simple. Think of scanning a barcode on a product: it is possible to apply an algorithm that transforms the input (through all sorts of filters) so that everything other than the barcode becomes invisible in the final picture.
This is only a basic approach to locating a barcode in an image, but it is comparable to what happens in Image Segmentation.
The return format of Image Segmentation is called a mask: an image of the same size as the original, but where each pixel simply holds a boolean indicating whether the object is present or not.
This can be extended to multiple categories: the mask could then split, for example, a beach landscape into three categories: sky, sea, and sand.
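A minimal sketch of what such a multi-category mask looks like as data (plain NumPy, illustrative values only):

```python
import numpy as np

# Hypothetical 4x6 mask for a beach scene with categories 0=sky, 1=sea, 2=sand.
mask = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [2, 2, 2, 2, 2, 2],
])

# The mask has the same height and width as the input image,
# but only one value per pixel: the assigned category.
print(mask.shape)                        # (4, 6)
print((mask == 1).sum(), "sea pixels")   # 6 sea pixels
```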
Current best models for Image Segmentation
- Mask RCNN
- Unet
- Segnet
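For completeness, here is a hedged sketch of running a pretrained semantic segmentation network with torchvision. It uses DeepLabV3, which is not one of the models listed above but is readily available in torchvision (a U-Net or SegNet would be used in much the same way); the `weights` argument may differ between versions, and the image file name is only an example.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, normalize
from PIL import Image

# Pretrained DeepLabV3; used here as a stand-in for U-Net / SegNet.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

image = Image.open("beach.jpg").convert("RGB")  # hypothetical example file
x = normalize(to_tensor(image), mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

with torch.no_grad():
    logits = model(x.unsqueeze(0))["out"]   # shape: (1, num_classes, H, W)

mask = logits.argmax(dim=1)[0]              # shape: (H, W), one category per pixel
print(mask.shape, mask.unique())
```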
The Comparison In Short
Object Detection
- the input is a matrix (input image) with 3 values per pixel (red, green, and blue) or 1 value per pixel for a grayscale image
- the output is a list of bounding boxes defined by upper left corner and size
Image Segmentation
- the input is a matrix (input image) with 3 values per pixel (red, green, and blue) or 1 value per pixel for a grayscale image
- the output is a matrix (mask image) with 1 value per pixel containing the assigned category
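To put the two output formats side by side, here is a tiny sketch of the shapes involved (illustrative numbers only):

```python
import numpy as np

# Input: an RGB image of height 480 and width 640, 3 values per pixel.
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Object Detection output: a list of boxes, each (x, y, width, height).
detection_output = [(40, 60, 100, 200), (300, 120, 80, 160)]

# Image Segmentation output: a mask of the same height and width,
# with a single category value per pixel.
segmentation_output = np.zeros((480, 640), dtype=np.int64)

print(image.shape, len(detection_output), segmentation_output.shape)
```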
I hope this short article was useful for you. Thanks for reading!