This post will focus on the core concepts of image processing. These areas will act as the building blocks for more intricate image manipulations in a later post. Once we are familiar with these basics of Python and OpenCV, we will be able to jump between the different concepts more easily.
Face processing is a hot topic in artificial intelligence because a lot of information can be automatically extracted from faces using computer vision algorithms.
The face plays an important role in visual communication because a great deal of non-verbal information, such as identity, intent, and emotion, can be extracted from human faces.
Face processing is a really interesting topic for computer vision learners because it touches on different areas of expertise, such as object detection, image processing, and landmark detection or object tracking.
Introduction:
In this post, we are going to learn to play with an image using OpenCV, use existing tools like Haar cascades, and build a YouTube-inspired face detect, crop, and blur pipeline.
Face detection using Haar cascades is a machine learning-based approach where a cascade function is trained with a set of input data. OpenCV already contains many pre-trained classifiers for faces, eyes, smiles, etc. Today we will be using the face classifier; you can experiment with other classifiers as well.
This post uses Google Colab for development and covers how to:
- Open images
- View them using the built-in Python and OpenCV tools
- Crop the detected faces
- Blur the faces in the original image (multiple faces)
OpenCV provides two approaches for face detection:
- Haar cascade based face detectors
- Deep learning-based face detectors (a quick sketch of this route follows below)
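For reference, here is a minimal sketch of the deep learning route using OpenCV's DNN module. It assumes the Caffe model files from the OpenCV repository have been downloaded (the file names deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel are the usual ones; adjust them to your download). This post itself sticks to the Haar cascade approach.

import cv2
import numpy as np

# Load the SSD-based face detector (file names assumed, see note above)
net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'res10_300x300_ssd_iter_140000.caffemodel')

img = cv2.imread('input.jpg')  # any local test image
h, w = img.shape[:2]

# Resize to 300x300 and subtract the mean values the model was trained with
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

# Each detection row holds [image_id, label, confidence, x1, y1, x2, y2] with relative coordinates
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        cv2.rectangle(img, (box[0], box[1]), (box[2], box[3]), (0, 255, 0), 2)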
The framework proposed by Viola and Jones (see Rapid Object Detection Using a Boosted Cascade of Simple Features, 2001) is an effective object detection method. The framework is very popular, and OpenCV provides face detection algorithms based on it.
Prerequisite
You need to download the trained classifier XML file (haarcascade_frontalface_default.xml), which is available in OpenCV's GitHub repository. Save it to your working location.
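As a side note (this depends on how OpenCV was installed): the pip packages such as opencv-python usually bundle the Haar cascade XML files, and cv2.data.haarcascades points at their folder, so the manual download can sometimes be skipped.

import cv2

# The pip wheels ship the cascade XML files; cv2.data.haarcascades is the folder that contains them
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
print(face_cascade.empty())  # False means the cascade was loaded successfully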
Demo (the final result can be seen at the bottom of the post):
Let's do some coding!
1 - Downloading the trained classifier XML files in Colab
First, we start with the prerequisite of downloading the trained classifier XML files for frontal face detection. To do this in Colab, we use the wget command to download them directly:
### Eye cascade XML file
!wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_eye.xml -P drive/xxx

### Frontal face cascade XML file
!wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml -P drive/xxx
That's cool!✌
2 - Importing the modules
We need to import OpenCV as cv2, and since we are also going to work with remote images, we need urllib.request, plus NumPy for array operations.
Note: we also need to import cv2_imshow() from google.colab.patches, since OpenCV's imshow() crashes the Jupyter kernel in Colab.
import cv2
from urllib.request import urlopen
import numpy as np
from google.colab.patches import cv2_imshow
3 - Loading the cascade (pre-trained model) and the remote image
We are going to load the cascade and the remote image using the code below.
cascades_path = 'drive/xxx/'

# Load the cascade
face_cascade = cv2.CascadeClassifier(cascades_path + 'haarcascade_frontalface_default.xml')

# Read the input image
req = urlopen('https://tvseriesfinale.com/wp-content/uploads/2014/04/siliconvalley03-590x331.jpg')
arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
img = cv2.imdecode(arr, -1)  # load it as it is
result_image = img.copy()
4 - Convert the image to grayscale
In this step, we convert our color image (which OpenCV loads in BGR order) to a grayscale image. This is done with cv2.cvtColor() using the cv2.COLOR_BGR2GRAY conversion code, since the cascade classifier works on grayscale images.
# Convert the frame to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
5 - Detect faces using detectMultiScale
detectMultiScale() detects objects of different sizes in the input image; the detected objects are returned as a list of rectangles (x, y, w, h), and the method is part of face_cascade. We are going to use a scaleFactor of 1.1 and minNeighbors = 4.
# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
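If you prefer to be explicit about the parameters (and filter out very small detections), the same call can be written with keyword arguments; the minSize value here is only an illustrative choice, not something from the original post.

# Equivalent call with named parameters; tune these if faces are missed or false positives appear
faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,    # how much the image is scaled down at each detection pass
    minNeighbors=4,     # how many overlapping detections are required to keep a face
    minSize=(30, 30)    # ignore detections smaller than 30x30 pixels (illustrative value)
)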
6 - Finally, draw a rectangle around each face and display the result
Since we get each rectangle's x, y together with its width and height from the previous step, we loop over all the identified faces and draw a rectangle for each one.
# Iterate over each detected face:
for (x, y, w, h) in faces:
    # Draw a rectangle to see the detected face (debugging purposes):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)

# Display the resulting frame
cv2_imshow(img)
The complete source code up to this point looks like this:
cascades_path = 'drive/my-drive/'
face_crop = []

# Load the cascade
face_cascade = cv2.CascadeClassifier(cascades_path + 'haarcascade_frontalface_default.xml')

# Read the input image
req = urlopen('https://tvseriesfinale.com/wp-content/uploads/2014/04/siliconvalley03-590x331.jpg')
arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
img = cv2.imdecode(arr, -1)  # load it as it is
result_image = img.copy()

# Convert into grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
print(faces)

# Iterate over each detected face:
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

# Display the resulting frame
cv2_imshow(img)
In the next part of the post, we are going to see how to crop the detected face and display the result for future recognition.
Cropping the detected faces
I got this idea while checking out the YouTube Studio blur feature, which detects all the faces within a video and gives the option of blurring a selected face.
So I just wanted to build this face crop + select feature (it could be exposed via a Flask API). :sweat_smile:
Since we have already detected the faces in the image, we can easily crop them and store their pixel values.
First, we create an empty list to store the cropped pixels, and then we select the pixels inside each detected rectangle.
face_crop = []

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    face_crop.append(gray[y:y+h, x:x+w])

for face in face_crop:
    cv2_imshow(face)
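If you want to keep the crops for later use (for example, as input to a recognition step or to return them from a Flask endpoint), they can also be written to disk; the face_{}.png file names below are just placeholders.

# Optionally save each cropped face to disk (placeholder file names)
for i, face in enumerate(face_crop):
    cv2.imwrite('face_{}.png'.format(i), face)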
Result:
You can see that the cropped area differs for each face (rectangle).
Finally, blurring out (anonymizing) all the detected faces.
To blur out faces in OpenCV, we are going to use Gaussian blur.
Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noise: it removes high-frequency content (e.g., noise, edges) from the image, so edges get blurred a little in this operation (there are also blurring techniques that don't blur the edges).
In this method, instead of a box filter, a Gaussian kernel is used. It is done with the function cv2.GaussianBlur(). We should specify the width and height of the kernel, which should be positive and odd. We should also specify the standard deviations in the X and Y directions, sigmaX and sigmaY respectively. Gaussian blurring is highly effective at removing Gaussian noise from an image.
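To get a feel for these parameters, here is a small sketch (assuming img is the image loaded earlier): a larger odd kernel and a larger sigma give a stronger blur.

# A larger kernel / sigma produces a stronger blur (kernel width and height must be odd)
light_blur = cv2.GaussianBlur(img, (5, 5), 0)     # sigma is derived from the kernel size when set to 0
heavy_blur = cv2.GaussianBlur(img, (23, 23), 30)  # the settings used for the faces below
cv2_imshow(light_blur)
cv2_imshow(heavy_blur)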
Since we have already cropped the faces, we know the area to be blurred (img[y:y+h, x:x+w]), so we take each face region, apply the Gaussian blur, and finally merge the blurred part back into the original image.
Let's see how it works!
# Iterate over each detected face and blur it
for (x, y, w, h) in faces:
    # create a sub face from the detected face
    sub_face = img[y:y+h, x:x+w]
    # apply a Gaussian blur on this new rectangle image
    sub_face = cv2.GaussianBlur(sub_face, (23, 23), 30)
    # merge this blurry rectangle into our final image
    result_image[y:y+sub_face.shape[0], x:x+sub_face.shape[1]] = sub_face

# Display the output
cv2_imshow(result_image)
The final result can be seen as:
My favorite show to date; nothing can beat its flow of events and quirky comedy. Amazing performances coupled with a great story and an accurate depiction of the tech scene, and sadly Silicon Valley has ended. :worried: Peace!
Conclusion:
We can use this simple technique to achieve a feature similar to YouTube's (and it's light on the CPU!), and it can even be tried on video. In the next post, I will move towards deep learning-based models. If you have any ideas or would like to suggest improvements, please comment below.
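As a rough sketch of the video idea, the same detect-and-blur steps can be applied frame by frame with cv2.VideoCapture; the input_video.mp4 file name is a placeholder, and in practice you would write the frames out with cv2.VideoWriter instead of displaying them.

# Placeholder video file; the cascade is loaded exactly as before
cap = cv2.VideoCapture('input_video.mp4')
face_cascade = cv2.CascadeClassifier(cascades_path + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 4):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (23, 23), 30)
    cv2_imshow(frame)

cap.release()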