iOS Camera-Stream Face Recognition, Part 1: Face-Box Detection (Native iOS)



In recent years, as mobile hardware has improved, all kinds of beauty-camera apps have appeared, offering skin smoothing, face slimming, stickers, and a whole series of similar features. Behind all of them lies one key technology: face recognition.

The typical detection pipeline

(1) Face detection

Determine whether an image contains a face (and how many), and return the position of each face.

(2) Face landmark detection

Using the face positions found in step (1) together with the image data, locate the key points of each face.

(3) Processing

Use the landmarks to build whatever you need (beautification, face slimming, stickers, and so on).

Primer: what are landmarks?

Let's look at a picture:

[Figure: a face annotated with 68 landmark points]

This figure describes the contour of a face using 68 points; those 68 points are the landmarks. There are also 5-point layouts and other schemes.
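
As an aside, before we get to the camera pipeline: iOS can already do steps (1) and (2) on a still image via CIDetector, whose CIFaceFeature results carry a very coarse landmark set (eyes and mouth). A minimal sketch, not the approach used in the rest of this article:

#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

//Sketch: still-image face detection with the system CIDetector
static void detectFaces(UIImage *photo) {
    CIImage *ciImage = [CIImage imageWithCGImage:photo.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
        //step (1): where the face is
        NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
        //step (2): a very coarse set of key points
        if (face.hasLeftEyePosition) {
            NSLog(@"left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
        }
    }
}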

Face detection

Today we'll use the system's own AVFoundation framework to detect the faces that appear in the video stream and draw the detected boxes onto it. First, let's see what the result looks like:

[Figure: demo of face boxes drawn on the camera stream]

Mars is probably too cool to get detected!

Ingredients

AVFoundation

opencv2.framework (download opencv2)

Note: some OpenCV builds ship with iOS helper methods (UIImageToMat and friends) and some don't; I don't remember which, so check your download. If yours lacks them you can write the conversion yourself; it's only needed to move between UIImage and cv::Mat (a sketch follows below). Also, rename your controller's .m file to .mm so it compiles as Objective-C++.
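
If your OpenCV download turns out not to include the iOS helpers, the UIImage-to-cv::Mat direction looks roughly like this (a minimal sketch; matFromUIImage is my own name for the helper, and the reverse direction is analogous):

//Sketch of a hand-rolled UIImage -> cv::Mat conversion (RGBA, 8 bits per channel)
static cv::Mat matFromUIImage(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t cols = CGImageGetWidth(cgImage);
    size_t rows = CGImageGetHeight(cgImage);
    cv::Mat mat((int)rows, (int)cols, CV_8UC4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //draw the CGImage straight into the Mat's pixel storage
    CGContextRef context = CGBitmapContextCreate(mat.data, cols, rows, 8, mat.step[0],
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, cols, rows), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return mat;
}

With the ingredients ready, here is the full controller (.mm):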

#import "ViewController.h"
#import 
<avfoundation avfoundation="" h="">
 
#import 
 <opencv2 imgproc="" types_c="" h="">
  
#import 
  <opencv2 imgproc="" imgproc_c="" h="">
   
#import 
   <opencv2 imgcodecs="" ios="" h="">
    
#import 
    <opencv2 opencv="" hpp="">
     
@interface ViewController ()
     <avcapturevideodataoutputsamplebufferdelegate nbsp="" avcapturemetadataoutputobjectsdelegate="">
      
@property (nonatomic,strong) AVCaptureSession *session;
@property (nonatomic,strong) UIImageView *cameraView;
@property (nonatomic,strong) dispatch_queue_t sample;
@property (nonatomic,strong) dispatch_queue_t faceQueue;
@property (nonatomic,copy) NSArray *currentMetadata; //< if the system detects faces it returns an array; we store that array here
@end
@implementation ViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    _currentMetadata = [NSMutableArray arrayWithCapacity:0];
   
    [self.view addSubview: self.cameraView];
    
    _sample = dispatch_queue_create("sample", NULL);
    _faceQueue = dispatch_queue_create("face", NULL);
    
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *deviceF;
    for (AVCaptureDevice *device in devices )
    {
        if ( device.position == AVCaptureDevicePositionFront )
        {
            deviceF = device;
            break;
        }
    }
    
    AVCaptureDeviceInput*input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    
    [output setSampleBufferDelegate:self queue:_sample];
    
    AVCaptureMetadataOutput *metaout = [[AVCaptureMetadataOutput alloc] init];
    [metaout setMetadataObjectsDelegate:self queue:_faceQueue];
    self.session = [[AVCaptureSession alloc] init];
    
    
    [self.session beginConfiguration];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }
    
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }
    
    if ([self.session canAddOutput:metaout]) {
        [self.session addOutput:metaout];
    }
    [self.session commitConfiguration];
    
    
    NSString     *key           = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber     *value         = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    
    [output setVideoSettings:videoSettings];
    
    //Here we tell the metadata output to react when it sees a face. Other types such as QRCode can go in this array too; when the stream detects one of the requested types, the second delegate method below fires.
    [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
    
    AVCaptureSession* session = (AVCaptureSession *)self.session;
    //The front camera must be configured like this, otherwise the picture is mirrored
    for (AVCaptureVideoDataOutput* output in session.outputs) {
        for (AVCaptureConnection * av in output.connections) {
            //check whether this connection supports mirroring
            if (av.supportsVideoMirroring) {
                //set portrait orientation and mirror the image
                av.videoOrientation = AVCaptureVideoOrientationPortrait;
                av.videoMirrored = YES;
            }
        }
    }
    [self.session startRunning];
}
#pragma mark - AVCaptureSession Delegate -
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    //Every frame, we check whether self.currentMetadata holds anything, then convert each
    //AVMetadataFaceObject into an AVMetadataObject in this connection's coordinate space; its bounds is the face position, which we store in the array
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }
}
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    //this callback fires when faces are detected
    _currentMetadata = metadataObjects;
}
- (UIImage*)imageFromPixelBuffer:(CMSampleBufferRef)p {
    CVImageBufferRef buffer;
    buffer = CMSampleBufferGetImageBuffer(p);
    
    CVPixelBufferLockBaseAddress(buffer, 0);
    uint8_t *base;
    size_t width, height, bytesPerRow;
    base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    width = CVPixelBufferGetWidth(buffer);
    height = CVPixelBufferGetHeight(buffer);
    bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    
    CGColorSpaceRef colorSpace;
    CGContextRef cgContext;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    
    CGImageRef cgImage;
    UIImage *image;
    cgImage = CGBitmapContextCreateImage(cgContext);
    image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(cgContext);
    
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    
    
    return image;
}
- (UIImageView *)cameraView
{
    if (!_cameraView) {
        _cameraView = [[UIImageView alloc] initWithFrame:self.view.bounds];
        //aspect-fill the preview (no stretching)
        _cameraView.contentMode = UIViewContentModeScaleAspectFill;
    }
    return _cameraView;
}

Things to note

1. Configure an output (setVideoSettings / setMetadataObjectTypes) only after it has been added to the session; see the snippet after this list.

2. Info.plist must declare the camera permission: Privacy - Camera Usage Description (raw key NSCameraUsageDescription).
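
On note 1, the reason order matters for the metadata output: until the output is attached to a session, its availableMetadataObjectTypes list is empty, and requesting a type that is not available raises an exception. A minimal sketch of the safe order:

[self.session addOutput:metaout];
[self.session commitConfiguration];
//only now does the output report which metadata types it can produce
NSAssert([metaout.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace],
         @"face metadata is unavailable until the output joins the session");
[metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];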

Now we have the video stream, but it isn't displayed yet. Next we'll use OpenCV to draw the face boxes onto the frames and show the processed image through the UIImageView.

Drawing the face boxes onto the displayed video stream

(1) Conversion

First we write a method that converts a CMSampleBufferRef into a UIImage (in fact you can also convert a CMSampleBufferRef straight into a cv::Mat; see the sketch after this method).

- (UIImage*)imageFromPixelBuffer:(CMSampleBufferRef)p {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(p);
    
    //lock the pixel buffer so its base address can be read safely
    CVPixelBufferLockBaseAddress(buffer, 0);
    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    
    //wrap the BGRA pixel data in a bitmap context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    
    //snapshot the context into a CGImage and hand it to UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(cgContext);
    
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    
    return image;
}
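
As mentioned above, you can also skip the UIImage and wrap the pixel buffer in a cv::Mat directly. A minimal sketch, assuming the kCVPixelFormatType_32BGRA format we set on the output (matFromSampleBuffer is my own name):

- (cv::Mat)matFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    void *base = CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    //wrap the BGRA bytes without copying, then clone so the Mat outlives the unlock
    cv::Mat wrapped((int)height, (int)width, CV_8UC4, base, bytesPerRow);
    cv::Mat mat = wrapped.clone();
    CVPixelBufferUnlockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    return mat;
}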

(2) Drawing

We keep processing the video stream in the AVCaptureVideoDataOutputSampleBufferDelegate callback; since we can already get the face information there, we simply draw it onto the frame.

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }
   
    //convert the sample buffer to a UIImage
    UIImage *image = [self imageFromPixelBuffer:sampleBuffer];
    cv::Mat mat;
    //convert the UIImage to a cv::Mat
    UIImageToMat(image, mat);
    
    for (NSValue *rect in bounds) {
        CGRect r = [rect CGRectValue];
        //draw the face box
        cv::rectangle(mat, cv::Rect(r.origin.x,r.origin.y,r.size.width,r.size.height), cv::Scalar(255,0,0,1));
    }
    
    //performance is not a concern here; just push the image straight onto the view
    dispatch_async(dispatch_get_main_queue(), ^{
       self.cameraView.image = MatToUIImage(mat);
    });
}

I forgot one thing: the dependency libraries. Don't forget to link them:

[Figure: the dependency libraries linked in the Xcode target settings]


That's all for this article. I hope it helps with your learning.
