How to Implement Camera Zoom (Pinch In and Out) with AVFoundation

First, import the header:

#import <AVFoundation/AVFoundation.h>

After importing the header, declare the objects every camera implementation needs:

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDeviceInput *videoInput;
// Still photo output
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;

AVCaptureSession coordinates the data flow between the input and output devices.
AVCaptureDeviceInput wraps the input hardware, such as the camera and microphone.
AVCaptureStillImageOutput outputs still images.
AVCaptureVideoPreviewLayer is the layer that previews what the lens captures.
Initialize all of these objects in viewDidLoad:

- (void)initAVCaptureSession {
    self.session = [[AVCaptureSession alloc] init];
    NSError *error;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // The device must be locked before changing this setting and unlocked afterwards, otherwise the app crashes
    [device lockForConfiguration:nil];
    // Set the flash to auto
    [device setFlashMode:AVCaptureFlashModeAuto];
    [device unlockForConfiguration];
    self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:device error:&error];
    if (error) {
        NSLog(@"%@", error);
    }
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    // Output settings: AVVideoCodecJPEG produces JPEG images
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:outputSettings];
    if ([self.session canAddInput:self.videoInput]) {
        [self.session addInput:self.videoInput];
    }
    if ([self.session canAddOutput:self.stillImageOutput]) {
        [self.session addOutput:self.stillImageOutput];
    }
    // Initialize the preview layer
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    self.previewLayer.frame = CGRectMake(0, 0, kMainScreenWidth, kMainScreenHeight - 64);
    self.backView.layer.masksToBounds = YES;
    [self.backView.layer addSublayer:self.previewLayer];
}
Then start and stop the session in viewWillAppear and viewDidDisappear:

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    if (self.session) {
        [self.session startRunning];
    }
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    if (self.session) {
        [self.session stopRunning];
    }
}

Outputting the image requires the AVCaptureConnection class: the session connects to the AVCaptureStillImageOutput through an AVCaptureConnection to produce the picture.
First set the device orientation, which is needed when configuring the image output:

- (AVCaptureVideoOrientation)avOrientationForDeviceOrientation:(UIDeviceOrientation)deviceOrientation {
    AVCaptureVideoOrientation result = (AVCaptureVideoOrientation)deviceOrientation;
    if (deviceOrientation == UIDeviceOrientationLandscapeLeft)
        result = AVCaptureVideoOrientationLandscapeRight;
    else if (deviceOrientation == UIDeviceOrientationLandscapeRight)
        result = AVCaptureVideoOrientationLandscapeLeft;
    return result;
}
- (IBAction)takePhotoButtonClick:(UIBarButtonItem *)sender {
    AVCaptureConnection *stillImageConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    AVCaptureVideoOrientation avcaptureOrientation = [self avOrientationForDeviceOrientation:curDeviceOrientation];
    // Set the video orientation
    [stillImageConnection setVideoOrientation:avcaptureOrientation];
    // Set the zoom factor
    [stillImageConnection setVideoScaleAndCropFactor:1];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                                    imageDataSampleBuffer,
                                                                    kCMAttachmentMode_ShouldPropagate);
        ALAuthorizationStatus author = [ALAssetsLibrary authorizationStatus];
        if (author == ALAuthorizationStatusRestricted || author == ALAuthorizationStatusDenied) {
            // The user has denied photo library access; bail out instead of crashing
            return;
        }
        // Save the image to the photo album (this requires importing <AssetsLibrary/AssetsLibrary.h>)
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        [library writeImageDataToSavedPhotosAlbum:jpegData metadata:(__bridge id)attachments completionBlock:^(NSURL *assetURL, NSError *error) {
        }];
    }];
}

The photo-taking feature is now complete.
[stillImageConnection setVideoScaleAndCropFactor:1] controls the zoom factor; it is fixed at 1 for now and will be updated later when we implement pinch-to-zoom. The photo should be rotated before it is written to the album (I do not rotate it in this code). Before writing to the album you must check whether the user has allowed the app to access the photo library, otherwise the app will crash. Likewise, both when opening the camera and when the shutter button is tapped you should verify that the device supports taking photos and that the user has allowed the app to access the camera.
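The paragraph above mentions the camera safety checks but does not show them. A minimal sketch might look like the following; the helper name and log text are illustrative, and it uses AVCaptureDevice's authorization API (available since iOS 7):

```objectivec
// Hypothetical helper: verify camera availability and permission before starting the session.
- (BOOL)cameraIsUsable {
    // Does this device have a camera at all?
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return NO;
    }
    // Has the user denied or restricted camera access?
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (status == AVAuthorizationStatusDenied || status == AVAuthorizationStatusRestricted) {
        NSLog(@"Camera access was denied; prompt the user to enable it in Settings.");
        return NO;
    }
    return YES;
}
```

Call this before running initAVCaptureSession and in the shutter action, and return early when it fails.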
Next, set up the flash:

- (IBAction)flashButtonClick:(UIBarButtonItem *)sender {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Must lock before modifying
    [device lockForConfiguration:nil];
    // Check that a flash exists, otherwise devices without one will crash
    if ([device hasFlash]) {
        if (device.flashMode == AVCaptureFlashModeOff) {
            device.flashMode = AVCaptureFlashModeOn;
            [sender setTitle:@"flashOn"];
        } else if (device.flashMode == AVCaptureFlashModeOn) {
            device.flashMode = AVCaptureFlashModeAuto;
            [sender setTitle:@"flashAuto"];
        } else if (device.flashMode == AVCaptureFlashModeAuto) {
            device.flashMode = AVCaptureFlashModeOff;
            [sender setTitle:@"flashOff"];
        }
    } else {
        NSLog(@"The device does not support flash");
    }
    [device unlockForConfiguration];
}
Configuring the flash is straightforward: just modify the device's flashMode property. Note that the device must be locked before it is modified and unlocked afterwards, or the app will crash. You also need a safety check that the device actually has a flash; some iOS devices do not, and skipping the check will crash the app.
Next, implement switching between the front and back cameras:

- (IBAction)switchCameraSegmentedControlClick:(UISegmentedControl *)sender {
    AVCaptureDevicePosition desiredPosition;
    if (isUsingFrontFacingCamera) {
        desiredPosition = AVCaptureDevicePositionBack;
    } else {
        desiredPosition = AVCaptureDevicePositionFront;
    }
    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if ([d position] == desiredPosition) {
            [self.previewLayer.session beginConfiguration];
            AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            for (AVCaptureInput *oldInput in self.previewLayer.session.inputs) {
                [[self.previewLayer session] removeInput:oldInput];
            }
            [self.previewLayer.session addInput:input];
            [self.previewLayer.session commitConfiguration];
            break;
        }
    }
    isUsingFrontFacingCamera = !isUsingFrontFacingCamera;
}

isUsingFrontFacingCamera is a BOOL variable (I forgot to mention it earlier). It prevents pointlessly switching to the camera that is already active. The control that triggers this action is a UISegmentedControl.
Finally, implement zoom adjustment.

Declare two properties and conform to the <UIGestureRecognizerDelegate> protocol:

// Scale recorded when the gesture begins
@property (nonatomic, assign) CGFloat beginGestureScale;
// The effective (current) scale
@property (nonatomic, assign) CGFloat effectiveScale;

These two properties record the zoom scale. The camera supports zoom factors from 1.0 to 67.5, so give both properties an initial value of 1.0 when the controller loads. Then add a pinch gesture recognizer to the view; its action method is as follows:
// Pinch gesture, used to adjust the zoom factor
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer {
    BOOL allTouchesAreOnThePreviewLayer = YES;
    NSUInteger numTouches = [recognizer numberOfTouches], i;
    for (i = 0; i < numTouches; ++i) {
        CGPoint location = [recognizer locationOfTouch:i inView:self.backView];
        CGPoint convertedLocation = [self.previewLayer convertPoint:location fromLayer:self.previewLayer.superlayer];
        if (![self.previewLayer containsPoint:convertedLocation]) {
            allTouchesAreOnThePreviewLayer = NO;
            break;
        }
    }
    if (allTouchesAreOnThePreviewLayer) {
        self.effectiveScale = self.beginGestureScale * recognizer.scale;
        if (self.effectiveScale < 1.0) {
            self.effectiveScale = 1.0;
        }
        CGFloat maxScaleAndCropFactor = [[self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo] videoMaxScaleAndCropFactor];
        if (self.effectiveScale > maxScaleAndCropFactor)
            self.effectiveScale = maxScaleAndCropFactor;
        [CATransaction begin];
        [CATransaction setAnimationDuration:.025];
        [self.previewLayer setAffineTransform:CGAffineTransformMakeScale(self.effectiveScale, self.effectiveScale)];
        [CATransaction commit];
    }
}
Then implement the gesture delegate method:

- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer isKindOfClass:[UIPinchGestureRecognizer class]]) {
        self.beginGestureScale = self.effectiveScale;
    }
    return YES;
}
Each time a gesture begins, assign the previous effective scale to the beginning scale; if you skip this, the preview will jump around at the start of every gesture. A camera with the basic features is now essentially complete. One last step: earlier we hard-coded 1.0 in the photo-taking method, and it must be changed, otherwise the zoom you see in the preview will not affect the photo that is actually captured. In the photo method, change

[stillImageConnection setVideoScaleAndCropFactor:1.0];

to

[stillImageConnection setVideoScaleAndCropFactor:self.effectiveScale];
To capture photos on iOS we usually present the system camera via UIImagePickerController, which is a very convenient control. Sometimes, however, UIImagePickerController cannot meet our needs, for example when we want a more complex overlay view; then we have to build a camera control ourselves.

This requires components from AVFoundation.framework, so first import the <AVFoundation/AVFoundation.h> header. As for the other components, the official documentation says:

● An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
● An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
● An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
● An instance of AVCaptureSession to coordinate the data flow from the input to the output

Here I only build a camera with photo capture; recording video and audio is not covered.

In summary, we need the following objects:
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDeviceInput *videoInput;
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@property (nonatomic, strong) UIBarButtonItem *toggleButton;
@property (nonatomic, strong) UIButton *shutterButton;
@property (nonatomic, strong) UIView *cameraShowView;
I usually create these objects when init runs, and then load the preview layer in viewWillAppear. The code makes this clear:

- (void)initialSession {
    self.session = [[AVCaptureSession alloc] init];
    self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontCamera] error:nil];
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:outputSettings];
    if ([self.session canAddInput:self.videoInput]) {
        [self.session addInput:self.videoInput];
    }
    if ([self.session canAddOutput:self.stillImageOutput]) {
        [self.session addOutput:self.stillImageOutput];
    }
}
These are the methods for getting the front and back camera devices:

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}

- (AVCaptureDevice *)frontCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionFront];
}

- (AVCaptureDevice *)backCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionBack];
}
Next, in viewWillAppear, run the method that loads the preview layer:

- (void)setUpCameraLayer {
    if (_cameraAvaible == NO) return;
    if (self.previewLayer == nil) {
        self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
        UIView *view = self.cameraShowView;
        CALayer *viewLayer = [view layer];
        [viewLayer setMasksToBounds:YES];
        CGRect bounds = [view bounds];
        [self.previewLayer setFrame:bounds];
        [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
        [viewLayer insertSublayer:self.previewLayer below:[[viewLayer sublayers] objectAtIndex:0]];
    }
}
Note the following methods, which start and stop the session in viewDidAppear and viewDidDisappear:

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    if (self.session) {
        [self.session startRunning];
    }
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    if (self.session) {
        [self.session stopRunning];
    }
}
Next we implement the button that toggles between the front and back cameras (I will not go over creating the button itself):

- (void)toggleCamera {
    NSUInteger cameraCount = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
    if (cameraCount > 1) {
        NSError *error;
        AVCaptureDeviceInput *newVideoInput;
        AVCaptureDevicePosition position = [[_videoInput device] position];
        if (position == AVCaptureDevicePositionBack)
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontCamera] error:&error];
        else if (position == AVCaptureDevicePositionFront)
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backCamera] error:&error];
        else
            return;
        if (newVideoInput != nil) {
            [self.session beginConfiguration];
            [self.session removeInput:self.videoInput];
            if ([self.session canAddInput:newVideoInput]) {
                [self.session addInput:newVideoInput];
                [self setVideoInput:newVideoInput];
            } else {
                [self.session addInput:self.videoInput];
            }
            [self.session commitConfiguration];
        } else if (error) {
            NSLog(@"toggle camera failed, error = %@", error);
        }
    }
}

This is the action for the camera-switching button.
- (void)shutterCamera {
    AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    if (!videoConnection) {
        NSLog(@"take photo failed!");
        return;
    }
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            return;
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
        NSLog(@"image size = %@", NSStringFromCGSize(image.size));
    }];
}

This is the action for the shutter button.
This completes the basic features of a custom camera. If you want to add other, more complex features, the following article may help:
/blog/Blog.pzs/archive//10882.html
Building a Custom Camera with AVFoundation
If all you need is a simple photo-taking camera, Apple provides UIImagePickerController. But its interface is fixed; when you want to customize the capture UI or use more advanced features, you need the AVFoundation framework.
AVFoundation classes

The AVFoundation framework implements image capture with the following classes, which give access to the raw data coming from the camera device and control over its components.

AVCaptureDevice is the interface to the camera hardware. It controls hardware features such as lens position, exposure, and flash.

AVCaptureDeviceInput provides the data from the device.

AVCaptureOutput is an abstract class describing the result of a capture session. Three concrete subclasses are relevant to still image capture:
- AVCaptureStillImageOutput captures still images
- AVCaptureMetadataOutput enables face and QR code detection
- AVCaptureVideoDataOutput (the original text writes AVCaptureVideoOutput, but this is the class I used) provides raw frames for the live preview

AVCaptureSession manages the data flow between inputs and outputs, and generates runtime errors when problems occur.

AVCaptureVideoPreviewLayer is a CALayer subclass that automatically displays the live image produced by the camera. It also has utility methods for converting layer coordinates to device coordinates. It looks like an output, but it is not; and it owns the session (outputs are owned by the session).

The above is quoted from an external reference.
As described above, AVCaptureSession is the class that manages inputs and outputs and plays the coordinator role, so create it first:

_session = [[AVCaptureSession alloc] init];
_session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureSessionPresetPhoto automatically selects the configuration best suited to taking photos: it allows the highest ISO and exposure times, phase-detection autofocus, and output of full-resolution, JPEG-compressed still images.
AVCaptureDevice is the interface for controlling the hardware. For taking photos we need a camera device, so we iterate over all devices to find the camera at the desired position:

// Returns the front or back camera device
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // All default devices related to video capture
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // Return the device matching the requested position
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}
AVCaptureDeviceInput: once the camera device is found, this is what provides the data from the hardware.

// Input for the back camera
- (AVCaptureDeviceInput *)backCameraInput {
    if (_backCameraInput == nil) {
        NSError *error;
        _backCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backCamera] error:&error];
        if (error) {
            NSLog(@"Failed to get the back camera input");
        }
    }
    return _backCameraInput;
}

After obtaining the AVCaptureDeviceInput, add it to the AVCaptureSession:

// Add the back camera input
if ([_session canAddInput:self.backCameraInput]) {
    [_session addInput:self.backCameraInput];
    self.currentCameraInput = self.backCameraInput;
}
AVCaptureOutput is the class that retrieves the output data. For taking photos we use AVCaptureStillImageOutput, which captures still images.

// Still image output
- (AVCaptureStillImageOutput *)stillImageOutput {
    if (_stillImageOutput == nil) {
        _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
        _stillImageOutput.outputSettings = outputSettings;
    }
    return _stillImageOutput;
}

// Add the still image output (for taking photos)
if ([_session canAddOutput:self.stillImageOutput]) {
    [_session addOutput:self.stillImageOutput];
}
AVCaptureMetadataOutput is used for face recognition and for QR/barcode recognition. Unlike AVCaptureStillImageOutput, it can only be configured after it has been added to the session, otherwise an error is raised.

// Add the metadata output (for recognition)
if ([_session canAddOutput:self.metaDataOutput]) {
    [_session addOutput:self.metaDataOutput];
}

// Face recognition
[_metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];

// QR code and barcode recognition (note: this call replaces the types set above, so enable whichever set you need)
[_metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code, AVMetadataObjectTypeCode128Code, AVMetadataObjectTypeCode39Mod43Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode93Code]];
[_metaDataOutput setMetadataObjectsDelegate:self queue:self.sessionQueue];
AVCaptureVideoDataOutput is used to record video or to capture individual image frames from the output stream. For example, ID card and phone number recognition needs images pulled continuously from the data stream, which is where this class comes in.

// Video data output
- (AVCaptureVideoDataOutput *)videoDataOutput {
    if (_videoDataOutput == nil) {
        _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [_videoDataOutput setSampleBufferDelegate:self queue:self.sessionQueue];
        NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], (id)kCVPixelBufferPixelFormatTypeKey, nil];
        _videoDataOutput.videoSettings = setcapSettings;
    }
    return _videoDataOutput;
}
Starting the camera: operations on the session and the camera device complete via blocks, so it is recommended to dispatch them all onto a background serial queue:

dispatch_async(self.sessionQueue, ^{
    [self.session startRunning];
});
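The sessionQueue used above is never created in the original snippet. Assuming it is a plain serial dispatch queue owned by the controller, it could be set up as follows; the property matches the name used in the article, while the queue label is illustrative:

```objectivec
// Hypothetical declaration in the class extension:
@property (nonatomic, strong) dispatch_queue_t sessionQueue;

// Hypothetical setup, e.g. in init or viewDidLoad:
self.sessionQueue = dispatch_queue_create("com.example.camera.session", DISPATCH_QUEUE_SERIAL);
```

A serial queue guarantees that session configuration, start, and stop calls run in order without blocking the main thread.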
Taking a photo: use AVCaptureStillImageOutput. If the flash is enabled it fires and is turned off after the shot, and the shutter sound plays. Note that the photo obtained from the capture method is rotated 90 degrees, and its size is not the size of the preview window, so it must be cropped.
#pragma mark - Taking a photo
- (void)takePhotoWithImageBlock:(void (^)(UIImage *, UIImage *, UIImage *))block {
    __weak typeof(self) weak = self;
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:[self imageConnection] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (!imageDataSampleBuffer) {
            return;
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *originImage = [[UIImage alloc] initWithData:imageData];
        CGFloat squareLength = weak.previewLayer.bounds.size.width;
        CGFloat previewLayerH = weak.previewLayer.bounds.size.height;
        CGSize size = CGSizeMake(squareLength * 2, previewLayerH * 2);
        UIImage *scaledImage = [originImage resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:size interpolationQuality:kCGInterpolationHigh];
        CGRect cropFrame = CGRectMake((scaledImage.size.width - size.width) / 2, (scaledImage.size.height - size.height) / 2, size.width, size.height);
        UIImage *croppedImage = [scaledImage croppedImage:cropFrame];
        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        if (orientation != UIDeviceOrientationPortrait) {
            CGFloat degree = 0;
            if (orientation == UIDeviceOrientationPortraitUpsideDown) {
                degree = 180; // M_PI
            } else if (orientation == UIDeviceOrientationLandscapeLeft) {
                degree = -90; // -M_PI_2
            } else if (orientation == UIDeviceOrientationLandscapeRight) {
                degree = 90; // M_PI_2
            }
            croppedImage = [croppedImage rotatedByDegrees:degree];
            scaledImage = [scaledImage rotatedByDegrees:degree];
            originImage = [originImage rotatedByDegrees:degree];
        }
        if (block) {
            block(originImage, scaledImage, croppedImage);
        }
    }];
}
Recognition: use the AVCaptureMetadataOutputObjectsDelegate method to filter the metadata objects of interest:

#pragma mark - AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    if (self.faceRecognition) {
        for (AVMetadataObject *metadataObject in metadataObjects) {
            if ([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {
                AVMetadataObject *transform = [self.previewLayer transformedMetadataObjectForMetadataObject:metadataObject];
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self showFaceImageWithFrame:transform.bounds];
                });
            }
        }
    }
}
Capturing a single frame from the output stream: use AVCaptureVideoDataOutputSampleBufferDelegate to receive the data stream and grab a frame. As with taking a photo, the image obtained this way also has the wrong orientation and size and needs the same treatment.

#pragma mark - Capturing a single frame from the output stream
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (self.isStartGetImage) {
        UIImage *originImage = [self imageFromSampleBuffer:sampleBuffer];
        CGFloat squareLength = self.previewLayer.bounds.size.width;
        CGFloat previewLayerH = self.previewLayer.bounds.size.height;
        CGSize size = CGSizeMake(squareLength * 2, previewLayerH * 2);
        UIImage *scaledImage = [originImage resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:size interpolationQuality:kCGInterpolationHigh];
        CGRect cropFrame = CGRectMake((scaledImage.size.width - size.width) / 2, (scaledImage.size.height - size.height) / 2, size.width, size.height);
        UIImage *croppedImage = [scaledImage croppedImage:cropFrame];
        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        if (orientation != UIDeviceOrientationPortrait) {
            CGFloat degree = 0;
            if (orientation == UIDeviceOrientationPortraitUpsideDown) {
                degree = 180; // M_PI
            } else if (orientation == UIDeviceOrientationLandscapeLeft) {
                degree = -90; // -M_PI_2
            } else if (orientation == UIDeviceOrientationLandscapeRight) {
                degree = 90; // M_PI_2
            }
            croppedImage = [croppedImage rotatedByDegrees:degree];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            if (self.getimageBlock) {
                self.getimageBlock(croppedImage);
            }
            self.getimageBlock = nil;
            self.isStartGetImage = NO;
        });
    }
}
Focusing: set the focus mode through AVCaptureDevice's focusMode property. AVCaptureFocusMode is an enum describing the available modes:
- Locked: the lens stays in a fixed position.
- AutoFocus: the camera focuses once, then remains in Locked mode.
- ContinuousAutoFocus: when the scene changes, the camera automatically refocuses on the center of the frame.

A different region can be targeted by changing the "point of interest". This point is a CGPoint whose values run from {0,0} at the top left to {1,1} at the bottom right, with {0.5,0.5} being the center of the frame. Usually a tap gesture recognizer on the video preview is used to change this point. To convert a coordinate in the view into the device's normalized coordinate, use [self.previewLayer captureDevicePointOfInterestForPoint:pointInLayer]. (When scanning QR codes, the same point can be set to bias recognition toward a region.)
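The description above can be sketched as a tap-to-focus handler. This is a minimal sketch, assuming self.device holds the current AVCaptureDevice and the tap recognizer is attached to the preview view; every name outside the AVFoundation API is illustrative:

```objectivec
// Hypothetical tap handler: focus the camera on the tapped point of the preview.
- (void)handleTapToFocus:(UITapGestureRecognizer *)recognizer {
    CGPoint viewPoint = [recognizer locationInView:recognizer.view];
    // Convert from view coordinates to the normalized {0,0}-{1,1} device space
    CGPoint devicePoint = [self.previewLayer captureDevicePointOfInterestForPoint:viewPoint];
    NSError *error = nil;
    // Lock before changing configuration, unlock afterwards
    if ([self.device lockForConfiguration:&error]) {
        if ([self.device isFocusPointOfInterestSupported]) {
            self.device.focusPointOfInterest = devicePoint;
        }
        if ([self.device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            self.device.focusMode = AVCaptureFocusModeAutoFocus;
        }
        [self.device unlockForConfiguration];
    }
}
```

Checking isFocusPointOfInterestSupported and isFocusModeSupported: before assigning mirrors the article's earlier advice: configuration changes on unsupported hardware crash the app.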
All in all, a custom camera can do quite a lot; it can also adjust exposure and white balance, which this project has not needed yet, so I will cover them later. Things to watch out for:
1. The photo obtained from the capture method is rotated 90 degrees and is not the size of the preview window; it must be cropped.
2. All camera operations, including switching cameras, are best performed on a background queue.
3. Lock the camera before changing its configuration, and unlock it when done.
