
Android 4.2.2: CameraService Startup and Client-Side Camera Initialization Notes
The previous ten posts covered the SurfaceFlinger internals of Android 4.2.2. The reason for spending so much time there is that, while reading the camera architecture, I kept running into native ANativeWindow and Surface, which are the most common client-side constructs of SurfaceFlinger. With SurfaceFlinger out of the way, this post turns to the camera architecture, starting with how CameraService is brought up in Android 4.2.2 and how it is structured.

1. Where CameraService starts

mediaserver, the process that hosts the multimedia-related services, starts our CameraService:

int main(int argc, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();        // bring up the multimedia services: audio, camera, etc.
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
An earlier post described the usual ways a Service is started; this is the classic BinderService bring-up:

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()),
                              new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated);
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }

    static void instantiate() { publish(); }   // two ways of initializing a Binder service

    static status_t shutdown() {
        return NO_ERROR;
    }
};

The CameraService constructor:

CameraService::CameraService()
    : mSoundRef(0), mModule(0)
{
    ALOGI("CameraService started (pid=%d)", getpid());
    gCameraService = this;
}
void CameraService::onFirstRef()    // runs once the first sp<CameraService> is created
{
    BnCameraService::onFirstRef();

    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                (const hw_module_t **)&mModule) < 0) {
        ALOGE("Could not load camera HAL module");
        mNumberOfCameras = 0;
    }
    else {
        mNumberOfCameras = mModule->get_number_of_cameras();  // ask the HAL how many cameras there are
        if (mNumberOfCameras > MAX_CAMERAS) {
            ALOGE("Number of cameras(%d) > MAX_CAMERAS(%d).",
                    mNumberOfCameras, MAX_CAMERAS);
            mNumberOfCameras = MAX_CAMERAS;
        }
        for (int i = 0; i < mNumberOfCameras; i++) {
            setCameraFree(i);
        }
    }
}

After the CameraService constructor runs, onFirstRef() executes automatically. This is where the camera HAL layer is touched: hw_get_module() loads the HAL module and stores its handle in the mModule member, and the number of hardware cameras is stored in mNumberOfCameras. Compared with a heavyweight service like SurfaceFlinger, CameraService is quite simple.

Step 3: creating the camera client. A new Camera() in the Java layer travels from camera.java down to the native JNI file android_hardware_Camera.cpp. Camera.open() in camera.java:
public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

/**
 * Creates a new Camera object to access the first back-facing camera on the
 * device. If the device does not have a back-facing camera, this returns null.
 * @see #open(int)
 */
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}
The Camera constructor:
Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    native_setup(new WeakReference<Camera>(this), cameraId);
}
In the JNI layer, android_hardware_Camera.cpp:

// connect to camera service
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);  // calls Camera::connect()

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }
}

This lands in the connect() function of the application-side Camera class, whose goal is to ask CameraService to create a camera client.

Step 4: the client-side connect() function.

sp<Camera> Camera::connect(int cameraId)
{
    ALOGV("connect");
    sp<Camera> c = new Camera();                        // a BnCameraClient
    const sp<ICameraService>& cs = getCameraService();  // obtain a BpCameraService proxy
    if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);  // via the Binder driver this ends up in CameraService::connect(); mCamera points to a BpCamera
    }
    if (c->mCamera != 0) {
        c->mCamera->asBinder()->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        c.clear();
    }
    return c;
}
This creates an application-side Camera; its inheritance is class Camera : public BnCameraClient, public IBinder::DeathRecipient. cs = getCameraService() obtains a local proxy for CameraService from the ServiceManager, so calling cs->connect() ultimately invokes the connect() function on the CameraService side. On the proxy side (BpCameraService):
virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    data.writeInt32(cameraId);                   // the data packet sent to the server side
    remote()->transact(BnCameraService::CONNECT, data, &reply);  // actually BpBinder::transact()
    return interface_cast<ICamera>(reply.readStrongBinder());    // wraps the returned handle in a new BpCamera, the proxy of the server-side BnCamera
}

Note that a local Camera object, the new Camera() above, is passed in here; this anonymous Binder object travels through the Binder driver to CameraService and is mainly used later for CameraService's callbacks into the application-side Camera.

Step 5: the CameraService-side connect() function.

sp<ICamera> CameraService::connect(
        const sp<ICameraClient>& cameraClient, int cameraId) {
    int callingPid = getCallingPid();

    LOG1("CameraService::connect E (pid %d, id %d)", callingPid, cameraId);

    if (!mModule) {
        ALOGE("Camera HAL module not loaded");
        return NULL;
    }

    sp<Client> client;
    if (cameraId < 0 || cameraId >= mNumberOfCameras) {
        ALOGE("CameraService::connect X (pid %d) rejected (invalid cameraId %d).",
            callingPid, cameraId);
        return NULL;
    }

    char value[PROPERTY_VALUE_MAX];
    property_get("sys.secpolicy.camera.disabled", value, "0");
    if (strcmp(value, "1") == 0) {
        // Camera is disabled by DevicePolicyManager.
        ALOGI("Camera is disabled. connect X (pid %d) rejected", callingPid);
        return NULL;
    }

    Mutex::Autolock lock(mServiceLock);
    if (mClient[cameraId] != 0) {
        client = mClient[cameraId].promote();
        if (client != 0) {
            if (cameraClient->asBinder() == client->getCameraClient()->asBinder()) {
                LOG1("CameraService::connect X (pid %d) (the same client)",
                    callingPid);
                return client;
            } else {
                ALOGW("CameraService::connect X (pid %d) rejected (existing client).",
                    callingPid);
                return NULL;
            }
        }
        mClient[cameraId].clear();
    }

    if (mBusy[cameraId]) {
        ALOGW("CameraService::connect X (pid %d) rejected"
                " (camera %d is still busy).", callingPid, cameraId);
        return NULL;
    }

    struct camera_info info;
    if (mModule->get_camera_info(cameraId, &info) != OK) {  // query camera info from the HAL
        ALOGE("Invalid camera id %d", cameraId);
        return NULL;
    }

    int deviceVersion;
    if (mModule->common.module_api_version == CAMERA_MODULE_API_VERSION_2_0) {
        deviceVersion = info.device_version;
    } else {
        deviceVersion = CAMERA_DEVICE_API_VERSION_1_0;
    }

    switch(deviceVersion) {
      case CAMERA_DEVICE_API_VERSION_1_0:
        client = new CameraClient(this, cameraClient, cameraId,
                info.facing, callingPid, getpid());  // Client is CameraClient's base class; create the CameraService-side camera client
        break;
      case CAMERA_DEVICE_API_VERSION_2_0:
        client = new Camera2Client(this, cameraClient, cameraId,
                info.facing, callingPid, getpid());
        break;
      default:
        ALOGE("Unknown camera device HAL version: %d", deviceVersion);
        return NULL;
    }

    if (client->initialize(mModule) != OK) {  // client init; this dispatches to CameraClient::initialize()
        return NULL;
    }

    cameraClient->asBinder()->linkToDeath(this);

    mClient[cameraId] = client;  // cache the client created for this connection
    LOG1("CameraService::connect X (id %d, this pid is %d)", cameraId, getpid());
    return client;               // return the CameraClient
}
This function breaks down into the following steps. a. sp<Client> client, a CameraService-internal client class: first check how many camera clients the service currently tracks via mClient[cameraId] != 0; on first startup this count is 0. b. Query the low-level camera module with get_camera_info and check whether the module API version is CAMERA_MODULE_API_VERSION_2_0 or CAMERA_MODULE_API_VERSION_1_0. My platform is 1.0, so this branch runs:
switch(deviceVersion) {
  case CAMERA_DEVICE_API_VERSION_1_0:
    client = new CameraClient(this, cameraClient, cameraId,
            info.facing, callingPid, getpid());  // Client is CameraClient's base class; create the CameraService-side camera client
c. The CameraClient created here derives from CameraService::Client, an inner class of CameraService, and Client in turn derives from BnCamera. d. client->initialize(mModule), which is hardware-related:

status_t CameraClient::initialize(camera_module_t *module) {  // each CameraClient creates a CameraHardwareInterface hardware interface
    int callingPid = getCallingPid();
    LOG1("CameraClient::initialize E (pid %d, id %d)", callingPid, mCameraId);

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);

    mHardware = new CameraHardwareInterface(camera_device_name);  // create a camera hardware interface; camera_device_name is the device name
    status_t res = mHardware->initialize(&module->common);        // initialize the underlying hardware
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return NO_INIT;
    }

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)mCameraId);   // register the CameraService-side callbacks with the HAL

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE |
                  CAMERA_MSG_CONTINUOUSSNAP | CAMERA_MSG_SNAP | CAMERA_MSG_SNAP_THUMB |
                  CAMERA_MSG_SNAP_FD); // enable the continuoussnap and singlesnap message by fuqiang

    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}

Here a hardware interface class appears: CameraHardwareInterface wraps the low-level camera operations, hiding the hardware peculiarities of different platforms; it is essentially where the HAL-facing operations live.

Step 6: CameraHardwareInterface's initialize() function.
status_t initialize(hw_module_t *module)
{
    ALOGI("Opening camera %s", mName.string());
    int rc = module->methods->open(module, mName.string(),
            (hw_device_t **)&mDevice);           // open the camera hardware device here
    if (rc != OK) {
        ALOGE("Could not open camera %s: %d", mName.string(), rc);
        return rc;
    }
    initHalPreviewWindow();   // set up preview_stream_ops, initializing the HAL's preview window
    return rc;
}
The module here is the low-level camera module, and this is where the open() finally happens. I will not go into the HAL operations here; a later post will cover the camera HAL implementation separately.

Step 7: setCallbacks() registers the callback functions with the HAL.
void setCallbacks(notify_callback notify_cb,
                  data_callback data_cb,
                  data_callback_timestamp data_cb_timestamp,
                  void* user)
{
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mCbUser = user;

    ALOGV("%s(%s)", __FUNCTION__, mName.string());

    if (mDevice->ops->set_callbacks) {
        mDevice->ops->set_callbacks(mDevice,
                               __notify_cb,
                               __data_cb,
                               __data_cb_timestamp,
                               __get_memory,
                               this);    // what gets passed down are the static __notify_cb and friends
    }
}   // register the callbacks with the hardware device
These are, respectively, the message (notify) callback, the data callback, the timestamped data callback, and the memory-allocation callback.

Step 8: mClient[cameraId] = client caches the newly built CameraClient inside CameraService, and connect() returns. What ultimately travels back to the client through the Binder driver is a proxy for the server-side CameraClient, an anonymous Binder service wrapped locally as a BpCamera. c->mCamera = cs->connect(c, cameraId) stores this proxy in the application-side Camera's mCamera member, and all subsequent camera operations go through mCamera to interact further with CameraService.

A call-architecture diagram of the camera stack: [figure not preserved]