Android Camera Call Flow


The Camera call flow in Android can be divided into the following layers:
Package -> Framework -> JNI -> Camera (C++) --(binder)--> CameraService -> Camera HAL -> Camera Driver

Taking the still-capture flow as an example:
1. After all parameters have been set and focusing has completed, Camera.java in the Package layer calls the takePicture function of Camera.java in the Framework, as follows:

public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;
    native_takePicture();
}

This function stores the callbacks passed down from the Package layer and then calls native_takePicture in the JNI layer.

2. native_takePicture in the JNI layer does not do much itself; it simply calls the takePicture function of the C++ Camera class. Before this point, an object in the JNI layer has already been registered as the listener of Camera.cpp.
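
A rough sketch of that JNI bridge is shown below (modeled on android_hardware_Camera.cpp of the same era; the helper get_native_camera and the exact takePicture signature vary between Android versions, so treat this as illustrative only):

// Illustrative sketch of the JNI entry point behind native_takePicture.
static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz)
{
    JNICameraContext* context;
    // Recover the native Camera object stored in the Java Camera instance.
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    // The JNICameraContext was registered earlier with
    // camera->setListener(context), so callbacks will come back to it.
    if (camera->takePicture() != NO_ERROR) {
        jniThrowException(env, "java/lang/RuntimeException", "takePicture failed");
    }
}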

3. The Camera class under frameworks/base/libs/camera (Camera.cpp) is the client that requests service from CameraService, but it also inherits from a BnCameraClient class so that CameraService can call back into it:
class ICameraClient: public IInterface
{
public:
    DECLARE_META_INTERFACE(CameraClient);

    virtual void            notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void            dataCallback(int32_t msgType, const sp<IMemory>& data) = 0;
    virtual void            dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& data) = 0;
};

As the interface definition above shows, this class exists purely for callbacks.

Camera.cpp's takePicture function uses the ICamera object obtained when the camera was opened to continue the call chain.
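
That function, as it appeared in the Gingerbread-era Camera.cpp (later versions add a msgType argument), is essentially:

status_t Camera::takePicture()
{
    LOGV("takePicture");
    // mCamera is the ICamera binder proxy handed back by CameraService at open time.
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    // This call crosses the binder boundary into CameraService::Client::takePicture.
    return c->takePicture();
}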

4. The call then crosses the binder boundary into another process, where CameraService handles it. CameraService has already instantiated a HAL-layer CameraHardware object and passed its own data callback down to it; this work is done by CameraService's inner class Client, which inherits from BnCamera and is the class that actually implements the Camera operation API.
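
Its takePicture implementation, lightly abbreviated from the CameraService.cpp of that era, looks roughly like this:

status_t CameraService::Client::takePicture() {
    LOG1("takePicture (pid %d)", getCallingPid());

    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    // Enable the messages the HAL is allowed to deliver for this capture;
    // mHardware is the CameraHardwareInterface whose callbacks were set
    // to CameraService::Client's static dataCallback/notifyCallback earlier.
    enableMsgType(CAMERA_MSG_SHUTTER |
                  CAMERA_MSG_POSTVIEW_FRAME |
                  CAMERA_MSG_RAW_IMAGE |
                  CAMERA_MSG_COMPRESSED_IMAGE);

    return mHardware->takePicture();
}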

5. Next, naturally, the takePicture function of the HAL-layer CameraHardware is called. Below the HAL the code is no longer standard Android; each vendor has its own implementation, but the idea is the same: the camera follows the V4L2 framework, and an ioctl with the VIDIOC_DQBUF command is issued to dequeue a buffer of valid image data (a minimal V4L2 sketch follows the CameraService code below). The HAL then invokes its data callback to notify CameraService, and CameraService notifies Camera.cpp through binder, as follows:
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, void* user) {
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(dataPtr);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr);
            break;
    }
}

// picture callback - compressed picture ready
void CameraService::Client::handleCompressedPicture(const sp<IMemory>& mem) {
    int restPictures = mHardware->getPictureRestCount();
    if (!restPictures)
    {
        disableMsgType(CAMERA_MSG_COMPRESSED_IMAGE);
    }

    sp<ICameraClient> c = mCameraClient;
    mLock.unlock();
    if (c != 0) {
        c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem);
    }
}
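
For reference, the VIDIOC_DQBUF step from step 5 generally looks like the minimal, vendor-neutral sketch below; it assumes the device has already been opened and the buffers requested, mapped, queued and streaming started (VIDIOC_REQBUFS / mmap / VIDIOC_QBUF / VIDIOC_STREAMON):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Dequeue one filled capture buffer from the driver.
static int dequeue_frame(int fd, struct v4l2_buffer *buf)
{
    memset(buf, 0, sizeof(*buf));
    buf->type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf->memory = V4L2_MEMORY_MMAP;

    // Blocks (or returns EAGAIN in non-blocking mode) until the driver
    // has filled a buffer with image data.
    if (ioctl(fd, VIDIOC_DQBUF, buf) < 0)
        return -1;

    // buf->index now identifies the mmap'ed buffer holding the frame; the HAL
    // hands the data to its data callback and re-queues the buffer with VIDIOC_QBUF.
    return 0;
}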


6. Camera.cpp then notifies its listener:
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr);
    }
}


7. And this listener is our JNI-layer JNICameraContext object:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    // return data based on callback type
    switch(msgType) {
    case CAMERA_MSG_VIDEO_FRAME:
        // should never happen
        break;
    // don't return raw data to Java
    case CAMERA_MSG_RAW_IMAGE:
        LOGV("rawCallback");
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, NULL);
        break;
    default:
        // TODO: Change to LOGV
        LOGV("dataCallback(%d, %p)", msgType, dataPtr.get());
        copyAndPost(env, dataPtr, msgType);
        break;
    }
}
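
copyAndPost, called in the default branch above, copies the native IMemory contents into a Java byte[] and posts it through the same post_event method; abbreviated from the AOSP source of that era, it looks roughly like this:

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        // Locate the shared-memory heap backing the frame and copy it into a Java byte[].
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        uint8_t *heapBase = (uint8_t*)heap->base();
        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
            obj = env->NewByteArray(size);
            if (obj != NULL) {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        }
    }

    // Hand the byte[] to Camera.java's postEventFromNative.
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}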


8. As you can see, the JNI layer ultimately calls postEventFromNative, a static method defined in the Java layer. That method posts a message of the corresponding type to the framework Camera object's EventHandler; when the message is handled, the handler invokes the callback that Camera.java in the Package layer originally passed down, according to the message type. At this point the image data has reached the top layer.