A development record from a developer who studies hard every day.


Android Camera2 API: Usage and Examples

One of the projects my company maintains under a support contract includes a custom camera feature.

It was implemented with the old Camera API, which has been deprecated since API 21.

Perhaps because of that, it had unexplained bugs, such as degraded image quality when sending images to the server.

In any case, I started replacing the Camera API with the Camera2 API.

Based on the Google sample I studied during the migration, I wrote a simple custom camera example.

The official documentation is here:

https://developer.android.com/training/camera2


1. fragment_camera.xml

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">

    <data>

    </data>

    <FrameLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".fragment.CameraFragment">

        <com.antwhale.sample.camera2.view.AutoFitSurfaceView
            android:id="@+id/viewFinder"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

        <ImageButton
            android:id="@+id/captureButton"
            android:layout_width="96dp"
            android:layout_height="96dp"
            android:scaleType="fitCenter"
            android:background="@drawable/ic_shutter"
            android:layout_gravity="bottom|center"
            android:layout_margin="16dp" />

    </FrameLayout>
</layout>

Inside the <FrameLayout> I placed an AutoFitSurfaceView and an ImageButton.

AutoFitSurfaceView is a custom view that extends SurfaceView.

Its job is to center-crop the camera preview.

Clicking the ImageButton takes a picture.


2. Defining AutoFitSurfaceView

class AutoFitSurfaceView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyle: Int = 0
) : SurfaceView(context, attrs, defStyle) {

    private var aspectRatio = 0f

    //1
    fun setAspectRatio(width: Int, height: Int) {
        require(width > 0 && height > 0) { "Size cannot be negative" }
        aspectRatio = width.toFloat() / height.toFloat()
        holder.setFixedSize(width, height)
        requestLayout()
    }

    //2
    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
        val width = MeasureSpec.getSize(widthMeasureSpec)
        val height = MeasureSpec.getSize(heightMeasureSpec)
        if (aspectRatio == 0f) {
            setMeasuredDimension(width, height)
        } else {
            // Performs center-crop transformation of the camera frames
            val newWidth: Int
            val newHeight: Int
            val actualRatio = if (width > height) aspectRatio else 1f / aspectRatio
            if (width < height * actualRatio) {
                newHeight = height
                newWidth = (height * actualRatio).roundToInt()
            } else {
                newWidth = width
                newHeight = (width / actualRatio).roundToInt()
            }

            Log.d(TAG, "Measured dimensions set: $newWidth x $newHeight")
            setMeasuredDimension(newWidth, newHeight)
        }
    }

    companion object {
        private val TAG = AutoFitSurfaceView::class.java.simpleName
    }
}

1 => Computes the width-to-height ratio and applies it to the SurfaceView.

2 => onMeasure() is the method that computes and applies the view's size when it is laid out.

It center-crops according to the SurfaceView's aspect ratio, so the preview keeps its proportions.


3. Declaring Variables

class CameraFragment : Fragment() {
    private val TAG = CameraFragment::class.java.simpleName
    private lateinit var binding: FragmentCameraBinding

    private lateinit var camera: CameraDevice            // Camera device
    private val IMAGE_BUFFER_SIZE = 3                    // Image buffer size
    private val IMAGE_CAPTURE_TIMEOUT_MILLIS = 5000L     // Capture timeout

    private val cameraThread = HandlerThread("CameraThread").apply { start() }    // Thread used for camera operations
    private val cameraHandler = Handler(cameraThread.looper)

    private val imageReaderThread = HandlerThread("imageReaderThread").apply { start() }    // Thread that handles captured images
    private val imageReaderHandler = Handler(imageReaderThread.looper)
}

Declare the camera-related member variables.
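
The later snippets also reference cameraManager, cameraId, characteristics, imageReader, session, and relativeOrientation, which are omitted above. A minimal sketch of how they might be declared (picking the first back-facing camera is an assumption; see the GitHub repo linked at the end for the actual code):

private val cameraManager: CameraManager by lazy {
    requireContext().getSystemService(Context.CAMERA_SERVICE) as CameraManager
}

// Assumption: use the first back-facing camera on the device
private val cameraId: String by lazy {
    cameraManager.cameraIdList.first { id ->
        cameraManager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK
    }
}

private val characteristics: CameraCharacteristics by lazy {
    cameraManager.getCameraCharacteristics(cameraId)
}

private lateinit var imageReader: ImageReader                    // Receives still-capture frames
private lateinit var session: CameraCaptureSession               // Active capture session
private lateinit var relativeOrientation: OrientationLiveData    // Sensor-relative rotation (section 4)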


4. Setting Up the Camera

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    Log.d(TAG, "onViewCreated")

    //1
    binding.viewFinder.holder.addCallback(object : SurfaceHolder.Callback {
        override fun surfaceCreated(p0: SurfaceHolder) {    //2
            val previewSize = getPreviewOutputSize(
                binding.viewFinder.display,
                characteristics,
                SurfaceHolder::class.java
            )

            Log.d(TAG, "View finder size: ${binding.viewFinder.width} x ${binding.viewFinder.height}")
            Log.d(TAG, "Selected preview size: $previewSize")
            binding.viewFinder.setAspectRatio(previewSize.width, previewSize.height)

            //3
            view.post { initializeCamera() }
        }

        override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
            Log.d(TAG, "surfaceChanged")
        }

        override fun surfaceDestroyed(p0: SurfaceHolder) {
            Log.d(TAG, "surfaceDestroyed")
        }
    })

    //4
    relativeOrientation = OrientationLiveData(requireContext(), characteristics).apply {
        observe(viewLifecycleOwner) { orientation ->
            Log.d(TAG, "Orientation changed: $orientation")
        }
    }

    binding.captureButton.setOnClickListener {
        // Camera capture code (section 6)
    }
}

fun <T> getPreviewOutputSize(
    display: Display,
    characteristics: CameraCharacteristics,
    targetClass: Class<T>,
    format: Int? = null
): Size {

    // Find which is smaller: screen or 1080p
    val screenSize = getDisplaySmartSize(display)
    val hdScreen = screenSize.long >= SIZE_1080P.long || screenSize.short >= SIZE_1080P.short
    val maxSize = if (hdScreen) SIZE_1080P else screenSize

    // If image format is provided, use it to determine supported sizes; else use target class
    val config = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
    if (format == null)
        assert(StreamConfigurationMap.isOutputSupportedFor(targetClass))
    else
        assert(config.isOutputSupportedFor(format))
    val allSizes = if (format == null)
        config.getOutputSizes(targetClass) else config.getOutputSizes(format)

    // Get available sizes and sort them by area from largest to smallest
    val validSizes = allSizes
        .sortedWith(compareBy { it.height * it.width })
        .map { SmartSize(it.width, it.height) }.reversed()

    // Then, get the largest output size that is smaller or equal than our max size
    return validSizes.first { it.long <= maxSize.long && it.short <= maxSize.short }.size
}
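
getPreviewOutputSize() relies on a few small helpers (SmartSize, SIZE_1080P, getDisplaySmartSize) that are not shown in this post. A minimal sketch along the lines of the Google sample (imports from kotlin.math, android.graphics, and android.util are assumed):

// Helper that keeps track of a Size's long and short edges
class SmartSize(width: Int, height: Int) {
    var size = Size(width, height)
    var long = max(size.width, size.height)
    var short = min(size.width, size.height)
    override fun toString() = "SmartSize(${long}x${short})"
}

// Standard 1080p resolution, used as the upper bound for the preview
val SIZE_1080P: SmartSize = SmartSize(1920, 1080)

// The device screen's physical size as a SmartSize
fun getDisplaySmartSize(display: Display): SmartSize {
    val outPoint = Point()
    display.getRealSize(outPoint)
    return SmartSize(outPoint.x, outPoint.y)
}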

1 => Registers a callback for the SurfaceView's state changes.

2 => Once the surface is created, computes the largest preview size the device supports and applies it to the SurfaceView.

Because AutoFitSurfaceView is used here, the preview is center-cropped while keeping the preview size's aspect ratio, so image quality is preserved.

3 => Once the SurfaceView's dimensions are fixed, camera initialization begins.

4 => Observes a LiveData that holds the picture's orientation state.

class OrientationLiveData(
    context: Context,
    characteristics: CameraCharacteristics
) : LiveData<Int>() {

    private val listener = object : OrientationEventListener(context.applicationContext) {
        override fun onOrientationChanged(orientation: Int) {
            val rotation = when {
                orientation <= 45 -> Surface.ROTATION_0
                orientation <= 135 -> Surface.ROTATION_90
                orientation <= 225 -> Surface.ROTATION_180
                orientation <= 315 -> Surface.ROTATION_270
                else -> Surface.ROTATION_0
            }

            val relative = computeRelativeRotation(characteristics, rotation)
            if (relative != value) postValue(relative)
        }
    }

    override fun onActive() {
        super.onActive()
        listener.enable()
    }

    override fun onInactive() {
        super.onInactive()
        listener.disable()
    }

    companion object {
        @JvmStatic
        private fun computeRelativeRotation(
            characteristics: CameraCharacteristics,
            surfaceRotation: Int
        ): Int {
            val sensorOrientationDegrees =
                characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!

            val deviceOrientationDegrees = when (surfaceRotation) {
                Surface.ROTATION_0 -> 0
                Surface.ROTATION_90 -> 90
                Surface.ROTATION_180 -> 180
                Surface.ROTATION_270 -> 270
                else -> 0
            }

            val sign = if (characteristics.get(CameraCharacteristics.LENS_FACING) ==
                CameraCharacteristics.LENS_FACING_FRONT) 1 else -1

            return (sensorOrientationDegrees - (deviceOrientationDegrees * sign) + 360) % 360
        }
    }
}


5. Initializing the Camera

private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    Log.d(TAG, "initializeCamera")
    camera = openCamera()    //1

    val size = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
        .getOutputSizes(ImageFormat.JPEG)
        .maxByOrNull { it.height * it.width }!!    //2

    imageReader = ImageReader.newInstance(size.width, size.height, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
    val targets = listOf(binding.viewFinder.holder.surface, imageReader.surface)    //3

    session = createCaptureSession(camera, targets, cameraHandler)    //4

    val captureRequest = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        .apply { addTarget(binding.viewFinder.holder.surface) }    //5

    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)    //6
}

1 => Opens the camera device.

2 => Finds the largest image size the camera supports.

3 => Configures the ImageReader, which reads captured images, with that image size and the buffer size.

The surfaces that will receive the preview output are collected into a list.

To show preview frames to the user directly, use the SurfaceView's surface; to process each incoming frame yourself, use the ImageReader's surface (see the sketch after this list).

4 => Creates the camera session.

5 => Builds a CaptureRequest for the preview.

6 => Submits the preview request repeatedly.
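
As an illustration of the second option, per-frame processing could look roughly like this. This is only a sketch, not part of this project; the frame size, the YUV_420_888 format, and the analysis step are assumptions:

// Sketch: an extra ImageReader whose surface is added to the session targets,
// so that every incoming frame can be inspected before (or instead of) display.
val frameReader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 2)
frameReader.setOnImageAvailableListener({ reader ->
    val frame = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
    // ... analyze frame.planes here (e.g. luminance, barcode scanning) ...
    frame.close()    // Always close frames, or the reader's buffer queue fills up
}, imageReaderHandler)

// The session would then be created with the extra surface included:
// session = createCaptureSession(camera, targets + frameReader.surface, cameraHandler)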

@SuppressLint("MissingPermission")
private suspend fun openCamera(): CameraDevice = suspendCancellableCoroutine { cont ->
    cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            Log.d(TAG, "onOpened")
            cont.resume(device)
        }

        override fun onDisconnected(device: CameraDevice) {
            Log.d(TAG, "onDisconnected")
            requireActivity().finish()
        }

        override fun onError(device: CameraDevice, error: Int) {
            val msg = when (error) {
                ERROR_CAMERA_DEVICE -> "Fatal (device)"
                ERROR_CAMERA_DISABLED -> "Device policy"
                ERROR_CAMERA_IN_USE -> "Camera in use"
                ERROR_CAMERA_SERVICE -> "Fatal (service)"
                ERROR_MAX_CAMERAS_IN_USE -> "Maximum cameras in use"
                else -> "Unknown"
            }
            val exc = RuntimeException("Camera $cameraId error: ($error) $msg")
            Log.e(TAG, exc.message, exc)
            if (cont.isActive) cont.resumeWithException(exc)
        }
    }, cameraHandler)
}

=> This is the openCamera() method that opens the camera.

I used coroutines to run it asynchronously; in particular, suspendCancellableCoroutine turns the callback-based API into a suspending call.

private suspend fun createCaptureSession(
    device: CameraDevice,
    targets: List<Surface>,
    handler: Handler? = null
): CameraCaptureSession = suspendCoroutine { cont ->
    Log.d(TAG, "createCameraPreviewSession: ")

    device.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            Log.d(TAG, "onConfigured")
            cont.resume(session)
        }

        override fun onConfigureFailed(session: CameraCaptureSession) {
            val exc = RuntimeException("Camera ${device.id} session configuration failed")
            Log.e(TAG, exc.message, exc)
            cont.resumeWithException(exc)
        }
    }, handler)
}

=> This is the createCaptureSession() method that creates the CaptureSession.
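
Note that the createCaptureSession(List<Surface>, ...) overload used above was deprecated in API 30. A sketch of the SessionConfiguration-based replacement available from API 28; wrapping the Handler in an Executor this way is my assumption:

@RequiresApi(Build.VERSION_CODES.P)
private suspend fun createCaptureSessionApi28(
    device: CameraDevice,
    targets: List<Surface>,
    handler: Handler
): CameraCaptureSession = suspendCoroutine { cont ->
    val sessionConfig = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        targets.map { OutputConfiguration(it) },
        { runnable -> handler.post(runnable) },    // Executor backed by the camera handler
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = cont.resume(session)
            override fun onConfigureFailed(session: CameraCaptureSession) =
                cont.resumeWithException(
                    RuntimeException("Camera ${device.id} session configuration failed"))
        })
    device.createCaptureSession(sessionConfig)
}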


6. Taking a Picture

We've gotten as far as showing the camera preview.

Now let's take a picture.

binding.captureButton.setOnClickListener {
    it.isEnabled = false

    lifecycleScope.launch(Dispatchers.IO) {
        takePhoto().use { result ->
            Log.d(TAG, "Result received: $result")

            // Save the result to disk
            val output = saveResult(result)
            Log.d(TAG, "Image saved: ${output.absolutePath}")

            if (output.extension == "jpg") {
                val exif = androidx.exifinterface.media.ExifInterface(output.absolutePath)
                exif.setAttribute(ExifInterface.TAG_ORIENTATION, result.orientation.toString())
                exif.saveAttributes()
                Log.d(TAG, "EXIF metadata saved: ${output.absolutePath}")
            }

            // After saving, pass the image path to the next screen
            lifecycleScope.launch(Dispatchers.Main) {
                val path = output.absolutePath
                val action = CameraFragmentDirections.actionCameraFragmentToImageViewFragment(path)
                findNavController().navigate(action)
            }

            it.post { it.isEnabled = true }
        }
    }
}

First, clicking the ImageButton starts the capture.

private suspend fun saveResult(result: CombinedCaptureResult): File = suspendCoroutine { cont ->
    when (result.format) {
        ImageFormat.JPEG, ImageFormat.DEPTH_JPEG -> {
            val buffer = result.image.planes[0].buffer
            val bytes = ByteArray(buffer.remaining()).apply { buffer.get(this) }

            try {
                val output = createFile(requireContext(), "jpg")
                FileOutputStream(output).use { it.write(bytes) }
                cont.resume(output)
            } catch (exc: IOException) {
                Log.e(TAG, "Unable to write JPEG image to file", exc)
                cont.resumeWithException(exc)
            }
        }

        else -> {
            val exc = RuntimeException("Unknown image format: ${result.image.format}")
            Log.e(TAG, exc.message, exc)
            cont.resumeWithException(exc)
        }
    }
}

=> This is the saveResult() method that saves the picture.
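
The createFile() helper used above is not shown in the post. A minimal sketch that writes a timestamped file into app-private storage, following the Google sample's naming pattern:

// Sketch: create a timestamped output file such as IMG_2023_01_01_12_00_00_000.jpg
private fun createFile(context: Context, extension: String): File {
    val sdf = SimpleDateFormat("yyyy_MM_dd_HH_mm_ss_SSS", Locale.US)
    return File(context.filesDir, "IMG_${sdf.format(Date())}.$extension")
}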

private suspend fun takePhoto(): CombinedCaptureResult = suspendCoroutine { cont ->
    // Flush any images left over in the image reader
    @Suppress("ControlFlowWithEmptyBody")
    while (imageReader.acquireNextImage() != null) {}

    val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)    //1
    imageReader.setOnImageAvailableListener({ reader ->              //2
        val image = reader.acquireNextImage()
        Log.d(TAG, "Image available in queue: ${image.timestamp}")
        imageQueue.add(image)
    }, imageReaderHandler)

    val captureRequest = session.device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
        .apply { addTarget(imageReader.surface) }    //3
    session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureStarted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            timestamp: Long,
            frameNumber: Long,
        ) {
            super.onCaptureStarted(session, request, timestamp, frameNumber)
            Log.d(TAG, "onCaptureStarted")
        }

        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult,
        ) {    //4
            super.onCaptureCompleted(session, request, result)
            val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
            Log.d(TAG, "Capture result received: $resultTimestamp")

            // Set a timeout in case image captured is dropped from the pipeline
            val exc = TimeoutException("Image dequeuing took too long")
            val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
            imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

            @Suppress("BlockingMethodInNonBlockingContext")
            lifecycleScope.launch(cont.context) {
                while (true) {
                    // Dequeue images while timestamps don't match
                    val image = imageQueue.take()

                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                        image.format != ImageFormat.DEPTH_JPEG &&
                        image.timestamp != resultTimestamp) continue
                    Log.d(TAG, "Matching image dequeued: ${image.timestamp}")

                    // Unset the image reader listener
                    imageReaderHandler.removeCallbacks(timeoutRunnable)
                    imageReader.setOnImageAvailableListener(null, null)

                    // Clear any remaining images in the queue
                    while (imageQueue.size > 0) {
                        imageQueue.take().close()
                    }

                    // Compute EXIF orientation metadata
                    val rotation = relativeOrientation.value ?: 0
                    val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
                        CameraCharacteristics.LENS_FACING_FRONT
                    val exifOrientation = computeExifOrientation(rotation, mirrored)

                    cont.resume(CombinedCaptureResult(
                        image, result, exifOrientation, imageReader.imageFormat
                    ))

                    // There is no need to break out of the loop, this coroutine will suspend
                }
            }
        }
    }, cameraHandler)
}

1 => Captured images are stored in this image queue.

2 => Registers a callback that fires when an image arrives at the ImageReader.

3 => Builds the CaptureRequest.

4 => After the capture completes, the image is stored together with its format and orientation in the CombinedCaptureResult data class (sketched below).
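
CombinedCaptureResult itself is not shown in the post. It is a Closeable data class, which is why the result of takePhoto() can be consumed with .use { } in the click listener above. A sketch along the lines of the Google sample:

// Sketch: bundles everything produced by a capture; Closeable so callers can .use { } it
data class CombinedCaptureResult(
    val image: Image,                // The captured frame
    val metadata: CaptureResult,     // Capture metadata from the pipeline
    val orientation: Int,            // Computed EXIF orientation value
    val format: Int                  // ImageFormat of the captured image
) : Closeable {
    override fun close() = image.close()
}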


7. Defining ImageViewFragment

class ImageViewFragment : Fragment() {
    val TAG = ImageViewFragment::class.java.simpleName
    private lateinit var binding: FragmentImageViewBinding

    private lateinit var bitmapTransformation: Matrix

    private val bitmapOptions = BitmapFactory.Options().apply {
        inJustDecodeBounds = false
        // Keep Bitmaps at less than 1 MP
        if (max(outHeight, outWidth) > DOWNSAMPLE_SIZE) {
            val scaleFactorX = outWidth / DOWNSAMPLE_SIZE + 1
            val scaleFactorY = outHeight / DOWNSAMPLE_SIZE + 1
            inSampleSize = max(scaleFactorX, scaleFactorY)
        }
    }

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?,
    ): View? {
        binding = FragmentImageViewBinding.inflate(inflater, container, false)

        // Inflate the layout for this fragment
        return binding.root
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)

        // Receive the image path through navigation args
        val args: ImageViewFragmentArgs by navArgs<ImageViewFragmentArgs>()
        val filePath = args.filePath
        Log.d(TAG, "filePath: $filePath")

        bitmapTransformation = decodeExifOrientation(androidx.exifinterface.media.ExifInterface.ORIENTATION_ROTATE_90)

        val imgFile = File(filePath)
        if (imgFile.exists()) Log.d(TAG, "imgFile exists")

        lifecycleScope.launch(Dispatchers.IO) {
            // Decode to a Bitmap, then show it in the ImageView
            val imgByteArray = loadInputBuffer(imgFile)
            val bitmapImg = decodeBitmap(imgByteArray)

            withContext(Dispatchers.Main) {
                binding.imageView.setImageBitmap(bitmapImg)
            }
        }
    }

    private fun loadInputBuffer(inputFile: File): ByteArray {
        return BufferedInputStream(inputFile.inputStream()).let { stream ->
            ByteArray(stream.available()).also {
                stream.read(it)
                stream.close()
            }
        }
    }

    private fun decodeBitmap(buffer: ByteArray): Bitmap {
        val bitmap = BitmapFactory.decodeByteArray(buffer, 0, buffer.size, bitmapOptions)
        return Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, bitmapTransformation, true)
    }

    companion object {
        private const val DOWNSAMPLE_SIZE: Int = 1024 // 1MP
    }
}
The corresponding layout, fragment_image_view.xml:

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">

    <data>

    </data>

    <FrameLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".fragment.ImageViewFragment">

        <ImageView
            android:id="@+id/imageView"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:src="@color/black"/>

    </FrameLayout>
</layout>
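
decodeExifOrientation(), used in onViewCreated() above, is another helper left out of the post. A sketch that maps an EXIF orientation constant to a Matrix transformation:

// Sketch: build a Matrix that undoes the given EXIF orientation
fun decodeExifOrientation(exifOrientation: Int): Matrix {
    val matrix = Matrix()
    when (exifOrientation) {
        ExifInterface.ORIENTATION_NORMAL,
        ExifInterface.ORIENTATION_UNDEFINED -> Unit    // No transformation needed
        ExifInterface.ORIENTATION_ROTATE_90 -> matrix.postRotate(90F)
        ExifInterface.ORIENTATION_ROTATE_180 -> matrix.postRotate(180F)
        ExifInterface.ORIENTATION_ROTATE_270 -> matrix.postRotate(270F)
        ExifInterface.ORIENTATION_FLIP_HORIZONTAL -> matrix.postScale(-1F, 1F)
        ExifInterface.ORIENTATION_FLIP_VERTICAL -> matrix.postScale(1F, -1F)
        else -> Log.w("ExifUtils", "Unsupported EXIF orientation: $exifOrientation")
    }
    return matrix
}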

This walkthrough covered only the parts that are essential to the Camera2 API.

The full code is available on GitHub:

https://github.com/AntWhale/SmapleCamera2

