You're building a document scanner. Or an app that detects faces in the camera preview. Or a barcode reader that works offline without Google's ML Kit. The common denominator: you need to process image data — pixel by pixel, frame by frame — at speeds that Dart can't reach.
OpenCV (Open Source Computer Vision Library) has been solving these problems since 1999. It's written in C++, runs on everything from Raspberry Pis to data center GPUs, and has pre-built algorithms for hundreds of computer vision tasks. Instead of writing face detection from scratch, you call a function.
Getting it into Flutter takes some work. Let's walk through it.
The approach
OpenCV is C++, not C. Dart FFI speaks C. So the integration pattern is:
OpenCV (C++)
↓ wrapped by
Thin C API (extern "C" functions)
↓ called by
Dart FFI bindings
↓ wrapped by
Clean Dart API
↓ used by
Flutter widgets

You write a thin C wrapper that exposes the OpenCV operations you need as plain C functions. Dart calls those functions via FFI. This is the same extern "C" pattern from the FFI foundations series.
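As a minimal illustration of the pattern (hypothetical names, nothing OpenCV-specific yet): a C++ type stays behind the boundary, and only plain C functions with C types cross it.

```cpp
// Toy version of the wrapper pattern. Names are illustrative only,
// not part of the real bridge built later in this post.
#include <cstdint>
#include <string>

// C++ land: classes, templates, overloads all stay on this side.
struct Greeter {
    std::string prefix;
    int32_t count(const std::string& s) const {
        return static_cast<int32_t>(prefix.size() + s.size());
    }
};

static Greeter g{"cv:"};

// C land: only plain C types cross this boundary, so Dart FFI can call it.
extern "C" int32_t bridge_count(const char* s) {
    return g.count(s);
}
```

Dart never sees `Greeter`; it looks up `bridge_count` and calls it like any C function.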
Getting OpenCV for mobile
Option 1: Pre-built SDKs (recommended)
OpenCV publishes pre-built SDKs for Android and iOS:
- Android: Download the OpenCV Android SDK — it includes prebuilt .so files and Java wrappers. We only need the native libraries.
- iOS: Download the OpenCV iOS Framework — an .xcframework bundle.
After downloading:
Android — copy the native libraries:
android/app/src/main/jniLibs/
├── arm64-v8a/
│ └── libopencv_java4.so # ~15MB
└── x86_64/
    └── libopencv_java4.so   # For emulator

Or, better — build only the modules you need (see Option 2) to reduce size.
iOS — add opencv2.xcframework to your Xcode project. Set to "Embed & Sign."
Option 2: Custom build (smaller binary)
The full OpenCV SDK is large (~30MB per ABI). If you only need a few modules, build from source with only those modules enabled:
# Android cross-compile with CMake
cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
-DANDROID_ABI=arm64-v8a \
-DANDROID_NATIVE_API_LEVEL=24 \
-DBUILD_SHARED_LIBS=ON \
-DBUILD_LIST=core,imgproc,objdetect,features2d \
-DBUILD_opencv_java=OFF \
-DBUILD_opencv_python=OFF \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DWITH_OPENCL=OFF \
../opencv

This builds only core, image processing, object detection, and feature detection — typically 5-8MB per ABI instead of 30MB.
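After configuring, the build and copy step is plain CMake. Output paths vary by OpenCV version and build settings; the layout below is one common arrangement, so check your build tree before copying.

```
# Build with all available cores, then copy the resulting shared
# libraries into the Flutter project's jniLibs directory.
cmake --build . -j"$(nproc)"
mkdir -p ../android/app/src/main/jniLibs/arm64-v8a
cp lib/arm64-v8a/*.so ../android/app/src/main/jniLibs/arm64-v8a/
```

Repeat the configure/build pair once per ABI you ship (arm64-v8a, x86_64 for emulators).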
The C wrapper
Create a C++ file with extern "C" functions that expose the OpenCV operations you need:
// native/src/cv_bridge.cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <algorithm>
#include <cstdint>
#include <vector>
using namespace cv;
static CascadeClassifier faceCascade;
extern "C" {
// Initialize the face detector with a cascade file path
int32_t cv_init_face_detector(const char* cascadePath) {
if (faceCascade.load(cascadePath)) {
return 0; // Success
}
return -1; // Failed to load
}
// Detect faces in an RGBA image buffer
// Returns the number of faces found. Face rectangles are written to outRects.
int32_t cv_detect_faces(
const uint8_t* rgbaData,
int32_t width,
int32_t height,
int32_t* outRects, // [x, y, w, h, x, y, w, h, ...] — 4 ints per face
int32_t maxFaces
) {
// Wrap the buffer as an OpenCV Mat (no copy — Mat points to the same memory)
Mat rgba(height, width, CV_8UC4, (void*)rgbaData);
// Convert to grayscale for detection
Mat gray;
cvtColor(rgba, gray, COLOR_RGBA2GRAY);
// Equalize histogram for better detection in varying lighting
equalizeHist(gray, gray);
// Detect
std::vector<Rect> faces;
faceCascade.detectMultiScale(
gray,
faces,
1.1, // scaleFactor
3, // minNeighbors
0, // flags
Size(30, 30) // minimum face size
);
int count = std::min((int)faces.size(), maxFaces);
for (int i = 0; i < count; i++) {
outRects[i * 4 + 0] = faces[i].x;
outRects[i * 4 + 1] = faces[i].y;
outRects[i * 4 + 2] = faces[i].width;
outRects[i * 4 + 3] = faces[i].height;
}
return count;
}
// Find document edges in an image — returns 4 corner points
// Useful for document scanning
int32_t cv_find_document_edges(
const uint8_t* rgbaData,
int32_t width,
int32_t height,
int32_t* outCorners // [x1,y1, x2,y2, x3,y3, x4,y4]
) {
Mat rgba(height, width, CV_8UC4, (void*)rgbaData);
Mat gray, blurred, edges;
cvtColor(rgba, gray, COLOR_RGBA2GRAY);
GaussianBlur(gray, blurred, Size(5, 5), 0);
Canny(blurred, edges, 75, 200);
// Find contours
std::vector<std::vector<Point>> contours;
findContours(edges, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
// Sort by area, largest first
std::sort(contours.begin(), contours.end(),
[](const std::vector<Point>& a, const std::vector<Point>& b) {
return contourArea(a) > contourArea(b);
});
// Find the largest 4-sided contour
for (const auto& contour : contours) {
double peri = arcLength(contour, true);
std::vector<Point> approx;
approxPolyDP(contour, approx, 0.02 * peri, true);
if (approx.size() == 4 && contourArea(approx) > 1000) {
for (int i = 0; i < 4; i++) {
outCorners[i * 2 + 0] = approx[i].x;
outCorners[i * 2 + 1] = approx[i].y;
}
return 1; // Found
}
}
return 0; // No document found
}
// Apply perspective warp to extract/flatten a document
int32_t cv_warp_perspective(
const uint8_t* rgbaData,
int32_t srcWidth,
int32_t srcHeight,
const int32_t* corners, // 4 corner points from cv_find_document_edges
int32_t dstWidth,
int32_t dstHeight,
uint8_t* outData // Pre-allocated output buffer (dstWidth * dstHeight * 4)
) {
Mat src(srcHeight, srcWidth, CV_8UC4, (void*)rgbaData);
Mat dst(dstHeight, dstWidth, CV_8UC4, outData);
Point2f srcPts[4] = {
Point2f(corners[0], corners[1]),
Point2f(corners[2], corners[3]),
Point2f(corners[4], corners[5]),
Point2f(corners[6], corners[7]),
};
Point2f dstPts[4] = {
Point2f(0, 0),
Point2f(dstWidth - 1, 0),
Point2f(dstWidth - 1, dstHeight - 1),
Point2f(0, dstHeight - 1),
};
Mat transform = getPerspectiveTransform(srcPts, dstPts);
warpPerspective(src, dst, transform, Size(dstWidth, dstHeight));
return 0;
}
} // extern "C"

Build configuration
Android — CMakeLists.txt
cmake_minimum_required(VERSION 3.18.1)
project("cv_bridge")
# Path to the OpenCV Android SDK
set(OpenCV_DIR "${CMAKE_SOURCE_DIR}/../../../../native/opencv-android-sdk/sdk/native/jni")
find_package(OpenCV REQUIRED)
add_library(cv_bridge SHARED
../../../../native/src/cv_bridge.cpp
)
target_include_directories(cv_bridge PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(cv_bridge ${OpenCV_LIBS} log)

Add to android/app/build.gradle:
android {
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
}
}
}

iOS — Podspec
If you're building a plugin, create a podspec. If it's in-app, add the source file to the Xcode project and link opencv2.xcframework.
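For the plugin case, a minimal podspec might look like the sketch below. Names, versions, and paths are placeholders; adjust them to your plugin layout.

```
Pod::Spec.new do |s|
  s.name                = 'cv_bridge'
  s.version             = '0.0.1'
  s.summary             = 'OpenCV bridge for Flutter FFI.'
  s.author              = { 'You' => 'you@example.com' }
  s.license             = { :type => 'MIT' }
  s.source              = { :path => '.' }
  s.source_files        = 'Classes/**/*', '../native/src/cv_bridge.cpp'
  s.vendored_frameworks = 'opencv2.xcframework'
  s.platform            = :ios, '12.0'
  s.library             = 'c++'
  s.pod_target_xcconfig = { 'CLANG_CXX_LANGUAGE_STANDARD' => 'c++17' }
  s.dependency 'Flutter'
end
```

The important lines are `vendored_frameworks` (links the prebuilt OpenCV) and `source_files` (compiles the C wrapper into the pod).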
Dart FFI bindings
import 'dart:ffi';
import 'dart:io';
import 'dart:typed_data';
import 'package:ffi/ffi.dart';
class OpenCVBridge {
static final DynamicLibrary _lib = Platform.isAndroid
? DynamicLibrary.open('libcv_bridge.so')
: DynamicLibrary.process();
// Init face detector
static final _initFaceDetector = _lib.lookupFunction<
Int32 Function(Pointer<Utf8>),
int Function(Pointer<Utf8>)
>('cv_init_face_detector');
// Detect faces
static final _detectFaces = _lib.lookupFunction<
Int32 Function(Pointer<Uint8>, Int32, Int32, Pointer<Int32>, Int32),
int Function(Pointer<Uint8>, int, int, Pointer<Int32>, int)
>('cv_detect_faces');
// Find document edges
static final _findDocumentEdges = _lib.lookupFunction<
Int32 Function(Pointer<Uint8>, Int32, Int32, Pointer<Int32>),
int Function(Pointer<Uint8>, int, int, Pointer<Int32>)
>('cv_find_document_edges');
static bool initFaceDetector(String cascadePath) {
final pathPtr = cascadePath.toNativeUtf8();
try {
return _initFaceDetector(pathPtr) == 0;
} finally {
malloc.free(pathPtr); // toNativeUtf8 allocates with malloc
}
}
static List<Rect> detectFaces(Uint8List rgbaBytes, int width, int height) {
const maxFaces = 20;
final dataPtr = calloc<Uint8>(rgbaBytes.length);
final rectsPtr = calloc<Int32>(maxFaces * 4);
try {
// Copy image data to native memory
dataPtr.asTypedList(rgbaBytes.length).setAll(0, rgbaBytes);
final count = _detectFaces(dataPtr, width, height, rectsPtr, maxFaces);
final rects = <Rect>[];
for (int i = 0; i < count; i++) {
rects.add(Rect(
rectsPtr[i * 4 + 0].toDouble(),
rectsPtr[i * 4 + 1].toDouble(),
rectsPtr[i * 4 + 2].toDouble(),
rectsPtr[i * 4 + 3].toDouble(),
));
}
return rects;
} finally {
calloc.free(dataPtr);
calloc.free(rectsPtr);
}
}
}
class Rect {
final double x, y, width, height;
Rect(this.x, this.y, this.width, this.height);
}

Using it from Flutter
Face detection on camera frames
import 'dart:io';
import 'dart:isolate';
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';
class FaceDetectorScreen extends StatefulWidget {
@override
State<FaceDetectorScreen> createState() => _FaceDetectorScreenState();
}
class _FaceDetectorScreenState extends State<FaceDetectorScreen> {
late CameraController _controller;
List<Rect> _faces = [];
bool _isProcessing = false;
@override
void initState() {
super.initState();
_initCamera();
// Copy cascade XML from assets to temp directory, then init
_initDetector();
}
Future<void> _initDetector() async {
// Copy haarcascade_frontalface_default.xml from assets to a temp file
final data = await rootBundle.load('assets/haarcascade_frontalface_default.xml');
final dir = await getTemporaryDirectory();
final file = File('${dir.path}/haarcascade_frontalface_default.xml');
await file.writeAsBytes(data.buffer.asUint8List());
OpenCVBridge.initFaceDetector(file.path);
}
Future<void> _initCamera() async {
final cameras = await availableCameras();
_controller = CameraController(cameras.first, ResolutionPreset.medium);
await _controller.initialize();
_controller.startImageStream((CameraImage image) {
if (_isProcessing) return; // Skip frames if we're still processing
_isProcessing = true;
_processFrame(image);
});
if (mounted) setState(() {});
}
Future<void> _processFrame(CameraImage image) async {
// Convert the YUV420 CameraImage to RGBA on a background isolate.
// _convertYuvToRgba is a plain-Dart helper (not shown here).
final rgbaBytes = await Isolate.run(() => _convertYuvToRgba(image));
final faces = OpenCVBridge.detectFaces(
rgbaBytes,
image.width,
image.height,
);
if (mounted) {
setState(() => _faces = faces);
}
_isProcessing = false;
}
@override
Widget build(BuildContext context) {
if (!_controller.value.isInitialized) {
return const Center(child: CircularProgressIndicator());
}
return Stack(
children: [
CameraPreview(_controller),
// Draw face rectangles as overlays
..._faces.map((face) => Positioned(
left: face.x,
top: face.y,
child: Container(
width: face.width,
height: face.height,
decoration: BoxDecoration(
border: Border.all(color: Colors.green, width: 2),
),
),
)),
],
);
}
@override
void dispose() {
_controller.dispose();
super.dispose();
}
}

Common errors
Haar cascade file not found at runtime
Cause: You bundled the .xml cascade file in assets/ but tried to pass the asset path directly to OpenCV. OpenCV needs a filesystem path, not a Flutter asset path.
Fix: Copy the file from assets to the temp directory first (as shown above). Flutter assets aren't regular files on disk — they're packed inside the APK/IPA.
Image is rotated or mirrored in detection
Cause: Camera frames have rotation metadata. The raw pixel buffer from CameraImage might be rotated 90 or 270 degrees depending on the device and camera orientation. OpenCV doesn't read EXIF rotation.
Fix: Rotate the image buffer before passing it to OpenCV, or adjust the coordinate output to match the display orientation.
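On the native side, OpenCV's cv::rotate with cv::ROTATE_90_CLOCKWISE handles this in one call. The index math it performs, shown here without OpenCV so the sketch stands alone:

```cpp
// Rotate an RGBA buffer 90 degrees clockwise: the pixel at (x, y) in
// the source lands at (height - 1 - y, x) in the rotated image, whose
// dimensions are swapped (the new width is the old height).
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> rotateRgba90CW(const uint8_t* src, int width, int height) {
    std::vector<uint8_t> dst(static_cast<size_t>(width) * height * 4);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int dstX = height - 1 - y; // new column
            int dstY = x;              // new row
            // Copy one 4-byte RGBA pixel; new row stride is `height` pixels.
            std::memcpy(&dst[(dstY * height + dstX) * 4],
                        &src[(y * width + x) * 4], 4);
        }
    }
    return dst;
}
```

If you rotate the buffer, remember to swap the width/height arguments you pass to the detection functions too.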
Detection is too slow for real-time use
Cause: Processing every camera frame at full resolution. A 1080p frame is 8MB of RGBA data. Face detection on that takes 50-100ms — too slow for 30fps.
Fix:
- Downscale before detection: process at 320x240 or 480x360, then scale the coordinates back up
- Skip frames: process every 3rd or 5th frame
- Run detection on a background isolate
- Use detectMultiScale with a larger scaleFactor (1.3 instead of 1.1) — fewer passes, faster, slightly less accurate
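The scale-back step is just a ratio. A sketch in plain C++, where scaleRectUp is a hypothetical helper rather than part of the bridge API above:

```cpp
// Map a rect detected at reduced resolution back onto the
// full-resolution frame by the inverse of the downscale factor.
#include <cstdint>

struct FaceRect { int32_t x, y, w, h; };

FaceRect scaleRectUp(FaceRect r, int fullW, int fullH, int smallW, int smallH) {
    // Integer rescale; multiply before dividing to keep precision.
    return FaceRect{
        r.x * fullW / smallW,
        r.y * fullH / smallH,
        r.w * fullW / smallW,
        r.h * fullH / smallH,
    };
}
```

The same arithmetic works on the Dart side if you prefer to keep the native layer returning small-frame coordinates.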
Linker errors: "undefined reference to cv::Mat::Mat"
Cause: The C++ standard library isn't linked, or the OpenCV libraries aren't in the link order.
Fix: In CMakeLists.txt, ensure you're linking against OpenCV and the C++ standard library:
target_link_libraries(cv_bridge ${OpenCV_LIBS} log)

On Android, the NDK's libc++ is linked automatically. If it's not, add -lc++_shared explicitly.
App crashes with SIGBUS on iOS
Cause: Memory alignment issue. If you're casting raw byte pointers to typed pointers (Pointer<Int32>) and the address isn't aligned to 4 bytes, ARM processors will fault.
Fix: Use calloc for allocating output buffers (it returns aligned memory). Don't reinterpret arbitrary positions within a byte buffer as Int32*.
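When you genuinely need a typed value at an arbitrary byte offset, the alignment-safe idiom is memcpy instead of a pointer cast:

```cpp
// Reading an int32 at an arbitrary (possibly unaligned) byte offset.
// Casting the pointer directly can SIGBUS on ARM; memcpy has no
// alignment requirement and compiles to a plain load when it can.
#include <cstdint>
#include <cstring>

int32_t readInt32At(const uint8_t* buf, size_t byteOffset) {
    int32_t value;
    std::memcpy(&value, buf + byteOffset, sizeof(value));
    return value;
}
```

The same trick works in reverse for writing into a byte buffer at an odd offset.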
APK too large (30MB+ per ABI from OpenCV)
Cause: The full OpenCV SDK includes modules you don't need (ML, video I/O, stitching, etc.).
Fix: Build OpenCV from source with only the modules you need. For document scanning, you typically need only core, imgproc, and features2d. This brings the size down to 5-8MB per ABI.
This is Post 14 of the FFI series. Next: On-Device ML With TensorFlow Lite.