For those who want to develop artificial intelligence applications in embedded systems, TensorFlow Lite (TFLite) offers small, fast-running models. Combined with C++, it enables high-performance object recognition systems on edge devices such as Android phones, the Raspberry Pi, and the Nvidia Jetson.
In this guide, we explain step by step how to develop a COCO-labeled object recognition application from scratch using TensorFlow Lite C++, covering every detail from performance optimization to label management.
What is TensorFlow Lite?
TensorFlow Lite is an open-source library developed by Google that enables TensorFlow models to run quickly on low-power devices.
- Low latency
- Small model files (.tflite)
- Android/iOS/Linux support
- C++, Java, Python, Swift integrations
⚙️ Preparation: Required Files and Configuration
- Your .tflite model file (e.g. MobileNetV2-SSD, YOLOv5-Lite)
- labelmap.txt or coco_labels.txt (class names from the COCO dataset)
- TensorFlow Lite C++ API
- OpenCV (for image processing)
- A build system using CMake or a Makefile
sudo apt install libopencv-dev
What Are COCO Labels?
COCO (Common Objects in Context) is a common object recognition dataset consisting of 80 classes.
Sample classes:
- person, bicycle, car, motorbike, airplane, dog, cat, chair, tv...
The coco_labels.txt file you will use is usually in the following format:
0
person
bicycle
car
...
The first line is reserved for the background class. The index order may vary depending on the model, so verify it against your model's outputs.
Loading the TensorFlow Lite Model with C++
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile("model.tflite");
Interpreter configuration:
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
interpreter->AllocateTensors();
Preparing Image Input (with OpenCV)
cv::Mat frame, resized;
cv::resize(frame, resized, cv::Size(300, 300)); // model input size
// OpenCV stores images as BGR; use cv::cvtColor if the model expects RGB
uint8_t* input = interpreter->typed_input_tensor<uint8_t>(0);
memcpy(input, resized.data, 300 * 300 * 3);
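The memcpy above assumes a quantized model with uint8 input. If your model takes float32 input instead, the pixels must be normalized first; a sketch, assuming the common MobileNet-style scaling of (x - 127.5) / 127.5 (NormalizePixels is an illustrative helper, and the scaling is an assumption you should verify against your model):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Maps uint8 pixel values [0, 255] to floats in [-1, 1], the input range
// expected by many float MobileNet-style models (assumption: check the
// range your model was trained with).
std::vector<float> NormalizePixels(const uint8_t* data, std::size_t count) {
  std::vector<float> out(count);
  for (std::size_t i = 0; i < count; ++i) {
    out[i] = (static_cast<float>(data[i]) - 127.5f) / 127.5f;
  }
  return out;
}
```

The result would then be copied into `interpreter->typed_input_tensor<float>(0)` instead of the uint8 tensor.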
Reading Output Tensors
float* boxes = interpreter->typed_output_tensor<float>(0);
float* class_ids = interpreter->typed_output_tensor<float>(1);
float* scores = interpreter->typed_output_tensor<float>(2);
float* detections = interpreter->typed_output_tensor<float>(3); // number of valid detections
Each box is [ymin, xmin, ymax, xmax], and all values are normalized to the 0-1 range.
Visualizing Detection Results
int count = static_cast<int>(detections[0]);
for (int i = 0; i < count; ++i) {
  if (scores[i] > 0.5f) {
    int class_id = static_cast<int>(class_ids[i]);
    // boxes hold normalized [ymin, xmin, ymax, xmax]
    cv::Rect box(boxes[4*i+1] * frame.cols, boxes[4*i+0] * frame.rows,
                 (boxes[4*i+3] - boxes[4*i+1]) * frame.cols,
                 (boxes[4*i+2] - boxes[4*i+0]) * frame.rows);
    cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
    cv::putText(frame, labels[class_id], box.tl(), cv::FONT_HERSHEY_SIMPLEX, 0.6, cv::Scalar(0, 255, 0), 2);
  }
}
Performance Improvement Suggestions
- Use a quantized (INT8) model
- Use delegates: GPU Delegate, NNAPI
- Use a smaller model (e.g. MobileNet)
- Reduce the input size (300x300 → 224x224)
- Limit the analysis time per frame
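The last suggestion, limiting the analysis time per frame, can be sketched with a simple throttle that only lets inference run every N milliseconds and reuses the previous detections in between (FrameThrottle is an illustrative helper, not a TFLite class):

```cpp
#include <chrono>

// Allows expensive work (inference) at most once per interval; frames
// arriving in between can reuse the last detection results.
class FrameThrottle {
 public:
  explicit FrameThrottle(int min_interval_ms) : interval_(min_interval_ms) {}

  // Returns true when enough time has passed to run inference again.
  bool ShouldRun() {
    auto now = std::chrono::steady_clock::now();
    if (now - last_ >= interval_) {
      last_ = now;
      return true;
    }
    return false;
  }

 private:
  std::chrono::milliseconds interval_;
  std::chrono::steady_clock::time_point last_{};  // epoch, so the first call always runs
};
```

In the capture loop, `ShouldRun()` would gate the `Invoke()` call, while drawing still happens every frame.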
Raspberry Pi Application Scenario
- Install Raspberry Pi OS Lite and compile the OpenCV + TFLite libraries
- Use the Raspberry Pi Camera Module or connect a USB camera
- Get real-time detection with C++ performance instead of Python
- For accelerated inference, you can use additional hardware such as the Coral USB Accelerator (Edge TPU)
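The build step above can be sketched roughly as follows; this is a hedged outline, not a tested recipe: paths and the -j value are illustrative, and TFLite's CMake build is described in detail in its own documentation.

```shell
# Install the toolchain and OpenCV development headers
sudo apt update
sudo apt install -y git cmake build-essential libopencv-dev

# Fetch TensorFlow sources and build the TFLite C++ library with CMake
git clone https://github.com/tensorflow/tensorflow.git
mkdir tflite_build && cd tflite_build
cmake ../tensorflow/tensorflow/lite
cmake --build . -j4
```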
C++ Object Detection with Android NDK
- Install the NDK via Android Studio
- Call the TFLite C++ library through JNI
- Camera integration can be done with the OpenCV Android SDK
- Add the .tflite model to the assets folder and load it with AssetManager
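Loading from assets can be sketched as follows, assuming a JNI entry point that receives the Java AssetManager; LoadModelFromAssets and the global buffer are illustrative names, and error handling is kept minimal:

```cpp
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <jni.h>
#include <memory>
#include <vector>

#include "tensorflow/lite/model.h"

// The buffer must outlive the model, since BuildFromBuffer does not copy it.
std::vector<char> g_model_buffer;

std::unique_ptr<tflite::FlatBufferModel> LoadModelFromAssets(
    JNIEnv* env, jobject java_asset_manager) {
  AAssetManager* mgr = AAssetManager_fromJava(env, java_asset_manager);
  AAsset* asset = AAssetManager_open(mgr, "model.tflite", AASSET_MODE_BUFFER);
  if (!asset) return nullptr;
  off_t length = AAsset_getLength(asset);
  g_model_buffer.resize(length);
  AAsset_read(asset, g_model_buffer.data(), length);
  AAsset_close(asset);
  return tflite::FlatBufferModel::BuildFromBuffer(g_model_buffer.data(),
                                                  g_model_buffer.size());
}
```

This runs only on-device, so it is not something you can verify on a desktop build; the key point is that the model is read into memory first and then handed to BuildFromBuffer rather than BuildFromFile.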
❗️ Frequently Encountered Problems and Solutions
| Problem | Description and Solution |
|---|---|
| Model input size mismatch | Resize to the expected size with OpenCV resize (e.g. 300x300) |
| AllocateTensors returns an error | The model file may be corrupted; make sure the correct file is loaded |
| Only numbers appear instead of class names | Read the label file line by line and check that the indexes match the model's output |
| Performance is very low | Use an INT8 quantized model and enable an accelerator delegate where available |
| Model cannot be loaded on Android | Make sure you open the file from the assets folder with AssetManager |
| Frame is not updated instantly | Connect the camera stream correctly with cv::VideoCapture |
Conclusion
Building real-time object recognition systems in C++ with TensorFlow Lite is quite flexible thanks to common datasets such as COCO. Label management, input/output tensor handling, deployment to devices such as the Raspberry Pi and Android, and performance optimization are the main factors that determine success in such projects.