Real-time multi-object recognition and classification on resource-constrained devices for automated robots
Keywords:
YOLOv8n, Separable CNN, OpenVINO, Albumentations, Robocon 2026

Abstract
To address the critical challenges of deploying classical convolutional neural networks on resource-constrained edge devices (excessive computational demands, high latency, and reduced accuracy in complex environments), this paper proposes a robust, real-time multi-object recognition and classification framework for automated robots. Targeting the dynamic conditions of the Robocon 2026 competition, which requires distinguishing 31 distinct object categories, we introduce an optimized two-stage computer vision architecture. The pipeline uses a lightweight YOLOv8n model for rapid Region of Interest (ROI) extraction, followed by a custom Separable Convolutional Neural Network (CNN) with a Flatten and Dense(256) classification head, compressed to only 2.11 million parameters, for precise feature identification. To improve resilience against geometric distortions and reduce the input payload, morphological preprocessing techniques including Gray Padding and grayscale spatial transformation are applied. To balance the trade-off between detection speed and accuracy, the system is further accelerated through Int8 network quantization with Intel's OpenVINO toolkit, significantly reducing inference latency. Real-world processing speed, hardware execution latency, and classification accuracy serve as the primary evaluation benchmarks. Experiments on a low-power Microsoft Surface Go 2 platform (Intel CPU) show that the proposed architecture achieves 96.15% accuracy while maintaining a stable real-time processing speed of 30-35 frames per second (FPS). A comprehensive comparison against state-of-the-art lightweight architectures, including MobileNetV3 Small, SqueezeNet 1.1, and EfficientNet Lite0, shows that the proposed model delivers higher detection precision with drastically lower computational overhead.
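The parameter savings that motivate the separable CNN can be illustrated with a simple count: a depthwise-separable layer replaces one k x k convolution with a k x k depthwise filter per input channel plus a 1x1 pointwise projection. The layer sizes below are illustrative assumptions, not values from the paper.

```python
def conv_params(k, cin, cout):
    """Parameters of a standard k x k convolution (biases omitted)."""
    return k * k * cin * cout

def separable_params(k, cin, cout):
    """Depthwise k x k (one filter per input channel) plus 1x1 pointwise."""
    return k * k * cin + cin * cout

# Illustrative layer sizes (assumptions, not taken from the paper):
k, cin, cout = 3, 64, 128
print(conv_params(k, cin, cout))       # 73728
print(separable_params(k, cin, cout))  # 8768, roughly an 8x reduction
```

Stacking such layers is how compact classifiers stay within a budget of a few million parameters while keeping enough depth for fine-grained category discrimination.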
These outcomes confirm that our methodology provides a highly feasible, scalable, and efficient solution well-suited for deployment in intelligent edge robotics.
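The Gray Padding step described in the abstract can be sketched as letterboxing each grayscale ROI onto a square mid-gray canvas so the classifier sees a fixed-size input without aspect-ratio distortion. This is a minimal NumPy sketch; the target size (96), fill value (128), and nearest-neighbour resize are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def gray_pad(image, target=96, fill=128):
    """Scale a grayscale ROI to fit a target x target canvas,
    preserving aspect ratio, and fill the margins with mid-gray."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (no external deps).
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[ys][:, xs]
    # Centre the resized ROI on a uniform gray canvas.
    canvas = np.full((target, target), fill, dtype=image.dtype)
    top, left = (target - nh) // 2, (target - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

roi = np.zeros((60, 120), dtype=np.uint8)  # a wide grayscale ROI
padded = gray_pad(roi)
print(padded.shape)  # (96, 96)
```

Using a constant mid-gray fill (rather than black) keeps the padding's intensity close to the mean of typical inputs, which reduces the artificial edges a convolutional classifier would otherwise learn at the ROI border.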
Copyright (c) 2026 Journal of Measurement, Control and Automation

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.



