Yantrix
Edge AI · On-device inspection

Case study: a zero-cloud defect-detection camera on the ESP32-S3.

For an electronics-assembly client, Yantrix designed a custom inspection camera that flags solder and placement defects on-device. The whole pipeline — sensor, model, firmware, enclosure — fits in a fanless 40 x 40 x 25 mm module.

Edge AI defect detection camera on electronics assembly line

Overview

Why this study matters

A production conveyor inspection camera running a quantized INT8 CNN entirely on an ESP32-S3 — no cloud, no PC, 18 FPS at 0.4 W.
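Those headline numbers imply a concrete per-frame budget. As a quick illustration (the figures come from the paragraph above; the helper functions are ours, not part of the delivered firmware), 18 FPS leaves roughly 55 ms of compute per frame, and 0.4 W at that rate works out to about 22 mJ per inference:

```c
/* Back-of-envelope budget for 18 FPS at ~0.4 W steady-state.
 * Illustrative arithmetic only, not measured project data. */

/* Milliseconds of compute available per frame at a given frame rate. */
static double frame_budget_ms(double fps) {
    return 1000.0 / fps;              /* 18 FPS -> ~55.6 ms */
}

/* Energy drawn per inference, in millijoules. */
static double energy_per_frame_mj(double power_w, double fps) {
    return power_w / fps * 1000.0;    /* 0.4 W / 18 FPS -> ~22.2 mJ */
}
```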

Project Type: Edge AI + Custom Hardware

Industry: Electronics manufacturing

Service Used: Edge AI + PCB Design + Embedded Firmware

Objective

What the project needed to achieve

  • Detect a defined set of defect classes on placed components with production-grade accuracy
  • Run the model entirely on microcontroller-class hardware
  • Eliminate cloud and on-prem PC dependencies
  • Fit into a compact, fanless enclosure mountable above the conveyor

Challenge

Engineering constraint

The client had a line-side PC-based vision rig that was expensive to scale across many stations and introduced a cloud dependency the plant IT policy didn't allow. They wanted a self-contained inspection module per station — low power, low cost, zero external dependencies.

Deliverables

What the client received

  • Custom PCB design files and fabrication package
  • Quantized INT8 model and training pipeline
  • FreeRTOS firmware with OTA update path
  • Enclosure CAD with thermal and mounting study
  • Commissioning report and retraining playbook

Approach

How Yantrix approached the work

  • Captured a labelled dataset of nominal and defective assemblies across lighting variants at the client plant.
  • Designed a compact CNN sized for ESP32-S3 constraints and trained it with aggressive INT8 quantization-aware training.
  • Designed a custom 4-layer PCB around an ESP32-S3-WROOM module, an OV5640 camera, and a local ring light, with MQTT telemetry over Wi-Fi.
  • Delivered firmware with a ring buffer for failure-case capture so the client can periodically retrain from real edge cases.

Outcome

What improved by the end

  • 18 FPS continuous inference at approximately 0.4 W steady-state
  • 96.8% accuracy on the validation set; a line-side operator override is provided for ambiguous cases
  • No PC, no cloud — fully autonomous per-station operation
  • Unit cost reduced to a fraction of the previous PC-based rig
  • Failure-case ring buffer enables continuous dataset growth
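The failure-case ring buffer mentioned above can be sketched as a fixed-capacity, overwrite-oldest store: once full, each new flagged frame replaces the oldest one, so the device always holds the most recent edge cases for retraining. Slot count and frame size here are illustrative placeholders, not the shipped firmware's layout:

```c
#include <stdint.h>
#include <string.h>

#define SLOTS      4    /* illustrative capacity */
#define FRAME_SIZE 16   /* stand-in for a compressed frame */

typedef struct {
    uint8_t  data[SLOTS][FRAME_SIZE];
    uint32_t head;    /* next slot to (over)write */
    uint32_t count;   /* frames stored, saturates at SLOTS */
} ring_t;

/* Store a flagged frame, overwriting the oldest once the buffer is full. */
static void ring_push(ring_t *r, const uint8_t *frame) {
    memcpy(r->data[r->head], frame, FRAME_SIZE);
    r->head = (r->head + 1) % SLOTS;
    if (r->count < SLOTS) r->count++;
}
```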

Tools used

  • ESP32-S3-WROOM-1 (8 MB PSRAM)
  • OV5640 camera module
  • TFLite Micro with ESP-DL acceleration
  • PyTorch (quantization-aware training)
  • KiCad for PCB design
  • FreeRTOS firmware
  • MQTT for telemetry

Impact

  • Inspection coverage extended from selected stations to the entire line
  • Capex per station reduced by roughly 6x vs. the PC-based alternative
  • Plant IT policy satisfied — no external network traffic required

Conclusion

The project shows that thoughtfully quantized models plus hardware co-design can put real-time ML into truly constrained devices — not just on single-board computers.

Next step

Need an inspection system that scales across dozens of stations without a PC fleet? We design the device, the model, and the firmware as one thing.

Let's build

Have a machine to build? Let's scope it together.

Tell us about your project. We'll respond within 1-2 business days with a preliminary scope and timeline — no boilerplate, no up-sell.