Applied AI · Vision-guided robotics
Vision-guided bin picking at 80 ms end-to-end
Yantrix built a production vision stack that lets a 6-DOF arm pick randomly oriented SKUs out of a cluttered bin — running entirely on an edge device.
We build AI that ships with a product, not AI that sits in a slide deck. Our models run on robots, on embedded boards, and inside the engineering workflows that move real work forward.

What we do
Yantrix designs, trains, optimizes, and deploys machine-learning systems for robotics, embedded products, and industrial engineering. We handle the full lifecycle — data strategy, labelling, model selection, training at scale, quantization and hardware-specific optimization, deployment, and MLOps — with a deliberate focus on real-world physical deployment.
We adapt the same engineering service to each product context, whether the driving constraint is a load case, a packaging problem, a validation target, or the deployment environment.
Relevant when a project needs focused applied AI and machine-learning support.
Selected examples of this work in production:
Applied AI · Vision-guided robotics
Yantrix built a production vision stack that lets a 6-DOF arm pick randomly oriented SKUs out of a cluttered bin — running entirely on an edge device.
Edge AI · On-device inspection
A production conveyor inspection camera running a quantized INT8 CNN entirely on an ESP32-S3 — no cloud, no PC, 18 FPS at 0.4 W.
ML-accelerated engineering
A physics-informed neural network trained on 12,000 ANSYS runs replaces the full solver for early-stage topology exploration — predicting von-Mises stress fields in ~40 ms.
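The surrogate idea in the last example can be sketched in a few lines: train a cheap regressor offline on expensive solver runs, then query it at matrix-multiply speed. The snippet below is an illustrative stand-in, not the ANSYS-trained network; the 1-D "solver", the random-Fourier-feature model, and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive solver: a smooth nonlinear response.
def expensive_solver(x):
    return np.sin(3 * x) + 0.5 * x**2

# "Offline" dataset: a few hundred solver runs.
X = rng.uniform(-2, 2, size=(400, 1))
y = expensive_solver(X[:, 0])

# Cheap surrogate: ridge regression on random Fourier features.
W = rng.normal(scale=3.0, size=(1, 64))
b = rng.uniform(0, 2 * np.pi, size=64)

def features(x):
    return np.cos(x @ W + b)

Phi = features(X)
lam = 1e-3
# Closed-form ridge solution: (Phi^T Phi + lam I)^-1 Phi^T y
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(64), Phi.T @ y)

# Querying the surrogate is one matrix product — microseconds, not solver minutes.
x_test = rng.uniform(-2, 2, size=(100, 1))
pred = features(x_test) @ coef
rmse = np.sqrt(np.mean((pred - expensive_solver(x_test[:, 0])) ** 2))
```

The real system replaces this toy regressor with a physics-informed network and the 1-D function with full von-Mises stress fields, but the offline-train / fast-query structure is the same.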
Frequently asked questions
Do you only build edge and on-device AI?
No. Edge and on-device deployment is our signature because it's where most teams struggle, but we also build and deploy models that run on servers, GPUs, or in the cloud — whichever makes sense for the product.
Can you optimize a model we already have?
Yes. A large part of what we do is model optimization — quantization (INT8, FP16), graph surgery, TensorRT / ONNX conversion, and hardware-specific acceleration so an existing model runs fast and small on the target chip.
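The core of INT8 quantization can be shown in a few lines of numpy. This is a minimal sketch of per-tensor affine quantization on a synthetic weight tensor, not our production TensorRT / ONNX pipeline; the formulas are the standard scale / zero-point mapping onto the int8 range.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.05, size=(128, 256)).astype(np.float32)

def quantize_int8(w):
    # Map the float range [min, max] affinely onto [-128, 127].
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

q, s, zp = quantize_int8(weights)
recon = dequantize(q, s, zp)

# Reconstruction error is bounded by roughly one quantization step.
max_err = float(np.abs(weights - recon).max())
```

A real deployment adds per-channel scales, activation calibration, and fused integer kernels on the target chip, but the 4x memory saving over FP32 comes from exactly this mapping.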
How do you validate model accuracy?
Held-out test sets, per-class confusion matrices, edge-case sweeps, slice-based evaluation, and field-trial data collection. We write the evaluation harness before we celebrate any accuracy number.
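Two of those pieces fit in a short sketch: a per-class confusion matrix and a slice-based accuracy breakdown. The labels, predictions, and the "camera" metadata slice below are toy data invented for illustration; the point is that a model can look fine overall while one slice quietly underperforms.

```python
import numpy as np

# Toy labels and predictions for a 3-class inspection model (invented data).
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 1, 2, 0])
# One metadata "slice" per sample, e.g. which camera captured the frame.
camera = np.array(["a", "a", "b", "b", "a", "b", "a", "b", "b", "a"])

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] counts samples of true class i predicted as class j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix(y_true, y_pred, 3)
per_class_recall = cm.diagonal() / cm.sum(axis=1)

# Slice-based evaluation: accuracy per metadata slice, not just overall.
slice_acc = {c: float((y_pred[camera == c] == y_true[camera == c]).mean())
             for c in np.unique(camera)}
```

Here camera "a" scores 0.6 against camera "b" at 0.8 even though overall accuracy is 0.7 — exactly the kind of gap a single aggregate number hides.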
Can you work with confidential data?
Yes. We routinely work with confidential CAD, vision datasets, and product telemetry. NDAs and private data pipelines are part of most engagements.
How do we get started?
Send us the problem, your current design stage, and any existing files. We can scope the work from there.