Applied AI · Vision-guided robotics
Vision-guided bin picking at 80 ms end-to-end
Yantrix built a production vision stack that lets a 6-DOF arm pick randomly oriented SKUs out of a cluttered bin — running entirely on an edge device.
Vision systems win or lose on integration. We build perception stacks where the model, the robot, and the product-engineering decisions all line up.

What we do
Yantrix delivers computer-vision systems for vision-guided pick-and-place, bin picking, defect detection on conveyors, SKU recognition, and robotic manipulation. We design the data pipeline, pick and train the model (YOLO detection, SAM-2 segmentation, custom classifiers), integrate it into ROS 2 nodes or a PLC-facing service, and ship benchmarks against the target FPS and accuracy envelope.
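To make the shape of such a pipeline concrete, here is a minimal sketch of the capture-to-command flow: detector output is back-projected to a robot-frame point and turned into a pick target. All names (`Detection`, `PickCommand`, `plan_pick`), the camera intrinsics, and the detections are illustrative placeholders, not our production interfaces.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One segmented instance from the detector (illustrative fields)."""
    sku: str
    confidence: float
    pixel_xy: tuple   # instance centroid in image coordinates
    depth_m: float    # depth at the centroid, from the 3D camera

@dataclass
class PickCommand:
    """Target pose handed to the robot controller."""
    xyz_m: tuple
    approach: str

def pixel_to_robot(pixel_xy, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project pixel + depth into camera-frame metres with a pinhole
    model. Intrinsics here are placeholder values; a real cell uses the
    calibrated camera matrix plus a hand-eye transform to the robot base."""
    u, v = pixel_xy
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def plan_pick(detections, min_conf=0.5):
    """Choose the highest-confidence instance above threshold; in a real
    bin-picking cell this is where grasp scoring and collision checks go."""
    viable = [d for d in detections if d.confidence >= min_conf]
    if not viable:
        return None
    best = max(viable, key=lambda d: d.confidence)
    return PickCommand(xyz_m=pixel_to_robot(best.pixel_xy, best.depth_m),
                       approach="top-down")

# Dry run with fabricated detections standing in for model output.
dets = [Detection("widget-a", 0.91, (400, 260), 0.55),
        Detection("widget-b", 0.42, (120, 300), 0.60)]
print(plan_pick(dets))
```

In production the detector stage is a trained model (YOLO, SAM-2) and the command stage is a ROS 2 node or PLC-facing service; the skeleton above only shows how the pieces hand off.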
We adapt the same engineering service to different product contexts depending on the load case, packaging problem, validation target, or deployment environment.
Relevant when the project needs focused computer vision for robotics support.
Robotics design
A design study covering joint packaging, structural stiffness, and manufacturable geometry for a compact robotic arm concept.
What frame rate and decision latency can we expect?
On a Jetson Orin Nano, we typically ship YOLOv11-Seg pipelines at 12–30 FPS end-to-end, with decision latency under 100 ms including camera capture, inference, and the robot command. Exact numbers depend on image resolution and target class count.
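Holding a budget like that means accounting for every stage, not just inference. A minimal sketch of that bookkeeping for a serial, unpipelined loop (the worst case); the stage timings below are fabricated for illustration, not measurements:

```python
def latency_budget(stage_ms):
    """Sum per-stage latencies and report end-to-end decision time
    and the implied steady-state frame rate for a serial loop."""
    total = sum(stage_ms.values())
    return total, 1000.0 / total

# Illustrative numbers for a Jetson-class device; real values come
# from profiling the deployed pipeline.
stages = {"capture": 18.0, "preprocess": 6.0, "inference": 38.0,
          "postprocess": 7.0, "robot_command": 11.0}
total_ms, fps = latency_budget(stages)
print(f"{total_ms:.0f} ms end-to-end, {fps:.1f} FPS")  # → 80 ms end-to-end, 12.5 FPS
```

Pipelining capture and inference raises the frame rate without changing the per-decision latency, which is why we report both numbers separately.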
Can the model learn our specific parts?
Yes, that's the default. We usually start with a base detector, then fine-tune on a small labelled dataset of your parts. We'll set up labelling tooling and hand you a retraining pipeline so you can keep extending it.
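In that retraining pipeline, the dataset is typically described by a small manifest that the training tool reads. A hypothetical Ultralytics-style `data.yaml` for two SKU classes; the paths and class names are placeholders, not a real project's layout:

```yaml
# Hypothetical dataset manifest for fine-tuning a base detector.
path: /data/skus            # dataset root (placeholder)
train: images/train         # training images, relative to path
val: images/val             # validation images, relative to path
names:
  0: widget-a
  1: widget-b
```

Keeping labels in this layout means extending the model to a new SKU is mostly a matter of adding images, a class entry, and re-running the same training job.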
Do you also scope the camera and lighting?
Yes. Vision fails most often because of the physical setup, not the model. We scope the camera, lens, lighting, and mount geometry as part of the engagement.
How do we start?
Send the problem, your current design stage, and any existing files. We can scope the work from there.