AI Vision System Uses Synthetic Data to Master Food Inspection Variability
- Oxipital AI’s V-CortX uses 3D scans of real products to generate millions of AI training examples automatically, removing the need for thousands of manually annotated images that traditional vision systems require.
- The system allows food manufacturers to create different inspection "recipes" for various retail customers, so a frozen pizza maker can switch between, say, Walmart's and Target's standards without retraining AI models.
- The platform quantifies savings by tracking specific defects and reveals operational patterns like shift-to-shift performance differences that help different stakeholders optimize production.
In food manufacturing, no two chicken wings are identical and, of course, no CAD file exists for a corn dog. This organic variability has long challenged traditional vision inspection systems, but it's precisely what makes AI-powered vision particularly effective for food applications, according to Anthony Romero, product marketing manager at Oxipital AI.
"We are really good at detecting organic variability," Romero explains. "Food kind of lends itself to this because every piece is going to be a little bit different."
This new vision system evolved from Oxipital AI's previous incarnation as Soft Robotics, a robotic gripping technology company. That experience with gripping food informed the company's move into machine vision. For example, when handling tomatoes of varying sizes, the gripper needed dimensional data to apply appropriate pressure: enough to grip securely without crushing them.
"When we started going down that rabbit hole, we saw how much information there was that we could use, like the dimensional information, color, the different defects," Romero recalled. What began as an add-on to a robotic gripper system has become a standalone vision inspection platform.
From 3D scans to millions of training examples
Oxipital AI's approach begins with 3D scanning of real-world food items to create virtual assets. For a corn dog inspection application, the company might scan 30 different corn dogs, capturing both quality products and various defects. These scans become the foundation for synthetic data generation.
"We can generate millions of different examples to show every good or bad potential variation," Romero said. The system also accounts for customer-specific variables like conveyor colors, lighting conditions and random product placement scenarios. The simulation process runs autonomously. Engineers can set parameters on a Friday afternoon and return Monday morning to find a trained model ready to fine tune and work with after approximately 20 hours of computation. The synthetic approach eliminates the need to supply thousands of different real-world images as well as manual image annotation that traditional vision systems require.
60 food models and growing
Oxipital AI currently maintains about 60 object models covering different food types, from kiwis, corn dogs and chicken wings to pork loins. The system can detect specifics like fat content distribution and assess pepperoni placement on pizzas. The platform combines hardware and software components: the VX2 2D camera, LiDAR for 3D measurement and the V-CortX vision platform that handles object model creation, recipe building and analytics.
Romero identifies the combination of 3D and 2D imaging, paired with synthetic data generation, as Oxipital AI's key differentiator from traditional 2D vision systems. "We don't need to have someone manually change all these labels," he noted. "We just run another synthetic data generation with different parameters."
Anthony Romero answers Automation World's questions about Oxipital AI's technology in the video below:
Key features of V-CortX are:
- AI Vision Model Manager: Download, request and deploy high-performance vision models without image capture or manual annotation.
- Recipe Builder: Drag and drop items to update workflows, with real-time visual feedback.
- Analytics Dashboard: Access to throughput data, defect trends, dimensional analysis and quality insights.
Control over quality standards
While Oxipital AI’s tech generates the AI models, customers retain control over defining defects and setting tolerances. "We want to make sure that the customers are in control and can change their application," Romero said. The defect classification uses traditional rules-based logic rather than AI, allowing operators to adjust standards without retraining models.
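As a rough illustration of that division of labor, the sketch below applies plain, operator-editable tolerance rules to measurements a vision model might report. The field names, measurements and thresholds are hypothetical, not Oxipital AI's actual rule schema.

```python
from dataclasses import dataclass

# Hypothetical sketch: field names and thresholds are illustrative, not
# Oxipital AI's actual rule schema. The vision model reports measurements;
# pass/fail is plain, operator-editable logic.

@dataclass
class Measurement:          # what the vision system might report per item
    length_mm: float
    batter_coverage: float  # fraction of surface covered, 0.0 to 1.0
    stick_intact: bool

@dataclass
class Tolerances:           # editable by operators, no model retraining needed
    min_length_mm: float
    min_batter_coverage: float

def classify(m: Measurement, t: Tolerances) -> list:
    """Return the list of rule violations (an empty list means pass)."""
    defects = []
    if not m.stick_intact:
        defects.append("broken_stick")
    if m.length_mm < t.min_length_mm:
        defects.append("undersized")
    if m.batter_coverage < t.min_batter_coverage:
        defects.append("missing_batter")
    return defects

item = Measurement(length_mm=148.0, batter_coverage=0.92, stick_intact=True)
print(classify(item, Tolerances(min_length_mm=150.0, min_batter_coverage=0.90)))
# -> ['undersized']; tightening or loosening tolerances never touches the model
```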
This flexibility proves valuable for manufacturers serving multiple retail brands.
Romero explained this with an example of a frozen pizza manufacturer supplying Stop & Shop, Walmart and Target, each with different quality requirements. "That user can have a Walmart recipe and Target recipe, for example, so that the operator doesn't need to go in and key these values in individually, they'd just select the recipe they need for that run."
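Conceptually, a recipe is a named bundle of settings the operator selects at the start of a run. The snippet below is a simplified illustration; the recipe names and values are invented and do not reflect any retailer's actual requirements.

```python
# Illustrative only: recipe names and values are invented to show the idea
# of per-customer inspection recipes; they are not actual retailer specs.

RECIPES = {
    "walmart_frozen_pizza": {"min_pepperoni_count": 20, "max_edge_gap_mm": 15.0},
    "target_frozen_pizza":  {"min_pepperoni_count": 24, "max_edge_gap_mm": 10.0},
}

def load_recipe(name):
    """Operator picks a recipe for the run instead of keying in values by hand."""
    if name not in RECIPES:
        raise ValueError(f"No recipe named {name!r}; available: {sorted(RECIPES)}")
    return RECIPES[name]

active = load_recipe("walmart_frozen_pizza")
print(active["min_pepperoni_count"])  # -> 20
```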
ROI through defect intelligence
The system's analytics dashboard provides clear ROI metrics. "We know exactly how many defects we saw," Romero explained. In one example, identifying 816 corn dogs with broken sticks at $0.30 per unit immediately quantified potential savings.
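Those two figures are enough for a quick back-of-the-envelope calculation of the exposure the dashboard surfaces:

```python
# Worked example using the figures quoted in the article.
defect_count = 816   # corn dogs flagged with broken sticks
unit_cost = 0.30     # dollars per unit
print(f"Quantified potential savings: ${defect_count * unit_cost:,.2f}")
# -> Quantified potential savings: $244.80
```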
Beyond simple counts, the system also reveals patterns: second shift might run slower than first shift, or a yield dip might coincide with a defect spike rather than reduced throughput. The granular data helps different stakeholders, from C-suite executives monitoring overall yield to Kaizen teams diagnosing specific quality issues.
"Different people care about different elements of this," Romero said. "C-suite executive might just look at throughput and yield. The Kaizen team might do a Kaizen event to figure out in depth what defect is causing an issue."
About the Author
David Greenfield, Editor in Chief