Artificial intelligence applications have been growing rapidly in a variety of industrial technologies, ranging from data analytics and quality inspections to autonomous mobile robots. Now the technology is being applied to robotic grasping applications to enable accurate picking and placing of random objects in unstructured and changing environments.
Siemens says it is working to “democratize artificial intelligence (AI)-enabled robotics by encapsulating systems for complex problems in easy-to-use software.” To this end, the company is developing an as-yet-unnamed software technology designed for use by system integrators and OEMs to create cost-effective, advanced AI-driven piece-picking systems that can “reliably pick and place objects that are unknown to the system at runtime.”
Traditional automated pick-and-place systems follow fixed, pre-programmed routines in a structured environment. Applying AI enables robotics to perform generic tasks in unstructured and dynamically changing environments.
Thus, the key differentiator of the Siemens technology under development is its ability to pick objects unknown to the system.
According to Siemens, this new software will enable users to move from robotic systems with static pick points to AI-driven piece-picking robots in less than an hour. Setup is accomplished in four steps:

1. Set the robot arm and related end-of-arm tooling to move safely to static pick-and-place points.
2. Mount the 3D camera.
3. Install the Siemens piece-picking software on the target runtime hardware of choice.
4. Follow the guided setup via the user interface for calibration.
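To make the four steps concrete, here is a minimal sketch of what a setup descriptor for such a system might capture. This is purely illustrative: the actual Siemens software uses its own guided user interface, and all field names and values below are assumptions, not the real configuration format.

```python
# Hypothetical setup descriptor mirroring the four steps; field names
# and values are illustrative assumptions, not the Siemens format.
setup = {
    "robot": {
        "safe_pick_point": [0.40, 0.00, 0.30],   # static pick pose (m)
        "safe_place_point": [0.00, 0.45, 0.30],  # static place pose (m)
    },
    "camera": {"model": "any-rgbd", "mount": "overhead"},
    "runtime": {"target": "plc"},          # or "industrial-pc"
    "calibration": {"completed": False},   # done via the guided UI
}

def setup_complete(cfg):
    """All four steps are done once robot poses, camera, runtime target,
    and a completed calibration are all present."""
    return (
        "safe_pick_point" in cfg["robot"]
        and "safe_place_point" in cfg["robot"]
        and cfg["camera"].get("mount") is not None
        and cfg["runtime"].get("target") in {"plc", "industrial-pc"}
        and cfg["calibration"]["completed"]
    )
```

A check like `setup_complete` corresponds to the point where the guided calibration finishes and the system is ready to pick.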
With these steps implemented, calculated pick points are then continuously provided to the robot motion program, enabling the robot to grasp any object.
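The continuous flow of pick points into the robot motion program can be sketched as a simple loop: the vision module turns each new point cloud into a grasp pose, which is handed to the robot. Everything here is a stand-in under stated assumptions; `next_grasp_pose` simply picks the highest point for a top-down grasp, whereas the real software uses a pretrained AI model.

```python
from dataclasses import dataclass

@dataclass
class GraspPose:
    """Hypothetical grasp pose: position plus an approach rotation."""
    x: float
    y: float
    z: float
    yaw: float

def next_grasp_pose(point_cloud):
    """Stand-in for the AI vision module: map a point cloud (a list of
    (x, y, z) tuples) to a grasp pose. This toy version grasps the
    highest point in the bin, i.e. a top-down pick."""
    top = max(point_cloud, key=lambda p: p[2])
    return GraspPose(x=top[0], y=top[1], z=top[2], yaw=0.0)

def pick_cycle(point_cloud, send_to_robot):
    """One cycle of the continuous loop: compute a pose and hand it to
    the robot motion program, which executes the pick."""
    pose = next_grasp_pose(point_cloud)
    send_to_robot(pose)
    return pose
```

In the described system this cycle repeats after every pick, so the robot always receives a fresh pose computed from the current state of the bin.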
“The setup takes only 30 minutes through an easy and straightforward calibration process,” said Solowjow. “The user interface is very simple and clear.”
Target applications for this technology include order fulfillment operations with a high number of SKUs requiring 500-1,200 picks per hour, such as goods-to-person tote picking, conveyor induction and sortation in e-commerce, e-grocery warehouse automation, and food and beverage packaging.
Camera- and robot-agnostic
A key aspect of Siemens’ piece-picking software is that OEMs and integrators can use any robot, gripper, or vision system. The software supports multiple RGB-D 3D camera manufacturers, allows the robot arm to be chosen based on the application, and is modularly designed with robot system designers in mind. The runtime platform can be integrated into a Siemens Simatic S7-1500 TM (Technology Modules) MFP (multifunctional platform) PLC or any industrial PC to operate independently of any robot/vision system.
As for robot grippers, Solowjow said grippers “usually support generic communication protocols such as Profinet. Gripper specifics—such as their geometry or suction cup diameter—can be addressed through calibration.”
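How a gripper specific like suction cup diameter might feed into pick planning can be sketched as follows. The record fields and the flat-surface rule are illustrative assumptions, not the calibration model the Siemens software actually uses.

```python
import math
from dataclasses import dataclass

@dataclass
class GripperCalibration:
    """Hypothetical calibration record for a suction gripper."""
    suction_cup_diameter_mm: float
    tcp_offset_z_mm: float  # distance from tool flange to cup tip

def min_graspable_surface_mm2(cal: GripperCalibration) -> float:
    """Simplifying assumption: a flat surface must be at least as large
    as the suction cup's circular footprint for the seal to hold, so
    candidate pick points on smaller faces can be filtered out."""
    r = cal.suction_cup_diameter_mm / 2
    return math.pi * r * r
```

A planner could use such a threshold to discard grasp candidates that the calibrated gripper cannot physically hold, which is one way geometry-dependent specifics get “addressed through calibration.”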
Inputs to the Siemens pretrained, AI-powered vision software in the PLC come from the 3D camera’s point cloud, with the output being the grasp pose for any object at runtime. A Siemens HMI is used to interface with the PLC and software, and Siemens TIA Portal can be used to program the entire system—Siemens PLC and HMI as well as the robots, vision system, and gripper.
Solowjow noted that the software requires the PLC to have Siemens’ technology module multifunctional platform (TM MFP), which allows for edge computing applications on Simatic controllers. The TM MFP is designed for integrating various independent applications, he explained, and will be extended to other platforms such as Siemens Industrial Edge, industrial PCs, and cloud platforms.
The Siemens AI-driven piece-picking system also provides collision avoidance for all actions in the robot workspace, enables robots to master challenges such as handling tightly packed boxes using low-cost 3D cameras, and reduces errors through features such as automatic bin detection.
“We pretrain the AI skill for grasping using simulation,” said Solowjow. “Similar to a human, the AI learns how to grasp in general—as opposed to specific objects—so that grasping of unknown or previously unseen objects is possible.”
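The idea of pretraining a general grasping skill in simulation can be sketched as a toy data-collection loop: random objects are sampled, a grasp is attempted, and the (observation, outcome) pairs become training data for a model that learns how to grasp in general rather than memorizing specific objects. Every name and the success rule below are illustrative assumptions, not Siemens’ training pipeline.

```python
import random

def sample_object(rng):
    """A stand-in simulated object: the width of its flattest face, mm."""
    return {"face_width_mm": rng.uniform(5.0, 80.0)}

def attempt_grasp(obj, cup_diameter_mm=20.0):
    """Toy success rule: the grasp holds when the suction cup fits
    entirely on the object's flattest face."""
    return obj["face_width_mm"] >= cup_diameter_mm

def collect_pretraining_data(n_episodes, seed=0):
    """Run simulated grasp attempts and collect labeled examples that a
    generic grasping model could be trained on."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_episodes):
        obj = sample_object(rng)
        data.append((obj, attempt_grasp(obj)))
    return data
```

Because the simulated objects are sampled at random rather than drawn from a fixed catalog, a model trained on such data has to generalize, which is what makes grasping previously unseen objects possible at runtime.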