Adding 3D capabilities to vision systems is expected to give manufacturers significantly more capable material-handling robots. Following this trend, Industrial Perception Inc. has unveiled a new vision-guided robot technology that it says will accelerate automated logistics applications such as bin picking and mixed-case handling between trucks, containers, conveyors and pallets.
3D vision allows material handling in unstructured environments where 2D systems have often failed, such as scenes where materials overlap, blend into similarly colored backgrounds or vary in surface depth. The increased environmental awareness of 3D vision lets cameras see the walls of trucks, the sides of containers or the edges of a bin. This means that robots will soon be navigating around high walls with end-effector path planning or reaching their grippers into hard-to-reach corners.
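To illustrate why depth data succeeds where color alone fails, here is a minimal sketch of depth-based foreground segmentation. The function and parameter names are illustrative only, not part of any IPI product; the sketch assumes a depth map in meters and a known background distance.

```python
import numpy as np

def segment_by_depth(depth_map, background_depth, tolerance=0.05):
    """Separate foreground objects from a backdrop using depth alone.

    A 2D camera can fail when an object's color matches its background;
    a depth sensor still distinguishes them because the object sits
    closer to the camera. (Illustrative sketch, not an IPI API.)
    """
    # Pixels measurably nearer than the known background plane are foreground.
    return depth_map < (background_depth - tolerance)

# Toy example: a 4x4 depth map (meters) with one box 0.3 m in front of a
# wall 1.2 m away -- identical colors would defeat a 2D system here.
depth = np.full((4, 4), 1.2)
depth[1:3, 1:3] = 0.9                       # the box
mask = segment_by_depth(depth, background_depth=1.2)
print(mask.sum())                           # 4 foreground pixels
```

The same thresholding idea, applied against known trailer walls or bin edges, is what lets a 3D-aware robot treat those surfaces as obstacles during path planning.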
A prototype of Industrial Perception’s 3D vision-enabled robot has proved capable of unloading trucks, including heavy containers (around 35 pounds) that put humans at heightened risk of back injury. IPI product manager Erin Rapacki says the robot would be mounted on an automated guided vehicle for transport between docks. It would drive itself into a truck trailer and begin picking boxes and loading them onto an outgoing conveyor, minimizing the lifting and bending done by dockworkers. IPI’s current goal is to identify end users that would benefit from a robot capable of 500 picks per hour. However, Rapacki says the technology is scalable and predicts that 1,000 picks per hour will be achievable.
The technology that made this possible was developed by mimicking the Kinect sensor on Microsoft’s Xbox 360 video game console. “When that came out a couple of years ago, it really triggered a lot of robotics researchers to work with it,” explains Rapacki. “It is such a cheap camera to render objects in 3D and environments in 3D.”
IPI’s 3D sensors measure object dimensions to distinguish between multiple items in cluttered environments. The robots can pick items from piles, modeling each object and calculating trajectories to intelligently determine the proper grasping points. Once an item is held, the arm’s speed is maximized without risk of collision. An interface can be established between the robots and other equipment to communicate state, failure modes and retry logic.
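The state-and-retry interface mentioned above can be sketched as a small state machine. All names here (`PickState`, `PickCell`, `try_grasp`) are hypothetical, invented for illustration, assuming a grasp attempt that reports success or failure.

```python
from enum import Enum, auto

class PickState(Enum):
    IDLE = auto()
    GRASPING = auto()
    SUCCESS = auto()
    FAILED = auto()

class PickCell:
    """Minimal retry wrapper around a grasp attempt.

    `try_grasp` stands in for the perception-plus-motion stack; other
    equipment could poll `state` to coordinate. Illustrative only, not
    an actual IPI interface.
    """
    def __init__(self, try_grasp, max_retries=3):
        self.try_grasp = try_grasp
        self.max_retries = max_retries
        self.state = PickState.IDLE

    def pick(self, item):
        # Retry up to max_retries times, exposing state to observers.
        for attempt in range(1, self.max_retries + 1):
            self.state = PickState.GRASPING
            if self.try_grasp(item):
                self.state = PickState.SUCCESS
                return attempt
        self.state = PickState.FAILED
        return None

# Simulated gripper that succeeds on the second attempt.
attempts = iter([False, True])
cell = PickCell(lambda item: next(attempts))
print(cell.pick("box-17"), cell.state)  # 2 PickState.SUCCESS
```

Exposing failure modes this way lets a conveyor or AGV downstream decide whether to wait, divert, or summon a human.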
Rapacki says these systems can also deconstruct pallets and transfer materials to conveyors or floor-load trucks. They can build pallets even from mixed case sizes, reorienting packages so that the desired surface faces outward.
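Building pallets from mixed case sizes is at heart a bin-packing problem. A toy first-fit-decreasing sketch of the grouping step, under the simplifying assumption that cases are characterized only by footprint area (a real palletizer must also reason about 3D geometry, weight, and stack stability):

```python
def build_layers(cases, layer_capacity):
    """Greedy first-fit-decreasing assignment of cases to pallet layers.

    Each case is a (name, footprint_area) pair. Illustrative sketch of
    the grouping step only; names are hypothetical.
    """
    layers = []  # each entry: [remaining_capacity, [case names]]
    # Placing large cases first tends to leave fewer awkward gaps.
    for name, area in sorted(cases, key=lambda c: -c[1]):
        for layer in layers:
            if layer[0] >= area:       # first layer with room wins
                layer[0] -= area
                layer[1].append(name)
                break
        else:
            layers.append([layer_capacity - area, [name]])
    return [names for _, names in layers]

cases = [("A", 6), ("B", 4), ("C", 3), ("D", 3), ("E", 2)]
print(build_layers(cases, layer_capacity=10))  # [['A', 'B'], ['C', 'D', 'E']]
```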
Further down the road, 3D vision systems could be incorporated into outbound conveyor lines at loading stations to identify which boxes are going where. Once a package is identified, colored lasers could be beamed onto it, so workers know which truck to load with which boxes without taking time to read labels.