On my niece’s first day as a nurse at the new Benioff Children’s Hospital in San Francisco, she was confronted by a robot that asked her to “please step aside.” It had supplies to deliver, and she was in its way.
Opened just about a year ago, the $1.5 billion UCSF Medical Center—with a children’s hospital, women’s hospital and cancer hospital—boasts the world’s largest fleet of autonomous robots. The 25 Tug robots from Aethon shuttle food, linens, lab specimens and medications around the facility.
The Tug is an autonomous mobile robot made specifically for hospitals. It uses a built-in map and sensors to navigate hospital halls. It can call and ride elevators, give people the right of way, and navigate around objects. It also communicates with employees and patients as needed, drawing on a vocabulary of about 70 phrases.
“It’s about efficiency. It’s not a great use of someone's time to be transporting something from A to B,” said Dan Henroid, director of the Department of Nutrition and Food Services for UCSF Medical Center, when the hospital first announced the arrival of the robots. “That’s more time that we spend in front of the patient. We want that personal touch as much as possible.”
Manufacturing plants have the same need for efficiency. And although Aethon leads the industry in hospital installations, its autonomous mobile robots have also made their way into the industrial space. These are different from the automated guided vehicles (AGVs) that have been more common in manufacturing. The Tug has its own navigation system, requiring no added infrastructure in the facility, and an omni-directional drivetrain. It can work alongside people and navigate around obstacles.
This kind of autonomy is beginning to take over from typical AGVs on the factory floor, which require carefully laid out paths of magnetic tape for guidance. The government has spent billions of dollars on autonomous tanks that can move around the battlefield, and Google is spending millions of dollars on autonomous cars, says Roland Menassa, leader of the Advanced Manufacturing and Software Technology Center at GE Robotics. “We’re finding some of that bleed over into manufacturing with automatic guided vehicles,” he says. “But now we’re starting to see autonomous AGVs, with no more tape on the floor. The vehicle knows the hallways and aisleways. These advances are going to allow for flexible material transfer in a plant.”
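The difference between tape-following and map-based autonomy comes down to planning: instead of tracing a fixed guide path, the vehicle holds a map and computes its own route, re-planning when an aisle is blocked. A minimal sketch of that idea (not Aethon's or GE's actual software; the grid, function and scenario here are illustrative assumptions) using A* search over an occupancy grid:

```python
# Illustrative sketch only: a map-based vehicle plans its own route over
# an occupancy grid, rather than following a fixed tape path. Grid cells
# are 0 (free) or 1 (blocked); moves are one cell up/down/left/right.
from heapq import heappush, heappop

def plan(grid, start, goal):
    """A* shortest path from start to goal; returns a list of cells or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                    (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# A cart blocks the center of the aisle; the planner routes around it.
floor = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
route = plan(floor, (0, 0), (2, 2))
```

When an obstacle appears, the vehicle simply marks those cells as blocked and re-plans from its current position; a tape-guided AGV has no such option.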
“In robotics research, perhaps the highest excitement for me is the accomplishment of authentic autonomy,” says Red Whittaker, professor of robotics and director of the Field Robotics Center at Carnegie Mellon University. The ambition for autonomous cars has been for them to understand where they are, where they’re going, and what to do. “Now autonomy is coming into the work world—reasoning about what it’s about, and thinking about what to do, how to do it, how to stay out of trouble to get the job done.”
Adapting robots to human environments
It’s all part of helping robots make their way more seamlessly into human environments.
“How do we take advances of state-of-the-art technologies to really help us advance our efficiencies, our quality, our productivity?” Menassa asks. “We at GE have to be very careful. We’re a highly flexible company, with a lot of manual processes. When we inject automation, we must do it in a way that’s still flexible.”
Collaborative robots, which can work safely alongside their human counterparts without need for fencing, have been at the forefront of that movement, he adds. “All of a sudden when Rethink Robotics came on, and Universal Robots, it changed everything. After a few years, the big companies took notice,” he said, ticking off companies like Fanuc, ABB and KUKA, which have all introduced collaborative robots. “Everybody is rushing into that race.”
NASA’s Johnson Space Center in Houston has been working for some time on Robonaut, a dexterous humanoid robot that can help humans work and explore in space. Two key considerations in that development were the robot’s ability to work alongside humans and its ability to handle a variety of tools.
|NASA and General Motors jointly developed the Robonaut 2 to be able to work alongside humans not only in space, but in automotive manufacturing as well.|
Robonaut 2 (R2), developed jointly by NASA and General Motors with help from Oceaneering Space Systems, was launched to the International Space Station (ISS) on the space shuttle Discovery. R2 continues to add value for the astronauts’ day-to-day jobs, according to Menassa, who previously led GM’s advances in automation robotics, including the Robonaut work.
Space shuttles and space stations are environments highly designed for humans, which is an important factor in robotic development. “They’re not going to change the environment for robots,” Menassa says. “Where there are lots of humans working, they want to introduce automation, but in a way that does not change the environment people are working in.”
The same is true in industrial automation, and was also GM’s goal in developing R2. It’s also been a substantial factor in the latest surge of collaborative robots in industry. “If you think of the traditional way of robotics, it designed people out of an automation system. With advances in robotics, it’s the other way around. It’s the person that replaces the robot,” Menassa says. “On the assembly line, if somebody has to take a bio break, the team leader steps in, and they continue the assembly process. If a robot fails, a person can step in and work in lieu of that robot. Why? Because they didn’t change anything in the environment where that robot exists. Now I can interchange between people and machine. That’s a very powerful concept.”
Car manufacturers have led the way in industrial robot use. However, although the last 54 years in robotics have shown a lot of progress in material handling and process applications like welding, there’s been little in the assembly area, according to Menassa. “That’s the last frontier that withstands automation,” he says, pointing to the 3,000-4,000 parts that go into assembling a typical car. “There are parts in bins, maybe in a bag, and in cardboard boxes. It’s very hard for robots to handle.”
The beauty of the human hand
A key aspect of NASA’s Robonaut effort has been to develop dexterous manipulation, or the ability for the robot to use its hands to do work. The challenge has been to build machines with dexterity that exceeds that of a suited astronaut. With improved dexterity over its predecessor, R2 can use the same tools that astronauts currently use and removes the need for specialized tools just for robots.
Menassa calls the robotic hand the Achilles heel of robotics today. “The dexterity is something lacking in robotics today,” he says. “We have six or seven degrees of freedom in the arm, but the end of the arm is a stump. There’s nothing there that we can use that’s flexible to handle multiple parts. We spend so much time in tooling and retooling. The ultimate promise of a robot that can do anything still is not there.”
Rodney Brooks, co-founder of iRobot and founder of Rethink Robotics, sees dexterous manipulation as “the toughest frontier in research for the general deployment of robots,” he says. “It involves mechanism: beautiful hands. It involves materials; the material of our skin is very important. It involves embedding lots of sensors. And it involves algorithms. So for progress in that, you have to have a team that can tackle at least those four things all at the same time.”
Brooks was speaking recently about the growing amount of research being done on robotics for elder care in the home. He expects, within the next 20 years, to see lots of robots helping people maintain their independence. But first, it requires big areas of research in what he calls the three M’s: mobility (how a robot navigates a house with stairs), messiness (how it copes with items left lying around) and manipulation.
Manipulation is likely the toughest nut to crack. “There have been so many failed attempts over the last 40 years, people are gun shy,” Brooks said. “But if we made progress in dexterous manipulation, it would not just be good for the home. It would be good for the factory, it would be good everywhere. It would be good for fulfillment centers. What’s happening in fulfillment centers, basically the automation of robotics has gone in a linear motion, because that we can do, but still grabbing the object that’s got to be packed is done by a human hand.”
Computer vision has developed significantly, though it still faces some challenges in robotics. But seeing a widget and seeing a box doesn’t require a change in tools, points out Erik Nieves, founder of Plus One Robotics, which is focused on human-centric robotics. “But to pick up said widget or to pick up a box may be completely different end effectors, different hands. Maybe I can do it with suction for a box, but have to have a grasper for widget X. So what do you do?”
Automation’s current approach requires changing end effectors as needed—something very different from what humans do in the same situation. “We have this very dexterous end effector that can handle all these different geometries,” Nieves says. “But we are a long way from solving that problem. This is a lot of degrees of freedom and it’s a lot of sensing. There’s a lot of tactile, there’s a lot of force feedback that we do inherently that just isn’t there technically right now. Today you can buy robot hands, but they’re more expensive than robots. So the ROI’s just not there today.”
|Festo’s Bionic Handling Assistant is modeled after an elephant’s trunk, with no rigid joints and 11 degrees of freedom.|
Festo has moved beyond the human hand and borrowed from other biological systems to develop a robot arm based on an elephant’s trunk and an end effector based on the structure of a fish’s fin. The design of the Bionic Handling Assistant is more flexible than a conventional robotic arm. It is made of rings of flexible plastic, giving it 11 degrees of freedom. The adaptive FinGripper is able to hold and move delicate and differently shaped objects without damaging them.
From all angles
While researchers from a variety of fields work on the manipulation problem, another important area for robotics is robots’ suitability for taking over human functions in the three D’s: dirty, dangerous and dull. Environments that are not so human-friendly are precisely the kinds of environments where robots really come in handy. Robots equipped with vision or other sensing capabilities were deployed in the aftermath of the Fukushima nuclear disaster to assess structural damage, and can likewise be used in any number of hazardous or hard-to-reach locations.
And these days, they can come from just about anywhere—even dropping in from up above or climbing up and over walls.
NASA’s Jet Propulsion Laboratory at the California Institute of Technology has created a robot that can not only scale vertical surfaces such as wood, stone and concrete, but is also made lightweight and tough so that it can be dropped into sites from a drone or a helicopter or thrown over a wall. Called the Durable Reconnaissance and Observation Platform (DROP, suitably enough), it uses “microspines”—tiny hooks that fan out from wheels on flexible treads to grip a variety of surfaces.
DROP’s mobility and durability make it highly adaptable. It’s small enough to be mobile and stealthy, but still big enough to carry a useful payload, such as a video camera and microphone.
Although this robot was developed with such endeavors in mind as Mars exploration, such wall-climbing capabilities could also be well suited to industrial activities, such as examining structural integrity of walls or pipes.
Astec, which makes continuous and batch-process hot-mix asphalt facilities, has been experimenting with its own wall-climbing robot. The “crawlerbot” uses brushless AC motors and magnetized wheels to scale asphalt silos to measure the density of the walls. It would eliminate the need for inspection scaffolding, and would keep workers safely on the ground, where they could review images captured by the crawlerbot’s camera.