Instruments have become more sensitive and controllers faster, so technology now makes it possible to present an alarm to an operator for any loop outside its specified range. Sounds like a good idea, right? But operators now often have no way to discern which alarm is the one that could cause an incident and which ones are spurious.
Peter Andow, principal consultant at automation supplier Honeywell Process Solutions, in the United Kingdom, says, “People are still getting into the business [of alarm management].” He believes that an impetus to this is EEMUA 191, promulgated by the U.K. Engineering Equipment and Materials Users’ Association, along with the soon-to-be-adopted ISA18 standard from the International Society of Automation. In his estimation, about half of process manufacturing sites in Europe have some kind of alarm management project in place. “The good thing is that half of customers, and it’s more common in the United States, are rationalizing projects. They’ve achieved huge improvements in normal alarm rates, but not as much progress as desired has been achieved dealing with alarm floods,” Andow says.
Alarms appear on an operator’s screen at two basic rates. The first could be called a normal, relatively infrequent rate; the second is an “alarm flood.” While the former may average one alarm per 10 minutes, a flood can bring many alarms, even hundreds, per minute. Andow relates the case of the “dead band” around an alarm point being set too narrowly, or the system reacting too quickly, so that a value hovering near its limit triggers the same alarm again and again.
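That dead-band behavior can be made concrete in a few lines. The sketch below is illustrative only (the class name, thresholds, and signal are invented for the example, not drawn from any vendor system): an alarm with a zero dead band re-triggers on every small fluctuation around the limit, while a wider dead band holds the alarm in until the value genuinely recovers.

```python
class HighAlarm:
    """Latching high alarm with a dead band (hysteresis).

    The alarm raises when the value crosses `limit`, but does not
    clear until the value falls back below `limit - dead_band`.
    A dead band of zero reproduces the chattering problem.
    """

    def __init__(self, limit, dead_band):
        self.limit = limit
        self.dead_band = dead_band
        self.active = False

    def update(self, value):
        """Return True only when a new alarm is presented to the operator."""
        if not self.active and value > self.limit:
            self.active = True
            return True          # new alarm annunciated
        if self.active and value < self.limit - self.dead_band:
            self.active = False  # clears quietly; no new alarm
        return False


# A process value oscillating just around the alarm limit of 100:
signal = [99.9, 100.1, 99.9, 100.1, 99.9, 100.1]

chattering = HighAlarm(limit=100.0, dead_band=0.0)
damped = HighAlarm(limit=100.0, dead_band=1.0)

print(sum(chattering.update(v) for v in signal))  # 3 alarms: re-triggers each cycle
print(sum(damped.update(v) for v in signal))      # 1 alarm: dead band absorbs the noise
```

Widening the dead band trades a slower "return to normal" indication for a large reduction in nuisance alarms, which is exactly the tuning decision a rationalization project makes point by point.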
David Strobhar, principal human factors engineer, Center for Operator Performance, in Dayton, Ohio, cites six pitfalls in alarm management that engineers face while attempting change. First is the alarm philosophy, that is, failure to get appropriate buy-in from all relevant groups, which can lead to problems later on. “This is usually from folks who have their own idea of what should be alarmed and its priority, even if there is no action or the consequence is not very severe,” he adds. Second, limited access to operators and emotional responses during the rationalization process can either delay or skew the results.
Implementation is the third pitfall to watch, according to Strobhar. Getting the changes through the management-of-change process and doing the actual programming can take three or four times as long as the rationalization itself. Fourth, unreliable instruments create a gap between what can be alarmed and what should be. Fifth, failure to enforce the results will let the system degrade quickly over time. The last pitfall Strobhar sees is ongoing management of change: handling off-hours alarm issues, such as chattering and instrument failure, and adding alarms after upset analyses or with new equipment can degrade the system.
Andow, who also works with the Abnormal Situation Management (ASM) consortium, of Phoenix, cites a study by the ASM that found the normal alarm rate is now down to less than one per 10 minutes. The study also found that an operator can’t keep up with alarms at a rate greater than one per minute. He considers that rate a good target, albeit a tough one to reach.
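The rates Andow cites translate directly into a measurement a site can run against its own alarm logs. A minimal sketch, assuming timestamped alarm records in seconds (the thresholds come from the figures quoted above, roughly one per 10 minutes as normal and more than one per minute as unmanageable, and the function names are invented for illustration):

```python
from bisect import bisect_left


def alarm_rates(timestamps, window=600.0):
    """For each alarm, count the alarms in the preceding `window`
    seconds (default 10 minutes). `timestamps` must be sorted."""
    return [i - bisect_left(timestamps, t - window) + 1
            for i, t in enumerate(timestamps)]


def classify(count_per_10min):
    # Thresholds based on the rates cited above (illustrative only):
    # about one alarm per 10 minutes is normal; more than one per
    # minute (>10 per 10-minute window) overwhelms the operator.
    if count_per_10min <= 1:
        return "normal"
    if count_per_10min > 10:
        return "flood"
    return "elevated"


quiet = [0.0, 700.0, 1500.0]              # roughly one alarm per 10+ minutes
burst = [2000.0 + i for i in range(30)]   # 30 alarms in 30 seconds

print([classify(r) for r in alarm_rates(quiet)])       # all "normal"
print(classify(alarm_rates(quiet + burst)[-1]))        # "flood"
```

Running a metric like this over historian data is how rationalization projects demonstrate the "huge improvements in normal alarm rates" Andow describes, and why floods show up as a separate, harder problem.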
As for the future, Andow sees the next big push likely to be in alarm suppression tools. “People have been asking for it for years, but they didn’t have a good basic system,” he adds. “First you need to fix the instrumentation, then the basic alarming. Only then can you implement alarm suppression techniques. There will be a lot of work in this area accomplished in the next five to 10 years.”
Gary Mintchell, firstname.lastname@example.org, is Automation World’s Editor in Chief.