
Consortium Reports New Findings on Alarm Rates

New research funded by the Center for Operator Performance shows that process operators may perform better at high alarm rates when alarms are grouped by priority, rather than chronologically.

How many process alarms can operators cope with before their performance is degraded, and how can control panels be designed to help ease the burden? New research funded by the Center for Operator Performance (COP), Dayton, Ohio, may shed additional light on those questions.

“Operator Performance as a Function of Alarm Rate and Interface Design” is the title of a presentation to be given by COP member David A. Strobhar at this year’s WBF North American Conference May 24-26, in Austin, Texas. (WBF is the former World Batch Forum.) Automation World interviewed Strobhar recently to get a preview of the topic.

The Center for Operator Performance is a consortium of process industry operating companies, vendors and academia. Members include British Petroleum, Chevron, Flint Hills Resources, Marathon Petroleum, Nova Chemicals and Suncor Energy on the user side, along with controls vendors ABB and Emerson Process Management. Other members include Wright State University, in Dayton, where the Center is housed, and Beville Engineering Inc., also in Dayton, which specializes in human factors engineering for the refining and petrochemical industries.

While alarm rate limits have been suggested by various organizations in the past, the numbers have typically not been based upon empirical research done in a process control environment, says Strobhar, who is chief human factors engineer at Beville. Oft-cited alarm rate targets published by the British-based Engineering Equipment and Materials Users Association (EEMUA) were based generally upon consensus estimates, he says. And the new ISA-18.2 Alarm Management standard published last year by the International Society of Automation (ISA) essentially relied upon the same numbers, Strobhar contends.

Digging deeper

With that in mind, COP set out recently to dig more deeply into the topic, Strobhar says. The result was an academically rigorous study funded by the Center that was administered by Craig Harvey, Ph.D., an assistant professor at Louisiana State University.

For the study, more than 30 LSU students who had been trained on using a pipeline simulator were tested on their ability to respond to alarms. The students were measured on the time taken to acknowledge an alarm, time taken to initiate corrective action, and the accuracy of their responses. The experiment was set up to study two major variables—the rate of alarms that subjects would have to deal with, and the way in which the alarms were presented.

The study turned up two statistically significant findings, or “major discoveries,” according to Strobhar. The first was that the performance of students was almost identical across the four lowest alarm rates tested, ranging from one alarm in 10 minutes up to 10 alarms in 10 minutes. This seems to contradict earlier EEMUA guidelines stating that an alarm rate of 10 every 10 minutes is “very likely to be unacceptable,” says Strobhar.

To hear a podcast of the complete interview with David Strobhar, please visit

Priority grouping

The second major finding revealed an interaction between the way in which alarms are presented and the way that subjects respond. When alarms were presented at a rate of 20 every 10 minutes, there was a significant degradation in the students’ performance, says Strobhar. But the study found that when alarm display was grouped according to alarm priority, the students’ performance was about 40 percent better at the higher alarm rate than it was when the alarms were displayed chronologically by time of actuation.

“So not only did we see from the experiment [at what alarm rate] the alarms were causing a problem, but it turns out that how you present the alarms can either magnify the problem or minimize its impact,” Strobhar observes. This result has particular implications, given that interface designers often rely on operator preference, he says, and that process operators often voice a preference for chronologically grouped alarm summary screens.

Follow-on study

The Center for Operator Performance is currently funding a follow-on study that is using refinery operators and pipeline controllers as test subjects instead of students. Preliminary results of that study are expected to be ready in time for the Center’s annual meeting June 13-15 at Wright State University, Strobhar notes. And if the results of the second study validate those of the first study, it could have a near-term impact on the way that process companies design their alarm management programs, and the way that controls vendors design operator interfaces, Strobhar indicates.

Two member companies of the COP consortium have already commented that “if this data and these results hold up with actual operators over a longer time period, then their intent is to go into their refineries and mandate a change in how they’re presenting their alarm information,” Strobhar says.

Likewise, the two vendor consortium members—ABB and Emerson—are also considering changes to their alarm display technology based on the study results, according to Strobhar. Both companies are looking at ways that displays can achieve operator preferences for chronological displays at lower alarm rates, but be automatically reformatted to a performance-enhancing prioritized alarm list at higher alarm rates, he says.
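The rate-adaptive behavior the vendors are exploring can be sketched in code. The following Python is purely illustrative, not either vendor's implementation: the `Alarm` fields, the 10-minute window, and the 20-alarm threshold are assumptions drawn from the rates discussed in the article.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str          # hypothetical point name, e.g. "FC-101"
    priority: int     # 1 = highest priority
    timestamp: float  # seconds since start of shift

def format_alarm_summary(alarms, window_minutes=10.0, rate_threshold=20):
    """Sketch of a rate-adaptive alarm summary list.

    Below the threshold (alarms within the trailing window), keep the
    chronological view operators say they prefer; at or above it, regroup
    by priority, which the COP study found improved performance at high
    alarm rates. All names and threshold values here are assumptions.
    """
    if not alarms:
        return []
    window_s = window_minutes * 60.0
    latest = max(a.timestamp for a in alarms)
    recent = [a for a in alarms if latest - a.timestamp <= window_s]
    if len(recent) < rate_threshold:
        # Low alarm rate: chronological, newest actuation first.
        return sorted(alarms, key=lambda a: a.timestamp, reverse=True)
    # High alarm rate: group by priority, newest first within each group.
    return sorted(alarms, key=lambda a: (a.priority, -a.timestamp))
```

The same alarm list thus renders two ways depending on how busy the last ten minutes have been, matching the operators' stated preference at low rates while switching to the performance-enhancing prioritized layout when alarms flood in.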

