In March 2010, at the international physics laboratory of the European Organization for Nuclear Research (CERN), two proton beams collided at an energy of 7 trillion electron volts in the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC). CERN, home to some of the most technologically advanced facilities for researching the basic building blocks of the Universe, built the LHC to test the predictions of high-energy physics and to help answer fundamental questions about the laws governing the interactions and forces among elementary particles, the structure of space and time, and the relationship between quantum mechanics and relativity.
Results like these required years of investment in engineering, science and technology. Keeping that technology running reliably requires continuous monitoring of temperature and humidity in the tunnels and experimental areas of the LHC site.
A network of chillers and a cooling distribution system cool circuits in the tunnels and experimental areas and supply air-handling units with chilled water to control temperature and dew point in the Super Proton Synchrotron (SPS) tunnel and the 27 km LHC tunnel. The control room operates a centralized alarm system covering a variety of systems; operators monitor alarms 24/7 and call on a stand-by maintenance service to address issues.
Originally, chiller alarms were hardwired to controllers that relayed them to the control room. However, this arrangement gave little information about the type of problem triggering an alarm.
“We had minimal information about what was happening on the chillers and the cooling distribution, only that something was wrong,” said Sabri Masrie of CERN. “To effectively address this issue, we implemented a system to connect our alarm system to the chillers directly and enabled access to the online database to acquire the alarms and status of the chillers.”
Several OPC servers were evaluated for delivering critical information from the chillers. Many of those tested did not fully support the BACnet/IP protocol, were hard to configure or were poorly documented.
“All promised easy connection, but required modification on the chiller side,” Masrie said. “Some showed degradation in performance after connecting several stations. Software Toolbox suggested using the TOP Server BACnet OPC Server. We found the documentation very thorough and customer support very helpful during the implementation phase. The automatic generation of the BACnet data points worked perfectly. When all 14 stations were added, with about 200 data points each, the performance was stable.”
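The scale Masrie describes, 14 stations with roughly 200 data points each, can be illustrated with a short sketch. The station and point names below are hypothetical stand-ins, not TOP Server's actual auto-generated tag names; the point is only the tag-count arithmetic.

```python
# Illustrative sketch only: hypothetical tag names standing in for the
# auto-generated BACnet points described above (14 stations x ~200 points).

POINT_TEMPLATES = [f"AnalogInput{i}.PresentValue" for i in range(200)]

def generate_tags(station_count=14):
    """Build a flat list of '<station>.<point>' tag names."""
    tags = []
    for s in range(1, station_count + 1):
        station = f"Chiller{s:02d}"  # hypothetical station name
        tags.extend(f"{station}.{point}" for point in POINT_TEMPLATES)
    return tags

tags = generate_tags()
print(len(tags))   # 14 stations x 200 points = 2800 tags
print(tags[0])     # Chiller01.AnalogInput0.PresentValue
```

Even at this size, a single address space of a few thousand points is well within what a production OPC server is expected to serve without the performance degradation Masrie saw elsewhere.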
Software Toolbox TOP Server, powered by Kepware, is a standards-based connectivity solution for automation and control devices. The TOP Server BACnet OPC server software provides important information about chiller status to a custom Linux-based application that operators use to avoid costly shutdowns. CERN had interfaced its Linux application with many OPC servers before, so connecting to TOP Server was plug-and-play.
“Using the TOP Server BACnet software, we now have access to full diagnostics of alarms and monitoring of the cooling systems from the control room,” Masrie said. “Stand-by service technicians can log in from home to check what is happening and operators can give more precise instructions to stand-by service providers.”
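BACnet objects carry a four-bit Status_Flags property (in-alarm, fault, overridden, out-of-service), which is the kind of per-point diagnostic a BACnet OPC server can expose alongside present values. The sketch below is not CERN's application; it is a minimal, hypothetical decoder showing how such flags translate into the more precise status a control-room operator or stand-by technician can act on.

```python
# Minimal sketch: decode a BACnet Status_Flags bit-string into readable
# diagnostics. The flag order (in-alarm, fault, overridden, out-of-service)
# follows the BACnet standard; the summary wording is our own invention.

FLAG_NAMES = ("in-alarm", "fault", "overridden", "out-of-service")

def decode_status_flags(flags):
    """flags: sequence of four booleans in BACnet Status_Flags order."""
    active = [name for name, bit in zip(FLAG_NAMES, flags) if bit]
    return active or ["normal"]

def summarize(station, flags):
    """One-line status suitable for a control-room alarm list."""
    return f"{station}: {', '.join(decode_status_flags(flags))}"

print(summarize("Chiller03", (True, False, False, False)))
# Chiller03: in-alarm
print(summarize("Chiller07", (False, True, False, True)))
# Chiller07: fault, out-of-service
```

A one-line summary like this is the difference between the original hardwired setup ("something is wrong") and the richer picture the article describes, where a technician at home can see which condition tripped on which chiller.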
In addition to BACnet, the TOP Server software offers more than 100 different drivers for connecting automation and control networks and devices. Once data is in TOP Server, it can be shared with other applications via the OPC DA, A&E and UA standards, as well as DDE and several automation-vendor-specific interfaces.
More information on TOP Server may be found at www.softwaretoolbox.com/topserver.