San Diego’s Supercomputer Center presents a unique temperature control challenge for engineers.
San Diego’s Supercomputer Center is home to many of today’s Big Data projects. Millions of dollars in research, whether in earth science, biology, astrophysics, or health IT, depends on the sophisticated computing systems contained within, which support high-performance computing, grid computing, computational biology, computational chemistry, data management, scientific visualization, and computer networking.
Just how powerful are the center’s capabilities? Where a typical PC’s central processing unit may contain two or four cores, the center’s latest supercomputer has a total of 47,776 Intel cores and 247 terabytes of memory, and is capable of 2 quadrillion floating-point calculations per second.
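To put those figures in perspective, here is a rough, illustrative comparison against an ordinary desktop, using only the numbers quoted above; the 16 GB of PC memory is an assumption for illustration and does not come from the article.

```python
# Back-of-envelope comparison of the supercomputer's scale against a typical
# quad-core desktop PC, using the figures quoted in the article.

SUPER_CORES = 47_776       # Intel cores in the center's latest system
SUPER_MEMORY_TB = 247      # terabytes of memory
SUPER_FLOPS = 2e15         # 2 quadrillion floating-point operations per second

PC_CORES = 4               # a typical quad-core desktop
PC_MEMORY_TB = 0.016       # assumed 16 GB of RAM (illustrative only)

print(f"Core count ratio:  {SUPER_CORES / PC_CORES:,.0f}x")
print(f"Memory ratio:      {SUPER_MEMORY_TB / PC_MEMORY_TB:,.0f}x")
print(f"Peak throughput:   {SUPER_FLOPS / 1e15:.1f} petaflops")
```

Run as written, the sketch shows the machine packing roughly 12,000 times the cores and more than 15,000 times the memory of that assumed desktop.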
Such massive computing power comes with unique challenges: supercomputers generate substantially more heat than typical PC systems, and because the many research projects housed in one space are rarely all running at full capacity at the same time, the heat load swings, producing substantial and rapid temperature fluctuations.
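A quick sketch shows why those swings matter to the cooling plant: essentially all electrical power drawn by the racks is released as heat that must be removed. The IT load values below are hypothetical figures chosen only to illustrate the range between partial and full utilization; they are not drawn from the article.

```python
# Rough illustration of how IT load translates into cooling demand.
# Standard conversions: 1 kW of electrical load ~ 3,412 BTU/hr of heat,
# and 1 ton of refrigeration = 12,000 BTU/hr.

BTU_PER_KW_HR = 3412
BTU_PER_TON = 12000

def cooling_tons(it_load_kw: float) -> float:
    """Tons of cooling needed to remove the heat from a given IT load."""
    return it_load_kw * BTU_PER_KW_HR / BTU_PER_TON

# Hypothetical partial- to full-load draws for a large computing floor.
for load_kw in (500, 1500, 3000):
    print(f"{load_kw:>5} kW IT load -> {cooling_tons(load_kw):6.0f} tons of cooling")
```

Even under these assumed numbers, the required cooling capacity varies by a factor of six as utilization shifts, which is the kind of fluctuation the center's engineers must handle.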
Ordinary approaches to controlling data center temperatures, such as a standard layout of alternating hot exhaust and cool intake aisles, simply won’t work.