
High-Performance Computing Breaks Oil and Gas Simulations Record

Reservoir simulation calculations from IBM, Stone Ridge Technology and Nvidia show that large-scale modeling projects can be accessible to small and mid-size oil and gas companies as well as large ones.

Representation of the billion-cell model showing porosity variation.

Much of what can be achieved today with sensing, data analytics, connectivity and more is possible because of the enormous computing power now available. Combine that with the relatively new-found ability to get more fossil fuels out of the earth, and you get a very powerful tool to manage what is arguably any oil producer’s greatest asset: the reservoir.

Earlier this year, ExxonMobil announced a major breakthrough in complex oil and gas reservoir simulation models, using 716,800 processors operating in parallel to help ExxonMobil’s geoscientists and engineers optimize predictions of reservoir performance. It was not only the largest number of processors used in the oil and gas industry; it was also one of the largest simulations reported by industry in general.

Now imagine achieving the same success with far fewer processors. IBM, Stone Ridge Technology and Nvidia recently showed what can be done when graphics processing units (GPUs) are used to accelerate standard central processing units (CPUs) in engineering applications. Together, the companies shattered previous reservoir simulation capabilities using only a tenth of the power and a hundredth of the space. The result demonstrates the ability of Nvidia GPUs to simulate billion-cell models in a fraction of the published time, while delivering 10 times better performance and efficiency than legacy CPU codes.

The breakthrough achievement used 60 Power processors and 120 GPU accelerators, aiming to transform the price and performance for business-critical high-performance computing (HPC) applications for simulation and exploration.

Energy companies use reservoir modeling to predict the flow of oil, water and natural gas in the subsurface before drilling, so they can determine how to extract the most oil efficiently. A billion-cell simulation is extremely challenging because of the level of detail it seeks to provide. Stone Ridge Technology, developer of the Echelon petroleum reservoir simulation software, completed the billion-cell reservoir simulation in 92 minutes using 30 IBM Power Systems S822LC for HPC servers equipped with 60 Power processors and 120 Nvidia Tesla P100 GPU accelerators.
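To give a rough sense of what "cells" mean here: a reservoir simulator divides the subsurface into a grid of cells and repeatedly updates the state (such as pressure) of every cell based on its neighbors. The toy sketch below is not Echelon's actual method (commercial simulators use far more sophisticated implicit solvers); it is only a minimal illustration of the kind of per-cell stencil update that, applied independently across millions or billions of cells, maps naturally onto GPU hardware.

```python
import numpy as np

def diffuse_pressure(p, perm, dt=0.1):
    """One explicit time step of a toy 2D pressure-diffusion update.

    p    : 2D array of cell pressures
    perm : scalar permeability-like coefficient (illustrative only)

    Each cell moves toward the average of its four neighbors. The
    same arithmetic runs independently for every cell, which is why
    stencil updates like this parallelize so well across GPU cores.
    (Periodic boundaries via np.roll, purely for brevity.)
    """
    lap = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
           np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1) - 4.0 * p)
    return p + dt * perm * lap

# Tiny demo grid: a pressure spike in the center smooths outward.
grid = np.zeros((5, 5))
grid[2, 2] = 1.0
after = diffuse_pressure(grid, perm=1.0)
```

A real simulator couples multiple fluid phases and solves large implicit linear systems per step, but the data-parallel structure per cell is the property the GPU work exploits.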

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and what-if scenarios than previously so they can have insights to produce oil more efficiently, open up fewer new fields and make responsible use of limited resources,” said Vincent Natoli, president of Stone Ridge Technology. “By increasing compute performance and efficiency by more than an order of magnitude, we're democratizing HPC for the reservoir simulation community.”

The democratization of HPC is also demonstrated through a significant difference in the cost structure—more in the range of $1 million to $2 million rather than hundreds of millions of dollars for the ExxonMobil system.

IBM pointed to the benefits of its Power architecture for data-intensive and cognitive workloads. “By running Echelon on IBM Power Systems, users can achieve faster run times using a fraction of the hardware,” said Sumit Gupta, IBM’s vice president for HPC, artificial intelligence and analytics. “The previous record used more than 700,000 processors in a supercomputer installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM Power Systems machines that could fit in the space of half a ping-pong table.”

A common misconception about GPUs is that they are suited only to simpler, more naturally parallel applications such as seismic imaging. This project aimed to show that they are also efficient on complex application codes like reservoir simulators. It further demonstrates that even small and mid-size oil and gas companies can take advantage of computer-based reservoir modeling to optimize production from their asset portfolios.

Though there are only a few places in the world where the resolution and detail offered by billion-cell simulations would be useful, the calculation highlights the performance differences between new fully GPU-based codes like the Echelon reservoir simulator and equivalent legacy CPU codes. Echelon scales from the cluster to the workstation. While it can simulate a billion cells on 30 servers, it can also run smaller models on a single server or even on a single Nvidia P100 board in a desktop workstation—the latter two use cases being more in the sweet spot for the energy industry.

