How is high-performance computing being used today? Is quantum computing going to eclipse it?
In this What That Means video, Camille talks with James Reinders, High-Performance Computing Engineer at Intel. They dig deep into the intricacies of high-performance computing (HPC), its use cases over the years, HPC challenges, and the connection between HPC, quantum computing, and AI.
What Is High-Performance Computing and How Has It Evolved?
High-performance computing (HPC) involves solving very complex engineering and scientific computational problems with the biggest, fastest computers. Over the years, HPC has evolved both in its architecture and use cases.
In terms of architecture, the idea of building machines larger and more complex than anything that existed emerged in the mid-to-late ’70s. By the late ’90s, supercomputers had evolved into machines built from thousands of off-the-shelf processors. Since then, advances have included multicore processors and accelerators such as GPUs.
Until the ’90s, supercomputers were used mainly for military purposes such as weapons design. Since then, HPC use cases have spread into other fields such as exploration, energy, and pharmaceutical R&D. Specific use cases include using supercomputers to run drug simulations, design wind turbines, figure out where to drill for oil, and forecast climate change.
High-Performance Computing, Quantum Computing, and Artificial Intelligence
First, is quantum computing going to eclipse HPC systems?
Not likely. Currently, quantum computers solve very specific problems, not every problem. However, quantum computing has the potential to do well in modeling the real physical world. James Reinders believes that as quantum computing matures, it will become another form of high-performance computing: it won’t eclipse all the other architectures, it’ll be one of them.
As for the relationship between HPC and AI, it’s a close one. Artificial intelligence is making its way into traditional HPC workloads. We can see this in areas like molecular dynamics, where Monte Carlo sampling is replaced with an AI-trained neural network during simulations. In one such case, a generative adversarial network (GAN) was able to run the simulations and deliver comparable results at a fraction of the computing power.
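To make the surrogate idea concrete, here is a minimal toy sketch (not the actual workflow discussed in the episode): a classic Monte Carlo estimate averages many evaluations of an expensive model, while a trained surrogate replaces all that sampling with one cheap evaluation. The closed-form answer below stands in for what a trained neural network would return.

```python
import random

def expensive_model(x):
    # stand-in for a costly physics simulation step
    return x * x

def monte_carlo_estimate(n_samples=100_000, seed=0):
    # classic Monte Carlo: average many expensive model evaluations
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += expensive_model(rng.uniform(0.0, 1.0))
    return total / n_samples  # estimates E[x^2] = 1/3 on [0, 1]

def surrogate_estimate():
    # a trained surrogate would replace the sampling loop with one
    # cheap forward pass; here the known closed form stands in for it
    return 1.0 / 3.0

mc = monte_carlo_estimate()
print(abs(mc - surrogate_estimate()))  # small: comparable result, far fewer model calls
```

The point of the sketch is the cost asymmetry: the Monte Carlo path calls the expensive model 100,000 times, while the surrogate path calls it zero times.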
Navigating the Biggest Challenge of High-Performance Computing
A major challenge in high-performance computing is moving massive amounts of data, whether between processors or between a processor and memory. Data movement constitutes the greatest cost of running an HPC system and is also a major consumer of power, so there is a need to increase the efficiency of memory.
This challenge brings up topics such as high-bandwidth memory, processing capabilities, and caches, and how to put them together in one package. The goal is to increase memory bandwidth while reducing the power consumed in moving data around. Doing so lowers the cost of running the machine and increases its performance at the same time.
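A back-of-envelope roofline-style calculation shows why memory bandwidth, not raw compute, often sets the ceiling. The peak and bandwidth figures below are illustrative assumptions, not the specs of any real machine:

```python
# Roofline-style estimate: how much of peak compute can a kernel use?
# Both constants are illustrative assumptions, not real hardware specs.
PEAK_FLOPS = 10e12       # assumed peak compute: 10 TFLOP/s
MEM_BANDWIDTH = 1e12     # assumed memory bandwidth: 1 TB/s

def attainable_flops(arithmetic_intensity):
    """Attainable FLOP/s given FLOPs performed per byte moved.

    A kernel is memory-bound when bandwidth * intensity < peak.
    """
    return min(PEAK_FLOPS, MEM_BANDWIDTH * arithmetic_intensity)

# A streaming kernel doing 1 FLOP per 8-byte double loaded:
fraction_of_peak = attainable_flops(1 / 8) / PEAK_FLOPS
print(fraction_of_peak)  # → 0.0125, i.e. only 1.25% of peak is usable
```

Under these assumed numbers, the machine idles 98%+ of the time on such a kernel, which is why packaging memory closer to compute and raising bandwidth pays off directly in both performance and energy.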
James Reinders, Intel’s High-Performance Computing Engineer
James Reinders is an HPC Engineer at Intel. With over thirty years of experience, he has been part of the teams behind several supercomputers ranked among the top in the world. One of these was ASCI Red, which became the number one supercomputer in the world on the TOP500 list after it was assembled in 1996. He is also Intel’s oneAPI Evangelist.
#highperformancecomputing #HPC #supercomputers
The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
If you are interested in emerging threats, new technologies, or tips and best practices in cybersecurity, please follow the InTechnology podcast on your favorite podcast platform: Apple Podcasts and Spotify.