Everyone has heard about supercomputers. These specialised machines have been around for about as long as digital computing itself. Their first jobs were to crunch heavy mathematical workloads – one of the original supercomputer languages, FORTRAN, takes its name from “formula translation”.
Supercomputers, also called high-performance computers (HPCs), helped win the Cold War through their intensive analysis capabilities. Even today, in a world dominated by microprocessor systems, HPCs make a unique impact. It’s the power of HPCs that enabled scientists to analyse the virus behind COVID-19 so quickly.
All those examples still speak of bespoke boxes in very specialised environments, and many enterprises might say HPCs are not for them. But this picture is changing – and fast.
“HPCs were traditionally suited for academia and research,” says Adrian Wood, Business Operations Director at Data Sciences Corporation. “But with changes in technology there’s been this convergence of technologies from a commodity perspective.”
HPC for the AI era
HPCs continually evolve, embracing the best technology trends as they emerge. In the 1990s, HPCs shifted towards clusters of commodity x86 processors and, in the 2000s, they enthusiastically adopted multi-core systems. Today, new factors prompt another HPC evolution, one of them being the $800 billion chip company reshaping the computing market.
“I don’t think you can talk about modern HPCs and not talk about NVIDIA,” says Werner Coetzee, Data Sciences Corporation’s Business Development Executive. “They started out producing graphics chips (GPUs) for gaming and visual workloads, then expanded as users realised GPUs are great for crypto-mining and for training artificial intelligence. NVIDIA saw the future these changes could create and made several strategic acquisitions to support the case for industrial HPCs.”
“Industrial HPC” is the term NVIDIA uses to describe commodity HPC. Such systems remain specialised and immensely powerful, but their costs are far more appealing to a broader enterprise market. Industrial HPCs step beyond traditional use cases such as academic research and military modelling. As AI and big data needs grow in large companies, so does the usefulness of HPCs. The chip market is already banking on this shift – apart from NVIDIA, Intel and ARM are also actively producing new HPC architectures.
Emerging use cases include spotting fraud in financial services, training service desk bots, modelling factories, creating digital twins, emulating real-world conditions (popular among agritech companies), planning logistics and supply chains, generating actuarial predictions, accelerating R&D testing, and many more concepts that can involve AI, automation and data analysis.
“There’s a lot of businesses that are amassing huge amounts of data through various points, whether it’s IoT devices or just general connections into various data processing platforms,” Wood explains. “They have accumulated this huge amount of data that they assume there’s value in, but they don’t know how to get there. The traditional kind of data warehouse can offer some simple modelling tools to get an output, but it hasn’t really contributed to the vision of currency in the data.”
Why now?
HPCs make an excellent case for enterprise adoption. But why now, and what about alternatives?
The widening market for GPUs has helped drive down HPC costs. As the world realised that these graphics processors could do more than render game visuals, demand grew and supply followed. Companies like NVIDIA noted this shift and made strategic acquisitions and consolidations to take advantage of the trend.
For example, notes Coetzee, NVIDIA “acquired Mellanox, bringing high-performance, low-latency InfiniBand networking to the commercial enterprise world. They also embraced the advent of flash storage. Those three fundamental components – GPUs, low-latency InfiniBand networking and flash storage – have allowed HPCs to become affordable.”
The HPC boom even extends to the hyperscale cloud providers, which now offer HPC services. This raises the question: why get an HPC system if you can just rent one in the cloud?
“There are definitely conceptual overlaps between hyperscale cloud and HPC, and there are times when using the cloud is a better choice,” says Wood. “It’s especially useful for companies that want to dip their toes into what HPC can do for them. But cloud systems also have limitations in how much you can customise the HPC environment, and there are extra considerations such as ingress and egress costs for data.”
It’s not a question of one or the other. If HPCs had remained the cloistered mega boxes hidden behind university doors, the answer might be different. But as HPC has become more affordable, enterprises are seeing more reasons why their data, research and operational environments can benefit from an HPC system they control and configure.
“There’s probably not an industry out there that will not be disrupted by machine learning and deep learning and, most likely, every form of application going forward will have some form of artificial intelligence aspect to it,” says Coetzee. “Whether it’s on the sales side or the production side, it really doesn’t matter. Artificial intelligence supported by HPC is causing disruption everywhere.”