NVIDIA DGX SuperPod – 5th Fastest Supercomputer & 1st Green Supercomputer in the world.

In 2009, the world’s first cryptocurrency, Bitcoin, was created, and the graphics processing unit (GPU) proved to be the ideal platform to “mine”, that is, to perform the calculations required for the blockchain to function. At that point in time, GPUs were almost exclusively the domain of consumer gaming computers and high-end video processing workstations. Fast forward a decade to today and you will find that six of the top 10 fastest number-crunching computers on the planet contain GPUs (see the full list here: Top500). Supercomputers are not new, but the advent of the GPU has taken their power to new heights.

Now GPU-based supercomputing and AI are almost synonymous. The vast amounts of data, made up of many different data types, require enormous computational power to be processed, inferred from or analysed. Traditionally, supercomputer use cases were the sole domain of government and scientific research. This is no longer the case: forward-thinking enterprises across the globe have recognised that future business success will depend on their ability to leverage AI. They will do this by creating new products and services that incorporate AI into the organisation’s DNA and that of their customers. But it was not always this way, so let’s consider a very brief history of supercomputer adoption.

Mainstream supercomputer adoption challenges

In the past, it was almost impossible for enterprises to access supercomputers. It essentially meant that an organisation had to commission a science experiment to gain access to a supercomputer hosted by a government science department or research laboratory. Data had to be prepared, wrangled, extracted and transformed. Specialised code had to be written, checked and compiled by people with the letters PhD behind their names. Then you had to wait your turn in a long queue of people wearing long white lab coats. Once you reached the front of the line, you loaded your data and application, processed it, and hoped you got your answers the first time without having to rejoin the line of people in long white lab coats with the letters PhD behind their names.

If an enterprise did not feel like queuing at government departments, and could contort itself through the financial budget control hoops of its accounting department, it could consider spending an astronomical amount of money and time to build a data centre that could house a supercomputer and coat racks. Once it accomplished that feat of corporate agility, it could embark on the odyssey of building the actual supercomputer. By the time that was finished, and a few people in long white lab coats had been lured away from the waiting line to the newly built machine, the enterprise could start thinking about the questions it needed the supercomputer to answer, and whether those questions were still relevant.

In the more recent past, in addition to governments and research labs, hyperscale public cloud providers have emerged as an alternative. Using economies of scale, volume-discounted purchasing and no small number of people with PhD behind their names, they have given enterprises the opportunity to access supercomputer hardware.

This miracle of ready-to-run, cloud-based supercomputer power does not come without its own challenges: namely, the not-so-trivial difficulties the average enterprise faces in adopting a public cloud operating model. Couple that with the vast amounts of data that are continuously generated (not always in the cloud, mind you) and must be pulled into said cloud, secured from threats, and then repatriated back to not-yet-cloud-native on-premises systems to make the significant investment worthwhile.

This method of supercomputing is a giant leap in the right direction and enables many organisations to start their supercomputer AI journey. (Even better, it does not require the space of a small farm in Midrand filled with thousands of computers and anti-load-shedding generators.) Despite these benefits, however, public cloud is not always a feasible option.


Enterprise supercomputer AI adoption

To return to our opening statement: organisations are looking towards emerging technologies to gain advantage and remain relevant in an ever-changing marketplace. The consumers they cater for are also becoming more sophisticated, and their needs more complex than ever before.

The digitisation of every industry has unleashed a drive to simplify the customer experience and remove the complexity of transacting with the enterprise. Even as the “front end” customer experience is constantly simplified, the “back end” systems and regulatory demands have become increasingly complex in order to keep that complexity abstracted away from the user. At the risk of oversimplification, the days of a CRM or ERP system and a database feeding hundreds of dashboards, hosted on what can now be considered general purpose compute, as the enterprise’s decision-making capability belong to a legacy era.

Interconnected systems at enterprise scale require in-the-moment decision-making to satisfy the next-generation digitised consumer. Whether the use case is as critical as fraud detection at point of sale or simply determining the most appropriate white lab coat to advertise on a research lab PhD’s Google search results, supercomputing with the application of artificial intelligence is redefining the IT landscape of the modern enterprise.

True value creation and investment is shifting away from general purpose compute and rapidly being redirected to establishing supercomputer AI estates, as executives seek to establish differentiation.

In the very near future, it will not be uncommon for enterprises to commission petaflop-scale supercomputers as part of their digitisation strategy. These supercomputers will allow enterprises to strip complex industry demands down to mature digitised services and products.

There is no need to invest in a small farm in Midrand, or in PhD-level consulting fees to decipher your cloud provider’s itemised service billing. Thanks to the combination of emerging technologies, enterprises can adopt a cloud-native supercomputer AI platform in a footprint roughly the size of a small household oven.

In our next press release, we will explore the building blocks of a cloud-native AI data centre.

NVIDIA DGX A100 – 5 Petaflops in 6 Rack Units.

Data Sciences Corporation is a leading IT solutions provider and emerging technologies systems integrator. Contact us for more information about our NVIDIA AI/ML supercomputers for the enterprise.
