Two decades ago, anyone who needed to run large-scale calculations had to buy time on supercomputers. Today, Nvidia's graphics processing unit (GPU) chips power on-demand, supercomputer-equivalent computing in the cloud.
While Nvidia does not offer cloud computing directly as a service or product, its GPU chips are the key component of GPU-as-a-service (GPUaaS) solutions. GPUaaS is offered by every major cloud provider, including Alibaba Cloud, Amazon Web Services (AWS), IBM Cloud, Microsoft Azure, and Oracle Cloud.
In this article, we explore the common advantages and features these cloud vendors offer that are directly attributable to Nvidia's GPU Cloud Computing capabilities:
Nvidia GPU cloud computing and the GPUaaS market
Global Market Insights estimates that the GPUaaS market exceeded $1 billion in 2020 and will grow at a 40% CAGR through 2027. Nvidia is not a direct competitor in this market, but it is the major supplier of the GPUs that power artificial intelligence (AI) processing in the cloud and in private data centers.
Nvidia recognized $3.2 billion in sales of AI-enabling GPUs in a market that Omdia research estimated at $4 billion in 2020 and projects to grow roughly 900% to $37.6 billion by 2026. With a monster 80.6% share of that market, Nvidia will likely capture a significant portion of its growth from competitors such as AMD, Google, Intel, and Xilinx.
Nvidia GPU cloud computing key features
Nvidia’s GPUs supply the cloud with high-performance computing power. Cloud providers cite key features such as:
- Nvidia GPUs offer up to one petaflop of deep learning performance and 125.6 teraflops of single-precision floating-point performance.
- Training is available for scalable multi-node machine learning (ML).
- Customers can select among Nvidia GPU options to balance computing power against budget limitations.
- Chipsets are optimized for specific processes such as remote visualization, gaming, deep learning algorithms, and video processing.
Nvidia GPU cloud computing key benefits
Customers selecting an Nvidia GPU Cloud Computing solution usually hope to enjoy key benefits such as:
Fast and simple deployment
All cloud providers offer turnkey computing resources that require no hardware purchases, physical setup, cabling, or software installation. Cloud resources deploy rapidly, stay current with updates, and can simplify IT management and accounting.
Some providers even provide pre-packaged Docker images that can be deployed in minutes, so researchers can get started as soon as they are ready.
Flexible pricing and scalability
Once built, an in-house data center continues to cost money even if the need for computing power shrinks. And if the need grows, capacity cannot expand without additional capital investment.
Cloud computing provides virtually limitless scaling, with costs that track computing requirements. Because GPUs are even more expensive than general-purpose compute resources, fractional use of GPUs through GPU cloud computing amplifies the savings.
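The break-even logic behind that fractional-use argument can be sketched in a few lines. Both figures below are assumptions chosen for illustration, not vendor quotes:

```python
# Illustrative break-even sketch. Both figures are assumptions for the
# sake of the example, not vendor quotes.
HARDWARE_COST = 30_000  # assumed upfront cost of an in-house GPU server, in dollars
CLOUD_RATE = 0.75       # assumed hourly cloud GPU rate, in dollars

# Hours of GPU use at which renting costs as much as buying outright
# (ignoring power, cooling, staffing, and depreciation):
break_even_hours = HARDWARE_COST / CLOUD_RATE
print(f"Break-even at {break_even_hours:,.0f} GPU-hours")  # Break-even at 40,000 GPU-hours
```

An organization that cannot keep a dedicated GPU busy for tens of thousands of hours generally comes out ahead renting, which is the case for bursty research and prototyping workloads.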
Limitless compute on demand
As with pricing, once an in-house data center is built, computing power is limited to that data center’s capacity. GPU cloud computing provides as much scalable computing power as an organization can afford.
On-demand GPU cloud computing delivers performance and reliability to enable many computing-intensive needs such as scientific computing, AI and ML model training, and neural network modeling. Organizations also use GPU cloud computing for rendering of computer graphics, 3D model rendering, and video processing.
Nvidia GPU cloud computing use cases
National Grid
Great Britain’s National Grid strives to maximize energy generation from renewable resources, but the U.K.’s unpredictable weather complicates modeling of when wind and solar will be available. National Grid’s data modelers needed a high-performance platform that could reliably deliver high computing power while also minimizing energy consumption.
National Grid selected Oracle Cloud Infrastructure (OCI) powered by Nvidia GPU Cloud Computing to access 125 teraflops of processing power that draws only 300 watts of energy, much less than an electric stove. On the cloud, National Grid is enjoying 40% improvements in both performance and accuracy.
“[It] just works,” said James Kelloway, energy intelligence manager at National Grid ESO. “You can trust it. It doesn’t fall over; it just does its job, and it does it really, really well.”
NerdWallet
NerdWallet uses ML algorithms to help customers improve personal financial decision making. NerdWallet data scientists needed huge computing power for their ML models that initially took months to prototype.
But as a startup, the company could not spend recklessly to obtain that computing power. So, NerdWallet selected AWS powered by Nvidia GPU Cloud Computing.
“We essentially unlocked business value in two months,” said Ryan Kirkman, senior engineering manager at NerdWallet. “…We’re providing a guided path that makes solving these surrounding infrastructure problems easier from a platform and engineering perspective, while also accelerating the work of our data scientists.
“It’s a win-win. It used to take us months to launch and iterate on models; now, it only takes days.”
University of California Davis
Pharmaceutical researchers at the University of California, Davis (UC Davis) School of Medicine needed to run drug interaction simulations incorporating 500 million energy and force computations across more than 100,000 atoms. These requirements easily overwhelmed the university’s on-premises server clusters, and engineers went looking for more powerful cloud-based solutions.
They turned to OCI powered by Nvidia’s GPU Cloud Computing solution.
“OCI’s HPC platform helps us run 50 different simulations at once, which allows us to test all sorts of conditions and ensure that our research is not limited by the speed of our simulations,” said Colleen E. Clancy, a professor in the department of physiology and membrane biology and the department of pharmacology at UC Davis School of Medicine.
Nvidia GPU cloud computing differentiators
The Nvidia microchips that power GPU Cloud Computing dominate the GPU industry because of their key differentiators:
- High performance: In terms of raw power, Nvidia’s chips perform more calculations and enable faster graphic rendering than their competitors.
- Rapidly accelerated ML training: AWS notes that Nvidia-powered instances “have been proven to reduce machine learning training times from days to minutes” and can increase the number of simulations completed by 300–400%.
- Reliability: As an early mover in the GPU industry, Nvidia has the experience to provide more stable drivers, which lead to more reliable performance.
Nvidia GPU cloud computing ratings
As a component supplier, Nvidia does not have reviews published specifically for its GPU Cloud Computing offering. In fact, most rating sites do not yet rate most cloud offerings. However, Google’s Cloud GPUs service provides a representative sampling of reviews:
| Review site | Rating |
| --- | --- |
| Gartner Peer Insights | 4.4 out of 5 |
| G2 | 4.2 out of 5 |
Nvidia GPU cloud computing pricing
Many cloud providers offer free trials for GPU-accelerated cloud computing. Pricing varies by cloud provider, operating system, location, and computing power needs; quotes can be obtained from each cloud provider.
As an example of pricing, AWS offers four types of pricing (GPU instances quoted for U.S. East-Ohio):
- Dedicated
- Physical server maintained by AWS but dedicated to the customer’s use.
- Prices start at $5.016 per hour per instance and can go as high as $36.05 per hour per instance.
- On demand
- Unreserved, non-dedicated compute capacity without any long-term commitment, billed by the hour or by the second (60-second minimum).
- On-demand hourly rates start at $0.75 per hour.
- Reserved
- Reserving computing power in advance for one to three years and paying upfront can provide substantial savings.
- For one year with no upfront payment, prices start at $3.423 per hour per instance.
- For three years of reserved computing with full payment upfront, the cost starts at $2.668 per hour per instance.
- Volume discounts are available for reserved instances above $500,000.
- Spot
- Spot instances take advantage of unused AWS computing power for a reduced price; however, jobs must be interruptible and flexible for scheduling.
- Prices change every five minutes; sample pricing starts around $0.225 per hour and goes as high as $10.288 per hour for GPU instances.
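To make the billing rules above concrete, here is a small sketch. The rates are the starting figures quoted above; since each applies to a different purchase option (and potentially a different instance type), the annualized comparison is illustrative only:

```python
def on_demand_cost(seconds: float, hourly_rate: float) -> float:
    """Per-second on-demand billing with the 60-second minimum described above."""
    billed_seconds = max(seconds, 60)
    return billed_seconds / 3600 * hourly_rate

# A 45-second job at the $0.75/hour starting rate is billed as 60 seconds:
print(round(on_demand_cost(45, 0.75), 4))  # 0.0125

# Annualized cost of running one instance continuously at the quoted
# starting rates (illustrative; each rate belongs to a different
# purchase option):
HOURS_PER_YEAR = 24 * 365
for label, rate in [
    ("On demand", 0.75),
    ("Reserved, 1 year, no upfront", 3.423),
    ("Reserved, 3 years, all upfront", 2.668),
]:
    print(f"{label}: ${rate * HOURS_PER_YEAR:,.2f}/year")
```

Even a toy model like this shows why the purchase options exist: reserved rates only pay off at sustained utilization, while per-second on-demand billing keeps short experiments cheap.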
Of course, performing tasks on the cloud also requires data transfers, storage, networking, and possibly IP addresses, so be sure to use the providers’ pricing calculators.
Conclusions
Some applications and processes need high-performance computing power to finish within a reasonable timeframe. Nvidia GPUs have earned their reputation as a top option for the massive calculations associated with AI, graphics rendering, and simulations.
Adopting an Nvidia GPU Cloud Computing solution provides that power along with scalability, easy upgrades, and potential cost savings. For any enterprise seeking massive computing power, Nvidia GPU Cloud Computing options should be at the top of the consideration list.