Cloud customers can save at least 50 percent by selecting a preemptible GPU virtual machine option that is cheaper because Google can preempt the computing resources whenever it needs them, with little notice.
Organizations that use Google’s cloud infrastructure to run large compute-intensive workloads now have a cheaper option for doing so.
The company this week announced a new preemptible GPU option that allows customers to use virtual machines with graphics processing units capable of multi-teraflop performance at prices at least 50 percent lower than on-demand options. With the new option, enterprises can attach Nvidia K80 and Nvidia P100 GPUs to preemptible virtual machines on Google’s cloud at 22 cents and 73 cents per GPU hour, respectively.
The GPU-accelerated computing option is designed for organizations looking to run engineering and other compute-intensive workloads in the cloud. Google has positioned it as suited for those looking to run machine learning, medical analysis, scientific simulations, video transcoding and similar applications that require a lot of processing power.
The new preemptible option gives such customers a way to get the computing resources they need at much lower prices than usual, so long as they don’t require them to run continuously for more than 24 hours. Organizations that sign up for the option also do so with the understanding that Google can take away the GPU resources at any time, with little notice, to run other workloads.
Google’s new preemptible GPU option builds on a cloud usage and pricing model that the company first introduced in 2015 with its preemptible virtual machine offering. Under the model, Google has been renting out excess cloud infrastructure resources at relatively low prices to organizations that want access to massive computing capacity but only for short periods of time.
Whenever Google needs the extra resources to run workloads belonging to other, higher-paying customers, the company simply takes over, or preempts, usage by the lower-paying customers.
For example, under the preemptible model, a VM with 1 CPU and 3.75GB of memory that would cost more than $24 per month to use with on-demand pricing would be available for just $7.30. Similarly, a VM with 64 processors and 240GB of memory that would cost over $1,550 per month on an on-demand basis would cost just $467 under preemptible pricing.
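As a rough sanity check, the discount implied by those example figures can be computed directly. The following is a minimal Python sketch using only the numbers cited above; the function name is illustrative, not part of any Google API:

```python
def monthly_savings(on_demand: float, preemptible: float) -> float:
    """Return the percentage saved by choosing preemptible over on-demand pricing."""
    return round(100 * (1 - preemptible / on_demand), 1)

# 1 CPU / 3.75GB VM: about $24/month on demand vs. $7.30 preemptible
small_vm = monthly_savings(24.0, 7.30)    # 69.6 percent saved

# 64 CPU / 240GB VM: about $1,550/month on demand vs. $467 preemptible
large_vm = monthly_savings(1550.0, 467.0)  # 69.9 percent saved
```

Both example configurations work out to savings of roughly 70 percent, comfortably above the "at least 50 percent" Google advertises for preemptible GPUs.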
Google has positioned the preemptible VM option as ideally suited for running distributed and fault-tolerant workloads that are not dependent on a single VM instance. Because pricing for preemptible usage is fixed, organizations benefit by knowing exactly how much they will pay upfront compared to on-demand pricing models, according to the company.
With this week’s announcement, any GPUs attached to a preemptible VM on Google’s cloud will be eligible for the lower pricing as well. Google Compute Engine also provides features that let organizations automatically recreate GPU instances any time Google preempts their usage, so long as excess capacity is available.
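The recreate-on-preemption pattern can be sketched in a few lines of Python. The helpers below (`launch_preemptible_gpu_vm`, `was_preempted`) are hypothetical stand-ins for real Compute Engine API calls, used only to illustrate the control flow; in practice Google's managed instance group features handle this automatically:

```python
# Hypothetical stubs standing in for real Compute Engine API calls.
def launch_preemptible_gpu_vm(name: str) -> str:
    """Pretend to start a preemptible VM with a GPU attached; returns its name."""
    return name

def was_preempted(vm: str) -> bool:
    """Pretend to check whether Google has reclaimed the VM's resources."""
    return False  # a real check would poll the instance's termination status

def keep_gpu_worker_alive(name: str, max_restarts: int = 3) -> int:
    """Recreate the worker each time it is preempted, up to a restart budget.

    Returns the number of restarts performed.
    """
    restarts = 0
    vm = launch_preemptible_gpu_vm(name)
    while restarts < max_restarts:
        if was_preempted(vm):
            restarts += 1
            # Recreate the instance, assuming excess capacity is available.
            vm = launch_preemptible_gpu_vm(name)
        else:
            break  # still running; nothing to do in this sketch
    return restarts
```

Because preemptions can happen at any time, workloads built on this pattern need to checkpoint their progress so a recreated instance can resume where the preempted one left off.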
“Preemptible GPUs will be a particularly good fit for large-scale machine learning and other computational batch workloads as customers can harness the power of GPUs to run distributed batch workloads at predictably affordable prices,” said Chris Kleban and Michael Basilyan, product managers with Google’s Compute Engine group, in a blog post Jan. 4.