SUNNYVALE, Calif. — The chipmaker AMD is tapping Google Cloud for additional development scale.
AMD and Google Cloud are entering a multi-year technology partnership under which AMD will run electronic design automation (EDA) workloads for chip design on Google Cloud, extending the on-premises capacity of AMD's data centers, the companies announced last month.
AMD will also use several Google Cloud offerings to advance its hybrid and multicloud strategy for EDA workloads, including global networking, storage, and artificial intelligence (AI) and machine learning (ML) services.
The partners believe scale, elasticity, and the “efficient utilization of resources” play critical roles in chip design, particularly as the demand for compute processing grows with each node advancement.
To remain flexible, AMD will add Google Cloud's newest compute-optimized C2D virtual machine (VM) instances, powered by 3rd Gen AMD EPYC processors, to its pool of resources for EDA workloads.
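For readers evaluating a similar setup, a C2D instance of the kind described can be provisioned with Google Cloud's gcloud CLI. The instance name, zone, machine size, and image below are illustrative placeholders, not details from the announcement.

```shell
# Create a compute-optimized C2D VM backed by 3rd Gen AMD EPYC processors.
# Instance name, zone, machine type, and image are illustrative; substitute
# values appropriate for your own project and quota.
gcloud compute instances create eda-worker-1 \
    --zone=us-central1-a \
    --machine-type=c2d-standard-32 \
    --image-family=debian-12 \
    --image-project=debian-cloud
```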
With Google Cloud capabilities, AMD plans to run more designs in parallel, giving its teams the flexibility to absorb short-term compute demand without reducing allocations for long-term projects.
Google Cloud and AMD will also continue to explore new capabilities and innovations.
“In today’s semiconductor environment, the speed, scale, and security of the cloud unlock much needed flexibility,” said Sachin Gupta, GM and VP of infrastructure at Google Cloud.
Gupta said Google Cloud will provide the infrastructure to meet AMD’s compute performance needs and “equip the company with our AI solutions to continue designing innovative chips.”
Google Cloud’s C2D instances have allowed AMD to be more flexible and have provided the company with a “new avenue of high-performance resources that allows us to mix and match the right compute solution for our complex EDA workflows,” said Mydung Pham, corporate VP of silicon design engineering at AMD.