Arm has announced plans to reduce its global data center footprint by 45% and cut its use of on-premises computing resources by 80% by offloading some of its core computing tasks to the Amazon Web Services (AWS) cloud.
The British semiconductor designer is in the process of migrating most of its electronic design automation (EDA) workflows to Amazon's public cloud platform, and claims the progress it has made so far has led to a six-fold improvement in performance time for those workloads.
EDA is an important part of the semiconductor development process: it involves using software tools to design and analyze computer chips, and its workflows include front-end design, simulation, verification, and data analysis.
“These highly iterative workflows traditionally take many months or even years to produce a new device, such as a system on a chip, and involve massive computing power,” Arm and AWS said in a statement announcing their partnership.
It’s a complex job, as each chip is designed to deliver maximum performance in the smallest possible space and can contain billions of transistors that must be engineered down to the single-digit nanometer level.
Arm has traditionally run these computationally intensive workloads from on-premises data centers, but is now reworking its processes so that more of this work can be done in the AWS cloud.
“Semiconductor companies running these on-premises workloads must constantly balance costs, schedules, and data center resources to advance multiple projects at the same time. As a result, they may face a computing power shortage that slows progress or bear the expense of keeping computing capacity idle,” the statement continued.
In addition to its EDA workloads, the company is also using the AWS cloud to collect, integrate, and analyze the telemetry data it accumulates to inform its design processes, which it claims will improve the performance of its engineering teams and the efficiency of the organization as a whole.
Specifically, Arm will host these workloads on a variety of Amazon Elastic Compute Cloud (EC2) instance types and will use the machine learning-based AWS Compute Optimiser service to decide which instances to run and where.
Arm is also drawing on the expertise of AWS partner Databricks to develop and run machine learning applications on Amazon EC2, allowing it to process data extracted from its engineering processes to further improve the efficiency of its workflows.
“Through our collaboration with AWS, we have focused on improving efficiency and maximizing performance to return precious time to our engineers to focus on innovation,” said Rene Haas, president of IP Products Group (IPG) at Arm.
“We are optimizing engineering workflows, reducing costs and accelerating project timelines to deliver powerful results to our clients faster and more cost-effectively than ever.”
Peter DeSantis, senior vice president of global infrastructure and customer support at AWS, added: “AWS provides the truly elastic high-performance computing, unmatched network performance, and scalable storage that is required for the next generation of EDA workloads, and we are excited to partner with Arm to power their demanding EDA workloads running on our high-performance Arm-based Graviton2 processors.”