AWS introduces new Trn1 chips to accelerate training of machine learning models – TechCrunch


As more companies opt for custom silicon to run their customers’ workloads, Amazon has been busy on this front. It introduced the Inferentia chip in 2019 to speed up machine learning inference. Last year, the company announced a second chip, Trainium, designed specifically for training machine learning models. Today, AWS continued to build on that earlier work, introducing its latest machine learning chip, the Trn1.

Adam Selipsky announced the latest chip on stage in Las Vegas this morning during his first AWS re:Invent keynote.

“So today I am delighted to announce the new Trainium-powered Trn1 instance, which is expected to deliver the best price performance for training deep learning models in the cloud, and the fastest on EC2,” Selipsky told the re:Invent audience.

“Trn1 is the first EC2 instance with up to 800 gigabytes per second of bandwidth, so this is absolutely great for large-scale, multi-node distributed training use cases,” he said. It should work well for use cases like image recognition, natural language processing, fraud detection and forecasting, he added.

Plus, these chips can be networked together for even more powerful performance when placed in “ultra clusters.”

“We can network them into what we call ultra clusters, made up of tens of thousands of training accelerators interconnected with a petabyte-scale network. These training ultra clusters are, in effect, a powerful machine learning supercomputer for quickly training the most complex deep learning models with billions of parameters,” said Selipsky.

The company also plans to work with partners like SAP to take advantage of this new processing power, Selipsky said.
