Google uses SiFive RISC-V cores in AI compute nodes • The Register

RISC-V chip biz SiFive says its processors are used to some extent to manage AI workloads in Google data centers.

According to SiFive, the processor in question is its Intelligence X280, a multi-core RISC-V design with vector extensions, optimized for AI/ML applications in the datacenter. Combined with the matrix multiplication units (MXUs) in Google’s Tensor Processing Units (TPUs), this is intended to provide greater flexibility for programming machine-learning workloads.

Essentially, the X280’s general-purpose RV64 cores run the code that manages the device, and feed machine-learning calculations into Google’s MXUs to complete jobs. The X280 also includes its own vector processing unit, which can handle operations the accelerator units cannot.

SiFive and Google have been a little coy, perhaps for commercial reasons, about the exact packaging and usage, although it sounds to us like Google has placed its custom acceleration units in a multi-core X280 system-on-chip that connects the Google-designed MXU blocks directly to the RISC-V core complex. These chips will be used in Google’s datacenters, in “AI compute hosts” according to SiFive, to speed up machine-learning work.

We imagine that if these chips are used in production, they will take on machine-learning tasks within Google’s services. Note that this is not hardware you can rent directly from Google Cloud, which offers AI-optimized virtual machines based on traditional x86, Arm, TPU, and GPU technology.

The details were revealed earlier this month at the AI Hardware Summit in Silicon Valley, in a presentation by SiFive co-founder and chief architect Krste Asanović and Google TPU architect Cliff Young, and in a SiFive blog post this week.

According to SiFive, it noticed after the launch of the X280 that some customers had started using the part as a companion core alongside an accelerator, to do all the housekeeping and general processing tasks that the accelerator wasn’t designed to do.

Many found that managing the accelerator required a full-featured software stack, the chip biz says, and customers realized they could solve this with an X280 core complex alongside their big accelerator, using the RISC-V CPU cores to run all the maintenance and housekeeping code, perform math operations the big accelerator can’t, and provide various other features. Essentially, the X280 can serve as a kind of management node for the accelerator.

To capitalize on this, SiFive has worked with customers like Google to develop what it calls the Vector Coprocessor Interface eXtension (VCIX), which lets customers hook an accelerator directly into the X280’s vector register file, offering higher performance and greater data bandwidth.

According to Asanović, the benefit is that customers can bring their own coprocessor into the RISC-V ecosystem and run a full software stack and programming environment, with the ability to boot Linux on a single chip, with full virtual memory and coherent cache support, on a mix of general-purpose CPU cores and acceleration units.

From Google’s perspective, it wanted to focus on improving its family of TPU technologies rather than spend time building its own application processor from scratch, so bolting its acceleration capabilities onto a ready-made general-purpose processor seemed like the right path to take, according to Young.

VCIX essentially glues the MXUs to the RISC-V cores with low latency, skipping the need to spend many cycles transferring data between the CPU and the accelerator unit via memory, cache, or PCIe. Instead, we’re told, it takes only dozens of cycles through vector register access. It also suggests that everything – the RISC-V CPU complex and the custom accelerators – is on the same chip, packaged as a system-on-chip.

Application code runs on the general-purpose RISC-V cores, and any work that can be accelerated by the MXU is routed through the VCIX. According to Young, there are other benefits to this approach besides efficiency. The programming model is simplified, resulting in a single program with nested scalar, vector, and coprocessor instructions, allowing for a single software toolchain where developers can program in either C/C++ or assembly language, as desired.

“With SiFive VCIX-based general-purpose cores ‘hybridized’ with Google MXUs, you can build a machine that lets you ‘have your cake and eat it too,’ taking advantage of all the power of the MXU, the programmability of a full CPU, and the vector performance of the X280 processor,” said Young.

The ability to build such a custom chip will likely remain the preserve of hyperscalers like Google, or those with niche needs and deep pockets, but it shows what can be achieved thanks to the flexibility of the open RISC-V ecosystem model.

This flexibility and openness seems to have been enough to entice Google – a longtime proponent of RISC-V, with RV cores used in some of its other products – to use the upstart architecture rather than plug its custom coprocessors into x86 chips or Arm-licensed designs. ®

PS: Remember when Google was toying with using the POWER CPU architecture in its data centers?
