Arrcus Ups the AI Networking Game


By: R. Scott Raynovich


You may have noticed AI is hot at the moment. And while every major tech infrastructure provider has been trying to get in the game, networks have been a key focus for those enabling AI because of the need for low-latency, high-bandwidth connectivity to move AI data.

Today at the Mobile World Congress show in Las Vegas, cloud-native routing technology company Arrcus introduced a new networking solution targeted at AI -- the Arrcus Connected Edge-AI (ACE-AI). Arrcus says ACE-AI is aimed specifically at networking the explosion of AI data moving among the edge, the datacenter, telco points of presence (PoPs), and the cloud.

Rethinking the Network

GenAI is testing the limits of traditional networking and requires a complete rethinking of how networks will need to be built for the future. Arrcus says it can help with its distributed, cloud-native network operating system -- ArcOS -- which can be used to build networks on the fly that are optimized for AI workloads.

Arrcus says its platform does this by using network and route intelligence to improve throughput and decrease latency. As the press release says:

"ACE-AI enables traditional CLOS and Virtualized Distributed Routing (VDR) architectures, with massive scale and performance to provide lossless, predictable connectivity for GPU clusters with high resiliency, availability and visibility. Features like Priority Flow Control (PFC), intelligent congestion detection and buffering at ingress points to prevent packet drops, ensure lower Job Completion Times (JCT) and tail latency."

As you can tell, Arrcus' pedigree is heavily rooted in network engineering. Co-founder and CTO Keyur Patel and co-founder and chief architect Derek Yeung were Cisco veterans. Additionally, Arrcus has 11 patents in areas including virtualized distributed routing, route state databases, VXLAN, and BGP-SPF.

AI Skin in the Game

It's not surprising that Arrcus has released an AI-specific product and is doing additional marketing around it, as AI has been the topic du jour among networking as well as communications companies. Some of the large public networking players have credited it with a new surge in infrastructure growth and have played up their AI wares on recent conference calls.

Arista Networks, for example, is being touted by many Wall Street analysts as a top AI play. In August, the company's shares jumped 20% after Q2 results beat expectations, with some of the gain attributed to AI.

“The AI opportunity is exciting," said CEO Jayshree Ullal on the earnings conference call. "As our largest cloud customers review their classic cloud and AI networking plans, Arista is adapting to these changes, thereby doubling down on our investments in AI."

Cisco also jumped into the AI party in June of this year, announcing new Silicon One chips aimed at competing with offerings from Broadcom and Marvell.

In addition, Arrcus competitor DriveNets announced a new product in May: DriveNets Network Cloud-AI. DriveNets argues that its distributed, virtualized chassis can improve AI performance by reducing idle time by 30%. (AI workloads are most effective when the network is 100% utilized.)
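
To see why idle time matters, here is a back-of-the-envelope Python sketch using hypothetical numbers (not DriveNets measurements): if the GPUs in a training job spend a fifth of their time waiting on the network, trimming that idle time by 30% shortens the whole job by roughly 6%.

```python
# Hypothetical numbers for illustration only -- not DriveNets figures.
busy_fraction = 0.80        # share of job time the GPUs are actually computing
idle_fraction = 0.20        # share of job time the GPUs wait on the network
idle_reduction = 0.30       # the kind of idle-time cut DriveNets describes

new_job_time = busy_fraction + idle_fraction * (1 - idle_reduction)
print(f"job shrinks to {new_job_time:.0%} of its original length")  # -> 94%
```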

A Plug for Ethernet

Arrcus’ latest announcement notably puts Ethernet in the spotlight, in contrast with InfiniBand, a networking technology favored by AI chip giant NVIDIA for interconnecting systems in high-performance computing (HPC) environments. Though NVIDIA's SmartNICs also support Ethernet, NVIDIA has been vocal in its support of InfiniBand as the preferred network for AI. On NVIDIA's most recent quarterly conference call on August 23, 2023, EVP and CFO Colette Kress said:

“Networking revenue almost doubled year on year driven by our end-to-end InfiniBand networking platform, the gold standard for AI…. InfiniBand delivers more than double the performance of traditional Ethernet for AI.”

Not so fast, say Ethernet proponents. In a July 2023 blog post, Arista CEO Jayshree Ullal cited the need for innovations to overcome slowdowns she said InfiniBand causes through “rigid ordering” that leaves links underutilized. Arista is part of the recently established Ultra Ethernet Consortium (UEC), which aims to improve Ethernet for AI and HPC workloads.

All of which puts Arrcus squarely on track with others who see Ethernet as the most readily adaptable networking infrastructure for AI. And with its emphasis on maintaining ultra-reliable connectivity for GPU clusters, Arrcus has built ACE-AI to address the perceived drawbacks of Ethernet and ensure optimal AI performance.