Who Will Win the "Scale Up" Challenge?

(Editor's Note: This is a premium Cloud Tracker Pro article that will be available for a limited time for free.)
A marketing war is brewing among vendors pushing “scale up” architecture for AI networking using evolving interconnection technologies. And there may be no clear winner anytime soon.
Recent news highlights the situation: The Ultra Accelerator Link (UALink) Consortium announced its first specification in April, pointing the way for AI infrastructure vendors to use an open standard to connect multiple GPUs as a single entity. Then at the Computex show in Taiwan in May, NVIDIA unveiled NVLink Fusion, extending the vendor’s GPU interconnection technology to third-party chipmakers. And last week, Broadcom announced its Tomahawk 6 chip with Scale Up Ethernet (SUE) technology, which uses enhanced Ethernet to compete directly with NVLink and, by implication, UALink.
Let’s take a closer look at each of these technologies and their proposed contributions to AI infrastructure.
UALink: Open Standard, Early Days
The first iteration of UALink (known as the UALink 200G 1.0 Specification) defines a low-latency interconnect for GPUs in back-end networks. It supports 200 Gb/s per lane in one-, two-, or four-lane connections and links up to 1,024 accelerators in a pod, so a four-lane connection delivers a maximum bidirectional bandwidth of 800 Gb/s.
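To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The lane rate, lane counts, and pod size come from the spec summary above; the script itself is purely illustrative.

    # Back-of-envelope UALink 200G 1.0 port bandwidth (illustrative only).
    LANE_RATE_GBPS = 200             # per-lane data rate defined by the spec
    MAX_ACCELERATORS_PER_POD = 1024  # pod size ceiling cited in the spec

    print(f"Pod size ceiling: {MAX_ACCELERATORS_PER_POD} accelerators")
    for lanes in (1, 2, 4):
        port_bandwidth_gbps = lanes * LANE_RATE_GBPS
        print(f"{lanes}-lane connection: {port_bandwidth_gbps} Gb/s")
    # A four-lane connection tops out at 800 Gb/s, the maximum figure cited above.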
Sources say the advantage of UALink is its simplicity. “It is focused on providing the low latency of PCIe with the bandwidth of Ethernet,” said a representative of one of the participating vendors, speaking anonymously. And of course it’s interoperable. Like NVLink Fusion, UALink supports a chiplet implementation, making it relatively easy to incorporate into existing solutions from a variety of vendors.
Users seem eager to see how UALink pans out in actual products, but there will be a lag before widespread adoption. “We expect to see products emerge in the middle or second half of 2026, with wider deployment in 2027,” said the representative cited above.
Vendors active in creating UALink include Alibaba, AMD, Apple, Astera Labs, AWS, Broadcom, Cisco, Enfabrica, Google, HPE, Intel, Juniper Networks, Meta, Microsoft, and Synopsys, to name just a few. (Keep those members in mind, since some of them show up in competing ecosystems.)
NVLink Fusion: A UALink Party Spoiler?
NVLink Fusion is an NVIDIA program designed to promote multivendor use of NVIDIA’s high-speed GPU interconnection technology, which currently supports up to 1.8 TB/s of bidirectional bandwidth per Blackwell GPU (18 NVLink connections at 100 GB/s each), linking up to 72 GPUs per rack and 576 in a pod.
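For a rough sense of how that per-GPU figure is built up, here is another illustrative sketch. Note that NVIDIA quotes NVLink bandwidth in bytes per second, whereas the UALink figures above are in bits per second; the bytes-to-bits conversion is the only step added here beyond the numbers already cited.

    # Illustrative NVLink per-GPU bandwidth math (Blackwell generation).
    NVLINK_LINKS_PER_GPU = 18
    GBYTES_PER_LINK = 100              # GB/s per NVLink connection

    total_tbytes = NVLINK_LINKS_PER_GPU * GBYTES_PER_LINK / 1000
    total_tbits = total_tbytes * 8     # convert bytes/s to bits/s for comparison
    print(f"Per GPU: {total_tbytes} TB/s (~{total_tbits:.1f} Tb/s)")
    # 18 x 100 GB/s = 1.8 TB/s of bidirectional bandwidth per Blackwell GPU.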
Companies supporting NVLink Fusion include MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence. These firms say they will use NVLink to connect their own chips and accelerator ASICs in AI systems. Fujitsu and Qualcomm will also integrate their CPUs with NVIDIA GPUs using NVLink.
Some users view NVLink Fusion as a bid to rain on UALink’s parade. One poster on Reddit thinks the program was introduced as a way to feature NVLink interoperability for recently announced customers in the Middle East:
“[Nvidia’s] not going to embrace UALink at this stage, so they release [NVLink Fusion] as a bridge. This was basically forced by the multiple vendor objectives from Humain, Saudi Aramco, and G42. They all have heavily stressed open ecosystems and will not embrace lock ins. Too much money on the table to walk away, so they get Qualcomm [and] a couple others that the Arab states are interested in beyond AMD, and get them to add in NVLink. It's really as simple as that.”
Broadcom’s SUE Claims “Scale Up AND Scale Out”
Broadcom, which is also part of the UALink Consortium, last week announced that its own Ethernet-based technology could be used in the new Tomahawk 6 chip to link up to 512 multivendor XPUs (including GPUs, TPUs, and other accelerators) at 200 Gb/s, or 1,024 XPUs at 100 Gb/s. The technology allows multiple XPUs to act as a single system within a rack (scale up) or to run distributed workloads across multiple AI pods (scale out).
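The two configurations Broadcom cites imply a fixed pool of switch capacity that can be carved into more endpoints at lower speed or fewer endpoints at higher speed. A short illustrative sketch of that tradeoff follows; the aggregate figure is simply the product of the numbers quoted above.

    # Radix-vs-speed tradeoff implied by the Tomahawk 6 SUE figures (illustrative).
    configs = {
        "512 XPUs at 200 Gb/s": 512 * 200,
        "1,024 XPUs at 100 Gb/s": 1024 * 100,
    }
    for label, aggregate_gbps in configs.items():
        print(f"{label}: {aggregate_gbps / 1000:.1f} Tb/s aggregate")
    # Both configurations work out to the same 102.4 Tb/s: more endpoints per
    # switch means less bandwidth per endpoint, and vice versa.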
Broadcom presented its Tomahawk 6 and SUE announcement alongside longtime customers Arista Networks and Juniper Networks. Broadcom has also made its SUE specs available to the Open Compute Project. One industry source told Futuriom that this was because the Ultra Ethernet Consortium (UEC) declined to make it part of its specifications. Two spokespeople for the UEC declined to comment.
Broadcom's Pete Del Vecchio, Product Manager for the Tomahawk Switch Family, had this to say in response to our question:
"This statement is incorrect. Further, from an external communication standpoint, UEC is currently focused on the 1.0 specification. This first specification pertains to scale-out networking. Additional items under discussion within the UEC Working Groups - technologies being actively developed and on the roadmap - are confidential to UEC."
Anticipating Many Approaches
What do users think of these options? Judging by comments online and off, they seem eager to see how UALink works, because they're always interested in open and interoperable solutions. Still, they’ll have to wait a while for meaningful adoption to take place. NVLink Fusion could also take a while to get off the ground in multivendor products, though NVLink itself has the advantage of being an established technology that won’t be disappearing anytime soon. And Broadcom's SUE remains a vendor-specific solution, albeit one whose Ethernet foundation promises interoperability benefits. Further, who knows what other vendors might produce to meet rack-scale integration requirements?
One determining factor could be the ongoing rise of Ethernet as a network of choice for AI. As hyperscalers continue to adopt Ethernet in AI networks, enterprise customers could follow suit. Familiarity with the technology and the affordability of solutions could work in favor of UALink and SUE among enterprise customers.
Some leading vendors are hedging their bets. Broadcom, as noted, is part of the UALink Consortium while also peddling its SUE approach with Tomahawk 6. Astera Labs will continue support for PCIe, CXL, and NVLink via its foundational Cosmos software, which supports all of its connectivity solutions and will also support upcoming UALink and NVLink Fusion products. And Enfabrica, which is also part of the UALink Consortium, offers its own solution for linking GPUs.
Bottom line? It could be a couple of years before a clear winner surfaces in the AI interconnection wars—if there's a winner at all. Realistically, the number of options open to enterprises for AI networking will increase over time, and the smart money is banking on a market with room for many.
Futuriom Take: Until UALink materializes in full force and the Tomahawk 6 chip is implemented in leading Ethernet switching solutions, NVLink will continue to be a primary connectivity option for AI infrastructure within large AI racks. Over the next couple of years, Ethernet solutions could gain ground with hyperscalers and enterprise customers. As more products enter the market, there will be pressure on vendors to support multiple options.