Startup Profile: CoreWeave's NVIDIA-Powered Cloud Service

Word has it that NVIDIA (Nasdaq: NVDA) has favored a handful of startups with sales of its coveted H100 graphics processing units (GPUs), the chip giant’s top-performing components for generative artificial intelligence (AI) workloads. And leading the small pack is CoreWeave, an alternative to the big cloud hyperscalers when it comes to offering compute capabilities for AI.
As noted by The Information earlier this year, NVIDIA is targeting CoreWeave and similar companies for early sales of the H100, which is in short supply due to heavy demand. The motivation behind the special treatment: unlike AWS and Google Cloud, CoreWeave has no plans to build its own processors to lessen its reliance on NVIDIA. Instead, with NVIDIA’s help, CoreWeave offers compute services that compete directly with the hyperscalers’.
Indeed, in its press releases, CoreWeave claims to “build cloud solutions for compute-intensive use cases — VFX [visual effects] and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds.”
UPDATE: CoreWeave's close relationship with NVIDIA was further revealed on August 3, when news broke that the startup had secured $2.3 billion in a debt facility for which NVIDIA H100 chips were used as collateral.
Notably, Microsoft appears to be alone among the top three cloud titans in choosing not to build its own AI accelerators, relying instead on NVIDIA’s H100. (This despite the vendor’s purchase of Fungible, a data processing unit [DPU] innovator, earlier this year.) To ensure an adequate supply of compute power, Microsoft reportedly plans to rely on CoreWeave for added H100 access in a deal that could cost billions of dollars and run for multiple years, according to CNBC. Microsoft hasn’t publicly acknowledged the deal.
Graduating from Crypto to AI
CoreWeave was founded in 2017 in Roseland, New Jersey, by Michael Intrator (ex-Hudson Ridge, a natural gas hedge fund), who is now CEO; Brian Venturo (ex-Hudson Ridge), now CTO; and Brannin McBee (ex-trader for Active Power Investments, which specializes in natural gas, agriculture, and power), now CSO. The company started as a cryptocurrency mining firm, but pivoted to AI computing around 2019 in response to market demand.
In making the shift, CoreWeave built on its relationship with NVIDIA. It now offers services based not only on the H100 but on many other NVIDIA chips, including the A100, RTX-series, and A40 GPUs. (In 2021, CoreWeave claimed to operate North America’s largest installation of A40 GPUs.) CoreWeave also deploys NVIDIA’s InfiniBand networking and BlueField DPUs in its cloud.
CoreWeave has built a comprehensive, cloud-native user interface for its services, powered by Kubernetes and offering applications, APIs, object storage, namespaces, Grafana monitoring, and more.
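For readers unfamiliar with how a Kubernetes-based GPU cloud is typically consumed, the sketch below shows a bare-bones request for a single NVIDIA GPU using the official Kubernetes Python client. This is a minimal illustration of general Kubernetes conventions, not CoreWeave’s documented API; the pod name, namespace, and container image are illustrative assumptions.

```python
# Minimal sketch: launching a GPU workload on a Kubernetes-based cloud.
# Assumes a kubeconfig issued by the provider and the standard NVIDIA
# device plugin (which exposes the "nvidia.com/gpu" resource).
from kubernetes import client, config

def launch_gpu_pod():
    config.load_kube_config()  # read the provider-issued kubeconfig
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test", namespace="default"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda-check",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # illustrative image
                    command=["nvidia-smi"],  # print visible GPUs, then exit
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"},  # request one GPU
                    ),
                )
            ],
        ),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_pod()
```

In practice, GPU clouds typically layer node labels or instance selectors onto a spec like this so customers can target a specific GPU type (A100, H100, and so on); those provider-specific labels are omitted here.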
NVIDIA Backs CoreWeave
CoreWeave has raised over $500 million ($576.5 million, according to Crunchbase). In May 2023, the company announced that its Series B round had reached $421 million, following a $200 million extension to the initial $221 million tranche. The round was led by Magnetar Capital, with a contribution from NVIDIA and a “rounding out” amount from individual investors Nat Friedman and Daniel Gross.
In its initial funding announcement, CoreWeave stated that the new money would be used for expansion purposes, including building additional datacenters to provide its services. Presently, CoreWeave has datacenters in Weehawken, N.J.; Chicago, Ill.; and Las Vegas, Nev. The vendor says the datacenters are linked to one another by dark fiber supporting rates of 400 Gb/s, and each supports redundant, 200-Gb/s onramps to Tier 1 ISPs.
The company plans to launch two new datacenters in 2023, bringing its total to five. It recently announced plans to set up a facility in Plano, Texas.
CoreWeave isn’t the only small cloud alternative compute provider favored by NVIDIA. Lambda Labs also sports H100 compute power for rent. And Crusoe Energy has been named as another recipient of the NVIDIA components. Clearly, the rental of powerful compute capabilities for generative AI is a fast-growth segment that could account for many billions of dollars in revenue next year.
Startup Profile: CoreWeave
HQ location: Roseland, N.J.
Employees: 150 on LinkedIn
CEO and Co-Founder: Michael Intrator
Target market: GPU-accelerated compute services for generative AI and machine learning, visual effects and rendering, batch processing, and pixel streaming.
Prominent investors: Magnetar Capital, NVIDIA
Funding raised to date: $421 million (via Series B) and up to $155.5 million more in earlier, unspecified rounds.