Cloud Native Climbs at KubeCon

By: R. Scott Raynovich


KubeCon Paris, otherwise known as KubeCon + CloudNativeCon Europe 2024, was a tremendous affair. Not only did we get to eat fresh pain au chocolat, but the spring sun was out in full force.

The Futuriom analyst team, including myself and Mary Jander, came to Paris to examine trends in cloud-native infrastructure, Kubernetes, DevOps, and of course, AI.

There was definitely a positive buzz in the air, as many of the companies I spoke with related solid growth stories and steady investment.

Hot topics included platform engineering, data observability, cloud cost management, FinOps, and, of course, AI. Open source projects such as OpenTelemetry, Cilium, eBPF, Prometheus, and Crossplane are gaining momentum, based on project data and comments from attendees.

Big Traffic, Upbeat Vibe

One of the things I noticed was the upbeat vibe at the event, which drew 12,000 people and hundreds of companies. New all-time highs in the markets probably helped. The AI bubble has also spilled over into KubeCon, as many recognize that Kubernetes and cloud-native distributed systems will be key enablers of AI.

“It feels like a massive uptick from Chicago,” Bassam Tabbara, the founder and CEO of Upbound, told me.

Upbound is a platform engineering company that leverages the open-source Crossplane project. Crossplane extends the Kubernetes API to enable platform teams to manage a wide variety of infrastructure resources from multiple vendors.
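To give a flavor of what that looks like in practice, here is a minimal sketch of a Crossplane managed resource, declaring an AWS S3 bucket as an ordinary Kubernetes object. The manifest assumes the Upbound AWS provider is installed and a ProviderConfig named "default" exists, so treat the specifics as illustrative rather than definitive:

```yaml
# A Crossplane managed resource: an S3 bucket expressed as a
# Kubernetes custom resource. The provider's controller watches
# this object and reconciles the real bucket in AWS to match it.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
spec:
  forProvider:
    region: us-east-1     # provider-specific settings live under forProvider
  providerConfigRef:
    name: default         # cloud credentials set up by the platform team
```

Because the bucket is just another Kubernetes object, platform teams can apply it with `kubectl apply` and govern it with the same RBAC and GitOps tooling they already use for applications.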

Kelly Tenn, Upbound's head of marketing, pointed to evidence of growth, such as 9,000 GitHub stars and 11,000 members of its Slack channel.

Brendan Cooper, head of marketing with PerfectScale, said booth traffic was exceeding expectations after just half the show. "It's been nonstop," he said.

PerfectScale addresses some of the rising concerns of managing cloud infrastructure, including Kubernetes. Cooper said the company tackles both cloud costs and performance management, two areas of growing concern as teams look to optimize their deployments.

"If an app goes down, that's a concern for a CEO—more than cost management," said Cooper.

In other signs of momentum, the CNCF, which runs KubeCon, announced that 45 new members have joined the foundation. The CNCF said there are now 233,000 contributors to its open-source projects.

Delving Into AI Questions

The topic of the year -- AI -- was also present at KubeCon. It's a natural fit, as Kubernetes enthusiasts point out that distributed computing platforms will be key contributors to AI.

As one friend pointed out, AI seemed to be featured more heavily in the keynote content than it was on the show floor, where most of the companies and products solve specific problems related to cloud-native infrastructure and Kubernetes.

One of the unknowns about AI is how it will affect the future of Kubernetes and cloud-native infrastructure. Some of this was addressed in a media roundtable on AI, where participants questioned the dominance of graphics processing units (GPUs), as well as challenges for AI such as security, data management, and energy consumption.

"With [AI] superclusters... are we thinking about the energy consumption and what that means if we continue?" asked Sudha Raghavan, senior vice president with Oracle Cloud Infrastructure's developer platform.

Raghavan asked whether Kubernetes could help diversify compute away from scarce and expensive GPUs over time. “It’s not all about GPUs,” she said. "It can run on CPUs. The demand is so high for GPUs. Innovation is behind that demand. If we can put that innovation on things that we have, it can go faster.”

"Kubernetes is running massive AI workloads today,” said Lachlan Evenson, principal program manager, Microsoft, and governing board member, CNCF. "It's ubiquitous and incredibly flexible. It's not easy. You have to be one of these big companies. We have to make it easier to run for everybody. Each innovation cycle is much more rapid than the previous one.”

This topic also came up with other leaders of the CNCF, who seemed enthusiastic and a bit defensive at the same time.

Jim Zemlin, executive director of the Linux Foundation, noted that NVIDIA had overlapped its GTC conference with KubeCon and drawn attention:

"At the [GPU] layer, we definitely see a lot of concentration around NVIDIA, which is clearly the market leader. And the [NVIDIA] GTC Conference is going on in San Jose. Unfortunately, we were the largest event this week until Jensen [Huang, NVIDIA CEO] decided to do his in the same week – and that is so much bigger, and deservedly so."

Many of the keynote speakers pointed out that Kubernetes and cloud-native infrastructure will play a huge role in AI, with large language models (LLMs) and inference for AI demanding more data, storage, and compute across the spectrum.

According to Priyanka Sharma, executive director of the CNCF:

"Gen-AI is prompting cloud-native to rethink infrastructure paradigms to accommodate AI workloads, improve platform engineering’s focus with AI insights, and ensure AI-ready systems. This integration represents a significant shift in how we design, deploy, and manage cloud-native solutions."

On that note, CNCF’s AI Working Group launched its Cloud Native AI white paper. The paper positions cloud-native technologies for AI and points to the largest challenges, which include managing large data sizes, managing data during development and deployment, and adhering to data governance and security policies.

In one interesting observation, Zemlin said the technology industry needs to push for more open data models for AI to democratize the technology:

"But if we take it one more layer up to the foundation models themselves, and particularly to the development of frontier models, you have a mix of open and closed, with OpenAI being the most advanced frontier foundation model at present.
"But open-source foundation models like Mistral and Llama are really nipping at their heels. And with many more to come, I might add, meeting that same level of performance.
"Even with the largest and best-performing LLMs like Llama 2, Mistral, and so on, the data sets that were used to train those are not open. So, what we do need are open data sets so that the open-source community can build foundation models using open data."

Cloud Costs and Data Observability

It's clear that data is the fuel of AI engines such as LLMs, so these massive projects will drive the need for more data storage, transport, and security. It's just another factor contributing to the growing concern about data observability and costs.

Data observability is a huge buzzword in the cloud-native markets, but there's a problem: It costs money to monitor data, and you need to be smart about how you do that.

Martin Mao is the cofounder and CEO of data observability company Chronosphere, which is focused on bringing more efficient data and log observability tools to the market. He said that customers are increasingly tracking the costs of the data they are monitoring and looking for places to cut costs.

"It's easier to save money on your vendors and data costs than it is to save money on headcount," said Mao.

Mao said that moving to the cloud-native and microservices infrastructure is pumping up the data bill. "When you shift, the volume of the observability data grows and your observability bill grows."

Chronosphere, which was named a Futuriom 50 company, is going after traditional data observability giants such as Splunk and Datadog, which made lots of money serving up log data but are now under the microscope of CFOs.

"People are generally complaining about the efficacy of observability tools," said Mao. "They're worried they're getting worse results and not getting more out of the tools. That's part of the move to cloud native."

Chronosphere is based on the open-source M3 data observability system created by Uber, to which it adds enhancements to create metrics for containerized infrastructure, microservices applications, and business services. Customers include Robinhood, Snap, Obsidian Security, DoorDash, Zillow, and Visa.

PerfectScale's Cooper said that cloud performance and cost management go together.

"There are a lot of variables in cost optimization," said Cooper. "We are the only ones that do production-rate optimization."

Many cloud management and cloud-cost optimization topics were covered in our 2023 CCM and FinOps report. Futuriom is using the information gathered at KubeCon to compile our next Cloud Cost Management and FinOps report for 2024, which will be published next month. Contact us now if you are interested in inclusion or sponsorship. (Sponsorship is not necessary for inclusion.)