Cloud Providers Rush to Offer Generative AI

Generative artificial intelligence (generative AI), as the name implies, generates information from input data, and it is taking tech markets by storm. Cloud providers are responding with an array of offerings for corporate consumption. At the same time, they face challenges of expansion, security, and accountability.
To review: Generative AI programs are built on foundation models – models trained on vast swaths of data to recognize human language and respond by generating answers in text, images, audio clips, music, videos, and even computer code. Training requires feeding billions of examples and associations to learning algorithms to create the models, which in turn can be applied to enterprise applications for a range of uses.
For example, OpenAI’s GPT-3 is the basis for ChatGPT, which creates textual answers in response to text-prompted questions or fragments – earning it the nickname “chatbot.” OpenAI also has launched GPT-4 to generate text from images as well as text.
There are many other models emerging, such as Claude from Anthropic, a chatbot that performs many of the same functions as ChatGPT; Stability AI's Stable Diffusion, which creates images from text; and AI21 Studio, a platform from AI21 for building text-based applications on top of its language models. Application programming interfaces (APIs) associated with foundation models help developers adapt generative AI for specific enterprise purposes.
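As a rough illustration of how these APIs work, most foundation-model services follow the same pattern: the developer POSTs a JSON payload containing a prompt and generation parameters to an HTTPS endpoint and parses the JSON response. The sketch below is provider-agnostic – the field names, headers, and endpoint are hypothetical placeholders, not any vendor's actual schema:

```python
import json
import urllib.request

def build_completion_request(prompt: str, max_tokens: int = 256,
                             temperature: float = 0.7) -> dict:
    """Assemble a generic text-completion payload.

    The field names here are illustrative placeholders; each
    provider defines its own schema in its API reference.
    """
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def call_model(endpoint: str, api_key: str, payload: dict) -> dict:
    # Not executed in this sketch -- requires a live endpoint
    # and real credentials.
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Enterprise adaptation typically happens in the payload: the same endpoint serves summarization, drafting, or code generation depending on the prompt and parameters sent.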
Cloud Providers Pick Their Generative AI Partners
From the outset, cloud providers have been all over the generative AI movement. Each of the market leaders has now launched its own strategy for enabling cloud customers to use generative AI. Following is a summary:
AWS. The world’s leading public cloud provider recently announced Amazon Bedrock, a program in which AWS will provide foundation models from Anthropic, AI21, and Stability AI, along with a series of models named Titan developed by AWS. AWS also is releasing EC2 Inf2 compute instances, powered by AWS Inferentia2 chips. These AWS-designed chips are meant to accelerate the processing-intensive workloads associated with generative AI applications. AWS also is offering instances based on its Trainium chips and will work with NVIDIA to provide more AI infrastructure.
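A Bedrock invocation might look roughly like the following. The request-body schema and model ID shown are assumptions based on Amazon's Titan text models and may differ from the service's current API – the Bedrock documentation is the authority here:

```python
import json

def titan_request_body(prompt: str, max_tokens: int = 200,
                       temperature: float = 0.5) -> str:
    # Body schema for an Amazon Titan text model (an assumption;
    # check the Bedrock API reference for the current schema).
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

# The actual invocation would go through boto3 (not executed here;
# the model ID is a placeholder and requires Bedrock access):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId="amazon.titan-text-express-v1",
#     body=titan_request_body("Summarize our Q3 results."),
# )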
Microsoft Azure. Starting with $1 billion in 2019, Microsoft began investing in ChatGPT creator OpenAI. Another investment followed in 2021 (amount not disclosed). And in January 2023, Microsoft pledged $10 billion to OpenAI in a multiyear tranche arrangement. (For its part, OpenAI said it uses Azure for all of its model training.) Microsoft also unveiled Azure OpenAI Service in March 2023, featuring OpenAI models, including ChatGPT, powered by Microsoft Azure. Another Azure OpenAI Service solution, GitHub Copilot, helps developers quickly generate code. Separately, Microsoft has added OpenAI tools to its Bing search engine and Office 365, both of which operate mainly in Azure.
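Developers reach the OpenAI models in Azure OpenAI Service through the same chat-style interface OpenAI uses: a list of role-tagged messages. A minimal sketch, with the Azure endpoint, API version, and deployment name as placeholders (the SDK surface shown matches the openai 0.x Python library and may change):

```python
def build_chat_messages(system_prompt: str, user_prompt: str) -> list:
    # Chat-style models take a list of role-tagged messages:
    # a "system" message sets behavior, "user" carries the request.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# The actual call would point the openai SDK at an Azure resource
# (not executed here; resource name, key, and deployment name are
# placeholders you would supply from your Azure portal):
# import openai
# openai.api_type = "azure"
# openai.api_base = "https://<resource>.openai.azure.com/"
# openai.api_version = "2023-03-15-preview"
# openai.api_key = "<key>"
# resp = openai.ChatCompletion.create(
#     engine="my-gpt-deployment",  # Azure deployment name
#     messages=build_chat_messages("You are a helpful assistant.",
#                                  "Summarize this support ticket."),
# )
```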
Google Cloud. The number 3 market share leader has upgraded its Vertex AI machine learning and AI application platform with access to a variety of Google’s models, including PaLM. Google’s goal is to keep adding third-party and open source models to the Vertex AI roster. Besides Vertex AI, Google also released Generative AI App Builder, which gives developers API access to Google’s own foundation models along with templates for speeding app development. Google is also making available NVIDIA L4 GPUs in Google Cloud G2 VMs; these compute instances are designed to run applications built with Vertex AI. Google Cloud also recently partnered with generative AI startup Anthropic, which will use Google Cloud to create new generative AI models.
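Calling a PaLM-family model through Vertex AI follows the same prompt-plus-parameters pattern. The sketch below separates the sampling controls into a helper; the model name and SDK call are assumptions drawn from the Vertex AI Python SDK and should be checked against Google's current documentation:

```python
def generation_params(temperature: float = 0.2,
                      max_output_tokens: int = 256) -> dict:
    # Common sampling controls exposed by hosted text models:
    # lower temperature -> more deterministic output.
    assert 0.0 <= temperature <= 1.0, "temperature outside typical range"
    return {
        "temperature": temperature,
        "max_output_tokens": max_output_tokens,
    }

# A Vertex AI call would look roughly like this (not executed here;
# requires a GCP project with Vertex AI enabled, and the model name
# is an assumption -- consult the Vertex AI docs):
# from vertexai.language_models import TextGenerationModel
# model = TextGenerationModel.from_pretrained("text-bison@001")
# resp = model.predict("Write a product description.",
#                      **generation_params())
```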
Alibaba. China’s cloud titan is integrating its own version of ChatGPT named Tongyi Qianwen (which translates as “truth from a thousand questions”) into its DingTalk workspace messaging application. Alibaba said it would also be adding Tongyi Qianwen development tools into Alibaba Cloud for enterprise developers.
Oracle. The provider of Oracle Cloud Infrastructure (OCI) announced it will run NVIDIA’s DGX Cloud supercomputing and NVIDIA AI Foundations service in OCI. The combination will provide OCI customers with the ability to develop and run generative AI applications at massive scale, Oracle said.
Salesforce. The company announced Einstein GPT, a generative AI chatbot for use with Salesforce’s Data Cloud, Tableau, MuleSoft, and Slack. The tool, says Salesforce, combines OpenAI’s capabilities with the AI models Salesforce has developed in order to provide generative AI for customer relationship management (CRM).
Future Plans and Concerns
This is just the start – most of the services listed above are in preview only. There will no doubt be many more upgrades and additions as adoption widens.
Notably, the emphasis by AWS, Google Cloud, and Oracle on adding accelerator components to their cloud services shows that cloud providers are eager to expand their resources to accommodate generative AI applications.
On the downside, cloud providers are facing concerns that untethered generative AI could spawn security problems, along with unparalleled – and uncontrollable – societal issues. Cloud providers seem to be looking for a way to profit from generative AI while avoiding its liabilities.
“[T]he technology is moving fast,” Google CEO Sundar Pichai said in an interview on a recent 60 Minutes episode. “So does that keep me up at night? Absolutely.”
Speaking about recent calls by various technology leaders to curb and/or control the development of generative AI, Pichai said: “[Y]ou don’t want to put out a tech like this when it’s very, very powerful because it gives society no time to adapt. I think that’s a reasonable perspective. I think there are responsible people there trying to figure out how to approach this technology, and so are we.”