OpenAI selects Oracle Cloud Infrastructure to extend Microsoft Azure AI platform
Oracle, Microsoft and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.
OpenAI is the AI research and development company behind ChatGPT, which provides generative AI services to more than 100 million users every month.
“We are delighted to be working with Microsoft and Oracle. OCI will extend Azure’s platform and enable OpenAI to continue to scale,” said Sam Altman, Chief Executive Officer, OpenAI.
“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure,” said Larry Ellison, Oracle Chairman and CTO. “Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”
OCI’s AI infrastructure is advancing AI innovation. OpenAI will join thousands of AI innovators across industries worldwide that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.
OCI’s purpose-built AI capabilities enable startups and enterprises to build and train models faster and more reliably anywhere in Oracle’s distributed cloud. For training large language models (LLMs), OCI Supercluster can scale up to 64,000 NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking, with a choice of HPC storage. OCI Compute virtual machines and OCI’s bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.
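For teams evaluating these GPU instances, a minimal sketch of launching a bare metal NVIDIA GPU node with the OCI Python SDK might look like the following. The shape name, OCIDs, and image ID are placeholders (not from this announcement) and depend on your tenancy, region, and service limits.

```python
# Minimal sketch: launching a bare metal NVIDIA GPU instance with the OCI Python SDK.
# The compartment, subnet, image OCIDs and the shape below are placeholder values;
# actual shapes and availability vary by region and tenancy limits.
import oci

config = oci.config.from_file()            # reads credentials from ~/.oci/config
compute = oci.core.ComputeClient(config)   # Compute service client

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",
    display_name="llm-training-node",
    shape="BM.GPU.H100.8",  # example bare metal NVIDIA GPU shape
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"  # e.g. a GPU-enabled Oracle Linux image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"
    ),
)

instance = compute.launch_instance(launch_details).data
print("Launched instance:", instance.id, instance.lifecycle_state)
```

Bare metal shapes give the workload the full node with no hypervisor overhead, which is typically preferred for distributed training; smaller VM GPU shapes can serve inference or development workloads on the same tooling.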