OpenAI’s plan to design its own AI chip with Broadcom
OpenAI is partnering with Broadcom to design and produce its own custom Artificial Intelligence (AI) chip, with plans to begin deploying it internally in 2026. This is a major move as OpenAI seeks to gain greater control over its hardware and to reduce its dependence on existing chip suppliers.
Why Is This News Important?
Imagine a car maker deciding to build its own engine rather than buying it from someone else—this gives them more control over performance and cost. Similarly, OpenAI wants to design its own “engine” for AI—an AI chip tailored to power its models efficiently and affordably.
Currently, OpenAI relies heavily on high-performance GPUs made by Nvidia—particularly the Hopper and Blackwell series—for both training (teaching the AI) and inference (running the AI to answer questions). It also uses some AMD GPUs and TPUs from Google to diversify its hardware sources.

About OpenAI
OpenAI is the research lab behind ChatGPT and other advanced AI models. Founded in 2015 as a nonprofit research lab, it has become a leader in generative AI—technology that can write text, answer questions, translate languages, and much more. To power these capabilities, OpenAI runs massive computing infrastructure.
About Broadcom
Broadcom is a major semiconductor company known for designing specialized chips for networking, data centers, and now AI infrastructure. Its new AI chip business is booming—thanks in part to big orders (one rumored to be from OpenAI)—and it’s being seen as a rising competitor to Nvidia in the AI chip market.
Current AI Chip Manufacturers
| Company | What They Do |
|---|---|
| Nvidia | Leading maker of high-performance GPUs (e.g., Hopper, Blackwell) for AI training and inference. |
| AMD | Provides GPUs and specialized AI units (like XDNA NPUs) used for AI workloads. |
| Google (TPU) | Designs its own Tensor Processing Units (TPUs) used internally in Google’s AI infrastructure. |
| Broadcom | Designs custom AI accelerators (XPUs), now collaborating with OpenAI for a tailored chip. |
| Etched.ai | A startup building transformer-optimized ASICs (like “Sohu”) for LLMs. |
| Groq | Makes Language Processing Units (LPUs) designed for fast LLM inference. |
Top FAQs:
What is this deal about?
OpenAI is collaborating with Broadcom to co-design and produce a custom AI chip, which will be manufactured using an advanced 3-nanometer process and used internally starting in 2026.
Why does OpenAI want its own chip?
To reduce reliance on Nvidia (which can be expensive and sometimes hard to get), lower operating costs, and better tailor hardware to the specific needs of its AI models.
Will these chips be sold publicly?
No—initially, they are intended only for OpenAI’s internal use, powering its own AI infrastructure, not external customers.
Who is making these chips?
Broadcom is helping with the design and will supply AI server racks built around the chip. The manufacturing will be handled by TSMC using its advanced 3-nanometer fabrication process.
What is the timeline?
The first chips are expected to ship in 2026, with design work already underway at OpenAI’s silicon team (including engineers formerly from Google’s TPU project).
How does this compare to what other tech giants are doing?
Companies like Google (TPUs), Amazon, and Meta have already invested in their own custom AI chip programs. OpenAI is joining this trend to gain similar advantages in performance and cost control.