Qualcomm has announced that it is joining the race to build AI accelerator chips, with plans to make them commercially available starting next year.
The new Qualcomm AI chips are called the AI200 and AI250. Both are designed for AI inference workloads, with an emphasis on improved memory capacity. The mobile chipmaker aims to make the chips available to customers in 2026 and 2027, respectively.

The decision to enter the AI accelerator market means that the company will be butting heads with two titans of the industry: NVIDIA and AMD. NVIDIA currently commands more than 90% of the AI chip market with its GPUs and has a market cap of over US$4.5 trillion (~RM19 trillion), while AMD comes in a reasonable second with its Instinct GPUs and EPYC CPUs.
Qualcomm says that its AI accelerator chips will be different, in that its rack-scale systems will provide significantly better performance-per-dollar, although it stops short of saying exactly how much better – in this case, how much cheaper – its racks are. For context, one of its racks draws about 160kW of power, which is comparable to what an NVIDIA GB300 rack pulls.

On that note, Qualcomm says that the AI200 and AI250 will take full advantage of its Hexagon NPU, which it first introduced to the consumer space with its X Elite processors. Oh, and its AI chips will support 768GB of memory, which is far more than what NVIDIA or AMD's own offerings are capable of.
Whatever the case, it is still early days for Qualcomm, and we are likely in for a long wait before we hear anything more.
(Source: CNBC, Yahoo! Finance, Reuters)

