Anusuya Lahiri

Qualcomm Unveils New AI Chips To Compete In Data Center Race

Qualcomm AI200 & AI250 - Image via Qualcomm

Qualcomm Technologies, Inc. (NASDAQ:QCOM) announced on Monday the launch of its next-generation artificial intelligence inference-optimized solutions for data centers, namely the Qualcomm AI200 and AI250 chip-based accelerator cards and racks.

Building on the company’s leadership in Neural Processing Unit (NPU) technology, these solutions offer rack-scale performance and superior memory capacity for fast generative AI inference, delivering high performance per dollar per watt, Qualcomm said.

Qualcomm AI200 introduces a purpose-built, rack-level AI inference solution designed to deliver low total cost of ownership (TCO) and optimized performance for large language model (LLM) and large multimodal model (LMM) inference, as well as other AI workloads.

Performance

The AI200 supports 768 GB of LPDDR memory per card, offering higher memory capacity at lower cost and enabling exceptional scale and flexibility for AI inference.
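
As a rough, hypothetical back-of-envelope sketch of why that capacity matters (the model sizes and precisions below are illustrative assumptions, not figures from Qualcomm), 768 GB is enough to hold the weights of very large models on a single card, before accounting for KV cache and activations:

```python
# Illustrative back-of-envelope estimate only; model sizes and precisions are
# assumptions for illustration, not Qualcomm specifications.

CARD_MEMORY_GB = 768  # per-card LPDDR capacity cited in the announcement

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for model_b in (70, 180, 400):  # hypothetical model sizes (billions of parameters)
    for precision, nbytes in (("FP16", 2), ("FP8", 1)):
        gb = weights_gb(model_b, nbytes)
        verdict = "fits on" if gb < CARD_MEMORY_GB else "exceeds"
        print(f"{model_b}B params @ {precision}: ~{gb:.0f} GB of weights ({verdict} one 768 GB card)")
```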

The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads by delivering more than 10 times higher effective memory bandwidth and significantly lower power consumption.

This enables disaggregated AI inferencing for efficient utilization of hardware while meeting customer performance and cost requirements.
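
A simplified, hypothetical calculation illustrates why effective memory bandwidth is the headline figure here (the baseline bandwidth and model size below are assumptions, not Qualcomm data): in the decode phase of generative inference, each new token typically requires streaming the model weights from memory, so a 10x gain in effective bandwidth raises the throughput ceiling roughly in proportion.

```python
# Illustrative sketch, not Qualcomm data: decode-phase LLM inference is often
# memory-bandwidth bound, so effective bandwidth roughly caps token throughput
# (tokens/sec ~ effective bandwidth / bytes of weights streamed per generated token).

def max_tokens_per_sec(bandwidth_gb_s: float, params_billions: float, bytes_per_param: float) -> float:
    bytes_per_token = params_billions * 1e9 * bytes_per_param  # weights read once per token
    return bandwidth_gb_s * 1e9 / bytes_per_token

BASELINE_BW_GB_S = 500.0    # hypothetical baseline effective bandwidth, for comparison only
for multiplier in (1, 10):  # the announcement claims >10x higher effective bandwidth
    bw = BASELINE_BW_GB_S * multiplier
    ceiling = max_tokens_per_sec(bw, params_billions=70, bytes_per_param=1)  # 70B model at FP8
    print(f"{bw:,.0f} GB/s effective bandwidth -> ~{ceiling:.0f} tokens/sec upper bound")
```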

Both rack solutions feature direct liquid cooling for thermal efficiency, PCIe for scale-up, Ethernet for scale-out, confidential computing for secure AI workloads, and rack-level power consumption of 160 kW.

The Qualcomm AI200 and AI250 are expected to be commercially available in 2026 and 2027, respectively.

Competition

Qualcomm’s AI accelerator rivals include Nvidia Corp.’s (NASDAQ:NVDA) H100 and H200 chips, Advanced Micro Devices, Inc.’s (NASDAQ:AMD) Instinct MI300X accelerators, and Intel Corp.’s (NASDAQ:INTC) Gaudi accelerators.

Alphabet Inc.’s (NASDAQ:GOOGL) Google has developed its own Tensor Processing Units (TPUs), which are optimized for popular machine learning frameworks, including TensorFlow and PyTorch.

Amazon.com Inc.’s (NASDAQ:AMZN) Amazon Web Services (AWS) created Inferentia chips to help customers scale machine learning applications more effectively.

QCOM Price Action: Qualcomm shares were up 3.48% at $174.91 at the time of publication on Monday, according to Benzinga Pro data.

Photo via Qualcomm
