Huawei Executive: Chip Process Less Important for Compute Scaling, Achieves 3x Efficiency Over NVIDIA H20

According to a senior executive at Huawei, the company’s advancement in the artificial intelligence sector is not hindered by its chip manufacturing processes. Instead, Huawei has made significant strides in enhancing its performance by emphasizing architectural innovations rather than just upgrading node technology.

Huawei’s Ascend Cloud Platform Emerges as a Viable Competitor to NVIDIA’s AI Infrastructure

Over recent years, Huawei has positioned itself as a prominent player in the AI landscape, significantly boosting its computing capabilities through a self-reliant technology stack. The firm has successfully begun to challenge NVIDIA’s dominance within the Chinese market while simultaneously investing in hardware and software to mitigate reliance on Western technologies for its computational needs. Zhang Ping’an, Huawei’s Executive Director and CEO of Huawei Cloud Computing, has dismissed the notion that a lack of advanced chip technology is a limitation, emphasizing that the firm’s main focus is on improving system architecture.

The chip process (such as 5nm and 7nm) is not the core. What customers really need is high-quality computing results.

Zhang elaborated that Huawei has achieved a threefold increase in operational efficiency compared to NVIDIA’s H20 AI chip, a feat attributed to continuous advancements in system design, optimization, and software capabilities. Notably, the Ascend Cloud service reportedly processes 2,400 text tokens per second on a single Ascend 910B card, with a time per output token (TPOT) of under 50 ms, showcasing what the company defines as “high computing capabilities.”
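
As a rough back-of-the-envelope illustration (not an official Huawei calculation), the two reported figures can be related to estimate how much concurrency such a claim implies, assuming the 2,400 tokens per second is aggregate batched throughput and the sub-50 ms TPOT bound applies to each individual request:

```python
# Hypothetical sketch: relate aggregate throughput to per-request TPOT.
# Assumptions (not stated by Huawei): the 2,400 tokens/s is an aggregate,
# batched figure, and the <50 ms TPOT bound holds per decoding stream.

aggregate_throughput = 2400   # tokens per second across the card (reported)
tpot_seconds = 0.050          # time per output token, upper bound (reported)

# A single request meeting the TPOT target emits at most 1 / TPOT tokens/s.
per_stream_rate = 1 / tpot_seconds                 # 20 tokens/s per request

# Minimum concurrency needed to reach the aggregate figure under these assumptions.
implied_concurrency = aggregate_throughput / per_stream_rate

print(f"Per-stream decode rate: {per_stream_rate:.0f} tokens/s")
print(f"Implied concurrent streams: {implied_concurrency:.0f}")   # ~120
```

Read this way, the claim describes serving throughput under batching rather than single-request generation speed; the actual batch size, model, and sequence lengths behind the figure are not disclosed.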

Huawei AI chip roadmap, listing the Ascend 910C, 950PR, 950DT, 960, and 970 with release dates | Image Credits: Huawei

In addition to these performance metrics, Huawei is expanding Ascend Cloud support for large language models (LLMs) such as DeepSeek and Kimi, aiming to mature the platform into full-fledged AI infrastructure. The company’s ambitions extend beyond its home market, with plans to offer its technology stack to a broader range of global clients. This strategic shift suggests that Huawei is positioning itself as a substantial competitor to NVIDIA in the realm of ‘sovereign AI.’

NVIDIA’s CEO, Jensen Huang, has acknowledged Huawei as a significant challenger to the American tech stack, urging the U.S. government to ensure that companies worldwide have access to U.S. AI technologies. Moreover, Huawei has recently detailed an ambitious AI chip roadmap that includes initiatives to develop proprietary High Bandwidth Memory (HBM) and next-generation architectures aimed at rivaling NVIDIA’s innovations, such as the Rubin architecture, in the coming years.

For further insights, check the original news source: MyDrivers
