The recently unveiled Tensor G5 chip from Google has drawn a mixed response from technology enthusiasts and everyday users alike, with much of the criticism centered on its tendency to throttle under sustained load. Critics argue that the core issue stems from Google’s fragmented approach to the chip’s architecture.
Understanding the Architecture of Google’s Tensor G5 Chip
The Tensor G5 features a complex architectural design that includes:
- Eight-core CPU:
  - 1 high-performance Cortex-X4 core running at 3.78 GHz
  - 5 mid-performance Cortex-A725 cores operating at 3.05 GHz
  - 2 efficiency-focused Cortex-A520 cores clocked at 2.25 GHz
- Fifth-Generation TPU specifically for advanced machine learning and AI tasks
- Imagination IMG DXT-48-1536 GPU – an integrated PowerVR-series GPU clocked at 1.10 GHz, offering theoretical performance comparable to leading mobile graphics processors like the Adreno 732/740 and ARM Mali G715 MP7, albeit without ray-tracing capabilities
- Samsung Exynos 5G modem for enhanced connectivity
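The CPU configuration above can be captured in a short sketch for anyone who wants to reason about the cluster layout programmatically. This is a minimal illustration only: the core counts and clock speeds come from the spec list, while the data-structure shape and the `summarize` helper are hypothetical.

```python
# Sketch of the Tensor G5 CPU cluster layout described above.
# Core counts and clocks are from the spec list; the structure
# and helper name are illustrative assumptions.
TENSOR_G5_CLUSTERS = [
    {"core": "Cortex-X4",   "count": 1, "ghz": 3.78},  # prime
    {"core": "Cortex-A725", "count": 5, "ghz": 3.05},  # mid
    {"core": "Cortex-A520", "count": 2, "ghz": 2.25},  # efficiency
]

def summarize(clusters):
    """Return (total core count, peak single-core clock in GHz)."""
    total = sum(c["count"] for c in clusters)
    peak = max(c["ghz"] for c in clusters)
    return total, peak

print(summarize(TENSOR_G5_CLUSTERS))  # (8, 3.78)
```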
Manufactured on TSMC’s cutting-edge 3nm technology, the Tensor G5 promises enhanced transistor density, which could lead to better performance and energy efficiency.
Inherent Flaws in Google’s Chip Design Strategy
Recent analyses indicate that the Tensor G5 chip tends to overheat and throttle, severely affecting gaming experiences. One notable example is PlayStation 2 emulation, a workload that stresses the CPU more than the GPU, where performance drops markedly over sustained sessions, underscoring the throttling issue.
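Throttling of this kind is commonly quantified the way looped stress tests (such as 3DMark's) do: run the same workload repeatedly and report the worst loop score as a fraction of the best. A minimal sketch of that metric follows; the loop scores are made-up placeholders, not measured Tensor G5 results.

```python
def stability(scores):
    """Stress-test stability metric: worst loop score as a
    fraction of the best loop score (1.0 = no throttling)."""
    if not scores:
        raise ValueError("need at least one loop score")
    return min(scores) / max(scores)

# Hypothetical loop scores for a run that throttles over time.
loops = [3000, 2900, 2400]
print(f"{stability(loops):.1%}")  # 80.0%
```

A device that sustains its first-loop performance would score near 100%; heavy throttling pushes the figure down.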
Although the shift from the ARM Mali GPU to the Imagination IMG DXT-48-1536 has been cited as a potential factor in this throttling, it does not account for all of the performance challenges.
Qualcomm’s Snapdragon 8 Elite Gen 5 significantly outperforms the Tensor G5 in benchmark tests such as Geekbench 6 and 3DMark. Qualcomm’s advantage lies in its custom Oryon CPU cores, featuring a prime core clocked at 4.60 GHz and performance cores at 3.62 GHz. This design is supplemented by significant optimizations, including an extensive 12 MB L2 cache, which has no counterpart in Google’s architecture.
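On clock speed alone, the gap between the two designs is already substantial. A back-of-the-envelope calculation using the figures above makes this concrete (clock speed is only one factor in benchmark scores, so this is illustrative, not a performance prediction):

```python
def pct_faster(a_ghz, b_ghz):
    """Percentage by which clock a exceeds clock b."""
    return (a_ghz / b_ghz - 1) * 100

# Prime cores: Oryon at 4.60 GHz vs Cortex-X4 at 3.78 GHz
print(round(pct_faster(4.60, 3.78), 1))  # 21.7
# Performance cores: Oryon at 3.62 GHz vs Cortex-A725 at 3.05 GHz
print(round(pct_faster(3.62, 3.05), 1))  # 18.7
```

So Qualcomm's prime core runs roughly 22% faster than the Tensor G5's, before accounting for cache and microarchitectural differences.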
Moreover, while collaboration with Imagination has allowed Google to integrate the IMG DXT-48-1536 GPU, it also means that Imagination holds full proprietary control over the DXT-series drivers. Consequently, although Google can fine-tune aspects of the GPU—especially for AI processing—it must still depend on Imagination for essential driver updates and optimizations. This situation reflects Google’s limited influence over critical performance enhancements.
Metaphorically speaking, Google’s chip design strategy resembles purchasing a ready-made suit and making minor adjustments instead of opting for a bespoke fit—functional, yes, but missing that tailored sophistication of a high-end designer suit.
If Google continues to prioritize cost over optimization in its chip design strategy, it risks falling behind competitors in terms of performance capabilities, despite including innovative features such as the dedicated TPU.