
Investors in artificial intelligence (AI), especially those focused on NVIDIA, are feeling the impact of a disconcerting breakthrough from the Chinese company DeepSeek. Their latest offering, the R1 AI model, has introduced a new paradigm in model training and significantly reduced resource requirements, causing a ripple effect throughout the market.
The DeepSeek R1 Model: Transforming AI Training and Market Dynamics
If you’ve been unaware of the ongoing market turmoil in AI, particularly surrounding NVIDIA, let this serve as an informative guide. DeepSeek’s recent launch of an AI model capable of being trained with dramatically lower financial resources has reignited debates over the sustainability of the so-called “AI supercycle.” The strikingly low training costs associated with DeepSeek R1 challenge existing perceptions of what AI investment requires.

The R1 is a trailblazing open-source large language model (LLM) that employs a distinct training methodology, setting it apart from its contemporaries. Without delving deeply into the technical details, the essential point is that R1 operates on a “Chain of Thought” approach: with each prompt, the model spells out the steps it took to arrive at a conclusion, allowing users to identify where any errors may have occurred along the way.
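As a rough illustration of what this looks like from a user’s perspective, here is a minimal Python sketch that queries R1 through an OpenAI-compatible client and prints the reasoning trace separately from the final answer. The base URL, model name, and `reasoning_content` attribute are assumptions based on DeepSeek’s OpenAI-compatible API and may differ from the live service.

```python
# Minimal sketch: ask R1 a question and inspect its step-by-step reasoning.
# Assumes the `openai` Python package and a DeepSeek API key; the base_url,
# model name, and `reasoning_content` field are assumptions and may differ
# from the live API.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[{
        "role": "user",
        "content": "A train travels 120 km in 1.5 hours. What is its average speed?",
    }],
)

message = response.choices[0].message
print("Reasoning steps:\n", getattr(message, "reasoning_content", "<not exposed>"))
print("Final answer:\n", message.content)
```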
Furthermore, DeepSeek R1 utilizes “Reinforcement Learning,” a machine learning strategy in which the model learns by interacting with its environment and maximizing rewards for correct outputs. This method contrasts with OpenAI’s o1, which largely relies on supervised learning and vast labeled datasets that substantially escalate training costs.
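To make the reward-maximization idea concrete, the toy sketch below (purely illustrative, not DeepSeek’s actual training code) nudges a tiny “policy” over three candidate answers toward the answer that earns the highest reward, using a REINFORCE-style update.

```python
# Toy reinforcement-learning illustration: a policy over three candidate
# answers is updated so that highly rewarded answers become more likely.
# This is a hypothetical sketch, not DeepSeek's actual training procedure.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)                 # the model's preferences over 3 answers
rewards = np.array([0.0, 1.0, 0.2])  # answer 1 is "correct" and best rewarded
learning_rate = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)  # the model "answers" by sampling
    reward = rewards[action]        # the environment scores the answer
    # Policy-gradient step: raise the log-probability of the sampled answer
    # in proportion to the reward it received.
    grad = -probs
    grad[action] += 1.0
    logits += learning_rate * reward * grad

print(softmax(logits))  # probability mass concentrates on the rewarded answer
```

After a few hundred updates, the highest-reward answer dominates the distribution, which is the essence of learning from rewards rather than from labeled examples.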

Despite popular misconceptions, the widely cited $5.6 million training cost for DeepSeek R1 is misleading; that figure reflects only the cost of the final training run, not the full scope of expenses incurred during development. Given U.S. restrictions on China’s access to advanced AI computing hardware, DeepSeek has opted not to disclose the full extent of its compute resources, leading experts to speculate that it may possess comparable, if not superior, infrastructure.
$NVDA – MUSK SUGGESTS DEEPSEEK ‘OBVIOUSLY’ HAS MORE NVIDIA GPUS THAN CLAIMED
Elon Musk and Alexandr Wang suggest DeepSeek has about 50,000 NVIDIA Hopper GPUs, not the 10,000 A100s they claim, due to U.S. export controls. Musk, with experience from xAI, agrees with Wang’s…
— Walter Bloomberg (@DeItaone) January 27, 2025
In a stark financial comparison, R1’s operating costs are approximately five times lower than the input- and output-token costs of OpenAI’s o1. This disparity has introduced a wave of uncertainty and intrigue within the market. Even so, there is reason for optimism about what DeepSeek’s technological advances mean for the industry.
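As a back-of-the-envelope illustration of how such a per-request comparison works, the sketch below computes the cost of one request from per-million-token prices. The prices are arbitrary placeholders chosen only to reproduce the roughly five-to-one ratio described above; they are not published pricing.

```python
# Hypothetical cost comparison for a single request. Prices are placeholders
# (dollars per million tokens), NOT published pricing, chosen only to show a
# roughly 5x gap.
def request_cost(input_tokens, output_tokens, price_in, price_out):
    """Return the cost in dollars, with prices given per million tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

tokens_in, tokens_out = 2_000, 1_000
cost_r1 = request_cost(tokens_in, tokens_out, price_in=3.0, price_out=12.0)
cost_o1 = request_cost(tokens_in, tokens_out, price_in=15.0, price_out=60.0)
print(f"R1-like: ${cost_r1:.4f}   o1-like: ${cost_o1:.4f}   ratio: {cost_o1 / cost_r1:.1f}x")
```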
NVIDIA has undeniably experienced robust revenue growth from AI computing resources, and industry giants like OpenAI continue to leverage superior technology compared to DeepSeek. If DeepSeek can achieve such results with limited computing power, imagine the capabilities of companies equipped with advanced technologies. This situation could herald a brighter future for the AI sector.
There remains no rival to NVIDIA’s CUDA ecosystem, implying that we are only on the cusp of AI’s potential. The emergence of DeepSeek’s R1 does not signal the end of the AI hype; rather, it illuminates areas of unexplored potential within the industry. Although DeepSeek’s advancements have wiped more than $300 billion from NVIDIA’s market capitalization, the market is expected to recalibrate as the positives of the situation become apparent.
As companies like Meta, Google, and Amazon accelerate their AI initiatives in response, experts suggest that DeepSeek’s success could ultimately pivot perceptions towards a more bullish outlook for NVIDIA and the overall AI landscape.