Published fact-check

DeepSeek-V4 Release Rumors and Technical Specifications

Claim checked

“Okay, Deepseek V4 is coming out this week! 1.6T MOE model, a competing model to Opus-4.6/GPT-5.4 MMLU 99.4% / SWE 83.7% Since the parameters are so large, only the quantized version will be possible on a 512GB Mac. Sorry it's a Korean article, you'll have to translate it to read”

Published April 20, 2026 at 2:38 PM

Verdict

Mixed

Reports and social media posts from April 20, 2026, suggest that DeepSeek-V4 is slated for release as early as this week. While technical specifications such as a 1.6-trillion-parameter MoE architecture and striking benchmark claims (including the 99.4% MMLU figure) are circulating, these numbers originate from unofficial leaks and researcher predictions rather than a formal announcement from DeepSeek. The model is reportedly optimized for Huawei Ascend 950PR hardware to bypass U.S. export restrictions.

5 reviewed sources behind this verdict.

Reasoning

The claim is marked as mixed because, while multiple tech news outlets (AI Times, Remio.ai) are reporting on the imminent release, they are primarily citing a single source, Princeton researcher Yifan Zhang, along with 'unofficial figures.' There is no primary confirmation from DeepSeek itself regarding the specific 1.6T parameter count or the exact benchmark scores. Additionally, while the claim cites an MMLU score of 99.4%, the cited AI Times article lists MMLU at 92.8% and attributes the 99.4% figure to a different metric (AIME 2026).

Source quality: The evidence includes detailed reporting from specialized AI news sites and citations of researcher leaks. However, it lacks a primary technical report or official press release from DeepSeek to verify the exact specifications.

Key checks

  • Release Timing: Researcher Yifan Zhang predicted on April 19, 2026, that the model would be released 'as early as this week.' AI Times and Remio.ai both reported this timeline, noting that DeepSeek's web version recently added a 'Professional Mode' as a precursor to launch.

  • Model Architecture and Parameters: The model is reported to be a 1.6-trillion-parameter Mixture-of-Experts (MoE) model. It reportedly utilizes a new 'mHC' (modified Hyper-Connection) architecture and 'Engram' memory modules to reduce inference costs to roughly 1/70th of GPT-4's.

  • Benchmark Accuracy: The user's claim of 99.4% MMLU appears to be a misinterpretation. According to AI Times, the 99.4% score refers to AIME 2026 (math), while the MMLU score is reported as 92.8%. The SWE-Bench score of 83.7% matches the reports.

  • Hardware Compatibility: Reports confirm the model was trained on Huawei Ascend 950PR chips. Due to its size, running the model on consumer hardware like a 512GB Mac would likely require significant quantization (a rough memory estimate follows this list).
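
As a rough sanity check on the 512GB Mac point, the sketch below estimates the weight storage a 1.6-trillion-parameter model would need at common quantization levels. The 1.6T parameter count is the unofficial figure from the reports; the bytes-per-parameter values are standard for the listed formats, and KV-cache, activation, and runtime overhead are ignored, so this is a lower bound rather than a definitive requirement.

```python
# Back-of-envelope weight-memory estimate for a 1.6T-parameter model.
# Assumptions: 1.6e12 parameters (unofficial, from the reports),
# dense storage of all expert weights, no KV-cache or runtime overhead.
TOTAL_PARAMS = 1.6e12          # reported (unofficial) parameter count
MAC_UNIFIED_MEMORY_GB = 512    # Mac configuration named in the claim

formats = {
    "FP16 (16-bit)": 2.0,      # bytes per parameter
    "INT8 (8-bit)":  1.0,
    "4-bit":         0.5,
    "~2.5-bit":      0.3125,
}

for name, bytes_per_param in formats.items():
    size_gb = TOTAL_PARAMS * bytes_per_param / 1e9
    fits = "fits" if size_gb <= MAC_UNIFIED_MEMORY_GB else "does not fit"
    print(f"{name:>14}: ~{size_gb:,.0f} GB -> {fits} in {MAC_UNIFIED_MEMORY_GB} GB")
```

Under these assumptions, even a 4-bit version of the full 1.6T model (~800 GB of weights) would exceed 512 GB of unified memory; fitting it entirely in memory on such a Mac would require quantization below roughly 2.5 bits per weight or offloading part of the weights, which is consistent with the fact-check's conclusion that only a heavily quantized build would be feasible.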

Confidence

Medium