DeepSeek hit by longest outage, users report disruptions lasting over 7 hours


March 30, 2026
SCIENCE & TECHNOLOGY

New Delhi: China’s popular AI chatbot DeepSeek experienced the biggest outage in its history, with the platform going offline for over seven hours overnight. 

Outage-tracking platform Downdetector showed users first reporting problems on Sunday evening, multiple reports said, adding that DeepSeek’s status page acknowledged the initial issue at 9:35 pm.

The platform marked the issue as resolved about two hours later, but problems recurred and were not fully fixed until 10:33 am on the following day, the reports said.

The reasons for the massive outage remain unclear, as DeepSeek has not issued an official statement on the cause.

Chinese AI startup DeepSeek has maintained a near-99 per cent uptime record since it first unveiled its R1 model in January 2025, according to its status page.

The Chinese platform shot to prominence in January 2025, when its AI models triggered a selloff in Silicon Valley tech stocks that wiped out billions of dollars in market value. The rise of DeepSeek stoked fears that American dominance in the AI race was over.

However, the Chinese startup has not delivered models on the scale of the latest offerings from OpenAI (ChatGPT), Google (Gemini), and Anthropic (Claude).

United States-based artificial intelligence firm Anthropic recently accused three Chinese unicorns, including DeepSeek, of illegally extracting capabilities from its Claude model to advance their own systems.

The US firm alleged that the extraction, carried out through a process known as distillation, raised national security concerns.

The alleged scheme reportedly involved creating around 24,000 fraudulent accounts to train Chinese models on more than 16 million exchanges with Claude.
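Distillation, in this context, broadly means training a "student" model on the logged outputs of a stronger "teacher" model rather than on the teacher's weights or original training data. The toy sketch below (all names hypothetical; in practice the student is a neural network fine-tuned on millions of exchanges, not a lookup table) illustrates the two steps the report describes: collecting teacher responses, then training a student on them.

```python
# Illustrative sketch of distillation. The teacher here is a stand-in for a
# capable model behind an API; the student simply memorizes the logged
# exchanges, whereas a real student would update neural-network weights.

def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for a strong model queried over an API.
    canned = {
        "capital of France": "Paris",
        "2 + 2": "4",
    }
    return canned.get(prompt, "I don't know.")

def collect_exchanges(prompts):
    # Step 1: query the teacher and log every prompt/response pair.
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    # Step 2: "train" the student on the logged exchanges.
    def __init__(self, exchanges):
        self.knowledge = dict(exchanges)

    def answer(self, prompt: str) -> str:
        return self.knowledge.get(prompt, "I don't know.")

exchanges = collect_exchanges(["capital of France", "2 + 2"])
student = StudentModel(exchanges)
print(student.answer("capital of France"))  # mimics the teacher: Paris
```

The point of the technique, and of the allegation, is that the student acquires the teacher's behaviour without access to its weights, which is why Anthropic frames logged conversations as the stolen asset.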

The company warned that models produced this way may lack the safety guardrails that firms such as itself implement, and could therefore be used for cyberattacks or to develop biological weapons.

These models could enable "authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," it said, warning that "the window to act is narrow."
