Italy Closes DeepSeek AI Investigation After Commitments to Warn Users of AI 'Hallucination' Risks
- Editorial Team

- Jan 6

Italy’s Antitrust Authority Concludes DeepSeek Probe
Italy’s antitrust regulator, the Autorità Garante della Concorrenza e del Mercato (AGCM), has officially closed its investigation into Chinese AI company DeepSeek. The probe, opened in June 2025, focused on concerns that DeepSeek’s AI platform could produce inaccurate or misleading outputs, commonly known as “AI hallucinations.”
The closure came after DeepSeek committed to enhancing transparency and providing clearer warnings to users about potential risks associated with its AI tools. These measures were deemed sufficient to address the regulator’s concerns.
What Are AI Hallucinations?
AI hallucinations occur when artificial intelligence systems generate outputs that appear credible but are factually incorrect. For example, a chatbot may confidently cite a court ruling or research paper that does not exist. These inaccuracies can range from minor errors to serious misinformation that may affect decision-making, business operations, or public perception.
With AI adoption rapidly increasing across industries, regulators worldwide are scrutinizing how AI companies handle these risks. Italy’s action against DeepSeek highlights the growing demand for responsible AI usage.
DeepSeek’s Response to Regulatory Concerns
DeepSeek operates through two entities: Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence. In response to AGCM’s investigation, the company made binding commitments to:
Provide explicit warnings about AI hallucination risks.
Make these warnings easily understandable for all users.
Implement measures to ensure transparency and user awareness.
The AGCM confirmed that these actions addressed the regulatory concerns, leading to the closure of the investigation.
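DeepSeek has not published how these warnings are implemented, but the idea behind the commitments is straightforward to sketch. The Python snippet below is purely illustrative: the function name, message text, and structure are invented here, not taken from DeepSeek's platform. It shows one way a chat backend could attach a plain-language hallucination notice to every model response:

```python
# Hypothetical illustration only: DeepSeek's actual implementation is not public.
# Sketch of a backend helper that attaches a plain-language hallucination
# warning to every model response, in the user's own language.

HALLUCINATION_NOTICE = {
    "en": ("This answer was generated by an AI model and may contain "
           "inaccurate or invented information. Verify important facts "
           "before relying on them."),
    "it": ("Questa risposta è stata generata da un modello di IA e può "
           "contenere informazioni inesatte o inventate. Verifica i fatti "
           "importanti prima di farvi affidamento."),
}

def with_hallucination_warning(model_output: str, locale: str = "en") -> str:
    """Prepend a clearly worded risk notice to a raw model response."""
    notice = HALLUCINATION_NOTICE.get(locale, HALLUCINATION_NOTICE["en"])
    return f"[Notice] {notice}\n\n{model_output}"

if __name__ == "__main__":
    raw_answer = "The Treaty of Rome was signed in 1957."  # sample model output
    print(with_hallucination_warning(raw_answer, locale="it"))
```

Keeping the notice in the user's own language, as this sketch does, reflects the AGCM's emphasis on warnings that are easily understandable for all users.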
Europe’s Growing Focus on AI Regulation
Italy’s decision is part of a broader European trend of actively regulating AI. The European Union is at the forefront of establishing comprehensive AI safety rules through initiatives such as the EU AI Act. These regulations focus on transparency, accountability, and user protection.
By requiring AI companies like DeepSeek to disclose potential risks, Europe aims to build trust between users and emerging technologies while encouraging responsible innovation.
Lessons for AI Companies and Users
The DeepSeek case provides valuable insights for both AI developers and users:
For AI developers: Regulatory authorities increasingly expect proactive measures to prevent harm. Transparent user communication is not just ethical—it is becoming mandatory.
For users: Even advanced AI tools are not infallible. Users must critically evaluate AI-generated outputs, particularly when making business or personal decisions.
The AGCM emphasized that non-compliance in the future could lead to renewed scrutiny, signaling that responsible AI practices are now a key expectation in Europe.
The Future of AI Oversight
The closure of the DeepSeek investigation demonstrates how regulators and AI companies can work together to ensure safe technology deployment. Clear warnings, risk disclosures, and transparency measures are now essential for any AI tool entering the European market.
As AI continues to evolve and expand globally, cases like DeepSeek’s serve as a blueprint for balancing innovation with user safety and regulatory compliance.


