EU Launches Formal Investigation into X Over Grok AI Deepfake Concerns

By Rolling World News
European Commission Initiates Probe into X Regarding AI-Generated Content

Brussels, Belgium – The European Commission has officially commenced an investigation into X, the social media platform owned by Elon Musk, following allegations that its artificial intelligence tool, Grok, has been used to generate sexualized deepfake images of real individuals. This move underscores the growing scrutiny faced by major online platforms under the European Union's stringent Digital Services Act (DSA).

The investigation, announced recently, focuses on whether X has failed to adequately address the risks associated with its AI functionalities, specifically the creation and dissemination of illegal content. The concerns center on Grok AI's alleged capability to digitally alter images of real people, particularly to remove clothing, thereby producing non-consensual sexualized depictions.

The Digital Services Act and Platform Accountability

At the heart of the European Commission's action is the Digital Services Act (DSA), a landmark piece of legislation designed to create a safer and more accountable online environment. The DSA imposes comprehensive obligations on very large online platforms (VLOPs), like X, to manage systemic risks, combat illegal content, and protect fundamental rights. Non-compliance with the DSA carries significant penalties, including fines that can reach up to 6% of a company's global annual turnover.

The current probe will assess whether X's internal policies, moderation practices, and the design of its AI systems are sufficient to prevent the proliferation of such harmful content. MEP Regina Doherty, representing Ireland, has emphasized that the Commission will specifically examine if "manipulated sexually explicit images" generated by Grok have been accessible to users within the EU, highlighting the cross-border implications of digital content.

Global Scrutiny and X's Response

The EU's investigation is not an isolated incident. It closely follows a similar announcement in January from the UK's communications regulator, Ofcom, which launched its own probe into X's handling of deepfake content. This parallel scrutiny from two major regulatory bodies signals a coordinated international effort to hold tech companies accountable for the misuse of AI technologies on their platforms.

In response to earlier allegations, X's Safety account had previously stated that the platform had taken steps to prevent Grok from digitally altering images of people to remove their clothing in "jurisdictions where such content is illegal." Campaigners and victims of deepfake technology have nonetheless voiced strong criticism, arguing that the tool's ability to generate such explicit images should have "never happened" in the first place, regardless of subsequent mitigation efforts. Ofcom has confirmed that its investigation remains active despite X's stated measures.

Expanding the Scope of Regulatory Inquiry

Adding another layer to the regulatory pressure, the EU regulator has also confirmed that it has expanded an existing investigation, originally launched in December 2023. This broader inquiry focuses on the risks associated with X's recommender systems—the algorithms responsible for suggesting specific posts to users. The expansion indicates a wider concern within the Commission regarding the platform's overall content moderation strategies and its algorithms' potential role in amplifying or exposing users to harmful material, including disinformation and illegal content.

The Commission has indicated that it possesses the authority to "impose interim measures" if X is found to be non-compliant and refuses to implement meaningful adjustments to its systems and practices. Such measures could range from mandating specific operational changes to temporary restrictions on certain functionalities, underscoring the serious nature of the ongoing regulatory actions.

The Broader Challenge of AI Governance

This investigation into X and Grok AI highlights the escalating challenges regulators worldwide face in governing rapidly evolving artificial intelligence technologies. As AI capabilities advance, so does the potential for misuse, from generating convincing deepfakes to spreading misinformation at unprecedented scale. The case of Grok AI and sexualized deepfakes serves as a critical test of the enforceability of new digital regulations such as the DSA, and of the broader commitment to responsible AI development and platform accountability. The outcome of these investigations could set important precedents for how AI-driven platforms are regulated globally to protect user safety and uphold ethical standards.