In the past few weeks, Grok — the artificial intelligence (AI) system developed by Elon Musk’s xAI — has been generating nonconsensual, sexualized images of women and children on the social media platform X. This has prompted investigations and formal scrutiny by regulators in the EU, France, India, Malaysia and the UK. European officials have described the conduct as illegal. British regulators have launched urgent inquiries. Other governments have warned that Grok’s output might contravene domestic criminal and platform safety laws. Far from marginal regulatory disputes, these discussions get to the heart of AI governance.
Governments worldwide increasingly agree on a basic premise of AI governance: Systems deployed at scale must be safe, controllable and subject to meaningful oversight. Whether framed by the EU’s Digital Services Act (DSA), the Organization for Economic Co-operation and Development’s AI Principles, UNESCO’s AI ethics framework or emerging national safety regimes, these norms are clear and unwavering. AI systems that enable foreseeable harm, particularly sexual exploitation, are incompatible with society’s expectations for the technology and its governance.
There is also broad global agreement that sexualized imagery involving minors — whether real, manipulated or AI-generated — constitutes one of the clearest red lines in technology governance. International law, human-rights frameworks and domestic criminal statutes converge on this point.
Grok’s generation of such material does not fall into a gray area. It reflects a clear and fundamental failure of the system’s design, safety assessments, oversight and control. The ease with which Grok can be prompted to produce sexualized imagery involving minors, the breadth of regulatory scrutiny it now faces and the absence of publicly verifiable safety testing all point to a failure to meet society’s baseline expectations for powerful AI systems. Musk’s announcement that the image-generation service would be available only to paying subscribers does nothing to resolve these failures.
This is not a one-off problem for Grok. In July last year, Poland’s government urged the EU to open an investigation into Grok over its “erratic” behavior. In October, more than 20 civic and public-interest organizations sent a letter urging the US Office of Management and Budget to suspend Grok’s planned deployment across federal agencies in the US. Many AI safety experts have raised concerns about the adequacy of Grok’s guardrails, with some saying that its security and safety architecture is inadequate for a system of its scale.
These concerns were largely ignored, as governments and political leaders sought to engage, partner with or court xAI and its founder. However, the scrutiny that xAI now faces across multiple jurisdictions seems to vindicate those concerns, while exposing a deep structural problem: Advanced AI systems are being deployed and made available to the public without safeguards proportionate to their risks. This should serve as a warning to states considering similar AI deployments.
As governments increasingly integrate AI systems into public administration, procurement and policy workflows, retaining the public’s trust would require assurances that these technologies comply with international obligations, respect fundamental rights and do not expose institutions to legal or reputational risk. To this end, regulators must use the Grok case to demonstrate that their rules are not optional.
Responsible AI governance depends on alignment between stated principles and operational decisions. Many governments and intergovernmental bodies have articulated commitments to AI systems that are safe, objective and subject to ongoing oversight, but these commitments lose credibility when states tolerate the deployment of systems that violate widely shared international norms with apparent impunity.
By contrast, suspending a model’s deployment pending rigorous and transparent assessment is consistent with global best practices in AI risk management. Doing so enables governments to determine whether a system complies with domestic law, international norms and evolving safety expectations before it becomes further entrenched. Equally important, it demonstrates that governance frameworks are not merely aspirational statements, but operational constraints — and that breaches will have real consequences.
The Grok episode underscores a central lesson of the AI era: Governance lapses can scale as quickly as technological capabilities. When guardrails fail, the harms do not remain confined to a single platform or jurisdiction; they propagate globally, triggering responses from public institutions and legal systems.
For European regulators, Grok’s recent output is a defining test of whether the DSA would function as a binding enforcement regime or amount merely to a statement of intent. At a time when governments, in the EU and beyond, are still defining the contours of global AI governance, the case might serve as an early barometer for what technology companies can expect when AI systems cross legal boundaries, particularly where the harm involves conduct as egregious as the sexualization of children.
A response limited to public statements of concern would invite future abuses, by signaling that enforcement lacks teeth. A response that includes investigations, suspensions and penalties, by contrast, would make clear that certain lines cannot be crossed, regardless of a company’s size, prominence or political capital.
Grok should be treated not as an unfortunate anomaly to be quietly managed and put behind us, but as the serious violation that it is. At a minimum, there needs to be a formal investigation, suspension of deployment and meaningful enforcement.
Lax security measures, inadequate safeguards or poor transparency regarding safety testing should incur consequences. Where government contracts include provisions related to safety, compliance or termination for cause, they should be enforced. And where laws provide for penalties or fines, they should be applied. Anything less risks signaling to the largest technology companies that they can deploy AI systems recklessly, without fear that they would face accountability if those systems cross even the brightest of legal and moral red lines.
J.B. Branch is a big tech accountability advocate at Public Citizen.
Copyright: Project Syndicate