In the past few weeks, Grok — the artificial intelligence (AI) system developed by Elon Musk’s xAI — has been generating nonconsensual, sexualized images of women and children on the social media platform X. This has prompted investigations and formal scrutiny by regulators in the EU, France, India, Malaysia and the UK. European officials have described the conduct as illegal. British regulators have launched urgent inquiries. Other governments have warned that Grok’s output might contravene domestic criminal and platform safety laws. Far from being marginal regulatory disputes, these cases go to the heart of AI governance.
Governments worldwide increasingly agree on a basic premise of AI governance: Systems deployed at scale must be safe, controllable and subject to meaningful oversight. Whether framed by the EU’s Digital Services Act (DSA), the Organization for Economic Co-operation and Development’s AI Principles, UNESCO’s AI ethics framework or emerging national safety regimes, these norms are clear and unwavering. AI systems that enable foreseeable harm, particularly sexual exploitation, are incompatible with society’s expectations for the technology and its governance.
There is also broad global agreement that sexualized imagery involving minors — whether real, manipulated or AI-generated — constitutes one of the clearest red lines in technology governance. International law, human-rights frameworks and domestic criminal statutes converge on this point.
Grok’s generation of such material does not fall into a gray area. It reflects a clear and fundamental failure of the system’s design, safety assessments, oversight and control. The ease with which Grok can be prompted to produce sexualized imagery involving minors, the breadth of regulatory scrutiny it now faces and the absence of publicly verifiable safety testing all point to a failure to meet society’s baseline expectations for powerful AI systems. Musk’s announcement that the image-generation service would be available only to paying subscribers does nothing to resolve these failures.
This is not a one-off problem for Grok. In July last year, Poland’s government urged the EU to open an investigation into Grok over its “erratic” behavior. In October, more than 20 civic and public-interest organizations sent a letter urging the US Office of Management and Budget to suspend Grok’s planned deployment across federal agencies in the US. Many AI safety experts have raised concerns about the adequacy of Grok’s guardrails, with some saying that its security and safety architecture is inadequate for a system of its scale.
These concerns were largely ignored, as governments and political leaders sought to engage, partner with or court xAI and its founder. That xAI is now under scrutiny across multiple jurisdictions vindicates those warnings, while exposing a deep structural problem: Advanced AI systems are being deployed and made available to the public without safeguards proportionate to their risks. This should serve as a warning to states considering similar AI deployments.
As governments increasingly integrate AI systems into public administration, procurement and policy workflows, retaining the public’s trust would require assurances that these technologies comply with international obligations, respect fundamental rights and do not expose institutions to legal or reputational risk. To this end, regulators must use the Grok case to demonstrate that their rules are not optional.
Responsible AI governance depends on alignment between stated principles and operational decisions. While many governments and intergovernmental bodies have articulated commitments to AI systems that are safe, objective and subject to ongoing oversight, these lose credibility when states tolerate the deployment of systems that violate widely shared international norms with apparent impunity.
By contrast, suspending a model’s deployment pending rigorous and transparent assessment is consistent with global best practices in AI risk management. Doing so enables governments to determine whether a system complies with domestic law, international norms and evolving safety expectations before it becomes further entrenched. Equally important, it demonstrates that governance frameworks are not merely aspirational statements, but operational constraints — and that breaches will have real consequences.
The Grok episode underscores a central lesson of the AI era: Governance lapses can scale as quickly as technological capabilities. When guardrails fail, the harms do not remain confined to a single platform or jurisdiction; they propagate globally, triggering responses from public institutions and legal systems.
For European regulators, Grok’s recent output is a defining test of whether the DSA would function as a binding enforcement regime or amount merely to a statement of intent. At a time when governments, in the EU and beyond, are still defining the contours of global AI governance, the case might serve as an early barometer for what technology companies can expect when AI systems cross legal boundaries, particularly where the harm involves conduct as egregious as the sexualization of children.
A response limited to public statements of concern would invite future abuses, by signaling that enforcement lacks teeth. A response that includes investigations, suspensions and penalties, by contrast, would make clear that certain lines cannot be crossed, regardless of a company’s size, prominence or political capital.
Grok should be treated not as an unfortunate anomaly to be quietly managed and put behind us, but as the serious violation that it is. At a minimum, there needs to be a formal investigation, suspension of deployment and meaningful enforcement.
Lax security measures, inadequate safeguards or poor transparency regarding safety testing should incur consequences. Where government contracts include provisions related to safety, compliance or termination for cause, they should be enforced. And where laws provide for penalties or fines, they should be applied. Anything less risks signaling to the largest technology companies that they can deploy AI systems recklessly, without fear that they would face accountability if those systems cross even the brightest of legal and moral red lines.
J.B. Branch is big tech accountability advocate at Public Citizen.
Copyright: Project Syndicate