Governments and regulators in multiple regions have escalated actions against Grok, the AI chatbot created by xAI and deployed on the X platform, after sexually explicit images generated by the system circulated on the social network. Throughout January, authorities opened inquiries, issued directives and in some cases restricted access to the tool while pressing the platform and its developer for technical and policy changes.
Europe
On January 26 the European Commission launched an investigation to determine whether Grok has disseminated illegal material, including manipulated sexualised images, within the European Union. The probe will assess whether X fulfilled requirements under the bloc's digital rules to properly evaluate and mitigate risks posed by the technology.
The Commission had earlier, on January 8, extended a retention order it sent to X last year, requiring the company to preserve all internal documents and data related to Grok until the end of 2026. Separately, Britain’s media regulator Ofcom has opened an inquiry into whether sexually intimate deepfakes produced by Grok breached the platform’s obligation under the Online Safety Act to protect people in the UK from content that may be illegal.
In France, government ministers referred sexually explicit Grok-created content circulating on X to prosecutors and notified the media regulator Arcom, asking it to verify the platform's compliance with EU rules. Germany's media minister Wolfram Weimer said EU rules provide the tools to address illegal content and cautioned that the issue could evolve into the "industrialisation of sexual harassment". Italy’s data protection authority warned that creating "undressed" deepfake images of real people without their consent could constitute serious privacy violations and in some instances amount to criminal offences. Swedish political leaders also condemned sexualised material generated by Grok after reports that imagery involving Sweden's deputy prime minister had been produced from a user prompt.
Asia
Authorities across Asia have likewise moved to curb the spread of AI-generated sexual content. India’s IT ministry issued a formal notice to X on January 2 over the alleged Grok-enabled creation or sharing of obscene sexualised images, ordering the material to be removed and requiring a report within 72 hours on the actions taken.
Japan opened a probe into X over Grok and said the government would consider all available options to prevent the generation of inappropriate images. Indonesia’s communications and digital ministry reported it had blocked access to Grok; digital minister Meutya Hafid said the step aimed to protect women and children from AI-generated fake pornographic content and cited Indonesia’s strict anti-pornography laws.
Malaysia restored user access to Grok after X implemented additional safety measures, the country's communications regulator said on January 23. The Philippines said on January 21 that it would restore access to Grok after the developer agreed to remove image-manipulation tools that had raised child-safety concerns, according to the nation’s cybercrime investigation unit.
Americas
In the United States, California's governor and attorney general said on January 14 they were seeking answers from xAI amid reports of non-consensual sexual images circulating on the platform. The office of Canada’s privacy commissioner said it was broadening an existing investigation into X after reports that Grok was generating non-consensual, sexually explicit deepfakes. In Brazil, the federal government and prosecutors issued a joint statement on January 20 giving xAI 30 days to prevent the chatbot from spreading fake sexualised content.
Oceania
Australia’s online-safety regulator, eSafety, said on January 7 it was investigating Grok-generated "digitally undressed" sexualised deepfake images. The regulator said it was assessing the adult material under its image-based abuse scheme and noted that the examples involving children it had reviewed so far did not meet the legal threshold for child sexual abuse material under Australian law.
xAI’s Measures and Platform Controls
xAI said on January 14 that it had limited image editing capabilities for Grok users and had blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it’s illegal". The company did not identify the countries affected by the location-based blocks. Earlier measures had restricted Grok’s image generation and editing features to paying subscribers.
Scope and Significance
The actions taken by regulators and governments span investigative measures, procedural preservation orders, take-down demands, temporary access blocks and restoration of services conditional on changes by the developer. Officials are invoking a range of legal frameworks - from digital regulation compliance in the EU to national privacy laws, online safety statutes and anti-pornography rules - as they assess whether the content created by Grok crosses legal thresholds or violates citizens' rights.
In several countries, authorities signalled that existing regulatory tools and legal standards are being applied to AI-generated material to determine accountability, protect privacy, and address potential child-safety concerns. Where regulators found immediate risks, they sought rapid remedial action from X and xAI or temporarily limited consumer access to the technology.
What Remains Unresolved
Regulators are continuing inquiries to establish whether the spread of sexualised deepfakes via Grok amounts to illegal content in their jurisdictions and whether X and xAI took sufficient steps to anticipate and mitigate those risks. Investigations and retention orders suggest authorities will seek internal records and other evidence to assess compliance and possible breaches, but no final findings or resulting enforcement actions have yet been reported in the cases described.
As these processes unfold, questions remain about how jurisdictions will coordinate oversight, the scope of remedies regulators will demand, and whether further technical or policy changes by the developer will satisfy multiple national and supranational standards.