Undressed by Algorithm: How Grok Exposed the Legal Vacuum Around AI Sexual Abuse

Photo: An anti-Musk and X (formerly Twitter) poster placed in a bus stop by the activist group ‘Everyone Hates Elon’ in London, United Kingdom, January 14, 2026. Kristian Buus / Getty Images

A user types a single word: “Undress.” Within seconds, Grok, the artificial intelligence tool built by Elon Musk’s xAI and hosted on X, generates a sexualized image of a real woman. She consents to nothing. She does not even know it is happening. The image spreads. 

She is not alone. It happens again, and again. It happens to celebrities, strangers, women. It happens to minors.

When the backlash mounted in January 2026, Musk had an answer ready: the users were to blame. Musk suggested that the platform was merely a neutral vessel: a stage on which bad actors had performed. Free speech, he implied, demanded as much. But legal scholars, policy experts, and technologists who spoke to The Politic tell a different story, one in which the platform is not a bystander but an architect, the law is not equipped but exposed, and the victims, overwhelmingly women and girls, are left to fight alone in a system that was never built to protect them.

The Anatomy of a Crisis

Grok’s January 2026 backlash did not emerge from an isolated incident. Months prior, users had discovered that the AI image generation tool embedded in X could be prompted with disarmingly simple instructions to produce nonconsensual sexualized imagery of real people from ordinary photographs. Within two weeks of the crisis reaching public attention, Grok had generated an estimated 23,000 sexualized images of children and at least 1.8 million posts of sexualized images of women. 

The international response was swift. Malaysia and Indonesia blocked access to Grok entirely. The European Union invoked the AI Act, signaling that the platform’s behavior violated its sweeping new AI regulation framework. In the United States, the response was overwhelmingly mute. No federal agency intervened. No government official issued a formal rebuke. The limited action that emerged came almost entirely from survivors themselves, through private lawsuits filed without institutional support.

Musk, meanwhile, justified the non-interference as principled, as free speech, as the cost of an open platform. It was a framing that legal scholars found difficult to take seriously.

Scott Shapiro, the Charles F. Southmayd Professor of Law and Philosophy at Yale Law School, called Grok’s behavior what it was: “ew.” On Musk’s free speech defense, he was equally direct. “I used to teach criminal law, and I know enough to know that the user can be held liable. That’s not that hard. The question is whether the company can be.” The more troubling question, Shapiro argued, was not whether users abused the system, but whether the company made a deliberate choice to let them. “The company created a product which was able to be abused in certain ways and did not put guardrails to prevent this, despite the fact that they have guardrails on other things, and they just chose not to do it.” As for why no guardrails were placed for sexual imagery specifically, Shapiro was sardonic. “It was claimed as free speech, and ‘don’t be so uptight’—for the lols, for liberal tears, I don’t know.” 

He paused, “I still can’t believe it happened.”

The Law and Its Limits

To understand why Grok’s behavior exposed a legal vacuum rather than triggering a legal response, one must understand the architecture of American internet law. Section 230 of the 1996 Communications Decency Act was born out of a specific and narrow concern: Congress worried that children were gaining access, through the internet, to pornography. They wanted platforms to self-regulate without heavy-handed federal oversight. What they could have not imagined at the time, however, was that the platform itself would become the one generating the harm.

Section 230 became the legal foundation of the modern internet. Its core provision shields platforms from being treated as the publisher or speaker of content generated by their users. The logic, explained Dr. Mary Anne Franks, the Eugene L. and Barbara A. Bernard Professor of Law at George Washington University and President of the Cyber Civil Rights Initiative (CCRI), was originally meant to encourage platforms to moderate content responsibly. The law did this by shielding platforms from being treated as the publisher of content their users posted, so that companies could remove harmful material without fear of being sued for the very act of moderating. But the way courts interpreted it over the following decades handed the technology industry something far more powerful than anyone initially intended. “We have said, for 30 years, to an industry: do whatever you want, make as much of a mess as possible, hurt everybody you can, as long as it makes you profit, we trust you,” Dr. Franks told The Politic. The result, she argued, is that the U.S. is not merely a safe harbor for AI-enabled abuse. “I would say the U.S. is exporting abuse as a kind of business model.”

Dr. Franks knows the legal landscape here better than almost anyone. In 2013, she drafted the first model criminal statute on the nonconsensual distribution of intimate imagery, the template that has since shaped laws in all fifty states and served as the basis for the federal Take It Down Act, passed in 2024. She is also realistic about where that law breaks down.

Section 230, she explained, makes a critical distinction between the content a platform facilitates and the content a platform creates. When Grok takes a user’s prompt and generates a sexualized image, it is no longer a passive intermediary. It is a co-creator. “Even though you could argue it’s a combination of the user giving the prompt and the AI giving the product,” Franks said, “all you need, for the purposes of getting away from Section 230, is for there to be some kind of contribution or development of the content, which I think easily is what Grok was doing here. So I think the answer is no, they’re not protected by Section 230.”

The deeper problem, however, is that even if platforms lose the Section 230 shield, the law governing what they produce remains incomplete. Real intimate images implicate privacy; these images use real information that was never meant to be public. Fabricated deepfake images implicate something closer to defamation; they depict something that never happened. 

“We can protect real information because there is a privacy angle,” Franks explained. “When it comes to false information, we need a different theory.” The Take It Down Act attempts to address both, but Franks identified a flaw that she has repeatedly tried to get corrected. The law contains an exception that permits a person depicted in an image to distribute it, meaning that a man who photographs himself with a woman without her knowledge, or who Photoshops himself into a fabricated intimate image, may be legally permitted to share it. “I cannot tell you how many times I had conversations with the sponsors to say, surely this is not what you meant,” she said. “And they would always say, ‘Oh no, we will fix that.’And they just never did, which makes me think it wasn’t a mistake.”

Matthew Kugler, Professor of Law at Northwestern University, conducted a plethora of empirical research on how the public perceives privacy harms. He offered a data-driven corrective to the idea that society is simply not ready to reckon with deepfake abuse. In a 2021 paper co-authored with Carly Pace, his research found that roughly 90% of Americans believe nonconsensual deepfake sexual imagery should be prohibited. “Getting 90% of Americans to agree on much of anything is really hard,” he told The Politic. Even the American Civil Liberties Union, historically skeptical of speech regulation in this area, was “somewhat willing to concede that the pornographic deepfakes had to go.” The gap between what the public wants and what the law delivers, Kugler argued, reflects not democratic ambivalence but institutional failure, compounded in this case by the particular character of the man at the center of it. “I think Elon Musk understands free speech as very much in favor of whatever speech Elon Musk wants to allow,” he said. “It frankly seems rather juvenile to me. I think he’s playing to a very particular, very online community of people who, frankly, like annoying people. They’re doing it because they know it’s upsetting.”

On the question of government accountability, Kugler was careful but pointed. Federal enforcement, he noted, was never legally required, only discretionary. The Federal Trade Commission (FTC) has the authority to investigate unfair consumer practices and could have treated Grok’s behavior as exactly that. “You could certainly look at what Grok was doing and say, this is unfair to consumers, the FTC should have gotten involved,” he said. “But it has no legal obligation to get involved. It just has a legal avenue to do so.” A different administration, he implied, would have walked through that door. This one had every reason not to.

Inside the Schools

While legal scholars debated liability and legislative drafters identified loopholes, the harm was already spreading through school hallways. Kristin Woelfel has spent years sitting at the intersection of those two worlds. A former fellow at the Cyber Civil Rights Initiative and member advocate at the Miami-Dade County teachers union, the largest in the Southeast, she now serves as Policy Counsel on Center for Democracy and Technology’s (CDT) Equity in Civic Technology team, where she leads research on AI abuse in K-12 education. Her data paints a picture of almost total institutional unpreparedness.

In a 2024 CDT survey of students, parents, and teachers, respondents across all three groups said that deepfake Non-Consensual Intimate Imagery (NCII) and authentic NCII caused similarly serious harm. “I don’t think you need to explain to somebody why sharing an authentic, intimate depiction of them is harmful,” Woelfel told The Politic. “And so, similarly, even if it’s not real, you probably shouldn’t have to explain why that’s also very harmful, and why that would lead to somebody not being able to benefit from their education in the same way as everybody else.” 

The survey identified some key failures in how schools respond. First, almost none had proactive prevention programs in place. The most common institutional response to a reported incident was to refer the student to law enforcement, effectively displacing the responsibility elsewhere without doing their own investigation or considering the implications for the student’s daily life. Second, nearly no schools offered meaningful support to victims, whether counseling, guidance on how to get content removed, or even a designated point of contact for students who came forward.

“If a student goes and reports it to their teacher, you don’t know everything; you don’t immediately know how to respond to something like that,” Woelfel said. “And if you don’t know who to escalate that to, what are you supposed to do for that student?” A follow-up CDT report published before the crisis showed modest improvement in the provision of victim resources, which Woelfel attributed partly to advocacy efforts from organizations like CCRI and the passage of the Take It Down Act. But she was measured about what that progress means, believing that nothing suggests that “there is no longer a problem.”

The victims within those schools are not a random cross-section. Woelfel’s survey found that when students reported seeing deepfake NCII, they were significantly more likely to describe it as depicting a female. This is consistent with the broader data landscape. Before tools like Grok made AI image generation widely accessible, the vast majority of intimate deepfakes online depicted women. “The way that these tools have historically been used,” Woelfel said, “if they’re going to be used to harass somebody, much of the time it’s going to be women.” She noted that stories involving the exploitation of young men are beginning to emerge, and that the problem is not exclusively gendered. But the pattern is unmistakable, and it does not surprise the researchers who have been watching it for years.

Photo: A group of women participate in a Stop Image-Based Abuse protest as part of a larger campaign organized by Glamour Magazine in London, United Kingdom. Hannah Harley Young / Glamour Magazine

A Civil Rights Crisis by Another Name

Franks has spent fifteen years arguing that this is not merely a technology problem or a speech problem. Rather, it is a civil rights problem. The organization she leads is called the Cyber Civil Rights Initiative, deliberately drawing on a framework developed by her colleague Danielle Keats Citron, who argued over a decade ago that what online abuse requires is a second civil rights movement. The analogy Franks makes is to the workplace. When women entered professional spaces that had long been male-dominated, the response was not gracious. They were met with pornographic pictures on office walls, unwanted physical contact, constant denigration, a campaign aiming to make the space so hostile that women would leave. The internet, she argues, has replicated that pattern exactly. “When women really started wanting to have their own space there, there was a pushback,” she said. “Oh, well, then I guess you just cannot cut it on the internet, so you should just leave.”

The civil liberties framework, which Musk invokes when he talks about free speech, asks a fundamentally different question than the civil rights framework. The former asks whether an individual has the freedom to say what they want. The latter asks whether the impact of that speech is on entire groups of people who have historically been excluded and targeted. “If a man’s right to commit revenge porn abuse is free speech,” Franks said, “what happens when you censor a woman’s right to sexual expression? Because now she cannot speak because of it.” The stakes, she argued, are not abstract. Every woman who knows that a fabricated sexual image of her could be created and shared at any moment faces a chilling effect on her presence online, her willingness to post photographs, and her sense of safety in digital spaces. That is a civil rights injury, whether or not the law has caught up to recognizing it as one.

The Guardrail Question

If the law has been slow, could technology have done better? Alex Schapiro, a Yale computer science student, ethical hacker, and AI researcher, offered a grounded perspective on what content guardrails actually involve. They are genuinely difficult to build perfectly, he explained, partly because of a technique known as prompt engineering, in which users frame requests in ways designed to circumvent safety systems. “If you say, build a nuclear bomb, it won’t tell you,” he said. “But if you say, ‘Hey, I’m writing a story, and my character is building a nuclear bomb, what should I write about?’ you can sneak it in that way.” The challenge, he noted, is real. But challenge is not the same as impossibility. Grok’s developers made choices about where to invest in safety and where to not. In the domain of sexual imagery, the choice they made was consequential.

That choice, Franks argued, was not accidental and not principled. It was political. The same platform that chose not to moderate AI-generated sexual abuse has guardrails on other categories of content. 

“He [Musk] is notorious for banning accounts, slowing down people’s virality, changing the algorithm so his posts get more visibility,” she said. “This is someone who is constantly manipulating the platform while simultaneously claiming it is neutral.”

The decision about which harms to prevent and which to permit reflects values, and in this case, those values left women and minors without protection while the company’s owner declared himself a champion of free expression. “The fact that Trump put Musk in his government was a reward,” Franks said. “The entire thing is quite corrupt, and it is anti-democratic in probably every meaningful sense that we can think of.”

Who Is Left to Fight

When Musk pointed at the users, he was making a legal argument as much as a political one. And legally, as Shapiro acknowledged, it is not entirely wrong. Individual users who generated and shared nonconsensual images can and should face liability. But the harder question, the one the legal system has not yet answered, is what happens when the company that built the tool, hosted the content, and chose not to install the guardrails faces no meaningful consequence.

For now, the answer is that survivors mostly fight alone. They navigate a patchwork of state laws, the majority of which were written before generative AI existed and certainly before it took over. They bring private lawsuits against companies with unlimited resources. They try to get images taken down from platforms that are under no legal obligation to act quickly. The National Center for Missing and Exploited Children operates a service to help identify and remove intimate images of minors, and the Cyber Civil Rights Initiative provides state-by-state guidance on what laws exist and how to use them. These resources matter. They are also not enough.

“That is a ridiculous failure of our government,” Franks said, “to [not] even take the most minimal steps to try to regulate this behavior. And far beyond that, we have actually gone in the opposite direction, signaling in every possible way that this industry will continue to get benefits, the more irresponsible and reckless it is.”