Commentary

Yet Another Version of the “PROTECT Act” Introduced by Morley Safeword


Add Congressman Jimmy Patronis (R-Fla.) to the list of elected officials hellbent on repealing Section 230 of the Communications Decency Act.

In a press release issued January 14th, Patronis celebrated his introduction of H.R. 7045, AKA the “Promoting Responsible Online Technology and Ensuring Consumer Trust” (PROTECT) Act.

The argument Patronis made in support of his proposal is a well-worn one, rooted in the notion that Section 230 is enabling evil tech platforms to ruin America’s children by shielding those platforms from liability for things published by third parties on them.

“As a father of two young boys, I refuse to stand by while Big Tech poisons our kids without consequence,” Patronis said. “This is the only industry in America that can knowingly harm children, some with deadly consequences, and walk away without responsibility. Big Tech is digital fentanyl that is slowly killing our kids, pushing parents to the sidelines, acting as therapists, and replacing relationships with our family and friends. This must stop.”

There’s a reasonable argument to be had about whether the courts have extended Section 230’s coverage too far in some cases, but to hear people like Patronis tell it, the statute’s safe harbor provision allows “Big Tech” to do anything it pleases with total impunity.

“These companies design their platforms to hook children, exploit their vulnerability, and keep them scrolling no matter the cost,” Patronis added. “When children are told by an algorithm, or a chatbot, that the world would be better without them, and no one is being held responsible, something is deeply broken. I bet they would actually self-police their sticky apps and technologies if they knew they would have to pay big without the Big Tech Liability Protection of Section 230.”

In his press release, Patronis claims that “Section 230 shields social media companies and other online platforms from liability for content published on their sites.” This claim is a half-truth, at best. Section 230 shields social media companies from liability for content published by others on their sites. That’s an important distinction, not a distinction without a difference.

Let’s try a thought experiment here: Let’s suppose you’re a congressman whose website permits users to post comments in response to things you post on the site. Let’s further suppose one of your site’s users decides to post something defamatory about one of your colleagues. Would you want to be held directly liable for that comment? How about if instead of something defamatory, the user posted something patently illegal, like an image of a child being sexually abused; is Patronis saying my hypothetical congressman ought to go to prison in that scenario?

There are many reasons why groups like the Computer and Communications Industry Association (CCIA) are against the repeal of Section 230 – and yes, one of those reasons is that the CCIA is funded by everyone’s current favorite boogeyman, Big Tech. Another, more important reason is that the people behind the CCIA can see where this is all heading if Section 230 is outright repealed and no safe harbor at all is provided for those who offer forums in which users can publish their content and comments.

“In the absence of Section 230, digital services hosting user-created content, including everything from online reviews to posts on social media, would risk constant litigation,” the CCIA asserted in an analysis published January 12th. “Continuing to provide services optimized for user experience would require massively increased legal expenses.”

How massively would those legal expenses increase? The CCIA said, given the sheer volume of user-generated posts published in a year, if “just one post or comment in a million led to a lawsuit, digital services could face over 1.1 million lawsuits per year following a Section 230 repeal.”

“A single lawsuit reaching discovery typically costs over $100K in fees, and sometimes much more,” CCIA correctly noted. “If companies face 1.1 million lawsuits, that’s $110 billion in legal costs annually.”
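For those inclined to check the math, it’s straightforward back-of-the-envelope arithmetic. Here’s a minimal sketch in Python, assuming the roughly 1.1 trillion annual user posts implied by the CCIA’s one-in-a-million figure and its $100K-per-suit discovery estimate; the inputs are the CCIA’s, the script just multiplies them:

```python
# Back-of-the-envelope recreation of the CCIA's estimate.
posts_per_year = 1.1e12        # ~1.1 trillion user posts/comments per year (implied by the CCIA's figures)
lawsuit_rate = 1 / 1_000_000   # "just one post or comment in a million" draws a lawsuit
cost_per_suit = 100_000        # CCIA's low-end cost for a suit that reaches discovery

lawsuits_per_year = posts_per_year * lawsuit_rate
annual_legal_cost = lawsuits_per_year * cost_per_suit

print(f"Lawsuits per year: {lawsuits_per_year:,.0f}")   # 1,100,000
print(f"Annual legal cost: ${annual_legal_cost:,.0f}")  # $110,000,000,000
```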

I suppose those who say Big Tech is the devil (while using the platforms enabled by Big Tech to say so) might think this is a good thing, but I’m not sure they’ve thought this all the way through. If social media platforms can’t operate due to overwhelming legal costs, we lose all the good things about social media, too – not to mention a whole lot of jobs when those platforms inevitably go out of business.

From the perspective of the adult industry and those who enjoy adult entertainment, repealing Section 230 would likely spell the end of platforms allowing adult content creators to post self-produced content, as well. What platform would want to risk being held strictly liable for anything and everything depicted in the videos and photos adult creators produce? It would be absolute madness for platforms like OnlyFans and its competitors to maintain their current business model in the absence of Section 230 safe harbor.

Again, for those who think porn should be abolished, that development might be seen as a feature and not a bug where the idea of repealing Section 230 is concerned. But extend that same outcome to some platform they DO like – YouTube, TikTok, Facebook, Instagram, X or what have you – and they might not like the collapse quite as much.

From where I sit, the idea of repealing Section 230 should be accompanied by that old standby of a warning: “Be careful what you wish for, because you might just get it.”


Utah’s “Porn Tax”: A Levy on Paper and Ink for the Internet Age by Morley Safeword


Back in the early 1970s, the Minnesota legislature altered the state’s sales tax such that it created a “use tax” on the cost of paper and ink, while exempting the first $100,000 worth of such materials in any calendar year.

In part due to that exemption, the use tax clearly was directed at the state’s larger periodicals, including the Minneapolis Star Tribune. The Star Tribune wasn’t simply one of eleven publications incurring tax liability under the statute in the early 70s; of the $893,355 in total tax revenue collected under the statute in 1974, the Star Tribune paid $608,634 – roughly two-thirds of the total revenue collected.

The Star Tribune sued the Minnesota Commissioner of Revenue, alleging that the state’s tax scheme violated the First Amendment. The resulting case, Minneapolis Star & Tribune Co. v. Minnesota Commissioner of Revenue, was decided by the U.S. Supreme Court in 1983.

The Supreme Court ruled in favor of the newspaper, holding that the “main interest asserted by Minnesota in this case is the raising of revenue” and that while this interest was “critical to any government,” it wasn’t, by itself, enough for the law to survive scrutiny under the First Amendment.

“Standing alone, however, it cannot justify the special treatment of the press, for an alternative means of achieving the same interest without raising concerns under the First Amendment is clearly available,” Justice Sandra Day O’Connor wrote for the court, “the State could raise the revenue by taxing businesses generally, avoiding the censorial threat implicit in a tax that singles out the press.”

I’ve been thinking a lot about the Star Tribune case since first reading about SB 73, a new bill in Utah proposed by State Senator Calvin R. Musselman. There was a time when the 1983 decision would have given me confidence that Musselman’s bill wouldn’t survive court scrutiny, assuming it becomes law in the state. After the Supreme Court’s decision last summer in Free Speech Coalition v. Paxton, however, I’m a lot less certain.

Granted, there’s not a perfect analogy between the Minnesota law at issue in the Star Tribune case and the bill proposed in Utah, nor between the 1983 case and the Paxton case. But what has me feeling uneasy is the readiness of the current Supreme Court to throw aside precedent and impose a lower standard of review in cases where material alleged to be “harmful to minors” is at issue.

It’s not just reasonably well-informed laymen like me who are uncertain, either. In recent adult industry media coverage about the Utah bill, litigators who are experts in the First Amendment were divided in their analysis, as well.

Speaking to XBIZ, attorney Larry Walters of FirstAmendment.com pointed out that in one recent case, “the Georgia Supreme Court upheld a 1% gross revenue tax on adult entertainment establishments in the face of a constitutional challenge.”

“But the court reasoned that the tax was justified not based on the entertainment content produced by the businesses, but on the alleged ‘adverse secondary effects’ of physical adult establishments, such as prostitution and child exploitation,” Walters noted, adding that there are “no recognized adverse secondary effects of online adult entertainment businesses. Accordingly, the Utah bill, if adopted, could be subject to a constitutional challenge as a violation of the First Amendment.”

Walters also observed that the Utah bill could face trouble looming in the form of the Dormant Commerce Clause, which limits states’ ability to pass legislation that discriminates against or unreasonably burdens interstate commerce.

Walters’ colleague Corey Silverstein, commenting for the same article, was less optimistic.

“After the state-by-state pummeling of AV laws, this is only the beginning of another new trend of anti-adult and anti-free-speech laws that the entire industry needs to prepare for,” Silverstein said, also predicting that a challenge to the Utah bill, should it become law, would be unlikely to succeed.

One thing is certain: The Utah “porn tax” bill won’t be the end of state governments seeking to impose further regulatory burdens on the adult entertainment industry. Emboldened by their success in establishing age verification requirements, state legislatures across the country can be relied upon to cook up additional hurdles to put in the path of adult businesses, performers, content creators and anyone else engaged in expressing themselves through sexually explicit materials.


GitHub Purges Adult Game Developers, Offers No Explanation


Something strange started rippling through a small, niche corner of the internet not long ago. Developers who build mods and plugins for hentai games and even interactive sex toys began waking up to missing repositories, locked accounts, and dead links. GitHub, the place many of them had treated like home base, had quietly pulled the rug out. No warning. No explanation. Just… gone.

From conversations within the community, the rough headcount quickly took shape: somewhere between 80 and 90 repositories, representing the work of roughly 40 to 50 people, vanished in a short window. Many of the takedowns seemed to cluster around late November and early December. A large number of the affected accounts belonged to modders working on games from Illusion, a now-defunct Japanese studio known for titles that mixed gameplay with varying degrees of erotic content. One banned account alone reportedly hosted contributions from more than 30 people across 40-plus repositories, according to members of the modding scene.

What made the situation feel especially surreal was the silence. Most suspended developers say they were never told which rule they’d broken—if any. Their accounts simply stopped working. Several insisted they’d been careful to stay within GitHub’s acceptable use guidelines, avoiding anything overtly explicit. The code was functional, technical, sometimes cheeky in naming, but never pornographic. At least, not in the way most people would define it.

“Amongst my repositories there were no explicitly sexual names or images anywhere in the code or the readme, the most suggestive naming would be on the level of referencing the dick as ‘the men thing’ or referencing the sex as ‘huffing puffing,’” one developer, Danil Zverev, told me. He makes plugins for an Illusion game called Koikatsu. Zverev said he’s been using GitHub for this purpose since 2024, but on November 18, his GitHub page was “completely deleted,” he said. “No notifications anywhere, simply a 404 error when accessing the page and inability to log in on the web or in the mobile app. Also it does not allow me to register a new account with the same name or email.”

The timing raised eyebrows. GitHub had updated its acceptable use policies in October 2025, adding language that forbids “sexually themed or suggestive content that serves little or no purpose other than to solicit an erotic or shocking response, particularly where that content is amplified by its placement in profiles or other social contexts.” The policy explicitly bars pornographic material and “graphic depictions of sexual acts including photographs, video, animation, drawings, computer-generated images, or text-based content.”

At the same time, the policy leaves room for interpretation. “We recognize that not all nudity or content related to sexuality is obscene. We may allow visual and/or textual depictions in artistic, educational, historical or journalistic contexts, or as it relates to victim advocacy,” GitHub’s terms of use state. “In some cases a disclaimer can help communicate the context of the project. However, please understand that we may choose to limit the content by giving users the option to opt in before viewing.”


Zverev didn’t bother appealing. He said writing to support felt pointless and chose instead to move on to another platform. Others tried to fight it—and found themselves stuck in limbo.

A developer who goes by VerDevin, known for Blender modding guides, utility tools, and plugins for the game Custom Order Maid 3D2, said users began reporting trouble accessing his repositories in late October. Oddly, he could still see his account when logged in, but not when browsing while logged out.

“Turned out, as you already know, that my account was ‘signaled’ and I had to purposefully go to the report section of Github to learn about it. I never received any notifications, by mail or otherwise,” VerDevin told me. “At that point I sent a ticket asking politely for clarifications and the proceedings for reinstatement.”

The response from GitHub Trust & Safety was vague and procedural: “If you agree to abide by our Terms of Service going forward, please reply to this email and provide us more information on how you hope to use GitHub in the future. At that time we will continue our review of your request for reinstatement.”

VerDevin replied the next day, agreeing to comply and offering to remove whatever GitHub considered inappropriate—despite still not knowing what that was. “I did not take actual steps toward it as at that point I still didn’t know what was reproach of me,” they said.

A full month passed before GitHub followed up. “Your account was actioned due to violation of the following prohibition found in our Acceptable Use Policies: Specifically, the content or activity that was reported included multiple sexually explicit content in repositories, which we found to be in violation of our Acceptable Use Policies,” GitHub wrote.

“At that point I took down several repositories that might qualify as an attempt to show good faith (like a plugin named COM3D2.Interlewd),” they said. GitHub restored the account on December 17—weeks later, and just one day after additional questions were raised about the ban—but never clarified which content had triggered the action in the first place.

Requests for explanation went unanswered. Even when specific banned accounts were flagged to GitHub’s press team, the response was inconsistent. Some accounts were reinstated. Others weren’t. No clear reasoning was ever shared.

The whole episode highlights a problem that feels painfully familiar to anyone who’s worked on the edges of platform rules: adult content policies that are vague, inconsistently enforced, and devastating when applied without warning. These repositories weren’t fringe curiosities—they were tools used by potentially hundreds of thousands of people. The English-speaking Koikatsu modding Discord alone has more than 350,000 members. Another developer, Sauceke, whose account was suspended without explanation in mid-November, said users of his open-source adult toy mods are now running into broken links or missing files.

“Perhaps most frustratingly, all of the tickets, pull requests, past release builds and changelogs are gone, because those things are not part of Git (the version control system),” Sauceke told me. “So even if someone had the foresight to make mirrors before the ban (as I did), those mirrors would only keep up with the code changes, not these ‘extra’ things that are pretty much vital to our work.”
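Sauceke’s distinction is worth spelling out. A mirror made with `git clone --mirror` preserves branches, tags, and commit history, but issues, pull requests, and release assets live in GitHub’s own database rather than in Git, so they have to be exported separately through GitHub’s REST API while an account is still accessible. A minimal sketch of that export step, using a hypothetical repository name:

```python
# Minimal sketch: back up GitHub-only metadata (issues, PRs, releases)
# that a plain "git clone --mirror" does not capture.
# The owner/repo names below are hypothetical.
import json
import requests

OWNER, REPO = "example-owner", "example-repo"
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Accept": "application/vnd.github+json"}  # add a token for private repos / rate limits

def fetch_all(url: str) -> list:
    """Follow GitHub's pagination links and return every item."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        items.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # None when there are no more pages
    return items

# The issues endpoint also returns pull requests.
with open("issues_backup.json", "w") as f:
    json.dump(fetch_all(f"{API}/issues?state=all&per_page=100"), f, indent=2)

# Release metadata, including download URLs for attached build artifacts.
with open("releases_backup.json", "w") as f:
    json.dump(fetch_all(f"{API}/releases?per_page=100"), f, indent=2)
```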

GitHub eventually reinstated Sauceke’s account on a Tuesday—seven weeks after the original suspension—following renewed questions about the bans. Support sent a brief note: “Thank you for the information you have provided. Sorry for the time taken to get back to you. We really do appreciate your patience. Sometimes our abuse detecting systems highlight accounts that need to be manually reviewed. We’ve cleared the restrictions from your account, so you have full access to GitHub again.”

Even so, the damage lingers. In Sauceke’s account and others, including the IllusionMods repository, release files remain hidden. “This makes the releases both inaccessible to users and impossible to migrate to other sites without some tedious work,” Sauceke said.

Accounts may come back. Repositories might be restored. But for many developers, the trust is already gone—and that’s the kind of thing that doesn’t reinstall quite so easily.

GitHub isn’t just another code host—it’s the town square for open-source developers. For adult creators especially, who are used to being quietly shoved to the margins everywhere else, visibility there actually matters. It’s how people find each other, trade ideas, and build something that feels bigger than a solo side project. “It’s the best place to build a community, to find like-minded people who dig your stuff and want to collaborate,” Sauceke said. But if this wave of bans stretches beyond hentai game and toy modders, they warned, it could trigger a slow exodus. Some developers aren’t waiting around to find out, already packing up their repositories and moving them to GitGoon, a platform built specifically for adult developers, or Codeberg, a nonprofit, Berlin-based alternative that runs on a similar model.


Age Verification Push Sends Millions of Britons to Unregulated Porn Sites, Charity Says


Something strange has been happening since age verification checks quietly became part of everyday internet life in the UK. You’d think stricter rules would close doors. Instead, a lot of people seem to be wandering down darker hallways.

New research from the Lucy Faithfull Foundation suggests that nearly 45 per cent of UK porn users have visited websites without age verification checks since the rules came into force last summer under the Online Safety Act.

The poll, which surveyed more than 3,700 people across Britain, found that 39 per cent of those who visited these sites ended up watching content that made them feel uncomfortable. Even more telling, 40 per cent said what they saw was enough to put them off returning altogether.

The charity, which focuses on preventing online child sexual abuse, has warned that these unregulated spaces can quietly increase the risk of people stumbling into harmful — and illegal — material.

Vicky Young, who leads the Stop It Now UK and Ireland anonymous helpline, said these sites can become a dangerous stepping stone toward indecent images of children.

“We work with people who have looked at indecent images of children to try and address that behaviour, to help support them to change their behaviour,” she said.

“One of the things that people say to us frequently is that they started looking at legal adult pornography and that their behaviour escalated. In part they were spending maybe longer online, but also the sort of content that they were looking at became more extreme and often started getting younger, and that’s when they then crossed into illegal behaviour, so looking at indecent images of children.”

That pattern — the slow creep from curiosity to something far more serious — is what worries the charity most.

“Because of that pathway and that coming out constantly in conversations we have with people, it concerns us that if people are accessing sites where there is this riskier content, that actually they are putting themselves at a higher chance of accessing indecent images,” she said.

“Sometimes that might not be intentional in the beginning, but what people tell us is that actually, if they come across those images as part of their other pornography, that then sparks curiosity. There’s something that kind of adds to the excitement around the risk, and they don’t necessarily stop at one image. They actually then start looking for more images.”

The survey also revealed a quieter anxiety bubbling under the surface. Nearly 30 per cent of respondents said they were worried about how much pornography they consume. That concern was highest among young men aged 18 to 24, with more than half admitting it’s something that troubles them — a group the Foundation describes as particularly vulnerable.

At the same time, the rules appear to be forcing a moment of self-reflection for many. Almost 47 per cent said they’ve reduced how much pornography they watch since age checks were introduced, while 55 per cent said the changes made them stop and think about their habits.

Recent data backs that up. Enforcement of highly effective age assurance led to an immediate drop in traffic to major porn sites from late July. But as one door closed, another cracked open: VPN use surged as people looked for ways around the new barriers.

The UK’s most visited adult site recorded a drop of around 1.5 million viewers year on year, falling from 11.3 million in August 2024 to 9.8 million this August.

Meanwhile, VPN usage more than doubled after the rules came in, jumping from roughly 650,000 daily users to a peak of over 1.4 million in mid-August 2025. Although that number has since dipped, it still hovered around 900,000 by November.

In other words, fewer people are walking through the front door — but plenty are still trying the side entrance.

Dr Alexandra Bailey, head of psychology at the Foundation and an associate professor at the University of Roehampton, said the intention behind age verification is sound, but the consequences are more complicated.

“Age verification is vital to protect children, and we fully support it,” she said. “But we also need to recognise that some adults are choosing riskier sites to avoid age checks. These sites can expose people to harmful material, including illegal content depicting child sexual abuse. Even if you’re not looking for it, you could encounter it — and that can have serious life-changing consequences.”

She added that the rules have created a pause many people probably needed, but not everyone is responding in a safe way.

“Age verification is also prompting adults to reflect on their online behaviour, which can be a good thing for people worried about their porn use. But we need to address the risks for those who are turning to sites that avoid the new regulations.

“Every day, our advisors speak to people whose pornography use has spiralled into something much more harmful. We know embarrassment can stop people from reaching out, but confidential help is available. If you’re worried about your own behaviour or someone else’s, contact Stop It Now before it’s too late.”

Sometimes the most dangerous part isn’t the rule itself — it’s what people do when they decide to dodge it.


2025: The Year Tighter Regulation Came to Town for the Online Adult Industry by Morley Safeword


When I got my start in the online sector of the adult entertainment business, back in the mid-nineties, there was no video streaming. Individual photos often took web users several minutes to download. And you hardly heard a peep from anyone suggesting that the fledgling industry needed to be reined in.

To be fair, many people were only vaguely aware of what was available on the internet at the time, let alone worried about what their kids might be looking for on there – and frankly, the web was so slow, using it exceeded the patience of a lot of kids, anyway.

Oh, how things have changed.

What evolved fastest, of course, was the technology underpinning the internet. As high-speed connectivity became the norm rather than the exception and video streaming capabilities increased year over year, online porn went from something enjoyed by a small subset of early adopters to a massive, multibillion dollar industry. Along with those changes in technology came ever louder calls for the online adult industry to be more tightly regulated – or regulated at all, in the still-early-internet days of the mid-nineties.

In the United States, Congress began cooking up proposals to prevent minors from accessing online porn. While these proposals enjoyed broad bipartisan support (within the legislature, at least), what they didn’t get was much support from the courts.

Early attempts to impose things like age verification requirements were slapped down by the courts, most notably in cases like Reno v. ACLU, decided in 1997. In Reno, the Supreme Court held that certain provisions of the Communications Decency Act of 1996 (“CDA”) violated the First Amendment. Specifically, the court found that the CDA’s “indecent transmission” and “patently offensive display” provisions trod upon the freedom of speech protected by the First Amendment.

What changed in 2025, as the Supreme Court again considered an age verification proposal, this time a state law passed in Texas (“HB 1181”), was in part the continued forward march of technology. But more crucially, what changed was the court’s disposition as to which “standard of review” ought to be applied.

In previous cases involving online age verification proposals, the court has applied “strict scrutiny,” a high bar that requires the government to show its actions (and laws) are “narrowly tailored” to further a “compelling government interest” and are the “least restrictive means” to further that interest.

In the case Free Speech Coalition v. Paxton, which the Supreme Court decided in June, the district court had applied strict scrutiny and found that HB 1181 failed to satisfy the standard. When the case reached the Supreme Court, however, the majority decided the court had erred in applying strict scrutiny and that the correct standard to apply was “intermediate scrutiny,” which sets the bar much lower for the government.

Writing for the majority, Justice Clarence Thomas asserted that HB 1181 has “only an incidental effect on protected speech.”

“The First Amendment leaves undisturbed States’ traditional power to prevent minors from accessing speech that is obscene from their perspective,” Thomas wrote. “That power includes the power to require proof of age before an individual can access such speech. It follows that no person – adult or child – has a First Amendment right to access such speech without first submitting proof of age.”

Since the law “simply requires adults to verify their age before they can access speech that is obscene to children,” Thomas found that HB 1181 “is therefore subject only to intermediate scrutiny, which it readily survives.”

The three justices who dissented from the majority’s position didn’t see things quite the same way, naturally. In her dissent, Justice Elena Kagan criticized the majority’s holding as “confused” and highlighted the ways in which it departed from the court’s previous rulings in similar cases.

“Cases raising that question have reached this Court on no fewer than four prior occasions – and we have given the same answer, consistent with general free speech principles, each and every time,” Kagan observed. “Under those principles, we apply strict scrutiny, a highly rigorous but not fatal form of constitutional review, to laws regulating protected speech based on its content. And laws like H. B. 1181 fit that description: They impede adults from viewing a class of speech protected for them (even though not for children) and defined by its content. So, when we have confronted those laws before, we have always asked the strict scrutiny question: Is the law the least restrictive means of achieving a compelling state interest? There is no reason to change course.”

Whether there was reason to change course or not, surely now the course has been changed. Make no mistake, laws like HB 1181 are here to stay – and they will be followed by other measures designed to restrict access to sexually explicit materials online, as well as regulation which goes much further and sweeps in an even broader range of controversial content.

The old cliché about the “canary in the coal mine” has often been applied to pornography in the context of free speech discussions. Even those who don’t like or approve of porn have often warned that crackdowns on sexually explicit expression can presage attempts at regulating other forms of speech.

If indeed those of us who work in the adult industry are part of a sentinel species, the warning to our peers in the broader world of entertainment and self-expression could not be more clear, as we look out to 2026 and beyond: Here in the world of online porn canaries, we’re choking on this new regulatory push – and most likely, some of you other birds are going to be feeling short of breath too, soon enough.


The Adult Industry Has Been Through Worse. We Will Survive by Morley Safeword


These are challenging times for the adult entertainment industry, no doubt. Around the globe, governments are passing increasingly strict regulations around age verification and other, more censorious measures putatively designed to “protect minors,” but which legislators and anti-porn crusaders also hope will reduce porn consumption among adults, as well.

If all this is enough to make some folks in the adult industry want to wave the white flag, close up shop, and find something else to do for a living, I can certainly understand why. As the name of this site reflects, people in the industry rightfully feel like they’re under siege, waging a battle against forces with a great deal more wealth and power to enlist as weapons than does our side.

As someone who has worked in the adult industry for nearly 30 years (and who has enjoyed its products even longer), take it from me when I tell you none of this is new. Some of the battlefields are new and they are constantly evolving, but the war itself goes back longer than many of us can remember.

In the United States, obscenity laws and other statutes designed to maintain public morals and prevent the corruption of society date back to colonial times. In other words, long before there was an adult entertainment industry against which to wage war, the government was taking aim at sexual expression and conduct.

Fast forward to the 19th Century and the passage of the Comstock Act of 1873, which—among many other things—made it a criminal offense to send obscene materials through the U.S. mail. The Act also made it illegal to use the mail to tell someone where such materials might be found or how to make them, provisions that were later struck down by the courts as overly broad, thankfully.

To give you an idea of just how much more restrictive the obscenity laws were in the early 20th Century than they are today, you need only look as far as the name of a seminal case from 1933 – United States v. One Book Called Ulysses. Frankly, the contents of James Joyce’s Ulysses wouldn’t even be enough to raise one-half of a would-be censor’s eyebrow these days, yet it was considered positively scandalous in its day.

From an American adult industry perspective, the War on Porn arguably reached its zenith in the 1980s and 1990s, under Presidents Ronald Reagan and George H.W. Bush. According to the Bureau of Justice Statistics, in 1990 alone there were 74 federal obscenity prosecutions targeting adult materials (as opposed to Child Sexual Abuse Materials, which are patently illegal and have no protection under the First Amendment). Contrast that figure with 2009, in which there were a total of six.

Despite the number of prosecutions at the start of the decade, the 1990s were a period of tremendous growth for the adult industry, driven in large part by the advent of the commercial internet and its relatively unregulated environment. What we’re seeing now is what governments might call a “correction” of that laissez faire approach – and what those of us in the industry might call an overcorrection.

Yes, age-verification laws present a challenge. Like a lot of people in the adult industry, I don’t object to the idea of making people prove they’re adults before consuming porn; what I object to is the means by which we’re required to offer such proof and the way those methods compromise not only our privacy, but potentially open us up to extortion, identity theft and other crimes. I’m also not convinced age verification, at least as currently executed, does much to prevent minors from being exposed to porn.

If you were to ask any of the people who have been prosecuted for obscenity over the movies they’ve made, books they’ve written, or magazines they’ve published, I think you’d find near unanimity on the question of whether they’d rather pay a financial penalty or face years in prison on top of being fined, as the likes of Paul Little (AKA “Max Hardcore”) have in the past.

My point here is not that those of us currently working in the adult industry should simply thank our lucky stars we avoided the crackdowns of the past or simply accept the current campaign against the adult industry without putting up a fight. My point is simply this: We’ve been under the gun for decades and we’ve not only survived but expanded as an industry considerably along the way.

The bottom line, whether the anti-porn zealots like it or not, is many humans like sexual expression, whether one calls it “porn,” “erotica,” or “filth.” Neither the desire to consume the products we make nor the desire to make them is going away—and neither are we.


What Would Ethical Age Verification Online Actually Look Like?


Age-verification laws are spreading fast, and on paper they sound simple enough: if a website hosts explicit content — and sometimes even if it doesn’t — it has to check that visitors are over 18, usually by collecting personal data. Lawmakers say it’s about protecting kids. Full stop.

But scratch the surface and things get messy. Privacy experts keep waving red flags about what happens when sensitive personal data starts piling up on servers. And this year, several studies quietly dropped an uncomfortable truth: these laws don’t actually seem to stop minors from accessing porn at all.

So the uncomfortable question hangs in the air — is age verification, the way it’s currently done, ethical? And if not, what would ethical age-verification even look like? When experts were asked, their answers kept circling back to the same idea: device-level filters.

Current age-verification systems

Right now, most laws — from state-by-state mandates in the U.S. to the UK’s Online Safety Act — put the burden on platforms themselves. Websites are expected to install age checks and sort it out. And, honestly, it hasn’t gone well.

“Age gating, especially the current technology that is available, is ineffective at achieving the goals it seeks to achieve, and minors can circumvent it,” said Cody Venzke, senior policy counsel for the ACLU.

A study published in November showed what happens next. Once these laws go live, searches for VPNs shoot up. That’s usually a sign people are sidestepping location-based restrictions — and succeeding. Searches for porn sites also rise, suggesting people are hunting for platforms that simply don’t comply.

The ethics get even murkier. Mike Stabile, director of public policy at the Free Speech Coalition, didn’t mince words. “In practice, they’ve so far functioned as a form of censorship.”

Fear plays a huge role here. When people worry their IDs might be stored, processed, or leaked — and we’ve already seen IDs exposed, like during October’s Discord hack — they hesitate. Adults back away from legal content. That same November study argued that the cost to adults’ First Amendment rights doesn’t outweigh the limited benefits for minors.

“Unfortunately, we’ve heard many of the advocates behind these laws say that this chilling effect is, in fact, good. They don’t want adults accessing porn,” Stabile said.

And for some lawmakers, that’s not a bug — it’s the feature. Project 2025, the blueprint tied to President Trump’s second term, openly calls for banning porn altogether and imprisoning creators. One of its co-writers, Russell Vought, was reportedly caught on a secret recording in 2024 calling age-verification laws a porn ban through the “back door.”

But there is another path. And it doesn’t start with websites at all.

An ethical age assurance method?

“Storing people’s actual birth dates on company servers is probably not a good way to approach this, especially for minors… you can’t change your birth date if it gets leaked,” said Robbie Torney, senior director of AI programs at Common Sense Media.

“But there are approaches that are privacy-preserving and are already established in the industry that could go a long way towards making it safer for kids to interact across a wide range of digital services.”

It also helps to separate two terms that often get lumped together. Age verification usually means confirming an exact age — showing ID, scanning documents, that sort of thing. Age assurance, Torney explained, is broader. It’s about determining whether someone falls into an age range without demanding precise details.

One real-world example is California’s AB 1043, set to take effect in 2027.

Under that law, operating systems — the software running phones, tablets, and computers — will ask for an age or birthday during setup. The device then creates an age-bracket signal, not an exact age, and sends that signal to apps. If someone’s underage, access is blocked. Simple. And notably, it all happens at the device level.
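The bill’s technical details are still being worked out, but the basic shape of such a signal is easy to sketch. Here’s a purely illustrative example in Python, with the bracket names and signal format invented for the purpose; the point is that the exact birth date stays on the device and only a coarse bracket is ever shared:

```python
# Hypothetical illustration of a device-level age-bracket signal.
# Bracket names and the signal format are invented for this example.
from datetime import date
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT_18_PLUS = "18_plus"

def bracket_for(birth_date: date, today: date | None = None) -> AgeBracket:
    """Collapse an exact birth date into a coarse age bracket.

    Only the bracket leaves the device; the birth date itself is never
    shared with apps or websites.
    """
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.AGE_13_15
    if age < 18:
        return AgeBracket.AGE_16_17
    return AgeBracket.ADULT_18_PLUS

# What an app or website would receive instead of a birth date:
signal = {"age_bracket": bracket_for(date(2010, 5, 4)).value}
print(signal)  # e.g. {'age_bracket': '13_15'}
```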

That approach has been recommended for years by free-speech advocates and adult platforms alike.

“Any solution should be easy to use, privacy-preserving, and consumer-friendly. In most cases, that means the verification is going to happen once, on the device,” Stabile said.

Sarah Gardner, founder and CEO of the child-safety nonprofit Heat Initiative, agreed. “Device-level verification is the best way to do age verification because you’re limiting the amount of data that you give to the apps. And many of the devices already know the age of the users,” she said.

Apple already does some of this. Its Communication Safety feature warns children when they send or receive images containing nudity through iMessage and gives them ways to get help. The company recently expanded protections for teens aged 13–17, including broader web content filters.

So yes, the technology exists. And in 2027, at least in California, device makers will have to use it.

But there’s a catch. AB 1043 doesn’t apply to websites — including adult sites. It only covers devices and app stores.

“Frankly, we want AB 1043 to apply to adult sites,” Stabile said. “We want a signal that tells us when someone is a minor. It’s the easiest, most effective way to block minors and doesn’t force adults to submit to biometrics every time they visit a website.”

Last month, Pornhub sent letters to Apple, Google, and Microsoft urging them to enable device-level age assurance for web platforms. Those letters referenced AB 1043 directly.

Venzke said the ACLU is watching these discussions closely, especially when it comes to privacy implications.

Will device-level age assurance catch on?

Whether tech giants will embrace the idea is still an open question. Microsoft declined to comment. Apple pointed to recent updates around under-18 accounts and a child-safety white paper stating, “The right place to address the dangers of age-restricted content online is the limited set of websites and apps that host that kind of content.”

Google struck a similar tone, saying it’s “committed to protecting kids online,” and highlighted new age-assurance tools like its Credential Manager API. At the same time, it made clear that certain high-risk services will always need to invest in their own compliance tools.

Torney thinks the future probably isn’t either-or. A layered system, where both platforms and operating systems share responsibility, may be unavoidable. “This has been a little bit like hot potato,” he said.

No system will ever be perfect. That part’s worth admitting out loud. “But if you’re operating from a vantage point of wanting to reduce harm, to increase appropriateness, and to increase youth wellbeing,” Torney said, “a more robust age assurance system is going to go much farther to keep the majority of teens safe.”

And maybe that’s the real shift here — moving away from blunt tools that scare adults and don’t stop kids, toward something quieter, smarter, and a little more honest about how people actually use the internet.


Oh Good, Warning Labels Are Back Again by Stan Q. Brick


Good news, everyone: The Nanny State is back and coming to a computer screen near you!

In fact, if you live in Washington state or Missouri, the Nanny State is coming to a computer screen very near you indeed, because it will be your own computer’s screen. Or smartphone screen, or smart watch screen, or pretty much any other screen you can connect to the internet.

As you may have read here on The War on Porn or elsewhere, both states currently are considering bills which would not only impose age verification requirements on adult websites but would require such sites to publish warning notices about their content, as well.

The Washington bill is the murkier of the two, stipulating that the warning labels to come are “to be developed by the department of health.” The Missouri bill, on the other hand, is quite specific indeed.

The legislation being pondered in Missouri would require sites to publish warnings stating that “Pornography is potentially biologically addictive, is proven to harm human brain development, desensitizes brain reward circuits, increases conditioned responses, and weakens brain function;” that “exposure to this content is associated with low self-esteem and body image, eating disorders, impaired brain development, and other emotional and mental illnesses;” and finally that “pornography increases the demand for prostitution, child exploitation, and child pornography.”

To say that these claims are disputed would be to put it mildly. Most of the evidence for these assertions is anecdotal in nature, in part because it’s very difficult to evaluate them without intentionally exposing a group of minors to pornography (which is illegal to do) in the context of clinical study.

Regardless of their basis in fact (or lack thereof), these labels are what attorneys and Constitutional scholars call “compelled speech,” something which is a bit of a no-no under First Amendment jurisprudence and the appropriately named “compelled speech doctrine.”

As explained by David L. Hudson Jr., writing for the Free Speech Center at Middle Tennessee State University, the compelled speech doctrine “sets out the principle that the government cannot force an individual or group to support certain expression.”

“Thus, the First Amendment not only limits the government from punishing a person for his speech, but it also prevents the government from punishing a person for refusing to articulate, advocate, or adhere to the government’s approved messages,” Hudson adds.

The compelled speech doctrine has been invoked by the Chief Justice John G. Roberts-era Supreme Court as recently as the case Rumsfeld v. Forum for Academic and Institutional Rights.

“Some of this Court’s leading First Amendment precedents have established the principle that freedom of speech prohibits the government from telling people what they must say,” Roberts wrote for the Court in 2006.

When some folks hear about these labels, they doubtless say to themselves something like “How is this any different from requiring cigarette packages to carry warning labels?” And that would be a good question, if cigarettes were a form of speech that presumptively enjoys protection under the First Amendment.

Beyond that distinction, there’s another obvious difference here. Cigarettes, unlike pornography, have been subjected to extensive clinical study, research which has confirmed that nicotine is addictive, and that tobacco (along with the myriad other substances found in cigarettes) is strongly associated with the development of lung cancer and various cardiopulmonary disorders and diseases.

In short, the analogy between pornography and cigarettes is a terrible one, scientifically and legally.

There was a time when I would have very confidently asserted that the Supreme Court would eventually reject these warning labels as textbook compelled speech and shoot down at least the labeling requirements in the bills pending in Washington and Missouri. But after the court’s decision in Free Speech Coalition v. Paxton, I’m not so sure.

For those who like the contours of our First Amendment just the way they are, this uncertainty should be even more alarming than the warning labels the Nanny State wants us to start seeing on porn sites.


Missouri Becomes the Latest State to Treat Online Adults Like Children by Stan Q. Brick


Citizens of Missouri who frequent adult websites will find the internet has changed for them when they wake up this Sunday morning, towards the end of the long Thanksgiving weekend.

Why will the internet be different for citizens of Missouri as of that morning? Because Sunday is November 30, the day the state’s new age-verification mandate begins for websites covered by the “Missouri Merchandising Practices Act.”

Under the law, websites on which a “substantial portion” of the content (33% or more) is deemed “pornographic for minors” must employ a “reasonable age verification method” to assure anyone accessing such content is an adult.

On its face, requiring adult sites to verify the age of their visitors may not seem like such an unreasonable proposition. But, as the saying goes, “the devil is in the details.”

For starters, making adults jump through hoops to enter a brick-and-mortar adult video store, or requiring people to show ID when purchasing a porn mag at a convenience store, is one thing; storing and cross-referencing their personally identifying information is quite another.

When a clerk at an adult shop or any store that sells age-restricted materials checks your ID, they look at it, they look at you, they check the date of birth listed on the ID document and then you both get on with your lives. Minutes later, that same clerk probably couldn’t tell you much about the customer they’d just served, other than “I checked his ID, it looked legit and he’s 55 freaking years old, dude.”

When I scan my ID at the behest of an age-verification provider…who the fuck knows what happens to that data? Sure, some of these state laws prohibit vendors from storing and sharing that data, but do you trust them to follow the law? How many times do we need to haul tech companies before Congress (or watch them get fined by the FTC) to admit they interpreted the law in some “nuanced” way that permits them to hold on to and use our personal data before we get wise to their sneaky ways?

The data collected by age-verification services is valuable to them. They aren’t going to abstain from using it in every profitable way possible, regardless of what the law says. They will find ways to interpret the law such that they can sell, rent out or permit third-party cross-referencing of the data, mark my words. Some of these companies won’t be domiciled in the United States – and they will give just about as big a shit about U.S. law as any other business located outside the jurisdiction of the U.S. does, accordingly.

Of course, none of this will bother the politicians who pass these laws, because this isn’t about protecting kids – and it sure as hell isn’t about protecting the privacy of adults who like to watch porn. This is about a larger antipathy towards adult entertainment and a desire to discourage anyone and everyone from looking at porn, not just minors.

Consider what Missouri Attorney General Catherine Hanaway had to say in September about the new law in her state: “We are holding powerful corporations accountable, respecting women and victims of human trafficking, and helping ensure that minors are shielded from dangerous, sexually explicit material.”

Notice that the bit about “helping ensure that minors are shielded” comes last on the list? That’s not a coincidence.

Someone also needs to explain to me how making people show ID at the door when they watch porn is in any way helping “women and victims of human trafficking.” Let’s assume a person has been trafficked for the purpose of performing in porn (something that truly doesn’t happen often at all, despite a constant stream of political rhetoric to the contrary); how does making viewers confirm they’re old enough to watch legal porn help anyone who has been forced into making illegal porn?

The word “trafficking” doesn’t appear in the text of Missouri’s new law. What does appear there is the claim “nothing in this proposed rule limits the ability of adults to view sexually explicit material online,” which is technically true, so long as one doesn’t consider an age-verification requirement a “limit” to any of the adults who would prefer not to hand over the personally identifying information to God-knows-who.

When the Supreme Court ruled in favor of Texas in the challenge to that state’s age-verification mandate, Cecillia Wang, the national legal director of the American Civil Liberties Union, said something that strikes me as being just as true with respect to the Missouri law:

“The legislature claims to be protecting children from sexually explicit materials, but the law will do little to block their access and instead deters adults from viewing vast amounts of First Amendment-protected content.”

She’s right – and the list of adults deterred by such laws is only going to get longer as these laws proliferate.

Welcome to the dumbed-down internet. Please be mindful of the language you use herein; some of your readers might be children!


Aylo Pushes Tech Giants to Adopt API-Driven Device Age Verification


Something interesting happens when big tech companies get a polite nudge from a company they usually keep at arm’s length. That’s exactly what Aylo — the parent company of Pornhub — just did. The company asked Google, Apple, and Microsoft to open the door to API signals that would let platforms verify a user’s age at the device or operating-system level. The goal? Keeping minors off porn. It’s a request that feels both obvious and strangely overdue, considering how much of the internet already runs through those devices.

Wired revealed last week that Anthony Penhale, Aylo’s chief legal officer, sent separate letters on Nov. 14 to the relevant executives at each company. Those letters were later confirmed by Aylo, whose spokesperson provided them for review.

Aylo has been steadily pushing the idea that age verification should happen at the device level — not slapped awkwardly onto individual sites through clunky pop-ups and ID uploads. It’s a stance that puts the company at odds with most state and regional age-gating laws in the U.S. and E.U., which still rely on site-level verification. Meanwhile, Google, Apple, and Microsoft have been sending mixed signals about how far they’re willing to go with device-based checks.

Most recently, California’s governor, Gavin Newsom, signed a bill requiring age verification in app stores. Google, Meta, and OpenAI endorsed the measure, while major film studios and streaming platforms pushed back, calling the law a step too far.

“We strongly advocate for device-based age assurance, where users’ age is determined once on the device, and the age range can be used to create an age signal sent over an API to websites,” Penhale wrote in his letter to Apple. “Understanding that your Declared Age Range API is designed to ‘help developers obtain users’ age categories’ for apps, we respectfully request that Apple extend this device-based approach to web platforms.”

“We believe this extension is critical to achieving effective age assurance across the entire digital ecosystem and would enable responsible platforms like ours to provide better protection for minors while preserving user privacy,” he added.

Penhale’s letters to Alphabet and Microsoft echoed the same ask: allow website operators — not just app developers — access to the age-related API tools each company already uses within its own ecosystem.

“As a platform operator committed to user safety and regulatory compliance, Aylo would welcome the opportunity to participate in any technical working groups or discussions regarding extending the current age signal functionality to websites,” Penhale wrote in the letter sent to Microsoft.
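No such device-to-website signal exists today, which is exactly what these letters are asking the OS makers to change. Purely as an illustration of what the receiving end might look like, here’s a hypothetical sketch of a site honoring a device-supplied age-range header; the header name and its values are invented for the example, not drawn from any of the companies’ APIs:

```python
# Hypothetical sketch of a website consuming a device-provided age-range
# signal. The "Sec-Age-Range" header and its values are invented here;
# no OS or browser sends such a header to websites today.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AgeGatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        age_range = self.headers.get("Sec-Age-Range")  # e.g. "18_plus", "13_17", or absent
        if age_range == "18_plus":
            status, body = 200, b"Age-restricted content served."
        elif age_range is None:
            # No device signal: fall back to whatever site-level check applies.
            status, body = 403, b"No age signal; site-level verification required."
        else:
            status, body = 403, b"Blocked: device reports the user is a minor."
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AgeGatedHandler).serve_forever()
```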

A Google spokesperson told Wired that Google Play doesn’t “allow adult entertainment apps” and that “certain high-risk services like Aylo will always need to invest in specific tools to meet their own legal and responsibility obligations.” In other words, Google’s not eager to widen the gates.

Developer documentation shows that Apple now turns on content controls by default for new devices registered to under-18 users. Microsoft, for its part, has leaned heavily toward service-level verification — meaning platforms should handle their own age checks rather than relying on the device.

All of this is unfolding while Aylo continues to argue that site-level age verification doesn’t work. The company has pointed to real-world examples of how these systems push users off regulated sites and into murkier, unmonitored corners of the web.

Internal data shows that traffic from the U.K. to Aylo’s platforms dropped more than 77 percent after Ofcom began enforcing new rules under the Online Safety Act. Related documents reviewed privately indicate that users didn’t disappear — they simply migrated to non-compliant, unregulated sites.

At the same time, a court in Germany just offered Aylo a temporary lifeline. On Nov. 19, the Administrative Court of Düsseldorf put a hold on new regulations requiring ISPs to block Pornhub and YouPorn entirely.

Those blocking orders would have forced ISPs like Deutsche Telekom, Vodafone, and O2 to bar access to the sites under Germany’s age verification laws. For now, those requirements are on pause while the High Administrative Court of North Rhine-Westphalia works through appeals on the original network-ban orders.

Interestingly, the Düsseldorf court pointed out that Germany’s enforcement approach under the Youth Media Protection Interstate Treaty contradicts the European Union’s Digital Services Act, which outlines a different vision for age verification.

Aylo is still fighting over its designation as a “very large online platform” under the DSA — a label that brings intense regulatory scrutiny and a long list of compliance demands. The company’s push for device-based age checks is part of that bigger battle, and it’s hard not to notice the irony: the company everyone expects to resist regulation is the one asking for the kind that might actually work.
