The War on Porn

Arcom Moves to Block or Delist Adult Sites Over Age-Verification Failures

PARIS — There’s a particular kind of chill that runs through an industry when the letters stop being polite reminders and start sounding like countdown clocks.

Earlier this month, France’s digital watchdog Arcom quietly moved from warning shots to something sharper. In a statement released Tuesday, the agency confirmed that, at the beginning of December 2025, it sent enforcement letters to three adult websites it believes were ignoring the country’s age-verification requirements under the Security and Regulation of the Digital Space (SREN) law.

A few weeks passed. Enough time to fix things. Enough time to at least try. Two of the sites didn’t.

So now the tone has changed. Arcom has issued formal notices to those two operators, giving them 15 days to comply with the law or risk being blocked and/or delisted entirely. Fifteen days isn’t much time in tech, but it’s a lifetime in regulatory terms. It’s the kind of deadline that makes inboxes sweat.

The third site isn’t off the hook either. Arcom says it plans to work directly with that operator to evaluate whether its age-verification solution actually does what it claims to do. Not just ticking a box, but functioning in the real world, where friction, privacy, and compliance collide.

Notably, the agency didn’t name the websites involved or disclose where they’re based. That silence feels intentional. This isn’t about shaming specific players; it’s about setting a precedent. The statement frames the move as part of Arcom’s already-telegraphed plan to widen enforcement beyond the biggest platforms and start pulling smaller adult sites into the compliance spotlight.

It’s a reminder that flying under the radar isn’t a strategy anymore. The radar got better.

Click Here to Keep Clicking Here, So You Can Click There (Eventually) by Stan Q. Brick

I was on the lookout for something to write about. “I know,” I thought, “I’ll see what the latest news is to come out of Ofcom, the UK’s regulatory authority for broadcast, internet, telecommunications and postal services!”

In the old days, days I remember with great fondness, I could have just typed Ofcom.org.uk into the nav bar on my browser and I’d be there, reading the latest from Ofcom. Not anymore – because now, even to read a drab, dull regulatory agency’s website, I must first satisfy a machine’s demand that I prove I’m human.

No big deal. Just a simple captcha test (one probably easily defeated by a sophisticated enough bot, tbh) and I’m on my way… sort of. Which is to say I would be on my way, except now I must read a disclosure about cookies, perhaps adjust some settings and then “accept” or “save” or “surrender to” those preferences, or whatever the verbiage might be.

This is using the internet now, apparently. Instead of “surfing” and the freedom of movement that term suggests, it’s more like navigating a joyless obstacle course, in which I’m required to verify my age and/or my very humanity as I hop from step to step.

I’m sure this seems to many people like an overstated complaint. “So what?” they might say. Why is it a big deal to verify minor details like your age, or to have your internet path blocked in one way or another, based largely on where you live and where the site you’re accessing is located?

People used to call the internet the “information superhighway.” While this was an admittedly irritating buzz phrase, the term did at least capture the sense that the internet was something largely unfettered, where data, entertainment, information, misinformation and all manner of expressive content was available to all those able to access it.

Now, despite the fact I’ve been an adult for nearly 40 years, every time I turn around while online, I’m being asked to verify the fact of my adulthood anew. (Yes, I do visit a lot of porn sites; it sort of comes with the territory of – you know – working in and writing about the online porn industry.)

I understand a lot of people are hot to make the internet “safer,” but to me, this desire betrays an ignorance of what the internet is – or if not an ignorance of its nature, a stubborn desire to transform the internet to something else. But the internet, whatever else it might be, is a massive computer network about which the best thing has always been the worst thing, as well: Virtually anyone can publish virtually anything on it.

Slap as many age gates and regulations as you’d like on a massive, global computer network; you’re still just engaging in an endless game of whack-a-mole. Ofcom themselves reported that after adult sites were required to employ “Highly Effective Age Assurance” (HEAA) methods, VPN usage in the UK more than doubled, “rising from about 650k daily users before 25 July 2025 and peaking at over 1.4m in mid-August 2025.”

Ofcom is undeterred by numbers like these, of course. Their inevitable answer will be to impose restrictions on VPN use. Because like any government regulatory agency, if there’s one thing Ofcom will not be able to tolerate, it will be the sense they can’t control that which is in their remit to tame.

Speaking of Ofcom, when I did finally satisfy their system that I’m a human who doesn’t want to spend a lot of time deciding which cookies he does and doesn’t want attaching to his browser, what I found was an explanation of – and almost an apology for – the upper limit of the agency’s regulatory reach with respect to AI chatbots.

After stating with apparent pride that Ofcom was “one of the first regulators in the world to act on concerning reports of the Grok AI chatbot account on X being used to create and share demeaning sexual deepfakes of real people,” Ofcom goes on to explain that “not all AI chatbots are regulated” by the agency.

“Broadly, the Online Safety Act regulates user-to-user services, search services and services that publish pornographic content,” Ofcom explained. (They don’t say so, but just for your edification, this limited scope is due to sexually explicit depictions being awful, youth-corrupting and inherently sinister, while depictions of people getting shot in the head or beaten bloody with lead pipes are fine.)

On the other hand, “AI chatbots are not subject to regulation if they… only allow people to interact with the chatbot itself and no other users (i.e. they are not user-to-user services); do not search multiple websites or databases when giving responses to users (i.e. are not search services); and cannot generate pornographic content.”

Ofcom ends its notice with a how-to guide on reporting anything you find online “that you think might be harmful or illegal.”

I’d try reporting Ofcom’s website itself for harmful content, because I sure feel like I’m getting dumber just by reading it… but I suspect to execute this vengeful little practical joke, I’d have to pass at least three captcha tests, verify my age seven times and produce some manner of HCPN (“Highly Compelling Proof of Netizenship”).

You know what? I think I’ll just read a book. So far as I’m aware, I’m not required to present ID to grab an old tome off the shelves in my study… yet.

FTC Inches Closer to a New ‘Click to Cancel’ Subscription Rule

It always seems to start the same way: you notice a charge you don’t recognize, scroll through your bank app, and realize—again—that you’re paying for something you thought you canceled months ago. That frustration is the backdrop as the Federal Trade Commission once more steps into the messy, bureaucratic maze of subscription rules, trying to revive its long-stalled effort to rein in negative option plans after a federal court knocked down its last attempt.

In a statement released Friday, the FTC said it has submitted a draft Advance Notice of Proposed Rulemaking, or ANPRM, on its Negative Option Rule to the Office of Information and Regulatory Affairs. OIRA, which sits inside the Office of Management and Budget, now gets to scrutinize the proposal before the FTC can publish it in the Federal Register. Only then does the public get a say—one more round of comments, one more chance for consumers to vent about subscriptions that refuse to die.

The commission’s vote to approve sending the draft to OIRA was unanimous, though that unanimity comes with an asterisk. The FTC currently has only two sitting commissioners, leaving three seats empty. One of those two, Chairman Andrew N. Ferguson, had previously voted against the updated Negative Option Rule when it narrowly passed in October 2024. It’s a strange kind of consensus, the sort you get when the room is half-empty.

That earlier rule didn’t survive long anyway. The U.S. Court of Appeals for the 8th Circuit vacated it while further review plays out, siding with critics who argued the agency overstepped its authority and skipped required procedural steps by failing to issue a preliminary regulatory analysis. In regulatory terms, it was less a slap on the wrist and more a reminder that process still matters—even when intentions are good.

Back in December 2025, the FTC also posted a petition for rulemaking from the Consumer Federation of America and the American Economic Liberties Project. The public comment window on that petition closed Jan. 2, quietly adding another layer of pressure and paperwork to an already complicated path forward.

The Negative Option Rule itself isn’t new. It dates back to the 1970s, born in an era of mail-order clubs and surprise shipments, designed to stop consumers from being signed up—and billed—without clear consent. The 2024 amendments would have dramatically expanded its reach, covering nearly all negative option programs, from auto-renewing subscriptions to “free trial” offers that quietly flip into paid plans. For many websites, that would have meant rethinking how sign-ups work and, more importantly, how easy it is to cancel.

Now, with the process restarted yet again, the FTC could circle back with the same ideas, or something close to them. Whether this time leads to real change—or just another loop through regulatory limbo—remains the open question hanging over every “Cancel subscription” button that somehow never quite does what it promises.

Conservative Push for Porn Taxes Sparks Constitutional Backlash

It feels like the walls are closing in a little more every week. As age-verification laws continue to reshape—and in some cases dismantle—the adult industry, a Utah lawmaker has now stepped forward with a bill that would slap a new tax on porn sites operating in the state. It’s the kind of proposal that makes you pause, reread the headline, and wonder how we got here so fast.

Introduced by Republican state senator Calvin Musselman, the bill would impose a 7 percent tax on total receipts “from sales, distributions, memberships, subscriptions, performances, and content amounting to material harmful to minors that is produced, sold, filmed, generated, or otherwise based” in Utah. If passed, it would take effect in May and require adult sites to pay an additional $500 annual fee to the State Tax Commission. According to the legislation, revenue from the tax would be directed to Utah’s Department of Health and Human Services to expand mental health support for teens.

A new strain of American conservatism is asserting itself more boldly, and lawmakers across the US are calling for tighter restrictions on adult content. In September, Alabama became the first state to introduce a porn tax—10 percent on adult entertainment companies—after passing age-verification mandates that require users to upload ID or other personal documentation before accessing explicit material. Pennsylvania lawmakers are also exploring a proposal that would tack an extra 10 percent tax onto subscriptions and one-time purchases from online adult platforms, despite already charging a 6 percent sales and use tax on digital products, two state senators wrote in an October memo. Other states have flirted with similar ideas before. In 2019, Arizona state senator Gail Griffin, a Republican, proposed taxing adult content distributors to help fund a border wall during Donald Trump’s first term. To date, 25 US states have enacted some form of age verification.

Efforts to criminalize sex workers and regulate the industry have been unfolding for years, accelerating alongside increased online surveillance and censorship. Yet targeted taxes have repeatedly stalled, in part because the legality of such measures remains deeply contested.

“This kind of porn tax is blatantly unconstitutional,” says Evelyn Douek, an associate professor of law at Stanford Law School. “It singles out a particular type of protected speech for disfavored treatment, purely because the legislature doesn’t like it—that’s exactly what the First Amendment is designed to protect against. Utah may not like porn, but as the Supreme Court affirmed only last year, adults have a fully protected right to access it.”

Utah, Alabama, and Pennsylvania are among 16 states that have adopted resolutions declaring pornography a public health crisis. “We realize this is a bold assertion not everyone will agree on, but it’s the full-fledged truth,” Utah governor Gary Herbert tweeted in 2016 after signing the resolution. Utah’s early response to the spread of adult content dates back to 2001, when it became the first state to establish an office focused on sexually explicit material by appointing an obscenity and pornography complaints ombudsman. The role—often referred to as the “porn czar”—was eliminated in 2017.

“Age restriction is a very complex subject that brings with it data privacy concerns and the potential for uneven and inconsistent application for different digital platforms,” Alex Kekesi, vice president of brand and community at Pornhub, said previously. In November, the company urged Google, Microsoft, and Apple to adopt device-based verification across app stores and operating systems. “We have seen several states and countries try to impose platform-level age verification requirements, and they have all failed to adequately protect children.” To comply with existing mandates, Pornhub has blocked access to users in 23 states.

Critics argue that age verification has never truly been about protecting children, but about quietly scrubbing porn from the internet. In 2024, a leaked video from the Centre for Climate Reporting showed Russell Vought, a Trump ally and Project 2025 coauthor, describing age-verification laws as a “back door” to a federal porn ban.

Platforms like OnlyFans and Pornhub have pushed sex work further into the mainstream, but they’ve also made it easier to monitor and police both performers and audiences. As states consider new taxes and penalties, it’s creators who are most likely to absorb the shock.

The cultural conservatism taking shape under Trump 2.0 is driven by a desire to punish sexual expression, says Mike Stabile, director of public policy at the Free Speech Coalition, the US adult industry’s trade association. “When we talk about free speech, we generally mean the freedom to speak, the ability to speak freely without government interference. But in this case, free also means not having to pay for the right to do so. A government tax on speech limits that right to those who can afford it.”

OnlyFans says it complies with all tax requirements in the jurisdictions where it operates, while creators remain responsible for their own tax obligations. Pornhub, which is currently blocked in Utah and Alabama, did not respond to a request for comment.

Douek points out that while states can regulate minors’ access to explicit material following the Supreme Court’s decision upholding Texas’ age-verification law, “a porn tax does nothing to limit minors’ access to this speech—it simply makes it more expensive to provide this content to adults.” A 2022 report from Common Sense Media found that 73 percent of teens aged 13 to 17 had viewed adult content online. Today, much of that exposure happens through social media platforms like X and Snap. A recent survey from the UK’s Office of the Children’s Commissioner found that 59 percent of minors encounter porn accidentally—up from 38 percent the year before—mostly via social feeds.

In Alabama, as would be the case in Utah, revenue from the porn tax is earmarked for behavioral health services, including prevention, treatment, and recovery programs for young people.

Last year, Alabama state representative Ben Robbins, the Republican sponsor of the bill, said adult content was “a driver in causing mental health issues” in the state. It’s a familiar claim among lawmakers advocating for a nationwide porn ban. While some studies suggest adolescent exposure to porn may correlate with depression, low self-esteem, or normalized violence, medical experts have never reached a clear consensus.

As lawmakers increasingly frame the issue around harm to minors, Stabile says it’s crucial to remember that adult content is not a special category outside the bounds of free expression. Courts have repeatedly struck down content-specific taxes as unconstitutional censorship.

“What if a state decided that Covid misinformation was straining state health resources and taxed newsletters who promoted it? What if the federal government decided to require a costly license to start a podcast? What if a state decided to tax a certain newspaper it didn’t like?” he says. “Porn isn’t some magical category of speech separate from movies, streaming services, or other forms of entertainment. Adult businesses already pay taxes on the income they earn, just as every other business does. Taxing them because of imagined harms isn’t just dangerous for our industry—it’s a dangerous expansion of government power.”

‘An Embarrassment’: Critics Slam UK’s Proposed VPN Age Checks

It started the way these things always seem to start lately—with a vote that felt small on paper and enormous everywhere else. Politicians, technologists, and civil society groups reacted with visible dismay after the House of Lords backed a move that would ban children from using VPNs and force providers to roll out age verification.

The backlash was swift. Wikipedia co-founder Jimmy Wales blasted the decision on X, calling the UK’s position an embarrassment. Windscribe CEO Yegor Sak had already summed up the idea as the “dumbest possible fix,” warning that forcing age checks on VPNs would set a deeply troubling precedent for digital privacy.

By Tuesday morning, the argument had spilled fully into the open. Online debate surged, with X logging more than 20,000 posts on the issue in just 24 hours—one of those moments where you can almost hear the internet arguing with itself.

Labour, Lords & VPN laws

Last week, the House of Lords voted in favor of an amendment to the Children’s Wellbeing and Schools Bill that would, in effect, bar anyone under 18 from using VPNs.

The proposal would require commercial VPN providers to deploy mandatory age assurance technology, specifically to stop minors from using VPNs to bypass online safety measures. It sounds tidy in theory. In reality, it opens a can of worms no one seems eager to fully acknowledge.

Notably, the government itself opposed the amendment. Instead, it has opened a three-month consultation on children’s social media use, which includes a broader look at VPNs and how—or whether—they should be addressed.

Political pushback

Even though the House of Lords has shown its hand, the proposal now heads to the House of Commons, where it’s expected to hit serious resistance from the Labour government.

If the Commons throws it out, as many expect, the Lords will have to decide whether to dig in and trigger a round of parliamentary “ping-pong” or quietly step aside.

Labour’s Lord Knight of Weymouth, who voted against the amendment, suggested there’s little appetite for a drawn-out fight. He told TechRadar that it’s unlikely politicians will “die in a ditch” over banning VPNs.

In his view, many lawmakers are chasing “something iconic” on child safety—something headline-friendly—rather than wading into the technical swamp that regulating VPNs would require.

That said, Knight didn’t dismiss the broader concern. He argued that regulator Ofcom “needs to do better” at enforcing existing safety laws and agreed that more should be done to protect children online, provided it’s handled “carefully.” That word—carefully—does a lot of work here.

Civil society’s response

Regardless of whether this particular amendment survives, one thing is clear: VPNs are under a brighter spotlight than ever, and not just in the UK.

In the United States, lawmakers in Wisconsin are pushing a bill that would require adult websites to block access from users connected via a VPN. In Michigan, legislators have floated ideas around ISP-level blocking of circumvention tools. Different routes, same destination.

Evan Greer, director of the US-based group Fight for the Future, warned that policies designed to discourage or ban VPN use “will put human rights activists, journalists, abuse survivors and other vulnerable people in immediate danger.”

Fight for the Future is running a campaign that lets users contact lawmakers directly, arguing in an open letter that the ability to use the internet safely and privately is a fundamental human right.

Back in the UK, a public petition is urging the government to reject any plan that would effectively ban VPNs for children.

The Open Rights Group has also been vocal, pointing out that detecting or banning VPN use isn’t realistically possible without resorting to what it calls an “extreme level of digital authoritarianism.”

Alabama Adds Notary Requirement to Adult Content Consent Forms

Something shifted in the room last week, and you could feel it before anyone said a word. The kind of tension that creeps up your spine when people realize a line has been crossed. Alarm is spreading across the adult industry over new record-keeping and consent rules tied to adult content production and user access in Alabama, and that anxiety boiled over after a series of legal panels at the AVN Adult Entertainment Expo in Las Vegas.

At the center of the panic is a sweeping state law that folds age verification, a statewide 10 percent “sin” tax on pornography, and a wider web of regulation into one heavy package. For producers, the most immediate shock is a new demand that written consent documentation be notarized by a notary public, as directed by the Alabama Secretary of State.

The legislation, House Bill (HB) 164, tucks this requirement into the state’s deceptive trade practices statutes, a placement that feels almost casual given how disruptive the impact could be.

The law reads, “Any commercial entity, before knowingly publishing or distributing a private image […] through an adult website, shall obtain written consent to publish or distribute the private image from every individual depicted in the private image.

“The written consent required by this section shall be signed by the individual depicted and sworn to by a notary public,” the law further decrees. “The commercial entity shall maintain records of the written consent for not less than five calendar years following the publication or distribution of the private image.”

What’s missing, conspicuously, is guidance. No major regulatory clarification has come down from the state, leaving the notary requirement sitting in a gray zone that some believe could be worked around. In theory, studios could hire an in-house notary licensed in Alabama or elsewhere, assuming that person can legally perform notarial acts. In practice, that’s easier said than done.

Even with workarounds, the pressure mounts. Creators, producers, and studios are already feeling the weight of added friction, cost, and uncertainty. Several platforms are now warning users about written consent requirements, and many have begun restricting—or outright blocking—users tied to Alabama IP addresses.

It’s a familiar move. Parent companies behind major platforms like Pornhub have already chosen to geo-block nearly half the country in response to age-gating laws. When compliance becomes a minefield, the simplest option is often to shut the gate entirely.

First Amendment attorney Corey Silverstein, who represents adult industry clients, didn’t mince words when discussing Alabama’s approach.

“One could argue that Alabama created one of the most aggressive age verification laws because it included mandatory health warnings,” he said. “Alabama’s law is so burdensome to website operators that it’s no surprise that they are simply closing their doors to Alabama content creators.

“This goes to the heart of my continued position that these laws have nothing to do with protecting minors,” he continued. “States like Alabama want to control what a person can view or publish and completely ignore the First Amendment. Sadly, Alabama enacted a law so burdensome that it has achieved its nefarious goal of eliminating content it deems unsuitable.”

Silverstein shared those concerns during a legal panel at the expo, alongside Lawrence Walters of Walters Law Group, as the room wrestled with a question no one seems ready to answer out loud: when regulation becomes this heavy, is the point compliance—or disappearance?

Alabama’s Latest Adult Content Law Pushes Creators Out, Not Toward Safety

Most adult creators didn’t need a push notification to feel it. The moment the news started circulating, it landed with a familiar weight: Clips4Sale has restricted access in Alabama after the passage of House Bill 164, a law that introduces notarized consent requirements for performers and platforms. The company frames the decision as compliance—necessary, even prudent. Creators read it differently. To many, it felt like the ground quietly disappearing beneath their feet.

Both interpretations can coexist. And maybe that’s the most unsettling part.

Legislation like Alabama’s is almost always sold as “protective.” The language is comforting, even noble—designed to reassure the public that something dangerous is being handled. But when you listen to the people living under these laws—performers, indie creators, small operators—the tone shifts. What comes through isn’t relief. It’s confusion. Anxiety. A creeping sense that they’re being legislated out of existence without anyone actually talking to them.

House Bill 164 didn’t arrive out of nowhere. It’s part of a broader pattern unfolding across the country, where states are targeting adult platforms through new consent rules, age checks, and documentation standards. On paper, they sound reasonable. In reality, they unravel fast.

What they create isn’t safety. It’s splintering.

A Law That Misses the Reality of the Industry

Adult performers aren’t operating without rules. They never have been. For decades, the industry has been bound by strict federal record-keeping requirements—ID verification, age documentation, signed releases. These systems already exist. They’re already enforced. They’re already audited. And they’re treated seriously because the penalties for failure are brutal.

Which is exactly why Alabama’s law sparked disbelief instead of reassurance.

Adult performer Leilani Lei cut through the noise on X by asking a simple question: do lawmakers actually understand what notarization does? A notary verifies identity and witnesses a signature. That’s the full job description. They don’t assess consent. They don’t evaluate content. They don’t make legal judgments. Requiring notarization doesn’t increase safety—it adds friction, expense, and logistical chaos.

Is a notary expected on every set? For every solo clip? For content created privately by independent performers in their own homes? These aren’t dramatic hypotheticals. They’re practical questions that expose how disconnected the law is from how adult work actually functions.

When laws ignore operational reality, compliance stops being ethical and starts being geographic. Platforms block states. Creators lose access. Income vanishes—not because of misconduct, but because following the rules becomes impossible.

When “Protection” Quietly Becomes Economic Damage

One consequence of laws like HB 164 rarely gets discussed out loud: money.

Adult creators aren’t faceless entities. They’re people paying rent, covering medical bills, supporting families. For many, digital platforms aren’t side hustles—they’re lifelines. When a state gets geoblocked, creators living there lose their audience instantly. When platforms restrict access, creators with fans in that state watch sales drop overnight.

Cupcake SinClair’s response on X captured the mood perfectly—not panic, but dread. Not fear of regulation itself, but fear of where this path leads. If these laws keep spreading—each state tweaking the rules just enough—what does the landscape look like in a year? Two years? Does access slowly shrink until it’s determined entirely by ZIP code?

That’s not protection. That’s erosion.

And while platforms like Clips4Sale may view geoblocking as the least damaging option on the table, the fallout doesn’t land on the platform. It lands on creators. The backlash reflects more than anger—it reflects a growing sense that major decisions are being made without creators in the room, reshaping livelihoods without alternatives or support.

From the creator’s side, these aren’t abstract compliance choices. They translate into fewer customers, lower visibility, and more instability in an already fragile industry.

The Patchwork Problem Everyone Pretends Isn’t a Problem

One of the most dangerous aspects of this legislative trend is how inconsistent it is.

Each state passes its own version of “protective” law, often without coordination, consultation, or technical understanding. The result is a patchwork of requirements no platform can realistically meet across the board. What’s compliant in one state may be illegal in the next.

For massive tech companies, patchwork laws are an inconvenience. For adult platforms—already operating under heavier scrutiny, higher fees, and greater risk—they can be fatal.

For independent creators, they’re destabilizing by design.

When lawmakers ignore the cumulative effect of these laws, compliance becomes less about doing the right thing and more about surviving. Platforms that can’t afford bespoke, state-by-state systems opt out entirely. Creators are left scrambling to adapt, relocate, or rebuild somewhere else.

Who Is Actually Being Protected Here?

Supporters of laws like HB 164 often speak in moral absolutes. They invoke exploitation, trafficking, consent—serious issues that deserve serious responses.

But when legislation refuses to distinguish between criminal behavior and lawful adult work, it ends up punishing the latter while barely touching the former.

Bad actors don’t notarize forms. They don’t operate transparently. They don’t comply with documentation requirements. Meanwhile, compliant creators and legitimate platforms absorb the cost of laws that don’t meaningfully address wrongdoing.

Protection that collapses under scrutiny isn’t protection. It’s performance.

A Future Built on Exclusion Isn’t a Fix

The adult industry isn’t asking for no rules. It’s asking for rules that reflect reality.

That means lawmakers engaging with performers, platforms, and legal experts who understand how consent, documentation, and digital distribution actually work. It means recognizing that piling on procedural hurdles doesn’t automatically make anyone safer—and that cutting off access often harms the very people these laws claim to defend.

If this trend continues unchecked, the future of adult content in the U.S. won’t look like reform. It will look like retreat. More geoblocking. More platform withdrawals. More creators pushed out of legitimate marketplaces and into less secure corners of the internet.

That outcome serves no one—not performers, not platforms, and not the public.

Until the conversation moves beyond slogans and starts grappling with consequences, laws like Alabama’s will keep feeling less like protection and more like disappearance.

FTC Comes Out in Favor of Age Verification at the Federal Level

It started the way a lot of policy shifts do in Washington—quietly, almost casually, with a few carefully chosen words that hinted at something much bigger. Federal Trade Commission commissioner Mark Meador, a figure aligned with a right-wing populist wing of the Republican Party, publicly backed age verification as a “better way” to shield minors from age-restricted online material that’s still protected by the First Amendment.

Meador has served on the Federal Trade Commission since his appointment and Senate confirmation in April 2025. His comments surfaced during an FTC-hosted workshop on January 28, a long, technical day that brought together experts and critics of age-verification laws and technology, including representatives connected to the adult-entertainment space.

But the room itself told another story. As previously reported, adult-industry companies and other key stakeholders—including the Free Speech Coalition—were conspicuously absent. According to an anonymous source, the workshop was planned with a built-in assumption that the industry had little credibility when it came to online safety for minors.

That exclusion didn’t go unnoticed. Several stakeholder groups said the adult industry was shut out not just from the panels, but from the planning process entirely. When you start deciding who gets a seat at the table—and who doesn’t—you’re already shaping the outcome.

“Age verification offers a better way—it offers a way to unleash American innovation without compromising the health and well-being of America’s most important resource: its children,” Meador said. “It is a tool that empowers rather than replaces America’s parents — really, I don’t know that we can afford to forego it.”

His enthusiasm didn’t come out of thin air. Meador’s position closely tracks with endorsements of age-verification laws and technology from conservative and far-right groups that have long opposed pornography and the companies that produce or distribute it. At the same time, even supporters admit the legal landscape is a mess—something that came up repeatedly in legal panels at AVN’s Adult Entertainment Expo in Las Vegas just last week.

Meador’s alliances are also telling. He has ties to prominent anti-pornography figures like Utah Senator Mike Lee and Texas Attorney General Ken Paxton. Lee has repeatedly introduced federal bills targeting adult platforms, often through age-verification mandates and penalties. Paxton, for his part, was sued by the Free Speech Coalition and other adult-industry companies over enforcement actions in Texas.

Right now, roughly half of U.S. states have age-verification laws on the books. Penalties can range from civil fines to criminal charges. During the FTC workshop, several speakers openly backed child-protection proposals pending in Congress, including the controversial Kids Online Safety Act and the SCREEN Act.

FTC chair Andrew Ferguson, also a Republican, echoed support for age verification as a means of complying with the Children’s Online Privacy Protection Act. Central to his view is the expectation that businesses deploy third-party age-verification tools—such as those offered by companies like Yoti and Incode—to prevent what he described as “innovative ways of breaking the law.”

All of this is unfolding inside an agency with an unusual power imbalance. Ferguson and Meador are currently the only two commissioners steering the FTC. Investigative reporting by Al Jazeera has noted that both men have expressed strong support for using regulatory authority to suppress certain forms of LGBTQ+ speech.

The story isn’t over. Federal age-verification policy is still taking shape across the FTC, the broader executive branch, and Congress. What’s clear already is that the debate isn’t just about technology or children—it’s about who gets heard, who gets sidelined, and how much privacy anyone is expected to give up along the way.

FSC Backs OpenAge’s AgeKey as a Privacy-First Age Verification Option

Something quietly important happened this week—one of those moments that doesn’t scream for attention but might change how a lot of people experience the internet going forward. The Free Speech Coalition threw its support behind the OpenAge Initiative and its flagship technology, AgeKey, calling it a rare attempt to meet age-assurance rules without turning privacy into collateral damage.

FSC is a nonprofit that advocates for the adult entertainment industry, a corner of the internet that sees hundreds of billions of visits every year and tends to feel regulatory pressure before almost anyone else does.

“We believe that device-based solutions are more effective than fragmented platform or site-specific approaches,” said Alison Boden, executive director of the Free Speech Coalition. “OpenAge and AgeKey offer a practical bridge between these models, allowing users to store a verified age result locally on the device and reuse it across multiple platforms without repeated verification or resubmission of sensitive data.”

She added, “This approach holds the promise of reduced friction and privacy risks that have undermined compliance with age-verification mandates, and provides a path that is affordable for large platforms, independent creators, and small businesses alike.”

At its core, AgeKey is a reusable, FIDO2 passkey-based age credential. It lets someone prove they meet an age requirement without ever handing over who they are—no names, no identity trail, no awkward oversharing just to get through a digital door.

Because it’s natively supported by major devices, operating systems, and browsers, AgeKey doesn’t ask users to download an app, register an account, or jump through extra hoops. Verifications can happen up to 95 percent faster than traditional age checks. And thanks to a double-blind architecture, neither the service provider nor the AgeKey issuer knows who the user is—or where they’re going online.
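For readers wondering what that browser-native support looks like in practice, here is a minimal sketch, in TypeScript, of the kind of call a passkey-backed check rides on. It is a generic WebAuthn/FIDO2 example, not AgeKey’s published API: the endpoint paths, the challenge format, and the yes/no age response are hypothetical placeholders.

```ts
// Hypothetical client-side flow for a passkey-backed age check. Only the
// navigator.credentials.get() call is the standard WebAuthn mechanism that
// FIDO2 passkeys use, built into modern browsers and operating systems; the
// "/age-check/..." endpoints and response shapes are placeholders.
async function proveAgeWithPasskey(): Promise<boolean> {
  // 1. Ask the (hypothetical) age-verification service for a one-time challenge.
  //    For sketch simplicity, assume it returns the challenge as an array of bytes.
  const res = await fetch("/age-check/challenge");
  const { challenge } = (await res.json()) as { challenge: number[] };

  // 2. Ask the platform authenticator for an assertion from a passkey already
  //    stored on the device. No app install or site account is involved; the
  //    user simply unlocks the device (biometric or PIN).
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: new Uint8Array(challenge),
      userVerification: "required",
    },
  })) as PublicKeyCredential | null;

  if (!assertion) return false;

  // 3. In a real flow, the assertion's response fields (clientDataJSON,
  //    authenticatorData, signature) would be sent back to the verifier, which
  //    checks the signature and returns only an over/under-age result, with no
  //    name or identity attached. That round trip is elided in this sketch.
  return true;
}
```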

The OpenAge Initiative itself is focused on building something bigger than a single tool: an interoperable, cross-industry, cross-platform framework for age assurance. Any platform or certified verification provider can adopt AgeKey and participate. Sites still decide which methods, providers, and recency rules they accept, while AgeKeys remain optional and free for users.

“OpenAge believes deeply in interoperability and reusability when it comes to age assurance and users’ own data,” said Julian Corbett, head of OpenAge. “Our mandate is to think first and foremost about what users’ needs and rights should be. This includes the right of children to receive age-appropriate experiences and protections from harmful content, and the right of adults to privacy and frictionless access online.”

For anyone watching the slow collision between regulation, privacy, and real-world usability, this is one of those developments worth sitting with for a moment. Not flashy. Just quietly ambitious.

App Meant to Help Users Quit Porn Leaked Their Masturbation Habits

There’s something quietly disturbing about discovering that a tool meant to help people wrestle with their most private habits accidentally left the blinds wide open. An app that claims to help users stop consuming pornography ended up exposing intensely sensitive personal data — the kind of stuff most people wouldn’t even admit to a close friend. Ages. Masturbation frequency. Emotional triggers. How porn makes them feel afterward. And tucked inside that data were a lot of minors, which makes your stomach drop a little when you really sit with it.

One user profile, for instance, listed their age as “14.” Their “frequency” showed porn use “several times a week,” sometimes up to three times a day. Their “triggers” were logged as “boredom” and “Sexual Urges.” The app had even assigned a “dependence score” and listed their “symptoms” as “Feeling unmotivated, lack of ambition to pursue goals, difficulty concentrating, poor memory or ‘brain fog.’” It reads less like analytics and more like a vulnerable diary entry — something that was supposed to stay locked away.

The app isn’t being named because the developer still hasn’t fixed the issue. The problem was uncovered by an independent security researcher who asked to remain anonymous. He first flagged it to the app’s creator back in September. The creator said he’d fix it quickly. That didn’t happen. The flaw comes from a misconfiguration in how the app uses Google Firebase, a popular mobile app development platform. By default, Firebase can make it surprisingly easy for anyone to become an “authenticated” user and access backend storage — the digital attic where all the private boxes tend to live if you’re not careful.
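To make that failure mode concrete, here is a minimal sketch in TypeScript using Firebase’s standard web SDK. It illustrates the general class of misconfiguration described above, not this app’s actual code: the project config values, the use of Firestore as the backing store, and the collection name are all assumptions.

```ts
// Minimal sketch (not the app's actual code): how a Firebase project that relies
// on "any signed-in user can read" rules ends up effectively public. The apiKey,
// projectId, and "users" collection below are hypothetical placeholders.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { collection, getDocs, getFirestore } from "firebase/firestore";

async function main() {
  // Firebase web/mobile config values ship inside every client build, so anyone
  // can extract them from an app bundle; they are identifiers, not secrets.
  const app = initializeApp({
    apiKey: "AIza-PLACEHOLDER",
    projectId: "placeholder-project",
  });

  // Step 1: become an "authenticated" user. If the project leaves anonymous
  // sign-in enabled, this is a single call (open email sign-up is a similarly
  // short flow).
  await signInAnonymously(getAuth(app));

  // Step 2: read whatever the security rules permit. A common permissive rule,
  //   allow read: if request.auth != null;
  // treats this throwaway session the same as a legitimate user, so records
  // belonging to other users can be listed.
  const snapshot = await getDocs(collection(getFirestore(app), "users"));
  console.log(`documents readable by an anonymous session: ${snapshot.size}`);
}

main().catch(console.error);
```

The safer pattern, by contrast, is to scope reads to the requesting user’s own records, for example by matching a document’s owner field against request.auth.uid in the rules, rather than granting access to any signed-in session.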

Overall, the researcher said he could access information belonging to more than 600,000 users of the porn-quitting app, with roughly 100,000 identifying as minors. That number lands heavy. It’s not abstract. It’s classrooms. It’s school buses. It’s kids who probably assumed they were talking into a void, not a wide-open window.

The app also invites users to write confessions about their habits. One of them read: “I just can’t do this man I honestly don’t know what to do know more, such a loser, I need serious help.” You can almost hear the frustration in that sentence — the messy spelling, the emotional spill. That’s not data. That’s a human having a rough night.

When reached by phone, the creator of the app said he had spoken with the researcher but claimed the app never exposed any user data due to a misconfigured Google Firebase. He suggested the researcher may have fabricated the data that was reviewed.

“There is no sensitive information exposed, that’s just not true,” the founder said. “These users are not in my database, so, like, I just don’t give this guy attention. I just think it’s a bit of a joke.”

When asked why he previously thanked the researcher for responsibly disclosing the misconfiguration and said he would rush to fix it, he wished me a good day and hung up. One of those conversations that ends abruptly, leaving a strange quiet buzzing in the room.

After the call, an account was created on the app. The researcher was then able to see that new account appear inside the misconfigured Google Firebase environment — confirmation that user information was still exposed and accessible. Sometimes reality has a way of answering arguments faster than any debate ever could.

This type of Google Firebase misconfiguration isn’t new. Security researchers have been talking about it for years, and it continues to surface today. It’s one of those problems that feels boring until it suddenly isn’t — until someone’s real life data is sitting out in the open.

Dan Guido, CEO of cybersecurity research and consulting firm Trail of Bits, said in an email that this Firebase issue is “a well known weakness” and easy to find. He recently noted on X that Trail of Bits was able to build a tool using Claude to scan for this vulnerability in just 30 minutes.

“If anyone is best positioned to implement guardrails at scale, it is Google/Firebase themselves. They can detect ‘open rules’ in a user’s account and warn loudly, block production configs, or require explicit acknowledgement,” he said. “Amazon has done this successfully for S3.” S3 is a cloud storage product from AWS that previously struggled with similar data exposure issues due to misconfigurations.

The researcher who uncovered the app’s vulnerability added that this insecure setup is often the default in Google Firebase. He also pointed a finger at Apple, arguing that apps should be reviewed for backend security issues before being allowed into the App Store.

“Apple will literally decline an app from the App Store if a button is two pixels too wide against their design guidelines, but they don’t, and they don’t check anything to do with the back end database security you can find online,” he said. It’s one of those comments that lands with an uncomfortable kind of truth — polished surfaces, shaky foundations.

Apple and Google did not respond to requests for comment.

And that’s the part that lingers. People trusted this app with their most awkward truths, their late-night regrets, their quiet attempts at self-control. Some of them were kids. They weren’t posting for an audience. They were whispering into what they thought was a locked room. Turns out the door was never really closed.
