Political Attacks

Mandatory Age Verification Is Creating a New Security Crisis by John Johnson – Cybersecurity Expert


There’s a quiet rule that’s floated around cybersecurity circles for years: don’t hold onto more data than you’re capable of protecting. Simple, elegant, almost parental in its logic — if you can’t safeguard it, don’t collect it.

But the world doesn’t care about that rule anymore.

Laws around identity and age verification are spreading fast, and they’re forcing companies—whether they’re ready or not—to gather and store the most intimate, high-risk documents a person can hand over. Passports. Driver’s licenses. National IDs. All the things you’d rather keep in your own pocket, not scattered across the servers of whoever happens to run the website you’re using.

And then something like the Discord breach happens.

In early October 2025, news broke of a data breach involving Discord. Not Discord’s internal systems—one of the partners handling support. Hackers got access to support-ticket data: names, emails, IP addresses, billing info, conversation logs… the usual mess. But tucked inside that mess was something far more sensitive: government-issued IDs.

These were collected for one reason: to prove a user was old enough to be there. To appeal an underage ban. And suddenly, the private documents people reluctantly handed over “just to get their account back” were sitting in someone else’s hands entirely.

The Trap These Laws Create

Discord didn’t wake up one day deciding it wanted a folder full of driver’s licenses. Companies aren’t hungry for that kind of liability. But regulators have been ramping up age-verification mandates, and the penalties for non-compliance are steep enough to make anyone comply.

You can see the logic in the laws. Protect kids. Keep platforms accountable. Reasonable goals.

But look closely at the side effects:

We’ve built a system where organizations must stockpile some of the most breach-sensitive personal data in existence — even when they have no business storing it, no infrastructure built to protect it, and no desire to be holding it at all.

The old rule of “collect as little as possible” dies the moment a legal mandate requires collecting everything.

One Breach Becomes Everyone’s Problem

And once a company becomes responsible for storing IDs, the risk spreads. Healthcare portals, schools, banks, e-commerce shops, SaaS platforms — anyone providing services to the general public could end up in the same situation.

Every new database of passport scans is a future headline waiting to happen.

And when it happens, the fallout isn’t just personal. It’s financial. Legal. Reputational. You lose customer trust once — and you don’t get it back.

For small companies, one breach can simply end the business.

The MSPs Get Pulled Into the Storm

Managed service providers—MSPs—don’t get to sit this one out. They inherit the problem from every client they support. One MSP breach doesn’t just hit one organization. It hits all of them at the same time.

And the typical MSP environment? It’s a patchwork quilt of tools stitched together over time:

  • One for backups

  • One for endpoint protection

  • Another for vulnerability scanning

  • A different one for patching

  • Another for monitoring

  • And maybe one more to try and tie it all together

Every tool is another doorway. Another password. Another integration that can fail silently. Another shadow corner where data can slip unencrypted or unmonitored.

In an age when MSPs are being asked to guard government IDs, medical files, financial records, and entire networks, you can’t afford those shadows.

The Fix Isn’t “More Tools” — It’s Fewer

The only real path forward is simplification.

Not by removing security controls, but by merging them. Consolidation. Native integration. One platform where backup, protection, monitoring, and recovery exist inside the same ecosystem, speaking the same language, managed from the same place.

When everything runs through a single agent with one control plane:

  • There are fewer gaps.

  • There are fewer weak handoffs.

  • There are fewer places for attackers to slip in unnoticed.

  • And the attack surface shrinks dramatically.

You trade chaos for clarity.

You trade complexity for protection.

The New Reality

That old cybersecurity rule—don’t collect more data than you can protect—wasn’t wrong. It’s just not optional anymore.

The Discord breach isn’t a one-off story. It’s a preview. A warning shot.

Organizations are being legally pushed into storing the exact type of data that attracts attackers the most. And MSPs are being put in charge of securing it at scale.

So the question shifts:

If you no longer get to choose how much data you collect…

you have to be very deliberate about how you protect it.

And that means rethinking the entire structure of how we secure systems—not by addition, but by alignment.

Because now the stakes aren’t abstract. They are literal: your identity, my identity, everyone’s identity.

And someone is always watching for the first loose thread.


How to Stay Legally Protected When Policies Get Outdated

Adult industry attorney Corey Silverstein talks about how to stay legally protected as an adult website owner. Here’s a summary of his article:

It feels like the adult industry just hit a hard reset. Age verification laws are no longer theoretical — they’re real, enforced, and expensive to ignore. And because every region wants something slightly different, the once-standard “one policy fits everywhere” approach is basically dead. If a site can’t explain exactly how it keeps minors out, it’s already behind.

At the same time, regulators and payment processors are demanding proof that every bit of content is consensual and monitored. The Aylo case didn’t accuse anyone of new wrongdoing, but it sent a clear message: it’s not enough to say you have safeguards — you need documentation, systems, records, and the ability to show them working. Old blanket model releases aren’t enough anymore. Consent now has to be specific, traceable, and ongoing.

And hanging over all of this is data privacy — the silent threat that can shut a company down overnight. GDPR and CPRA require clear deletion rights, consent controls, and minimal data collection. Most adult sites still haven’t updated their policies to match. The takeaway is simple: the old shortcuts are now risks. The companies that survive will be the ones who update before they’re forced to — not after.


Big Tech Pushes Back Against New Colorado Rules


There’s something almost surreal about the idea of a phone interrupting a teenager’s late-night scrolling to say, Hey, maybe this isn’t great for your brain. That’s what Colorado’s new law is aiming for: gentle on-screen nudges when minors have been on a platform for more than an hour, or when they’re using it between 10 p.m. and 6 a.m. The warnings would have to reference research on brain development and repeat every thirty minutes. It’s the kind of thing that could become background noise — or maybe it could give someone a moment to pause. Hard to say. The provision is supposed to roll out on January 1.
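To make the mechanics concrete, here is a minimal sketch, in Python, of the trigger logic as described above: more than an hour of use, or any use between 10 p.m. and 6 a.m., with warnings repeating every thirty minutes. Every name, type, and structural choice here is invented for illustration; the statute doesn’t prescribe any particular implementation.

```python
from datetime import datetime, timedelta

# Rules as the article describes them; all identifiers are hypothetical.
NIGHT_START, NIGHT_END = 22, 6          # 10 p.m. to 6 a.m.
SESSION_LIMIT = timedelta(hours=1)      # warn after an hour of use
REPEAT_EVERY = timedelta(minutes=30)    # then repeat every thirty minutes

def should_warn(is_minor: bool, session_start: datetime,
                now: datetime, last_warning: datetime | None) -> bool:
    """Decide whether to show (or repeat) an on-screen nudge."""
    if not is_minor:
        return False
    overtime = now - session_start > SESSION_LIMIT
    late_night = now.hour >= NIGHT_START or now.hour < NIGHT_END
    if not (overtime or late_night):
        return False
    # First warning fires immediately; repeats wait thirty minutes.
    return last_warning is None or now - last_warning >= REPEAT_EVERY
```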

But before any pop-ups appear, the law is already tied up in court. NetChoice, a group representing major social media platforms, has filed a lawsuit arguing that Colorado can’t require companies to deliver the government’s message on their own services. Their point is that social media isn’t just a product — it’s a place where communication happens, where speech happens — and the First Amendment doesn’t let the government compel speech, even in the name of public health. They note that other media industries have voluntary systems for guidance — movie ratings, for example — and that forcing platforms to present specific warnings crosses constitutional lines.

Supporters of the law say the goal is simply to help protect kids, especially given widespread concern about youth mental health. They argue that many parents want something in place to help guard against endless scrolling or disrupted sleep. Opponents respond that the science around social media’s effects is still developing — not nonexistent, but varied, with impacts that differ depending on the individual. Some researchers point to small but measurable effects; others note that social media can be a source of support, identity, creativity, or connection. So the debate becomes less about whether social media is “good” or “bad” and more about who gets to decide how platforms communicate risk — the state, the companies, or families themselves.

Similar legal battles are unfolding in other states, and courts haven’t landed on one clear answer yet. Some laws have been paused, others allowed to proceed while the challenges continue. Colorado’s case will likely turn on some very old constitutional questions — about speech, autonomy, and the limits of state power — applied to a very modern situation. And maybe that’s what makes this moment feel so unsettled: the technology is new, the stakes feel personal, and the rules are still being written.


Why is Ofcom trying to censor Americans?

Spiked’s Adam Edwards opines on the Online Safety Act in the UK.

The story centers on U.S. lawyer Preston Byrne, who represents the message board 4chan and is openly defying the UK’s Online Safety Act. When the UK regulator Ofcom issued 4chan a £20,000 fine, Byrne publicly mocked the demand and argued that British law has no legal power over companies and citizens who have no operations or assets in the UK. He views the Online Safety Act as an overreaching censorship regime and says Ofcom is trying to enforce rules outside its jurisdiction by sending threatening letters instead of going through proper international legal channels.

The Online Safety Act requires any online platform accessed by UK users—regardless of where the company is based—to submit risk assessments, reports, and censorship plans, under threat of fines or even jail for executives. While 4chan has refused to comply and likely faces no real consequences because it has no UK presence, larger American companies like Meta and Google do have substantial assets in Britain, making potential enforcement far more serious. This has sparked broader questions about sovereignty, free speech, and whether a foreign government can compel U.S. companies to restrict or monitor content.

To counter the UK’s moves, Byrne has launched both a legal challenge in U.S. federal court and proposed new U.S. legislation called the GRANITE Act, which would allow American companies to sue foreign regulators like Ofcom if they attempt to impose fines or censorship demands. If passed, it could effectively block foreign censorship attempts and even allow U.S. courts to seize foreign government assets in retaliation. Byrne argues that if the UK cannot force U.S. firms to comply, British lawmakers may eventually be forced to reconsider the Online Safety Act altogether.


Appeals Court Clears Tennessee to Begin Enforcing Age Verification Law


There are court decisions that land with a dull thud, and then there are the ones that feel like a door has quietly been locked behind you. This week’s move from the U.S. Sixth Circuit Court of Appeals sits firmly in the second category.

A three-judge panel just wiped away a lower court’s injunction that had been blocking Tennessee from enforcing its age verification law for adult sites. They didn’t do it with loud fanfare. They simply pointed to a recent Supreme Court decision and said, essentially, their hands are tied. And for anyone who works in digital adult spaces — creators, viewers, small platform operators — the shift is significant.

The law in question sits under the Protect Tennessee Minors Act (PTMA), which requires commercial adult platforms to verify the age of anyone trying to access sexual content. Tennessee lawmakers didn’t write this in a vacuum — they modeled it after Texas’ HB 1181, the same law that’s been bouncing through courts for over a year now. And with the Supreme Court’s ruling in Free Speech Coalition et al. v. Paxton, Tennessee suddenly found a legal green light where, a week earlier, there’d been a flashing red.

The Sixth Circuit judges put it plainly: “[The] Supreme Court has upheld a Texas statute that the Court described as ‘materially similar’ to the one at issue here.”

That line is the hinge. The before and after.

Once the highest court signed off on Texas’ law, Tennessee simply adjusted its own definitions to match. The panel noted that “the state has also since amended the PTMA, the Attorney General says, to track ‘the definitional language at issue in Paxton almost word for word.’”

Meaning: Tennessee rewrote its law to be legally bulletproof — or at least bullet-resistant — by mirroring the one that survived at the highest level.

And that’s where things get complicated, even surreal, depending on who you ask.

Earlier versions of the PTMA had been criticized for being vague, especially in how the state defined “obscenity” and “harmful to minors.” Those categories aren’t just slippery — they’re historically weaponized. What counts as obscene in Nashville might pass without a blink in New York, or Berlin, or literally any corner of the internet where adults talk to each other freely. Lawmakers in Tennessee tightened those definitions after the Supreme Court’s decision, lifting language almost directly from the Texas statute that the justices allowed to stand.

The result? A clearer legal blueprint — but one that still treats the adult internet like a gated amusement park, complete with bouncers and ID scanners.

The Sixth Circuit didn’t rule that age verification is harmless or good policy. They just said that, for now, the district court’s injunction can’t stand while appeals continue. In legal speak, the fight isn’t over. In reality, the pressure just shifted to the people who will have to comply immediately.

And that’s where the human story sits:

  • The solo performer who runs her own website.

  • The queer couple documenting intimacy as art.

  • The niche platforms built by two founders and a rented server rack.

  • The teenager who grew up online and suddenly hits a digital checkpoint that assumes danger instead of curiosity.

There’s a narrative underneath these rulings that rarely fits into the court filings:

This isn’t just about sex. It’s about who gets to speak, who gets to control, and who gets to define what is “harmful.”

Tennessee made its move quietly, in the wake of a louder battle in Texas. But the message travels. Other states are already watching, already drafting, already preparing to photocopy the language line by line.

Sometimes the law doesn’t roar.

Sometimes it whispers.

But the effect can be the same.

And the internet is listening.


Free Speech Coalition Rolls Out New Age Verification Toolkit for Adult Platforms


Something’s always shifting in the world of compliance — especially when it comes to how adult sites handle age verification. The Free Speech Coalition (FSC), the legal advocacy nonprofit that often ends up doing the industry’s heavy lifting, just rolled out an updated version of its Compliance With U.S. Age Verification Laws: A Toolkit for Adult Websites.

The group explained on its website that the updates were necessary, given how quickly the legal ground keeps moving. What used to be “best practices” a few months ago can suddenly look outdated once new state laws or attorney general actions land on the table.

“Today, we are releasing an updated edition reflecting new legal developments and feedback from stakeholders who’ve put the toolkit into practice,” the post read.

And it’s not a light revision, either. “The key updates in this version include the final language and analysis of the Missouri age verification regulation taking effect November 30th and inclusion of recently-filed attorney-general actions, regulatory notices, and litigation related to age-verification laws,” FSC added.

That sense of urgency runs through every line of the update. “FSC’s guidance to our members continues to develop as state requirements and enforcement actions evolve,” said Alison Boden, executive director of the Free Speech Coalition. “Staying up-to-date is vital as companies make decisions about compliance.”

For those who actually need to keep their sites out of trouble (and their users protected), the new version of Compliance With U.S. Age Verification Laws: A Toolkit for Adult Websites is available for download at FreeSpeechCoalition.com/toolkit.


France Launches Investigation Into Shein, Temu, and AliExpress Over Youth Porn Exposure


It’s one thing for a brand to go viral for its prices — it’s another to land under government investigation. France’s Finance Minister, Roland Lescure, just put fast-fashion giant Shein on notice after a watchdog found “child-like” sex dolls being sold on the site. Reuters broke the story, and Lescure didn’t mince words: he called the products “illegal.”

He went even further. “For terrorist acts, drug trafficking, and child pornography, the government has the right to request banning access to the French market,” he said. The threat hit just as Shein opened its first permanent retail store — a glitzy shop in central Paris inside the historic BHV department store. It’s the kind of irony that writes itself.

But this isn’t just about one brand. French regulators are now investigating whether Shein — along with Temu, AliExpress, and Wish — has allowed minors to access pornographic content through their platforms. That’s not just scandalous; it’s illegal. The country’s age-verification laws are strict, and these platforms may have crossed the line.

France’s consumer watchdog, the Directorate-General for Competition, Consumer Affairs and Fraud Control, issued an advisory explaining that “the e-commerce site Shein was selling child-like sex dolls.” They didn’t sit on it — the listings were reported to a public prosecutor.

The agency added that “these activities have been reported to ARCOM, the competent regulatory body in this area, and, in agreement with the public prosecutor, a report has been filed with the platform, urging it to implement appropriate measures promptly.” In other words, this is no warning shot — it’s an official escalation.

Quentin Ruffat, Shein’s head of public affairs in France, tried to strike a cooperative tone when speaking to local radio, as reported by Reuters. He said the company was sharing information with investigators, including the names of vendors and buyers.

“We are in the process of sacking all the offending vendors from the platform,” Ruffat said. Meanwhile, Lescure confirmed he’d submitted a report to ARCOM, noting that Shein qualifies as a “very large online platform” under the European Union’s Digital Services Act — meaning, yes, it’s squarely in the regulators’ crosshairs.

AliExpress, another major e-commerce player, isn’t escaping scrutiny either. It’s being investigated for allegedly distributing pornographic images or depictions of minors — a charge that can lead to five years in prison.

It’s worth remembering that these platforms — Shein, AliExpress, Temu — are backed by massive Chinese corporations, while Wish belongs to a Singaporean parent company. They’ve built empires on accessibility and affordability. But as France is reminding them now, there’s a line between disruption and disregard — and crossing it can get very expensive.


Quick Look: The Status of Age Verification Laws Across the U.S. by Morely Safeword


As you’re likely aware, since you’re reading this site, recent years have seen a proliferation of new state laws across the United States requiring adult websites to verify the age of users before displaying any content that may be deemed “harmful to minors.”

After the recent Supreme Court decision in Free Speech Coalition v. Paxton, in which the court upheld the age verification mandate passed by the Texas legislature, similar laws in other states are now clearly enforceable. With Missouri’s law poised to take effect later this month, it’s a good time to remind ourselves of the states that have (and haven’t, yet) passed similar laws.

The states with active age verification mandates in place include Alabama, Arizona, Arkansas, Florida, Georgia, Idaho, Indiana, Kansas, Kentucky, Louisiana, Mississippi, Montana, Nebraska, North Carolina, North Dakota, Ohio, Oklahoma, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia and Wyoming. And as mentioned earlier, Missouri will soon join this list.

Quite a few states have not yet passed age verification laws, at least to date. Those states include Alaska, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Iowa, Maine, Maryland, Massachusetts, Michigan, Minnesota, Nevada, New Hampshire, New Jersey, New Mexico, New York, Oregon, Pennsylvania, Vermont, Washington, West Virginia and Wisconsin.

Several of the states that haven’t passed age verification laws have considered such proposals in the past and may do so again in the future, of course. Doubtless, there are at least some legislators in every state who favor these measures and are likely to introduce new bills at some point.

States that haven’t passed age verification laws but have debated them at some point include Colorado, Hawaii, Illinois, Iowa, Maryland, Michigan, Minnesota, Nevada, New Mexico, New York, Oregon and West Virginia.

For much more information on age verification laws around the country – and to keep track of new bills that would establish them in additional states – check out the Age Verification section of the Free Speech Coalition website.


Italy Orders Age Checks on Porn Sites, Signaling a New Crackdown on Digital Access


Rome has always been a city of contradictions — history and chaos, beauty and bureaucracy — and now it’s adding another to the list: sex and regulation.

This week, Italy’s media regulator, AGCOM, dropped a quiet but seismic announcement. Starting November 12, every platform that hosts adult content will be required to implement age verification systems — a digital checkpoint meant to keep users under 18 out of explicit spaces. On paper, it sounds simple: protect minors. In reality, it’s a bureaucratic earthquake waiting to happen.

The penalties for noncompliance? Up to €250,000. That’s not a slap on the wrist; that’s a knockout punch for smaller operators who barely make that much in a year. For the giants, it’s more of a warning shot — but one they can’t afford to ignore, especially in a country where digital privacy and moral politics are always in a tug-of-war.

AGCOM didn’t stop there. It also released a preliminary list of 45 adult content providers required to comply — a who’s who of the internet’s most-visited destinations. The message was clear: this isn’t theoretical. It’s happening. The list, the regulator says, will evolve based on how quickly platforms adopt the new rules. Translation? The watchdog is watching — and waiting to see who blinks first.

But what does this actually look like for users? Italy, like many countries flirting with digital ID systems, hasn’t laid out a clear method. Will people have to upload documents? Link to government-issued IDs? Use third-party verification apps that track their age (and maybe more)? No one knows yet, and AGCOM isn’t saying.

And maybe that’s the most Italian part of all this — the gray area between rules and reality. The intention is noble, sure: protect the young. But every time regulators try to police the internet’s most intimate corners, there’s collateral damage — privacy risks, data collection nightmares, and the quiet exodus of users to VPNs and underground mirrors of the same sites they’re trying to block.

For now, everyone’s waiting — platforms scrambling, lawyers reading fine print, users rolling their eyes. Because in the eternal theater that is Rome, even adult sites have to play their part in the latest act of digital morality.

Come November 12, the curtain rises. And whether this new performance turns out to be a tragedy, a farce, or a step toward something better — well, that depends on who’s still watching.


A 10% “Titty Tax”? Pennsylvania’s Strange New Plan to Profit From Porn


Pennsylvania lawmakers want to slap a 10 percent tax on porn.

The proposal targets “subscriptions to and one-time purchases from online adult content platforms.” Add that to the state’s existing 6 percent sales tax, and you get what the Free Speech Coalition’s Mike Stabile called the “tiddy tariff.” It’s a catchy name for something that sounds like a moral statement wrapped in a fiscal policy.
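For a rough sense of the arithmetic, here is a minimal sketch of how the stacked rates from the memo might land on a hypothetical $9.99 subscription. The memo doesn’t say whether the 10% would apply to the pre-tax price or compound on top of the 6% sales tax, so both readings are shown; the price and all variable names are invented for illustration.

```python
# Hypothetical illustration only: rates from the Flynn/Picozzi memo,
# example price invented for this sketch.
price = 9.99  # example monthly subscription

additive = price * (1 + 0.06 + 0.10)   # both taxes on the base price: 16% total
compounded = price * 1.06 * 1.10       # 10% levied after the 6% sales tax

print(f"additive:   ${additive:.2f}")    # $11.59
print(f"compounded: ${compounded:.2f}")  # $11.65
```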

Here’s the strange thing — almost nobody pays for porn anymore. The internet made sure of that. So taxing paid porn feels like setting up a toll booth on an abandoned road. You can’t collect money from traffic that’s already gone.

And if this plan actually discourages people from paying for porn, it could end up doing the opposite of what lawmakers claim to want. Paying for porn isn’t just about access — it’s about ethics. When viewers pay creators or production companies, they’re supporting people who work legally and consensually. They’re also helping make sure performers are paid and protected.

Platforms that allow direct payments to performers give sex workers something rare in this business — control. They decide what to shoot, how to do it, and where it goes. Reputable studios verify age and consent. All that takes structure and funding. Make it harder to earn money from ethical content, and you push people toward the shady, unregulated side of the web.

The bill comes from state senators Marty Flynn (D–Scranton) and Joe Picozzi (R–Philadelphia). “In the near future, we will be introducing legislation to impose an additional 10% tax on subscriptions to and one-time purchases from online adult content platforms,” they wrote in an October 15 memo. “This tax will be applied in addition to the Commonwealth’s existing 6% sales and use tax, ensuring that Pennsylvania captures revenue from this rapidly growing sector of the digital economy.”

What’s unclear is who would actually pay. Would it hit consumers directly, or the platforms and creators? Either way, the pain rolls downhill. Platforms pass costs to users. Users buy less. Creators — often independent and working without safety nets — earn less.

The money would go into the state’s general fund, supposedly to make these “platforms contribute their fair share.” That line always makes me raise an eyebrow. “Fair share” of what?

Maybe Flynn and Picozzi imagine this hitting only the big companies — the nameless giants raking in cash. But that’s not how the modern porn economy works. Much of it now comes from small creators: individuals or couples filming at home, uploading content, building communities, and surviving off direct sales. They’re entrepreneurs, not conglomerates.

So while the state gets a symbolic win and a few extra dollars, the people who actually make the content — the ones they’re claiming to regulate — will take the hit.

Taxing porn isn’t just about numbers. It’s about how we treat speech and labor we find uncomfortable. And no matter how you spin it, this tax looks less like fairness and more like judgment dressed up as revenue.
