
Australia’s Porn Age-Verification Law Sparks Debate Over Safety and Shift to “Darker Corners”

Something changed overnight — not just on adult sites, but in how people moved through the internet itself.

When major porn platforms began blocking Australians from access, it didn’t stop there. X also started requiring age checks before users could view adult content. And for some, that meant something far more intrusive: being asked to submit a video selfie just to look at a single post.

“Almost every post on my alt account has a content warning and asks me [for a] selfie for age verification,” one Australian porn consumer, Joe*, said. “It’s maddening.”

Others described pulling back entirely, choosing to walk away rather than comply.

“I’m honestly no longer engaging with any of the sites and platforms I used to use because not only is the verification process really invasive, but some of them even give you the option to sign in with Google … and that’s the last platform I’d trust with any sensitive data,” Jethro said.

“The choices are: link your perversions to your government ID, or submit your face into the AI slop machine,” Chris* said.

It’s still early days. Aside from several Aylo-owned sites like RedTube blocking Australians outright, and Pornhub limiting access to safe-for-work content for users who aren’t logged in, most of the top free adult platforms have yet to fully implement age verification.

Data from the SEO firm Semrush suggests only one site in the country’s top 20 — Thisvid — has complied so far. But with potential fines reaching $49.5 million for violations, more platforms are expected to follow. Users have already begun to react.

Search interest in porn-related terms has climbed to its highest level since pandemic lockdowns ended in 2022. At the same time, searches for virtual private networks — tools that allow users to appear as though they’re browsing from outside Australia — have surged to levels not seen since 2015, when website blocking laws targeting piracy were introduced.

Sex workers say none of this is surprising. For years, they warned that regulations developed between the eSafety commissioner and industry stakeholders could drive users away from regulated spaces and into less controlled environments.

“We’ve already warned that these laws will funnel traffic away from platforms that do have moderation safeguards in place and towards sites that profit from non-consensual and stolen porn, including the unpaid work of sex workers,” said Mish Pony, chief executive of Scarlet Alliance.

“So driving people off mainstream services, such as Pornhub, does not stop porn consumption, it just pushes it into darker corners of the internet. It makes it harder to address real harms.”

Andy Conboi, an OnlyFans creator based in Sydney, said he has already seen the effects firsthand. Engagement on his posts has dropped.

“People don’t really want to send a photo of themselves or their licence or whatever to these platforms, particularly Twitter [X],” he said.

“In the group chats I do have with creators, people are just frustrated and annoyed, their engagement is down [and] it’s much more difficult to put stuff out there and be seen a lot of the time.”

Some creators, he added, are pivoting. They’re shifting toward safe-for-work content on platforms like Instagram and TikTok just to maintain visibility — a move he described as ironic, given the presence of underage users on those services.

For longtime opponents of pornography, however, the changes mark a milestone.

After earlier attempts at internet filtering fell short under previous governments, and opt-out filtering proposals were abandoned before the 2013 election, regulators have gradually expanded their authority over online content. The eSafety commissioner’s role has grown significantly over the past decade.

Advocacy groups that have campaigned for tighter controls welcomed the developments.

“This day was hard fought for,” said Melinda Tankard Reist, movement director for Collective Shout. “Collective Shout and our partners and allies worked hard to bring it to fruition.”

“It is a relief to know proof-of-age protections are now in place as one obstacle in the way of young people being exposed to rape porn, torture porn, incest porn and extreme violence and degradation of women.”

The Australian Christian Lobby also supported the outcome.

“The fact that P*rnhub have ceased operating in Australia is already proof of its effectiveness,” said chief executive Michelle Pearse.

Questions remain about whether those outcomes will hold — or simply shift behavior elsewhere.

Researchers studying similar laws in parts of the United States found that when major sites restricted access, users didn’t necessarily stop searching. They redirected.

“We saw very large substitution effects for search traffic for XVideos, which is the second largest porn website in the states,” said David Lang, a Stanford University researcher and lead author of the report.

“It’s a sufficiently large change that the No 2 site is now the No 1 site in states that passed those laws.”

Tracking VPN use proved more difficult, researchers noted, since users often disappear from local data once they connect through external servers.

For digital rights advocates, the concern isn’t just where people go — it’s what they leave behind.

Tom Sulston, head of policy at Digital Rights Watch, warned that age-verification systems could create centralized pools of highly sensitive personal data.

“It would be absolutely trivial for a criminal to set up porn sites as honeytraps to capture Australians’ identities and sexual interests; and then use that material for blackmail, similar to existing sextortion schemes,” Sulston said.

“Foreign intelligence services looking to trap Australian targets could easily do the same. The age-verification regime puts Australians at greater risk of harm, not less.”

And that’s the uneasy part of it all. The behavior doesn’t disappear — it just moves.


Starmer Government Pushes Back on MPs’ Bid to Ban Taboo Porn in U.K.

LONDON — U.K. Prime Minister Keir Starmer is facing the prospect of dissent within his own Labour Party if the government does not support a proposed ban on certain categories of pornography included in the Crime and Policing Bill.

The pressure follows a narrow vote in the House of Lords earlier this month, where peers approved an amendment by 144 to 143 to prohibit simulated incest pornography, step-relationship content and depictions of acts such as consensual strangulation.

Several Labour backbenchers, many of them women, have raised concerns about the availability of so-called “step-incest” material online and its potential impact on victims of child sexual abuse. Some lawmakers argue that such content could contribute to harm, according to reports from U.K. media outlets.

One unnamed Labour member of Parliament described “step-incest” pornography as a “gateway drug” to illegal material. Lawmakers from Labour have also worked with Conservative MPs on efforts to criminalize depictions of step-family sexual relationships, even when they are fictional.

Data from Pornhub’s 2025 Year in Review shows that “step mom” remains among the most frequently searched terms on the platform.

If enacted, the law would make a range of currently legal pornography depicting step-relationships subject to potential prosecution by the Crown Prosecution Service, as well as enforcement by agencies including the Metropolitan Police Service and regional police forces.

Baroness Gabby Bertin, who led an independent parliamentary review on the harms of pornography, urged peers to support restrictions on what she described as taboo content, including material portraying “intercourse with a step-child.”

Bertin said online pornography often includes scenes “with settings in children’s bedrooms, with actors in children’s clothes, braces, toys, pigtails, and other markers of childhood. Millions of videos and images are then tagged as ‘little,’ ‘tiny,’ ‘age gap,’ ‘mommy,’ ‘daddy,’ or ‘teen.’”

The government has also drafted provisions to ban the possession or publication of pornography depicting sex between relatives.

The inclusion of step-relationship content in the proposed restrictions prompted debate within the government. Justice minister Baroness Alison Levitt said that while such material is controversial, these relationships are “not illegal in real life.”

Levitt also raised concerns about a separate amendment to the bill involving consent withdrawal. The measure would allow individuals appearing in adult content to withdraw consent at any time, with producers facing potential imprisonment and fines if they fail to comply.

Under the proposal, initial consent to publication would no longer be considered sufficient. If consent is withdrawn, platforms and studios would be required to remove the material within 24 hours of notification.


Ofcom Calls on Major Tech Platforms to Implement Age-Verification Requirements

LONDON — A quiet warning landed this week on the desks of some of the biggest technology companies in the world. It didn’t come with fireworks or spectacle. Just a deadline — and a clear message.

The United Kingdom’s digital regulator, Ofcom, told major technology firms Thursday that they should begin putting real age-verification systems in place or face potential penalties under the country’s Online Safety Act.

The move arrives as governments around the world wrestle with the same uneasy question: how do you keep children safe online without reshaping the internet itself? The debate has spread well beyond Britain, with similar age-verification efforts underway across Western Europe, Australia and parts of the United States.

According to the regulator, letters were sent to government relations and compliance teams at the parent companies behind platforms including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Those companies have until April 30 to report back on what progress they’ve made toward deploying stronger age-verification tools.

Regulators say they will review those responses and later publish an assessment outlining how well the companies are complying.

Ofcom Chief Executive Melanie Dawes said the platforms’ public commitments to child safety have not always translated into meaningful protections.

“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” Dawes said. “There is a gap between what tech companies promise in private and what they’re doing publicly to keep children safe on their platforms.”

Dawes added, “Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”

Regulators outlined four specific expectations for the companies.

The first calls for “effective minimum-age policies.” The second requires “failsafe grooming protections.” The third focuses on creating “safer feeds for children.” And the fourth calls for “an end to product testing on children.”

Together, the measures are intended to help meet the Online Safety Act’s broader requirement that platforms adopt “age-appropriate design” and prevent minors from accessing services that are not meant for them.

Chris Sherwood, head of the child-protection charity National Society for the Prevention of Cruelty to Children, said stronger oversight has been overdue.

“For too long, social media giants have looked the other way while harmful and addictive content floods children’s feeds, undermining their safety and wellbeing,” Sherwood said.

“That’s why Ofcom’s demand for far greater transparency about the risks children face online, and how tech companies plan to protect them, is absolutely essential,” he added. “We’ve long called for minimum age limits to be properly enforced on social media, so it’s encouraging to see Ofcom confront this head-on.”

The regulator’s push also coincides with a separate warning from the U.K.’s data-privacy authority, the Information Commissioner’s Office, which sent a letter to “social media and video sharing platforms operating in the U.K.”

The letter stated, “We understand that most services are relying on self-declaration to identify whether children are 13 or over, with a limited number also utilising some form of profiling to enforce minimum age requirements.”

“As currently deployed, we don’t think that these tools are effective and therefore they should not continue to be relied upon to prevent access to under-13s.”

The letter was signed by Paul Arnold, whose agency oversees information rights, transparency in public bodies and personal data protections across the United Kingdom.

The regulator’s latest demands arrive just days after lawmakers in the U.K. Parliament declined to adopt an Australia-style proposal that would have barred all social media use for anyone under the age of 16.


FTC Requests Public Comment on Proposed ‘Click to Cancel’ Regulations

WASHINGTON — The Federal Trade Commission this week called for public comment on whether it should revise its Negative Option Rule to address deceptive or unfair practices.

The move is the latest step in the agency’s renewed rulemaking effort on negative option plans, after a federal court last year struck down a “click to cancel” rule intended to make it easier for consumers to end online subscriptions. Opponents of that rule argued the FTC exceeded its authority and failed to follow required procedures by not issuing a preliminary regulatory analysis.

In January, the FTC submitted a draft Advance Notice of Proposed Rulemaking, or ANPRM, on its Negative Option Rule to the Office of Information and Regulatory Affairs for review.

This week’s announcement seeks input on that ANPRM, stating, “The ANPRM asks the public: to weigh in on the current Rule; whether proposed amendments are needed; and about potential regulatory alternatives to address deceptive or unfair negative option practices.”

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said the agency believes new rulemaking may be warranted.

“Negative option subscriptions can offer procompetitive features to consumers and the marketplace more broadly by lowering transaction costs and ensuring consumers receive uninterrupted service,” Mufarrige said. “The Commission’s enforcement track record suggests, however, that negative option subscriptions continue to be plagued by difficult cancellation processes, unlawful retention tactics, and a suite of other impediments that prevent consumers from easily switching or ending subscription services. Neither consumers nor competition are protected when consumers are enrolled in programs that they either do not want or cannot cancel.”

The Negative Option Rule was first adopted in the 1970s to protect consumers from being automatically enrolled in subscription plans without their consent. As amended in 2024, the rule would have applied to nearly all negative option programs, including automatic renewal and free-to-pay offers. If the update had remained in effect, website operators likely would have been required to make substantial changes to their sign-up and cancellation practices.

The restarted rulemaking process could result in the FTC proposing the same changes again or advancing a similar set of revisions.

With the ANPRM now published in the Federal Register, the public comment period is open through April 13.


Utah’s Proposed Porn Tax Raises Major Civil Liberties Concerns

SALT LAKE CITY — Utah lawmakers are again stepping into the middle of the long-running debate over how far governments should go when regulating online adult content. This time, the focus is a proposed tax on pornography purchased through digital platforms.

Senate Bill 73, introduced earlier this year by Republican lawmakers in the Utah Legislature, would impose what the bill calls a “material harmful to minors” tax on revenue generated from the sale of online pornography. The rate is currently set at 2 percent, after originally being proposed at 7 percent.

After several amendments, the measure passed the state Senate with broad support and now awaits further consideration in the House of Representatives. If approved there, it would head to the desk of Gov. Spencer Cox, who has publicly supported policies aimed at restricting access to online pornography.

The legislation was introduced by Republican state Sen. Calvin R. Musselman and state Rep. Steve Eliason, both of whom have supported previous efforts in Utah to regulate online adult content.

Under the proposal, revenue generated by the tax would be directed toward several state programs. The bill specifies that funds could be used to support enforcement efforts tied to Utah’s existing age verification laws for social media and adult websites, among other regulatory initiatives.

During the legislative process, lawmakers added language addressing virtual private networks (VPNs) and similar technologies used to bypass location-based restrictions. The revised bill would make it illegal to intentionally circumvent content blocks implemented by platforms as part of age verification compliance, with violations subject to civil penalties.

The measure also includes provisions aimed at limiting how websites communicate with users in Utah about these tools. Specifically, the bill states that platforms covered by age verification requirements may not provide instructions or guidance that would allow users to bypass those restrictions.

The current version of Senate Bill 73 states:

“A commercial entity that operates a website that contains a substantial portion of material harmful to minors may not facilitate or encourage the use of a virtual private network, proxy server, or other means to circumvent age verification requirements, including by providing: (a) instructions on how to use a virtual private network or proxy server to access the website; or (b) means for individuals in this state to circumvent geofencing or blocking.”

Measures regulating adult content have appeared in several states in recent years. Alabama, for example, enacted legislation that imposes a 10 percent tax on pornography-related revenue generated within the state, alongside additional legal requirements for adult performers involving notarized consent documentation.

Utah’s proposal does not include those record-keeping provisions, but it does expand the scope of enforcement mechanisms connected to age verification and online access controls.

The tax itself would function similarly to what policymakers often describe as a “sin tax,” a type of levy commonly applied to products such as alcohol, tobacco and gambling. In this case, the tax would apply to companies that generate revenue from online adult content through methods including clip sales, subscriptions and fan-based platforms.

Under the proposal, entities meeting the bill’s definition of “covered entities” would calculate the portion of revenue generated from Utah-based users and pay the 2 percent tax to the state on an annual basis.
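
As a rough illustration of the mechanics (with entirely hypothetical revenue figures — the bill itself defines what counts as a “covered entity” and how Utah-attributable revenue is determined), the annual remittance under a 2 percent rate would be computed along these lines:

```python
# Hypothetical sketch of the bill's 2% remittance calculation.
# All figures below are invented for the example.

def utah_porn_tax(total_revenue: float, utah_share: float, rate: float = 0.02) -> float:
    """Annual tax owed on the Utah-attributable portion of revenue."""
    utah_revenue = total_revenue * utah_share  # portion generated from Utah users
    return utah_revenue * rate

# A platform with $10M in annual revenue, 1.5% of it attributable to Utah users:
owed = utah_porn_tax(10_000_000, 0.015)
print(f"${owed:,.2f}")  # $3,000.00
```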

If the measure becomes law, larger online platforms could likely absorb the additional compliance costs with relative ease. For smaller companies operating in the adult content market, however, the administrative and regulatory requirements could prove more challenging.

The bill’s future now depends on the outcome of deliberations in the Utah House. Should it pass there and receive the governor’s signature, the measure would add Utah to a growing list of states experimenting with new approaches to regulating digital adult content.

Whether the proposal ultimately reshapes how online platforms operate — or instead becomes the subject of courtroom challenges — may become clear only after the legislative process runs its course.


Spain Imposes €950,000 Fine on Yoti Over Biometric Data and Consent Violations

MADRID — Spain’s data protection authority has imposed a total fine of €950,000 on Yoti Ltd, the British digital identity and age verification company, after determining that the company committed three separate violations of the General Data Protection Regulation (GDPR) in connection with the operation of its Digital ID application.

The decision, issued under file reference EXP202317887, was signed by Lorenzo Cotino Hueso, president of the Agencia Española de Protección de Datos (AEPD). The ruling provides a detailed examination of the regulatory obligations that apply to age verification providers operating in Spain.

The three penalties consist of €500,000 for unlawful processing of biometric data under Article 9 of the GDPR; €200,000 for obtaining invalid consent for research and development processing in violation of Article 7; and €250,000 for excessive data retention in breach of the storage limitation principle set out in Article 5.1(e). In addition to the financial penalties, the authority ordered Yoti to implement corrective measures within six months after the resolution becomes final.

Yoti Ltd, registered in the United Kingdom with tax identification number 08998951, provides age verification services used by platform operators across multiple markets. According to the resolution, all of the company’s verification methods — including facial age estimation, document-based verification, credit card checks, mobile number matching and the Digital ID application — are available for use in Spain. The company’s most recent published revenue figure, cited in the resolution as of March 2025, is €15,029,907, which the authority used as a reference point in determining proportionate and dissuasive penalties.

How Yoti’s technology works

The Digital ID application is the service at the center of the enforcement action. According to documentation submitted during the investigation, the application allows users to create a verified identity account by uploading a government-issued identity document and capturing a selfie image.

The technology uses deep neural networks to process the facial image. The image is converted into pixels treated as numerical values and analyzed through a layered network of mathematical nodes. A typical run through the system produces an estimated age in approximately 1 to 1.5 seconds.
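
The pipeline the resolution describes — pixel values in, a single estimated age out — can be sketched in miniature. This is purely illustrative: Yoti’s actual model architecture and trained weights are proprietary, and the random weights below produce a meaningless number, but the shape of the computation (flatten, weighted layers, scalar output) is the same:

```python
import numpy as np

# Illustrative sketch only: a face image becomes an array of pixel values
# and passes through layered weighted transformations ("mathematical nodes")
# to yield a single age estimate. Real systems use deep convolutional
# networks with learned weights; these weights are random placeholders.

rng = np.random.default_rng(0)

def estimate_age(image: np.ndarray) -> float:
    x = image.flatten() / 255.0              # pixels as normalized numbers
    w1 = rng.standard_normal((x.size, 16))   # placeholder layer weights
    w2 = rng.standard_normal((16, 1))
    hidden = np.maximum(x @ w1, 0)           # one hidden layer with ReLU
    return float(hidden @ w2)                # scalar output (untrained)

face = rng.integers(0, 256, size=(32, 32))   # stand-in for a selfie image
print(estimate_age(face))
```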

Yoti describes its services to business clients as comprising eight verification methods. According to the company’s data protection impact assessment (DPIA), these include facial age estimation, verification through the Digital ID application, document identification, credit card verification, mobile number verification, database checks, electronic identity systems used in Switzerland, Denmark and Finland, and a U.S. mobile driver’s license option. When these services are offered on a software-as-a-service basis, client companies act as data controllers while Yoti acts as a processor. Within the Digital ID application itself, however, Yoti acts as the controller.

The facial age estimation model was trained using 12 age range categories (0-1, 2-3, 4-6, 7-9, 10-12, 13-15, 16-17, 18-24, 25-29, 30-39, 40-49 and 50-60), four gender groupings, and three skin tone groups based on the Fitzpatrick scale, producing 144 demographic combinations. According to a company white paper referenced in the resolution and updated in September 2024, the model demonstrated accuracy within 1.28 years across gender and skin tone categories.

Training images were collected through an online portal that required adult consent, as well as through a South African family welfare organization, Be In Touch, working with schools. The United Kingdom’s Information Commissioner’s Office, which previously included Yoti in a regulatory sandbox program, advised against the South African collection method due to potential data protection implications.

The Digital ID application also applies age restrictions based on jurisdiction. According to Yoti, “the Digital ID app cannot be used by persons under the digital age of consent, i.e. 13 years in the United Kingdom and 14 years in Spain.” During account creation the application detects a user’s location and, in Spain, presents two options: “I am 14 or over” or “I am 13 or under.” The registration process continues only if the user selects the first option. No technical mechanism verifies the accuracy of the declaration.

For repeated verification, Yoti implemented a cookie-based age token system. These tokens remain valid for 30 days, allowing users who have verified their age once to reuse the result across participating platforms. The company also provides an “age account” feature that stores tokens in a username-and-password account accessible across devices.
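
A 30-day age token of this kind can be sketched as a signed expiry timestamp: the issuer signs the expiry with a server-side secret, and a relying platform verifies the signature and checks that the token has not lapsed. This is a minimal illustration of the general pattern, not Yoti’s actual token format or key handling:

```python
import hashlib
import hmac
import time

# Illustrative sketch of a 30-day age token; not Yoti's actual scheme.
SECRET = b"server-side-secret"       # hypothetical signing key
VALIDITY = 30 * 24 * 3600            # 30 days, in seconds

def issue_token(now: float) -> str:
    """Sign an expiry timestamp 30 days from `now`."""
    expiry = str(int(now + VALIDITY))
    sig = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return f"{expiry}.{sig}"

def token_valid(token: str, now: float) -> bool:
    """Check the signature and that the token has not expired."""
    expiry, sig = token.split(".")
    expected = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < int(expiry)

tok = issue_token(time.time())
print(token_valid(tok, time.time()))             # True while fresh
print(token_valid(tok, time.time() + VALIDITY))  # False once 30 days pass
```

Because the verified result lives in the token rather than in a repeated face scan, the same token can be presented across participating platforms until it expires — which is the reuse property the article describes.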

First violation: biometric special category data

The AEPD’s primary finding concerns the processing of biometric data without a valid legal basis under Article 9 of the GDPR. The regulation prohibits the processing of special category data — including biometric data used for identification — unless specific exemptions apply.

Yoti maintained during the investigation that the facial scans generated by its system should not be considered special category biometric data because they are intended to authenticate users rather than uniquely identify them. The authority rejected this interpretation.

According to the resolution, data qualifies as biometric special category data under Article 4.14 of the GDPR when it relates to physical or behavioral characteristics of an individual, is used to confirm unique identification and undergoes specific technical processing to generate biometric templates. The AEPD determined that Yoti’s system meets all three criteria.

The authority found that the facial scan produces a biometric template stored while the user account remains active. When users modify their PIN or recover their account, the system captures a new facial scan and compares it with the stored template through a 1:1 matching process.
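
A 1:1 match of this kind compares one fresh template against one stored template and applies a decision threshold. The sketch below uses cosine similarity over small placeholder vectors; the real template format, distance metric and threshold are Yoti internals and are assumed here for illustration only:

```python
import numpy as np

# Illustrative 1:1 biometric matching: compare a freshly captured
# template against the stored one and apply a decision threshold.
# Template format and threshold are placeholder assumptions.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(stored: np.ndarray, fresh: np.ndarray,
                threshold: float = 0.9) -> bool:
    return cosine_similarity(stored, fresh) >= threshold

stored = np.array([0.1, 0.8, 0.3, 0.5])                      # enrolled template
print(same_person(stored, stored))                           # True: identical
print(same_person(stored, np.array([0.9, -0.2, 0.1, 0.0])))  # False: dissimilar
```

The AEPD’s point turns on exactly this persistence: because the stored template must remain available for any future 1:1 comparison, it functions as biometric identification data, not a one-off check.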

According to the decision, “despite repeatedly asserting — both during account creation and in the privacy policy — that the purpose of processing the biometric facial pattern is to guarantee user identification, Yoti does not consider itself to be processing special category personal data,” a position the authority described as demonstrating “particular negligence.”

The fine for this violation was set at €500,000. The authority cited the involvement of minors and the international processing of data — including servers outside the European Union — as aggravating factors.

For transfers between the United Kingdom and India, where Yoti operates a Security Centre providing manual verification support, the company relies on EU standard contractual clauses with a UK addendum. According to the DPIA, personnel at this center can access document images and selfies through remote connections to UK servers using “thin terminals,” while no other staff outside the center can view the information. The AEPD noted that the cross-border dimension further limits users’ practical control over their data.

Second violation: pre-ticked consent boxes for R&D

The second violation concerns the mechanism used to obtain user consent for internal research and development.

According to the investigation, the application displayed a pre-selected checkbox allowing users’ biometric data to be used to train and improve Yoti’s facial age estimation algorithms unless users manually deselected the option.

Yoti’s documentation confirms this design. The company stated, “In the Digital ID app, the default value is that data can be used for R&D. Yoti has taken steps to make this clear to users. Users can opt out, preventing their data being used for R&D, by using the app settings.”

The AEPD determined that this approach does not meet GDPR requirements. Article 4.11 defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” expressed through a clear affirmative action. Pre-ticked checkboxes do not constitute such an action.

The authority cited the European Data Protection Board’s Guidelines 05/2020, which state that consent obtained through default settings cannot be considered valid even if users later have the ability to withdraw it. The resolution notes: “Yoti consciously notes that consent granted by default can be revoked, without taking into account that there should not be a subsequent revocation at the data subject’s request, but rather that consent should be obtained in accordance with the safeguards and guarantees established by the GDPR.”

According to the DPIA, the data used for research processing may include facial images with timestamps, dates of birth derived from identity documents, gender information, document type details, country codes, video and audio recordings, device information, behavioral data, health-related information and race or ethnicity estimates derived from the Fitzpatrick scale. Data from users aged 13 to 18 is also included.

The fine for this violation was set at €200,000, with the authority again citing the involvement of minors and processing on servers outside the EU as aggravating circumstances.

Third violation: retention periods beyond stated purposes

The third infringement relates to the retention of personal data — including biometric data — for longer than necessary.

According to Yoti’s DPIA, Digital ID data and age tokens may be retained while a user account remains active or for three years after the last activity. The biometric facial template is stored throughout that period. The AEPD found this disproportionate.

The authority determined that the liveness check, which verifies that a real person is present during registration, is completed during account creation. Once this verification occurs, the purpose of the biometric capture is fulfilled. Retaining the biometric pattern beyond that moment cannot be justified by reference to that completed purpose.

The resolution also noted that additional uses of the biometric template — such as PIN modification or account recovery — may never occur during the account’s lifetime, meaning the storage of biometric data for possible future events fails to meet the storage limitation principle.

The authority also identified concerns regarding geolocation data. According to Yoti, the company collects users’ country code, city and state derived from their IP address and retains this information for five years. The stated purpose is to determine which jurisdiction’s age restrictions apply. The AEPD concluded that once jurisdiction is determined during account creation, extended retention of the location data is unnecessary.

Another retention issue involved fraudulent identity documents. Yoti indicated that documents identified as fraudulent may be stored for up to two years to train fraud detection systems. The authority determined that improving software constitutes a separate purpose not directly related to the original identity verification objective.

Video recordings created during liveness checks were also examined. The company’s terms state that such recordings “will be permanently deleted within 30 days of the date it was recorded, unless we are required to retain it for regulatory reasons.” The authority concluded that once liveness is confirmed, retention beyond that moment exceeds the legitimate purpose of the recording.

The fine for this infringement was set at €250,000, reflecting the large number of affected users and the involvement of special category data.

Corrective measures and timeline

The AEPD ordered Yoti to implement three corrective measures within six months after the decision becomes final:

• Demonstrate that the processing of biometric special category data complies with GDPR requirements.

• Demonstrate that consent-based processing meets the standards established by the regulation.

• Demonstrate that personal data retention is limited to the period strictly necessary for each processing purpose under Article 5.1(e).

The decision becomes final once the one-month period for filing an administrative appeal before the AEPD presidency has passed without action, or once the resolution is formally notified if no appeal is filed. Yoti may also challenge the ruling before the Contentious-Administrative Chamber of the National Court within two months of notification.

Failure to comply with the corrective measures could constitute a separate administrative violation under Articles 83.5 and 83.6 of the GDPR, potentially resulting in further enforcement proceedings.

Regulatory context

The Yoti ruling forms part of a broader series of enforcement actions by Spain’s data protection authority. The AEPD previously imposed a €500,000 fine on FC Barcelona for deficiencies in a data protection impact assessment related to biometric facial and voice data from approximately 143,000 members. The authority also issued a €1.8 million fine against airport operator AENA over the deployment of facial recognition systems, and a €1.8 million penalty against Informa D&B for processing personal data without a valid legal basis.

The decision also references the European Data Protection Board’s Statement 1/2025, adopted in February 2025, which outlines ten principles for GDPR-compliant age assurance systems. These include requirements that age verification technologies use the least intrusive methods available, avoid enabling tracking or profiling and implement short retention periods.

The ruling highlights ongoing differences in how European regulators interpret biometric data rules. While previous guidance from the United Kingdom’s Information Commissioner’s Office indicated that facial age estimation may fall outside biometric identification rules when used only for categorization, Spain’s AEPD concluded that persistent facial templates used for matching operations constitute biometric processing under Article 9.

The decision underscores increasing scrutiny of age verification technologies across Europe as regulators examine both their effectiveness and the privacy implications of the systems used to implement them.


Aylo Challenges Indiana Lawsuit Over VPN Access and Age Verification

INDIANAPOLIS — A legal fight unfolding in Indiana courts is putting a familiar question under a bright light: how far must an adult website go to keep minors out — and what counts as “reasonable” when technology keeps finding new ways around the rules?

This week, Aylo asked a Marion Superior Court judge to dismiss a lawsuit brought by the state of Indiana, which accuses the company of violating the state’s age verification law by failing to stop users who bypass location restrictions with VPNs and similar tools.

Indiana Attorney General Todd Rokita filed the complaint late last year, arguing that the safeguards used by Pornhub and other Aylo-operated sites do not meet the requirements of the state’s law. According to the complaint, the sites rely primarily on blocking users whose internet addresses show they are located in Indiana — a method the state says can be easily sidestepped.

The complaint states that IP-based restrictions used by the company “are insufficient to comply with Indiana’s Age Verification Law because Indiana residents, including minors, can still easily access the Defendants’ websites with a VPN IP or proxy address from another jurisdiction or through the use of location spoofing software.”

Aylo, in its motion to dismiss, counters that the state is stretching the law far beyond what it actually requires. In a supporting brief filed with the court, the company argues that Indiana’s interpretation of the statute violates several constitutional protections, including the First Amendment, the Due Process Clause and the Commerce Clause.

“Plaintiff takes the position that website operators cannot avoid violating the AVL by blocking Internet traffic from Indiana IP addresses unless those technological restrictions also prevent users from circumventing the geoblocks through VPNs routing traffic through IP addresses associated with other states,” the company’s brief states. “But the AVL contains no such requirement.”

According to the state’s complaint, investigators working for Rokita’s office accessed Pornhub and other Aylo sites from Indiana by routing their internet connection through a VPN server that produced a Chicago-based IP address. Because the sites allowed access under those circumstances, the state argues that they “lacked any reasonable form of age verification.”

Aylo disputes that conclusion. The company says that since the law took effect, it has blocked all internet addresses associated with Indiana from accessing its sites directly. In its filing, Aylo also criticizes the state for deliberately bypassing those protections through what it describes as “technological subterfuge.”

“The statute mandates only ‘reasonable age verification’ — not technologically infallible measures that anticipate and defeat every possible user circumvention tool,” the brief argues.

Aylo also characterizes geoblocking as a widely used solution across the internet. The company’s filing describes the practice as “a widely recognized, industry-standard method of geographic access control used by major streaming and content platforms worldwide.”

From the company’s perspective, Indiana’s lawsuit goes too far. The brief argues that the state’s interpretation of its law would impose an unnecessary burden on protected speech, exceeding the limits set by courts when evaluating age verification laws.

In particular, Aylo points to the standard established in Free Speech Coalition v. Paxton, a case that allowed state age verification laws to stand so long as they meet what courts call “intermediate scrutiny.” Aylo maintains that Indiana’s interpretation of its law fails that test and therefore violates the First Amendment.

The company also raises concerns about due process. According to the brief, Indiana is attempting to apply its law beyond the state’s borders without clear guidance, which Aylo says makes the statute “unconstitutionally vague” under the 14th Amendment’s Due Process Clause.

Another argument centers on the Constitution’s Commerce Clause. Aylo contends that the state’s interpretation effectively forces companies to regulate activity far outside Indiana’s jurisdiction.

“To comply with Plaintiff’s interpretation of the AVL, a publisher, such as Aylo Freesites, would need to impose age verification nationwide, and perhaps worldwide, so as to account for the possibility that an Indiana resident might use a VPN to disguise their location as from another jurisdiction,” the brief states, adding that such an approach “impermissibly extends Indiana law beyond its territorial boundaries.”

The company also challenges whether the court even has jurisdiction in the case. According to the filing, the state’s argument assumes that Indiana residents may still access the sites by circumventing restrictions through VPNs or proxy servers. Aylo asks the court to reject that premise, noting that the company blocked Indiana IP addresses specifically to avoid operating in the state.

Aylo further disputes the state’s claim that it violated Indiana’s Deceptive Consumer Sales Act. The company’s brief says the complaint offers little more than what it calls “a word salad of accusations,” while failing to identify any actual consumer transaction or conduct that would violate the law.

The lawsuit arrives amid a broader debate across the United States about whether age verification rules can realistically keep minors away from adult content — particularly as tools like VPNs make it easier to appear as though a user is browsing from somewhere else.

Lawmakers in several states have begun exploring ways to address that issue. In Utah, for example, legislators recently passed a bill that would hold adult sites responsible if minors circumvent geolocation safeguards. The measure now awaits action from Gov. Spencer Cox.

In Ohio, a proposal known as the “Innocence Act” would require adult sites to use a geofencing system maintained by a licensed location-technology provider that could dynamically monitor a user’s physical location to determine whether they are inside the state and therefore subject to age verification requirements.

At the federal level, the Kids Internet and Digital Safety (KIDS) Act also addresses the issue. The proposal would establish nationwide age verification requirements and direct websites to take “reasonable measures” to address attempts to bypass those safeguards.

For now, the Indiana case remains at an early stage. The state has until April 10 to respond to Aylo’s motion to dismiss — and the court will then decide whether the case moves forward.

Behind the legal language and constitutional arguments lies a question lawmakers across the country are still wrestling with: when technology keeps changing the rules of the game, what does “reasonable” protection actually look like?


The Web Used to be the “Information Superhighway”; it’s Becoming a Low-Speed School Zone by Stan Q. Brick

Back in the late 90s and early aughts, it was commonplace to hear the internet referred to as “The Information Superhighway,” a term that for many of us connoted not just speed of transfer, but the relatively unfettered regulatory environment surrounding what was then an emerging network for communications and commerce.

Fast forward to 2026 and those heady days of rapid growth and regulatory permissiveness are gone. Some might say “good riddance,” but I can’t help but wonder what we’re losing as we grope for ways to make the web ‘safer’ for a population that arguably shouldn’t be using it at all.

During an adult industry trade event over 20 years ago, an attorney friend of mine posed a good question: If the web is the “information superhighway,” who in their right mind would want to build a playground for children in the median of such a thoroughfare?

The answer, then and now, is: “Far too many people.” Crucially, a significant subset of those people are legislators at the national, state and local levels. And these days, every time you turn around, one of them is sponsoring, writing or endorsing a measure like the Kids Internet and Digital Safety (KIDS) Act, or the Innocence Act, or some manner of tax directed specifically at adult websites.

I can’t speak for the populations of other countries, but here in the U.S., what I’ve noticed over the decades is that many people look to the government to handle jobs they probably ought to be doing themselves – or, indeed, jobs that only they can do.

Look, I get it; it’s hard raising kids. But the difficulty of being a parent is not a new thing – and it certainly isn’t limited to the internet era. When I was a kid, way back in the early 1970s, once I left the immediate vicinity of my parents’ home, they had almost no way of knowing what I was up to – a worrying fact for a lot of parents, especially during times when panics over child abductions and general “stranger danger” were in full swing.

Was it easier for my parents to watch me walk off to catch the school bus back when I couldn’t text to confirm my arrival at school than it is for parents these days to do the same, when their kids have dozens of options for checking in or marking themselves “safe”? I think that’s a tough argument to make.

Yes, largely because of the internet and related technologies, kids today have easier access to things like porn than I did when I was a kid. Guess what? Even in the days when we had to go digging through our fathers’ sock drawers to find porn, we still managed to find it. (Where there’s a hormone-fueled will, there is always a way.)

Of course, the impulse to restrict and regulate access to content deemed to be beyond the years of kids is a lot older than the internet, too. They seem almost quaint now, but broadcast decency standards have been around for decades. Does anyone believe these standards have prevented kids from hearing “profane language” or being exposed to content that is “patently offensive” but does not rise to the point of being “obscene” under federal law? If so, I have a healthy store of bridges on hand to sell to these poor, credulous souls.

Yes, the internet is filled with problematic content. But if your concern about what kids stumble across online is limited to “obscene” or “indecent” content, then you’re either ignorant of what lurks online, or the nature of your concern says more about you than it does the internet.

One thing about the internet has not changed since the days when it was common to call it the Information Superhighway: It remains an enormous network of independently operated computers, on which virtually anyone can publish virtually anything. Mixed into that ‘everything’ is a long list of things that are potentially “harmful to minors.”

Are sites that promote racial hatred less damaging to minors than pornography? How about sites that disseminate misinformation and disinformation? Are false medical claims something we want kids to be perusing with no guidance or guardrails? How about deepfake videos of a war in progress?

Don’t get me wrong: Not for one minute am I suggesting all those things listed above should be subject to governmental blocking, censorship or over-regulation to prevent their spread. What I’m suggesting – and what I’ve been telling my less-wired friends for literal decades – is simply this: The internet isn’t for children, and it simply can’t be made “safe” for them, try as we might.

The difficult fact is, even if every proposed measure to limit kids’ access to “harmful” content currently under consideration is passed and vigorously enforced, the internet will remain as I described it above – “an enormous network of independently operated computers, on which virtually anyone can publish virtually anything.” To make it ‘safe’ will require fundamentally altering the nature of that network and siloing it to a degree where it will no longer be recognizable as the internet.

And guess what? Even if we do that, you’ll still have to parent your kids. You’ll still have to shepherd them through their early years – and you’ll still have to let go of being a shepherd when they become adults. The internet age didn’t change any of that, either.

If you believe the answer will come from the government, if you believe legislation like the KIDS Act or the Innocence Act will make the world (or even just the internet) a substantially safer place, knock yourself out. Write to your representatives and demand that they pass those laws – and then see what happens.

I’ll tell you what isn’t going to happen: Your job as a parent isn’t going to get easier. The sooner you accept that and get on with the difficult business of raising a child, the better.


Missouri House Advances Porn Age Verification Bill to Senate

JEFFERSON CITY, Mo.—Missouri lawmakers have advanced an age verification bill out of the GOP-controlled state House of Representatives, moving the state closer to joining others that have enacted laws regulating access to adult content online.

The bill received a third-reading vote on March 4 and was then transmitted to the Senate, where it was taken up Monday. A first reading was held that morning, and as of this writing it has not yet been referred to a Senate committee for a markup hearing.

Three separate proposals—HB 1839, introduced by state Rep. Sherri Gallick; HB 2921, introduced by state Rep. Melissa Schmidt; and HB 3015, introduced by state Rep. Jeff Farnan—were combined by the House Children and Families Committee into a substitute bill.

The measure would require websites in which at least 33 percent of the content is considered harmful to minors or “pornographic” to verify the ages of users. The requirement could apply to adult platforms as well as mainstream social media services, including Reddit and X.

Despite a non-legislative regulatory intervention issued by former Missouri Attorney General Andrew Bailey in 2025 and later supported by his successor, Catherine Hanaway, lawmakers in the Republican-controlled legislature are seeking to codify the requirement in state statute, which would make it more difficult to repeal in the future.


Kansas Plaintiff Dismisses Chaturbate AV Lawsuit, Updates SuperPorn Complaint

KANSAS CITY, Kan. — The plaintiff in a lawsuit alleging that cam platform Chaturbate violated Kansas’ age verification law has voluntarily dismissed that case, while continuing to pursue a related complaint involving another adult website.

Last year, the National Center on Sexual Exploitation (NCOSE), a conservative anti-pornography organization, filed lawsuits against four adult sites on behalf of a 14-year-old Kansas resident and the teen’s mother. The suits alleged that the minor was able to access content on the platforms without any form of age verification.

Last month, a federal judge dismissed two of the lawsuits, citing a lack of jurisdiction. The ruling could still be appealed.

A third case targeted Multi Media LLC, the company that operates Chaturbate. In that matter, the judge granted the defendant’s motion to compel arbitration and placed the case on hold while arbitration proceeded. Then, on March 4, the plaintiff filed a notice of dismissal, bringing that action to an end.

The fourth lawsuit originally named Techpump Solutions SL, which the complaint described as the operator of SuperPorn.com. Last week, the plaintiff filed an amended complaint that instead identifies Pump Lab SL as the site’s current owner and operator.

The revised complaint introduces new arguments aimed at establishing jurisdiction. In one of the previously dismissed cases, the judge concluded that the plaintiff had not demonstrated that the website “purposefully directed its activities at Kansas.” According to industry attorney Corey Silverstein, the amended filing attempts to address that issue.

“The new complaint is trying to do more than say, ‘A Kansan could reach the site,’” Silverstein said. “It is trying to say, ‘This company specifically aimed parts of its business at Kansas’ — through geotargeting, curated U.S. content, regional content delivery, cookies and ad monetization tied to Kansas users. In plain English, the plaintiff is no longer arguing that the website was merely on the internet; the plaintiff is arguing that the company was steering the website into Kansas on purpose.”

Silverstein added that whether the updated argument will ultimately establish jurisdiction will likely depend on the available evidence rather than the legal framing alone.

“This is a smarter and more developed jurisdictional theory than the one the judge already rejected, because it tries to answer the court’s central concern: Where is the evidence of deliberate targeting of Kansas itself?” Silverstein explained. “If the plaintiff can back up the allegations that the defendant actually selected regional CDN points, knowingly used geolocation tied to Kansas and commercially exploited Kansas users in a state-specific way, the argument is stronger.”

He also noted that if those claims ultimately resemble a broader argument that the site was simply accessible online and happened to be used by someone in Kansas, the court could reach the same conclusion as before.

Meanwhile, the state of Kansas has filed its own lawsuit against SARJ LLC, alleging that the company’s adult websites — including metart.com, sexart.com and vivthomas.com — failed to implement age verification requirements established under state law. SARJ has argued that the same jurisdictional issues that led to the dismissal of two of the NCOSE-backed lawsuits should also apply in its case. Whether the court will apply the same legal reasoning to a lawsuit brought by the state remains to be determined, and the earlier dismissals could still be appealed.

The Free Speech Coalition described last month’s dismissals as “an important victory against state laws enforced by private rights of action,” while also encouraging its members to comply with all applicable laws.
