Political Attacks

Spain Imposes €950,000 Fine on Yoti Over Biometric Data and Consent Violations

MADRID — Spain’s data protection authority has imposed a total fine of €950,000 on Yoti Ltd, the British digital identity and age verification company, after determining that the company committed three separate violations of the General Data Protection Regulation (GDPR) in connection with the operation of its Digital ID application.

The decision, issued under file reference EXP202317887, was signed by Lorenzo Cotino Hueso, president of the Agencia Española de Protección de Datos (AEPD). The ruling provides a detailed examination of the regulatory obligations that apply to age verification providers operating in Spain.

The three penalties consist of €500,000 for unlawful processing of biometric data under Article 9 of the GDPR; €200,000 for obtaining invalid consent for research and development processing in violation of Article 7; and €250,000 for excessive data retention in breach of the storage limitation principle set out in Article 5.1(e). In addition to the financial penalties, the authority ordered Yoti to implement corrective measures within six months after the resolution becomes final.

Yoti Ltd, registered in the United Kingdom with tax identification number 08998951, provides age verification services used by platform operators across multiple markets. According to the resolution, all of the company’s verification methods — including facial age estimation, document-based verification, credit card checks, mobile number matching and the Digital ID application — are available for use in Spain. The company’s most recent published revenue figure, cited in the resolution as of March 2025, is €15,029,907, which the authority used as a reference point in determining proportionate and dissuasive penalties.

How Yoti’s technology works

The Digital ID application is the service at the center of the enforcement action. According to documentation submitted during the investigation, the application allows users to create a verified identity account by uploading a government-issued identity document and capturing a selfie image.

The technology uses deep neural networks to process the facial image. The image’s pixels are treated as numerical values and analyzed through a layered network of mathematical nodes. A typical run through the system produces an estimated age in approximately 1 to 1.5 seconds.
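
The resolution does not describe the network’s internals, but the general idea it summarizes (pixel values flowing through layers of weighted nodes to produce a single age output) can be illustrated with a toy example. The weights below are random and purely illustrative; this is not Yoti’s model:

```python
# Toy sketch of "pixels as numbers through layered nodes" (NOT Yoti's
# actual model): a tiny feed-forward network mapping a face image to a
# single age estimate. With random weights the output is meaningless;
# a real system learns its weights from labeled training images.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# A 64x64 grayscale face image flattened into 4,096 numerical values.
image = rng.random((64, 64)).flatten()

# Two hidden layers of nodes, then one output node for the age estimate.
w1, b1 = rng.normal(0, 0.01, (4096, 128)), np.zeros(128)
w2, b2 = rng.normal(0, 0.10, (128, 32)), np.zeros(32)
w3, b3 = rng.normal(0, 0.10, (32, 1)), np.zeros(1)

h1 = relu(image @ w1 + b1)                 # layer 1
h2 = relu(h1 @ w2 + b2)                    # layer 2
estimated_age = float((h2 @ w3 + b3)[0])   # single scalar output
print(f"estimated age: {estimated_age:.1f}")
```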

Yoti describes its services to business clients as comprising eight verification methods. According to the company’s data protection impact assessment (DPIA), these include facial age estimation, verification through the Digital ID application, document identification, credit card verification, mobile number verification, database checks, electronic identity systems used in Switzerland, Denmark and Finland, and a U.S. mobile driver’s license option. When these services are offered on a software-as-a-service basis, client companies act as data controllers while Yoti acts as a processor. Within the Digital ID application itself, however, Yoti acts as the controller.

The facial age estimation model was trained using 12 age range categories (0-1, 2-3, 4-6, 7-9, 10-12, 13-15, 16-17, 18-24, 25-29, 30-39, 40-49 and 50-60), four gender groupings, and three skin tone groups based on the Fitzpatrick scale, producing 144 demographic combinations. According to a company white paper referenced in the resolution and updated in September 2024, the model demonstrated accuracy within 1.28 years across gender and skin tone categories.
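
The 144 combinations are simply the product of the three training dimensions (12 × 4 × 3). A quick sketch of the enumeration, using the age ranges cited in the resolution; the gender and skin tone labels below are placeholders, since the resolution does not name them:

```python
# The 144 demographic combinations follow from the three training
# dimensions described in the resolution: 12 age ranges x 4 gender
# groupings x 3 skin tone groups. Gender and skin tone labels here are
# placeholders; only the age ranges are taken verbatim from the ruling.
from itertools import product

age_ranges = ["0-1", "2-3", "4-6", "7-9", "10-12", "13-15",
              "16-17", "18-24", "25-29", "30-39", "40-49", "50-60"]
genders = ["group_1", "group_2", "group_3", "group_4"]
skin_tones = ["fitzpatrick_a", "fitzpatrick_b", "fitzpatrick_c"]

combinations = list(product(age_ranges, genders, skin_tones))
print(len(combinations))  # 144
```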

Training images were collected through an online portal that required adult consent, as well as through a South African family welfare organization, Be In Touch, working with schools. The United Kingdom’s Information Commissioner’s Office, which previously included Yoti in a regulatory sandbox program, advised against the South African collection method due to potential data protection implications.

The Digital ID application also applies age restrictions based on jurisdiction. According to Yoti, “the Digital ID app cannot be used by persons under the digital age of consent, i.e. 13 years in the United Kingdom and 14 years in Spain.” During account creation the application detects a user’s location and, in Spain, presents two options: “I am 14 or over” or “I am 13 or under.” The registration process continues only if the user selects the first option. No technical mechanism verifies the accuracy of the declaration.

For repeated verification, Yoti implemented a cookie-based age token system. These tokens remain valid for 30 days, allowing users who have verified their age once to reuse the result across participating platforms. The company also provides an “age account” feature that stores tokens in a username-and-password account accessible across devices.
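
The resolution does not disclose the token’s internal format. As a rough sketch under that caveat, an HMAC-signed payload with an expiry timestamp is one way a reusable 30-day age token could work, letting a relying platform check integrity and expiry without re-running verification (the key, field names and encoding below are assumptions):

```python
# Hypothetical sketch of a 30-day age token; the actual format Yoti
# uses is not disclosed in the resolution. An HMAC signature lets the
# verifier detect tampering, and the embedded expiry enforces the
# 30-day validity window described above.
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-key"   # hypothetical key, kept server-side
VALIDITY = 30 * 24 * 3600             # 30 days, per the resolution

def issue_age_token(over_18: bool) -> str:
    payload = json.dumps({"over_18": over_18,
                          "exp": int(time.time()) + VALIDITY}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()  # 32 bytes
    return base64.urlsafe_b64encode(payload + sig).decode()

def check_age_token(token: str) -> bool:
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]   # SHA-256 digest is 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False                      # tampered or signed with another key
    claims = json.loads(payload)
    return bool(claims["over_18"]) and claims["exp"] > time.time()

token = issue_age_token(over_18=True)
print(check_age_token(token))  # True until the 30-day window lapses
```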

First violation: biometric special category data

The AEPD’s primary finding concerns the processing of biometric data without a valid legal basis under Article 9 of the GDPR. The regulation prohibits the processing of special category data — including biometric data used for identification — unless specific exemptions apply.

Yoti maintained during the investigation that the facial scans generated by its system should not be considered special category biometric data because they are intended to authenticate users rather than uniquely identify them. The authority rejected this interpretation.

According to the resolution, data qualifies as biometric special category data under Article 4.14 of the GDPR when it relates to physical or behavioral characteristics of an individual, is used to confirm unique identification and undergoes specific technical processing to generate biometric templates. The AEPD determined that Yoti’s system meets all three criteria.

The authority found that the facial scan produces a biometric template stored while the user account remains active. When users modify their PIN or recover their account, the system captures a new facial scan and compares it with the stored template through a 1:1 matching process.
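
In code terms, the 1:1 step compares one fresh template against the single template enrolled for the account, as opposed to 1:N identification against a database. The similarity measure and threshold below are assumptions; the resolution does not specify Yoti’s matching algorithm:

```python
# Sketch of 1:1 biometric matching: one probe template against one
# enrolled template. Cosine similarity and the 0.8 threshold are
# assumptions for illustration, not Yoti's disclosed parameters.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(enrolled: np.ndarray, probe: np.ndarray,
                      threshold: float = 0.8) -> bool:
    # 1:1 verification answers "is this the same person?", unlike 1:N
    # identification, which searches many templates to ask "who is this?"
    return cosine_similarity(enrolled, probe) >= threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)                     # stored at enrollment
probe = enrolled + rng.normal(scale=0.1, size=128)  # new scan, slight variation
print(verify_one_to_one(enrolled, probe))           # True
```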

According to the decision, “despite repeatedly asserting — both during account creation and in the privacy policy — that the purpose of processing the biometric facial pattern is to guarantee user identification, Yoti does not consider itself to be processing special category personal data,” a position the authority described as demonstrating “particular negligence.”

The fine for this violation was set at €500,000. The authority cited the involvement of minors and the international processing of data — including servers outside the European Union — as aggravating factors.

For transfers between the United Kingdom and India, where Yoti operates a Security Centre providing manual verification support, the company relies on EU standard contractual clauses with a UK addendum. According to the DPIA, personnel at this center can access document images and selfies through remote connections to UK servers using “thin terminals,” while no other staff outside the center can view the information. The AEPD noted that the cross-border dimension further limits users’ practical control over their data.

Second violation: pre-ticked consent boxes for R&D

The second violation concerns the mechanism used to obtain user consent for internal research and development.

According to the investigation, the application displayed a pre-selected checkbox allowing users’ biometric data to be used to train and improve Yoti’s facial age estimation algorithms unless users manually deselected the option.

Yoti’s documentation confirms this design. The company stated, “In the Digital ID app, the default value is that data can be used for R&D. Yoti has taken steps to make this clear to users. Users can opt out, preventing their data being used for R&D, by using the app settings.”

The AEPD determined that this approach does not meet GDPR requirements. Article 4.11 defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” expressed through a clear affirmative action. Pre-ticked checkboxes do not constitute such an action.
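
In implementation terms, the defect is the difference between a consent flag that defaults to true and one that becomes true only through an explicit user action. A minimal illustration with hypothetical field names:

```python
# Minimal illustration of the consent defect (hypothetical field names).
# A pre-ticked default means processing happens unless the user undoes
# it; Article 4.11 instead requires a clear affirmative action before
# any R&D use of the data.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    # Pattern the AEPD found non-compliant: on unless deselected.
    rnd_use_preticked: bool = True
    # GDPR-aligned pattern: off until the user affirmatively opts in.
    rnd_use_opt_in: bool = False

    def user_clicked_opt_in(self) -> None:
        self.rnd_use_opt_in = True  # set only in response to an explicit click

consent = ConsentRecord()
assert consent.rnd_use_opt_in is False  # valid default: no consent yet
consent.user_clicked_opt_in()           # the affirmative action the GDPR requires
assert consent.rnd_use_opt_in is True
```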

The authority cited the European Data Protection Board’s Guidelines 05/2020, which state that consent obtained through default settings cannot be considered valid even if users later have the ability to withdraw it. The resolution notes: “Yoti consciously notes that consent granted by default can be revoked, without taking into account that there should not be a subsequent revocation at the data subject’s request, but rather that consent should be obtained in accordance with the safeguards and guarantees established by the GDPR.”

According to the DPIA, the data used for research processing may include facial images with timestamps, dates of birth derived from identity documents, gender information, document type details, country codes, video and audio recordings, device information, behavioral data, health-related information and race or ethnicity estimates derived from the Fitzpatrick scale. Data from users aged 13 to 18 is also included.

The fine for this violation was set at €200,000, with the authority again citing the involvement of minors and processing on servers outside the EU as aggravating circumstances.

Third violation: retention periods beyond stated purposes

The third infringement relates to the retention of personal data — including biometric data — for longer than necessary.

According to Yoti’s DPIA, Digital ID data and age tokens may be retained while a user account remains active or for three years after the last activity. The biometric facial template is stored throughout that period. The AEPD found this disproportionate.

The authority determined that the liveness check, which verifies that a real person is present during registration, is completed during account creation. Once this verification occurs, the purpose of the biometric capture is fulfilled. Retaining the biometric pattern beyond that moment cannot be justified by reference to that completed purpose.

The resolution also noted that additional uses of the biometric template — such as PIN modification or account recovery — may never occur during the account’s lifetime, meaning the storage of biometric data for possible future events fails to meet the storage limitation principle.
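
The storage-limitation logic running through these findings amounts to purpose-bound retention: each item is deleted when the purpose for which it was collected completes, rather than persisting for the life of the account. A schematic sketch, with illustrative purposes drawn from the resolution’s examples:

```python
# Schematic sketch of purpose-bound retention, the principle behind the
# AEPD's reasoning. Keys, values and purpose names are illustrative.
class PurposeBoundStore:
    def __init__(self):
        self._items = {}  # key -> (value, purpose it was collected for)

    def keep_for(self, key, value, purpose):
        self._items[key] = (value, purpose)

    def purpose_completed(self, purpose):
        # Storage limitation: once the purpose is fulfilled, delete the
        # data instead of retaining it for hypothetical future events.
        doomed = [k for k, (_, p) in self._items.items() if p == purpose]
        for k in doomed:
            del self._items[k]
        return doomed

store = PurposeBoundStore()
store.keep_for("biometric_template", b"...", purpose="liveness_check")
store.keep_for("geo_region", "ES", purpose="jurisdiction_check")

# Liveness is confirmed during account creation; on the AEPD's view the
# template's purpose is then fulfilled and retention should end there,
# not at account closure plus three years.
print(store.purpose_completed("liveness_check"))      # ['biometric_template']
print(store.purpose_completed("jurisdiction_check"))  # ['geo_region']
```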

The authority also identified concerns regarding geolocation data. According to Yoti, the company collects users’ country code, city and state derived from their IP address and retains this information for five years. The stated purpose is to determine which jurisdiction’s age restrictions apply. The AEPD concluded that once jurisdiction is determined during account creation, extended retention of the location data is unnecessary.

Another retention issue involved fraudulent identity documents. Yoti indicated that documents identified as fraudulent may be stored for up to two years to train fraud detection systems. The authority determined that improving software constitutes a separate purpose not directly related to the original identity verification objective.

Video recordings created during liveness checks were also examined. The company’s terms state that such recordings “will be permanently deleted within 30 days of the date it was recorded, unless we are required to retain it for regulatory reasons.” The authority concluded that once liveness is confirmed, retention beyond that moment exceeds the legitimate purpose of the recording.

The fine for this infringement was set at €250,000, reflecting the large number of affected users and the involvement of special category data.

Corrective measures and timeline

The AEPD ordered Yoti to implement three corrective measures within six months after the decision becomes final:

• Demonstrate that the processing of biometric special category data complies with GDPR requirements.

• Demonstrate that consent-based processing meets the standards established by the regulation.

• Demonstrate that personal data retention is limited to the period strictly necessary for each processing purpose under Article 5.1(e).

The decision becomes final once the one-month period for filing an administrative appeal before the AEPD presidency has passed without an appeal being filed. Yoti may also challenge the ruling before the Contentious-Administrative Chamber of the National Court within two months of notification.

Failure to comply with the corrective measures could constitute a separate administrative violation under Articles 83.5 and 83.6 of the GDPR, potentially resulting in further enforcement proceedings.

Regulatory context

The Yoti ruling forms part of a broader series of enforcement actions by Spain’s data protection authority. The AEPD previously imposed a €500,000 fine on FC Barcelona for deficiencies in a data protection impact assessment related to biometric facial and voice data from approximately 143,000 members. The authority also issued a €1.8 million fine against airport operator AENA over the deployment of facial recognition systems, and a €1.8 million penalty against Informa D&B for processing personal data without a valid legal basis.

The decision also references the European Data Protection Board’s Statement 1/2025, adopted in February 2025, which outlines ten principles for GDPR-compliant age assurance systems. These include requirements that age verification technologies use the least intrusive methods available, avoid enabling tracking or profiling and implement short retention periods.

The ruling highlights ongoing differences in how European regulators interpret biometric data rules. While previous guidance from the United Kingdom’s Information Commissioner’s Office indicated that facial age estimation may fall outside biometric identification rules when used only for categorization, Spain’s AEPD concluded that persistent facial templates used for matching operations constitute biometric processing under Article 9.

The decision underscores increasing scrutiny of age verification technologies across Europe as regulators examine both their effectiveness and the privacy implications of the systems used to implement them.

Aylo Challenges Indiana Lawsuit Over VPN Access and Age Verification

INDIANAPOLIS — A legal fight unfolding in Indiana courts is putting a familiar question under a bright light: how far must an adult website go to keep minors out — and what counts as “reasonable” when technology keeps finding new ways around the rules?

This week, Aylo asked a Marion Superior Court judge to dismiss a lawsuit brought by the state of Indiana, which accuses the company of violating the state’s age verification law by failing to stop users who bypass location restrictions with VPNs and similar tools.

Indiana Attorney General Todd Rokita filed the complaint late last year, arguing that the safeguards used by Pornhub and other Aylo-operated sites do not meet the requirements of the state’s law. According to the complaint, the sites rely primarily on blocking users whose internet addresses show they are located in Indiana — a method the state says can be easily sidestepped.

The complaint states that IP-based restrictions used by the company “are insufficient to comply with Indiana’s Age Verification Law because Indiana residents, including minors, can still easily access the Defendants’ websites with a VPN IP or proxy address from another jurisdiction or through the use of location spoofing software.”
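
Geoblocking of the kind described in the complaint typically maps the connecting IP address to a region through a GeoIP database. The sketch below uses MaxMind’s geoip2 Python library (the database path is illustrative) and shows why a VPN defeats the check, since the server only ever sees the exit node’s address:

```python
# Sketch of IP-based geoblocking using MaxMind's geoip2 library
# (pip install geoip2; the database path below is illustrative).
# The server can only judge the address it sees: a user tunneling
# through a Chicago VPN exit node presents an Illinois IP, so this
# check returns False even if the person is sitting in Indiana.
import geoip2.database
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-City.mmdb")

def is_indiana(client_ip: str) -> bool:
    try:
        response = reader.city(client_ip)
    except geoip2.errors.AddressNotFoundError:
        return False  # unlocatable address; handling this is a policy choice
    region = response.subdivisions.most_specific.iso_code  # e.g. "IN", "IL"
    return response.country.iso_code == "US" and region == "IN"

# A site enforcing the law this way would gate or refuse requests where
# is_indiana(ip) is True, which is the geoblocking Aylo says it applies.
```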

Aylo, in its motion to dismiss, counters that the state is stretching the law far beyond what it actually requires. In a supporting brief filed with the court, the company argues that Indiana’s interpretation of the statute violates several constitutional protections, including the First Amendment, the Due Process Clause and the Commerce Clause.

“Plaintiff takes the position that website operators cannot avoid violating the AVL by blocking Internet traffic from Indiana IP addresses unless those technological restrictions also prevent users from circumventing the geoblocks through VPNs routing traffic through IP addresses associated with other states,” the company’s brief states. “But the AVL contains no such requirement.”

According to the state’s complaint, investigators working for Rokita’s office accessed Pornhub and other Aylo sites from Indiana by routing their internet connection through a VPN server that produced a Chicago-based IP address. Because the sites allowed access under those circumstances, the state argues that they “lacked any reasonable form of age verification.”

Aylo disputes that conclusion. The company says that since the law took effect, it has blocked all internet addresses associated with Indiana from accessing its sites directly. In its filing, Aylo also criticizes the state for deliberately bypassing those protections through what it describes as “technological subterfuge.”

“The statute mandates only ‘reasonable age verification’ — not technologically infallible measures that anticipate and defeat every possible user circumvention tool,” the brief argues.

Aylo also characterizes geoblocking as a widely used solution across the internet. The company’s filing describes the practice as “a widely recognized, industry-standard method of geographic access control used by major streaming and content platforms worldwide.”

From the company’s perspective, Indiana’s lawsuit goes too far. The brief argues that the state’s interpretation of its law would impose an unnecessary burden on protected speech, exceeding the limits set by courts when evaluating age verification laws.

In particular, Aylo points to the standard established in Free Speech Coalition v. Paxton, a case that allowed state age verification laws to stand so long as they meet what courts call “intermediate scrutiny.” Aylo maintains that Indiana’s interpretation of its law fails that test and therefore violates the First Amendment.

The company also raises concerns about due process. According to the brief, Indiana is attempting to apply its law beyond the state’s borders without clear guidance, which Aylo says makes the statute “unconstitutionally vague” under the 14th Amendment’s Due Process Clause.

Another argument centers on the Constitution’s Commerce Clause. Aylo contends that the state’s interpretation effectively forces companies to regulate activity far outside Indiana’s jurisdiction.

“To comply with Plaintiff’s interpretation of the AVL, a publisher, such as Aylo Freesites, would need to impose age verification nationwide, and perhaps worldwide, so as to account for the possibility that an Indiana resident might use a VPN to disguise their location as from another jurisdiction,” the brief states, adding that such an approach “impermissibly extends Indiana law beyond its territorial boundaries.”

The company also challenges whether the court even has jurisdiction in the case. According to the filing, the state’s argument assumes that Indiana residents may still access the sites by circumventing restrictions through VPNs or proxy servers. Aylo asks the court to reject that premise, noting that the company blocked Indiana IP addresses specifically to avoid operating in the state.

Aylo further disputes the state’s claim that it violated Indiana’s Deceptive Consumer Sales Act. The company’s brief says the complaint offers little more than what it calls “a word salad of accusations,” while failing to identify any actual consumer transaction or conduct that would violate the law.

The lawsuit arrives amid a broader debate across the United States about whether age verification rules can realistically keep minors away from adult content — particularly as tools like VPNs make it easier to appear as though a user is browsing from somewhere else.

Lawmakers in several states have begun exploring ways to address that issue. In Utah, for example, legislators recently passed a bill that would hold adult sites responsible if minors circumvent geolocation safeguards. The measure now awaits action from Gov. Spencer Cox.

In Ohio, a proposal known as the “Innocence Act” would require adult sites to use a geofencing system maintained by a licensed location-technology provider that could dynamically monitor a user’s physical location to determine whether they are inside the state and therefore subject to age verification requirements.

At the federal level, the Kids Internet and Digital Safety (KIDS) Act also addresses the issue. The proposal would establish nationwide age verification requirements and direct websites to take “reasonable measures” to address attempts to bypass those safeguards.

For now, the Indiana case remains at an early stage. The state has until April 10 to respond to Aylo’s motion to dismiss — and the court will then decide whether the case moves forward.

Behind the legal language and constitutional arguments lies a question lawmakers across the country are still wrestling with: when technology keeps changing the rules of the game, what does “reasonable” protection actually look like?

The Web Used to be the “Information Superhighway”; it’s Becoming a Low-Speed School Zone by Stan Q. Brick

Back in the late 90s and early aughts, it was commonplace to hear the internet referred to as “The Information Superhighway,” a term that for many of us connoted not just speed of transfer, but the relatively unfettered regulatory environment surrounding what was then an emerging network for communications and commerce.

Fast forward to 2026, and those heady days of rapid growth and regulatory permissiveness are gone. Some might say “good riddance,” but I can’t help but wonder what we’re losing as we grope for ways to make the web ‘safer’ for a population who arguably shouldn’t be using it at all.

During an adult industry trade event over 20 years ago, an attorney friend of mine posed a good question: If the web is the “information superhighway,” who in their right mind would want to build a playground for children in the median of such a thoroughfare?

The answer, then and now, is: “Far too many people.” Crucially, a significant subset of those people are legislators at the national, state and local levels. And these days, every time you turn around, one of them is sponsoring, writing or endorsing a measure like the Kids Internet and Digital Safety (KIDS) Act, or the Innocence Act, or some manner of tax directed specifically at adult websites.

I can’t speak for the populations of other countries, but here in the U.S., what I’ve noticed over the decades is that many people look to the government to handle jobs they probably ought to be doing themselves – or, indeed, jobs that only they can do for themselves.

Look, I get it; it’s hard raising kids. But the difficulty of being a parent is not a new thing – and it certainly isn’t limited to the internet era. When I was a kid, way back in the early 1970s, once I left the immediate vicinity of my parents’ home, they had almost no way of knowing what I was up to – a worrying fact for a lot of parents, especially during times when panics over child abductions and general “stranger danger” were in full swing.

Was it easier for my parents to watch me walk off to catch the school bus back when I couldn’t text to confirm my arrival at school than it is for parents these days to do the same, when their kids have dozens of options for checking in or marking themselves “safe”? I think that’s a tough argument to make.

Yes, largely because of the internet and related technologies, kids today have easier access to things like porn than I did when I was a kid. Guess what? Even in the days when we had to go digging through our fathers’ sock drawers to find porn, we still managed to find it. (Where there’s a hormone-fueled will, there is always a way.)

Of course, the impulse to restrict and regulate access to content deemed to be beyond the years of kids is a lot older than the internet, too. They seem almost quaint now, but broadcast decency standards have been around for decades. Does anyone believe these standards have prevented kids from hearing “profane language” or being exposed to content that is “patently offensive” but does not rise to the point of being “obscene” under federal law? If so, I have a healthy store of bridges on hand to sell to these poor, credulous souls.

Yes, the internet is filled with problematic content. But if your concern about what kids stumble across online is limited to “obscene” or “indecent” content, then you’re either ignorant of what lurks online, or the nature of your concern says more about you than it does the internet.

One thing about the internet has not changed since the days when it was common to call it the Information Superhighway: It remains an enormous network of independently operated computers, on which virtually anyone can publish virtually anything. Mixed into that ‘everything’ is a long list of things that are potentially “harmful to minors.”

Are sites that promote racial hatred less damaging to minors than pornography? How about sites that disseminate misinformation and disinformation? Are false medical claims something we want kids to be perusing with no guidance or guardrails? How about deepfake videos of a war in progress?

Don’t get me wrong: Not for one minute am I suggesting all those things listed above should be subject to governmental blocking, censorship or over-regulation to prevent their spread. What I’m suggesting – and what I’ve been telling my less-wired friends for literal decades – is simply this: The internet isn’t for children, and it simply can’t be made “safe” for them, try as we might.

The difficult fact is, even if every proposed measure to limit kids’ access to “harmful” content currently under consideration is passed and vigorously enforced, the internet will remain as I described it above – “an enormous network of independently operated computers, on which virtually anyone can publish virtually anything.” To make it ‘safe’ will require fundamentally altering the nature of that network and siloing it to a degree where it will no longer be recognizable as the internet.

And guess what? Even if we do that, you’ll still have to parent your kids. You’ll still have to shepherd them through their early years – and you’ll still have to let go of being a shepherd when they become adults. The internet age didn’t change any of that, either.

If you believe the answer will come from the government, if you believe legislation like the KIDS Act or the Innocence Act will make the world (or even just the internet) a substantially safer place, knock yourself out. Write to your representatives and demand that they pass those laws – and then see what happens.

I’ll tell you what isn’t going to happen: Your job as a parent isn’t going to get easier. The sooner you accept that and get on with the difficult business of raising a child, the better.

Utah Adult Website Tax Bill Advances to Governor’s Desk

SALT LAKE CITY — A bill that would tax adult websites and hold them liable if minors circumvent geolocation safeguards has passed the Utah Legislature and now heads to the desk of Gov. Spencer Cox for signature or veto.

In addition to updating investigation and enforcement rules for age verification in Utah, SB 73 would impose an excise tax of 2% on adult sites operating in the state. The tax would apply to transactions involving “access to digital images, digital audio-visual works, digital audio works, digital books, or gaming services,” including streaming or subscription access to those works and services.

Industry attorneys have cited several potential legal hurdles the tax could face if enacted. However, Alabama has already adopted a similar measure imposing a 10% tax on adult content, while lawmakers in Virginia and Pennsylvania have introduced proposals exploring similar policies.

Revenue generated by the proposed Utah tax would be directed to a state account intended to fund “(a) mental health treatment programs for minors affected by material harmful to minors; (b) educational programs for parents, guardians, educators, and minors on the mental health risks associated with material harmful to minors; (c) early prevention and intervention programs for minors at risk of mental health harm from material harmful to minors; and (d) research and public awareness campaigns addressing mental health harm to minors caused by material harmful to minors.”

VPN Requirements

The legislation also includes a provision stating: “An individual is considered to be accessing the website from this state if the individual is actually located in the state, regardless of whether the individual is using a virtual private network, proxy server, or other means to disguise or misrepresent the individual’s geographic location to make it appear that the individual is accessing a website from a location outside this state.”

In December, officials in Indiana filed a lawsuit against Aylo, alleging that the company and its affiliates violated the state’s age-verification law by failing to prevent access by users employing virtual private networks to bypass geolocation controls. The VPN language in SB 73 could similarly affect enforcement of Utah’s age-verification law, which took effect in January 2023.

If Cox signs the bill, it would take effect Oct. 1.

Pornhub Parent Aylo Limits Access in Australia Ahead of Age Verification Deadline

NICOSIA, Cyprus — Pornhub parent company Aylo will restrict access to its free video-sharing platforms in Australia in response to new age verification regulations, the company confirmed Thursday.

Australia’s Designated Internet Services Code comes into force March 9. Finalized last year by Australia’s online safety regulator, eSafety, the rules require sites and platforms with “the sole or predominant purpose” of providing online adult content to implement age-assurance measures before allowing users to access such material.

Failure to comply could result in civil penalties of up to 49.5 million Australian dollars (more than $35 million) per breach.

An Aylo spokesperson said that users in Australia will be presented with a “safe for work” version of the platform when they visit the sites, and provided the following statement:

“In response to Australia’s new age verification law, Aylo’s video-sharing platforms will be restricting access to adult material before the deadline on March 9th. Australia is following a similar approach to the U.K., which all our evidence shows does not effectively protect minors, and instead creates harms relating to data privacy and exposure to illegal content on noncompliant platforms.”

Earlier this year, Aylo began blocking access to its free sites in the United Kingdom, cutting off access as of Feb. 2 for users who had not already set up accounts before that date.

Australian news site Crikey reported that users attempting to access Aylo sites Redtube, YouPorn and Tube8 are encountering a message stating that the platforms are “not currently accepting new account registrations” in the region.

The Aylo statement also cited a recent survey by child abuse prevention charity the Lucy Faithfull Foundation, which found that 45% of U.K. pornography users have visited sites that are not compliant with age verification rules under the Online Safety Act. The organization expressed concern that such sites may be more likely to host harmful or illegal content.

The statement also reiterated that Aylo supports device-based solutions as “the most realistic and effective way to protect minors online.”

“We encourage regulators to require operating systems to play their part in the protection of minors and the reduction of data sharing,” the statement reads. “For example, we have seen that Apple will be deploying an age verification requirement for 18+ app downloads. We suggest Apple enable the screen time feature to limit adult websites by default, and make it such that the very same verification be required to disable it. This can ensure that only age-verified Apple devices can access adult websites.”

While drafting the Designated Internet Services Code, eSafety conducted public consultations but did not adopt recommendations submitted by some industry representatives, including the Free Speech Coalition.

House Panel Approves Online Safety Bill Requiring Age Verification for Adult Sites

WASHINGTON, D.C. — Lawmakers on Capitol Hill took another step Thursday toward creating a nationwide rulebook for how adult websites verify the ages of their users.

The U.S. House of Representatives Committee on Energy and Commerce approved the Kids Internet and Digital Safety (KIDS) Act, a broad online safety package that includes provisions requiring age verification by adult websites at the federal level.

The KIDS Act is an omnibus measure combining several online safety proposals into a single bill. Much of the public discussion around it has centered on the Kids Online Safety Act portion of the legislation, particularly revisions to language that previously would have imposed a “duty of care” standard on social media platforms. But the package also includes an updated version of the Shielding Children’s Retinas from Egregious Exposure on the Net (SCREEN) Act, which would establish nationwide age-verification requirements for adult websites.

Title I of the legislation, titled “Shielding Minors From Obscenity,” requires adult platforms to implement what the bill calls a “technology verification measure.” The bill defines that as technology that “(A) employs a system or process to determine whether it is more likely than not that a user of a covered platform is a minor; and (B) prevents access by minors to any sexual material harmful to minors on a covered platform.”

To comply with the proposal, platforms — or third-party age-verification providers working on their behalf — would need to use such verification technology to assess a user’s age and also take “reasonable measures” to address attempts to bypass those systems. That provision appears aimed at methods commonly used to avoid verification requirements, including virtual private networks, or VPNs.

Failure to comply would be treated as a violation of the Federal Trade Commission Act’s prohibition on unfair or deceptive practices. Under the proposal, civil penalties could reach up to $10,000 for each violation.

Industry Legal Perspectives

About half of U.S. states have already enacted their own age-verification laws. If the KIDS Act ultimately becomes law, its age-verification provisions would override those state-level requirements.

Industry attorney Corey Silverstein said that while he views any form of mandatory age verification as a violation of the First Amendment and a prior restraint on free speech, he believes a single federal standard would be preferable to a patchwork of state laws.

“The various state-level AV laws have created absolute havoc throughout the industry, containing small differences that make compliance a nightmare for service providers,” he said.

Silverstein noted, however, that the bill’s language leaves certain areas of state authority intact. While the KIDS Act would generally preempt state age-verification laws, it specifies that it would not preempt laws related to trespass, contract, tort, product liability, consumer protection, or laws carrying criminal penalties.

“This would still leave the door open to individual states to pursue criminal charges and the filing of private lawsuits,” Silverstein cautioned. “Additionally, an overly aggressive attorney general could still attempt to pursue an adult platform under the guise of ‘general consumer protection,’ although I believe that such an attempt would have considerable obstacles to overcome.”

Industry attorney Lawrence Walters said the age-verification requirement in the KIDS Act appears “more forgiving” than many existing state laws.

“The state laws typically require that the platform verify the user is not a minor,” Walters said. “The KIDS Act requires that the covered platform determine whether it is ‘more likely than not’ that the user is a minor.”

Walters added that the specific verification methods that would qualify under the legislation would likely be clarified later through guidance or rulemaking from the Federal Trade Commission.

He also pointed to a potential timing issue if the bill becomes law.

“These obligations would kick in one year after passage of the KIDS Act,” he said. “Given federal preemption, this could create an environment where the state AV laws are unenforceable for a year until the federal standard becomes effective.”

The legislation will next move to the House Judiciary Committee for consideration. If approved by Congress and signed into law by President Donald Trump, the KIDS Act would take effect one year after enactment.

Ohio Lawmakers Move Forward With Updated Age Verification Bill

COLUMBUS, Ohio — Republican state representatives in the Ohio House of Representatives have advanced a revised age-verification bill intended to close what lawmakers describe as a loophole that allowed some adult websites to avoid implementing required age checks across the state’s digital space.

The age-verification bill, called the Innocence Act, was introduced by a bipartisan group of lawmakers led by Republican state Reps. Steve Demetriou and Josh Williams.

Demetriou previously introduced an earlier version of the Innocence Act that included criminal penalties for noncompliance and a misdemeanor charge for minors who successfully bypassed a content block. The current version of the bill removes those criminal provisions.

The House Technology and Innovation Committee voted unanimously to advance the measure, sending the Innocence Act (House Bill 84) to the House floor for debate and a vote.

House Bill 84 is expected to advance in the Senate without significant opposition and could be signed into law by Gov. Mike DeWine.

Demetriou and Williams said they worked with the office of Ohio Attorney General Dave Yost in drafting the legislation. The attorney general’s office is responsible for enforcing age-verification requirements against adult website owners and platforms that host a substantial amount of adult content.

Lawmakers have described the measure as a “redo” after Aylo, the parent company of Pornhub, interpreted the state’s original age-verification law as exempting “interactive computer services,” as defined by Section 230 of the federal Communications Decency Act of 1996, from Ohio’s civil enforcement authority related to age-gating.

Section 230 is a federal statute that provides liability protection for online platforms that host third-party content, shielding them from legal responsibility for material posted by users or other publishers.

According to the bill’s sponsors, House Bill 84 addresses that interpretation by clarifying enforcement authority. The updated measure establishes civil penalties instead of criminal sanctions, including fines of up to $100,000 per day for noncompliance.

The bill is currently before the House floor.

Sex Work Decriminalization Debated in Alaska and Colorado

LOS ANGELES — In two very different corners of the country, the same question is starting to echo through courtrooms and statehouses: what should the law actually do about consensual sex work?

Right now, that conversation is unfolding in Alaska and Colorado.

The Alaska state courts are considering whether the state’s prostitution laws conflict with the constitution’s protections around privacy — specifically the right to make private decisions about consensual sexual relationships, even when money or some other form of compensation is involved. At the same time, lawmakers in Colorado have introduced legislation aimed at decriminalizing sex work and escorting.

The organization Community United for Safety and Protection (CUSP) has filed a complaint in Alaska Superior Court seeking to end the criminalization of sex work performed for a fee. The group argues that existing prostitution statutes unfairly target both sex workers and survivors of trafficking.

Amber Batts, a member of CUSP, told local news outlets that the lawsuit is intended to remove criminal penalties tied to sex work. According to Batts, such a ruling “would take the criminalization out” of state law.

According to the local NBC affiliate KTUU, the lawsuit argues that Alaska’s prostitution statutes violate the state’s constitutional “right to privacy” because they intrude on “private decisions regarding bodily autonomy and consensual sexual relationships without a compelling government interest.”

Batts also told KTUU that her focus is on decriminalization rather than legalization, drawing a distinction from regulated systems such as those in parts of Nevada.

Batts explained, “You have to work for someone, you have to be licensed, you know, there’s different aspects to legalization, whereas decriminalization would just take the whole aspect of the prostitution code out.” Litigation in the case remains ongoing.

Meanwhile, in Colorado, lawmakers have introduced legislation that would decriminalize sex work. Senate Bill (SB) 26-097 was introduced by Democratic state Sens. Nick Hinrichsen and Lisa Cutter, along with Democratic state Reps. Lorena Garcia and Rebekah Stewart. The measure is currently under review by the Senate Judiciary Committee.

“The bill requires the statewide decriminalization of commercial sexual activity among consenting adults,” reads a summary of the proposed legislation.

“It declares that decriminalizing commercial sexual activity among consenting adults is a matter of statewide concern and expressly preempts statutory or home rule city, town, city, and county, or county ordinances, resolutions, regulations, or codes criminalizing commercial sexual activity,” the summary adds.

Supporters of SB 97 are seeking to repeal criminal offenses tied to prostitution, soliciting prostitution, keeping a place of prostitution, and patronizing a prostitute. The legislation would keep existing criminal penalties related to pandering but would also update the legal language used in the statutes, replacing the term “prostitution” with “commercial sexual activity.”

If adopted, the proposal would move Colorado toward a framework that removes criminal penalties for both sex workers and their clients — a model that differs from systems where only the buyer is criminalized. The measure remains under consideration by state lawmakers.

House Committee to Consider Online Safety Bill With Federal Age-Verification Requirement

WASHINGTON — The U.S. House of Representatives Committee on Energy and Commerce is scheduled to meet Thursday to review and potentially amend the Kids Internet and Digital Safety (KIDS) Act, a measure that includes provisions establishing federal age-verification requirements for adult websites.

The KIDS Act is an omnibus bill that combines several online safety proposals. Among the measures included in the legislation is the Shielding Children’s Retinas from Egregious Exposure on the Net (SCREEN) Act, a federal age-verification bill introduced last year by Sen. Mike Lee of Utah and Rep. Mary Miller of Illinois.

An updated version of the SCREEN Act, amended by Congressman Craig Goldman of Texas, forms the basis of Title I of the KIDS Act. That section of the bill is titled “Shielding Minors From Obscenity.”

The section requires adult websites to implement what the bill describes as a “technology verification measure.” The legislation defines that as “technology that (A) employs a system or process to determine whether it is more likely than not that a user of a covered platform is a minor; and (B) prevents access by minors to any sexual material harmful to minors on a covered platform.”

To comply with the proposal, websites or their third-party age-verification providers would be required to use such a “technology verification measure” to verify a user’s age and take “reasonable measures” to prevent circumvention of those systems. The provision appears intended to address the use of virtual private networks (VPNs) and other tools that can bypass age-verification requirements.

About half of U.S. states currently have age-verification laws in place. If the KIDS Act becomes law, its age-verification provisions would supersede those state laws. The bill states: “No State, or political subdivision of a State, may prescribe, maintain, enforce, or continue in effect any law, rule, regulation, requirement, standard, or other provision” that requires age verification by adult sites.

If the bill is passed by Congress and signed into law by President Trump, it would take effect one year after enactment.

Failure to comply with the proposed law would be treated as a violation of the Federal Trade Commission Act’s prohibition against unfair or deceptive acts or practices. Violators could face civil penalties of up to $10,000 for each violation.

If the Committee on Energy and Commerce approves the KIDS Act following Thursday’s markup session, the bill could then move to the full House of Representatives for consideration.

U.K. Parliament Moves Forward With ‘Step’ Porn Ban, Consent Withdrawal Rules

LONDON — The House of Lords, the U.K.’s upper chamber of Parliament, on Monday approved a series of amendments to the pending Crime and Policing Bill that would invalidate certain talent contracts and ban several categories of adult content, including so-called “step” pornography and scenes in which adult performers appear to portray minors.

The proposals mark another step in the British government’s ongoing effort to tighten regulation of online adult content — an effort that has gathered momentum over the past year as lawmakers debate how far those restrictions should go.

‘Step’ Content Ban

The House of Lords approved an amendment classifying pornographic images depicting sex between relatives as a priority offense under the Online Safety Act — a category that currently includes material such as child sexual abuse imagery and terrorism content.

In December, the government rejected amendments that would have criminalized depictions of sexual activity between stepparents and stepsiblings. At the time, officials maintained that the proposed language should not extend to step-family scenarios.

That position changed during Monday’s debate.

Baroness Gabrielle Bertin, a Conservative member of the House of Lords who previously led an independent government-commissioned review of pornography regulation, introduced language expanding the prohibition to include “step” content. The chamber approved the change.

Bertin told the House of Lords, “Depictions of incest being banned is great, but it is just token if you do not ban step incest as it will all be driven into the step incest category, which is just as damaging.”

If the Crime and Policing Bill becomes law with the amendment intact, possession of such material could carry a penalty of up to two years in prison, a fine, or both. Publishing it could bring a sentence of up to five years in prison, a fine, or both.

Withdrawal of Consent

Lawmakers also approved an amendment granting individuals appearing in adult content the ability to withdraw consent for publication at any time.

Under the proposal, it would become legally “irrelevant” whether a performer had previously agreed to the publication of a video or image. If consent is later withdrawn, platforms hosting the material would be required to remove it within 24 hours.

Before the vote, Parliamentary Under-Secretary of State Baroness Alison Levitt urged caution about the proposal’s practical consequences.

“The part of the amendment relating to the withdrawal of consent and its application to professional entertainment contracts has a number of practical implications,” Levitt cautioned. “Where content is produced legally, as with the wider film industry, the rules and regulations governing its use are usually a commercial matter to be agreed between the performer and the production company, taking into account the intellectual property framework.”

If adopted into law, violations could result in penalties of up to two years’ imprisonment, a fine, or both. Platforms found in breach of the rules could face fines of up to £18 million or 10% of their global revenue.

Adult Performers Portraying Minors

Another amendment approved by the House of Lords would outlaw content that “mimics” child sexual abuse by featuring adult performers who appear to be minors or are implied to be minors.

The provision would allow authorities to interpret factors such as costumes, dialogue or settings as indicators that a performer is portraying a child, even if no explicit statement about the character’s age is made.

The government opposed the amendment during debate.

Levitt warned lawmakers that expanding the scope of the law could complicate enforcement efforts targeting actual child sexual abuse material.

“It is important to remember that the purpose of this suite of legislation is to criminalize indecent images of actual children and to help identify and swiftly safeguard children who are subject to sexual abuse,” Levitt said. “Expanding the scope of the Act to include adults who can and have consented to make pornography risks diverting resources for the police to try to distinguish children from adults who are pretending to be children. It risks delaying necessary safeguarding activity and leaving real children at continued risk of harm.”

If enacted with the amendment intact, content interpreted as featuring adults portraying minors would also become a priority offense under the Online Safety Act.

Pornography Review Head Denounces Industry

Momentum for stricter online content regulation intensified following the release, in February 2025, of a government-commissioned “pornography review” launched under the administration of former Prime Minister Rishi Sunak.

Among the review’s recommendations was a ban on adult content deemed “degrading, violent and misogynistic.” That recommendation later helped shape provisions in the Crime and Policing Bill addressing depictions of nonfatal strangulation, commonly referred to as “choking.”

Bertin, who led the review, delivered a sharp critique of the adult industry during Monday’s debate.

“It is a sector that has been driven to abusive extremes by powerful, profit-driven algorithms, too often monetizing sexual violence and degradation,” Bertin told the House of Lords. “Exploitation and trafficking are rife. Sexual abuse material remains far too easy to find on these sites, and many survivors tell us that what is filmed as content is in reality recorded abuse. This cannot continue.”

Bertin called for what she described as stronger intervention across the industry’s business structure.

“Porn is ultimately about the money,” she said. “We need far tighter regulation and law that ends the grey area and replaces the passive, light-touch self-regulation with far more proactive scrutiny.”

With the amendments approved by the House of Lords, the Crime and Policing Bill will now return to the House of Commons, where lawmakers will review the newly added provisions before the legislation can move forward.
