Three Tourists Detained in Bali Over Alleged Filming of Porn

BALI, Indonesia — Three tourists have been arrested in Bali after local police accused them of filming pornographic content on the island.

A 23-year-old French woman, identified as Melisa Mireille Jeanine, was arrested on March 13 alongside a 24-year-old Italian man, reported as Nadir ben Said, as the pair attempted to leave Indonesia through Denpasar airport en route to Thailand.

Another man, a 26-year-old French national known only as ERB, was arrested in Canggu, Bali, on Monday. Police described him as the woman’s “manager.”

Police chief Joseph Edward Purba of Bali’s Badung district said the trio were being held on suspicion of creating and distributing pornographic content for profit.

“Their motivation to do the (alleged) crime is seeking profit from pornographic video content,” Purba told a press conference on Tuesday.

“All the three suspects are now facing Indonesian electronic information and transaction laws for making and spreading the content.”

Purba said police seized three mobile phones, a camera, a MacBook laptop and a motorcycle taxi vest from the suspects.

Pornographic content is illegal in Indonesia, and those convicted can face up to 10 years in prison on pornography charges, along with an additional six years for online distribution.

Although Bali is predominantly Hindu, Indonesia is a Muslim-majority country with strict laws regarding pornography.

On January 2, Indonesia implemented a new Criminal Code that introduced and revised laws criminalizing premarital sex, cohabitation and public drunkenness.

Under the code, adultery, premarital sex and cohabitation can carry penalties ranging from six months to one year in prison.

Legal experts say these provisions require a formal complaint from certain parties before authorities can take action.

“These alleged crimes cannot be processed by the police without a complaint which can only be filed by the legal husband or wife, parents or children of the perpetrator,” said Retno Murni, a legal expert and founder of the People’s Law Centre.

“Therefore, foreign tourists cannot be arrested, raided, or prosecuted simply for staying or residing with a partner, unless there is a valid complaint from these parties.”

Murni added that tourists who follow local laws and customs have no reason for concern.

The arrests follow a separate case in December 2025 involving British adult content creator Bonnie Blue, who was detained and later deported from Bali.

The 26-year-old was subsequently barred from entering Indonesia for at least 10 years, according to immigration authorities.

During a press conference outside Bali’s Ngurah Rai Immigration Office, Immigration chief Heru Winarko said the British national and her team had violated the terms of their visas.

“They have misused the visa they have to make content in Bali,” Winarko said.

“They will be black-listed from entering Indonesia for at least 10 years (that) could be extended.”

The performer, whose real name is Tia Billinger, was arrested along with 17 male tourists during a raid at a studio in Badung, Bali.

Fourteen of the men, all Australian nationals, were released without charge while authorities continued their investigation into Billinger and three others.

After two days of interviews, Badung Police said they had not identified pornographic elements during the raid, and Billinger was released without charge.

Officials said those present at the studio told investigators they were participating in the production of reality show content.

Beware Opportunists in Superhero Capes by Stan Q. Brick

Some folks who favor suppression of sexually explicit materials are more forthright about what gives life to their censorious zeal than others. Say what you will about the old “Morality in Media” brand, back when the organization went by that moniker, everybody knew where they were coming from just by reading the sign on their door.

Perhaps because the folks at Morality in Media perceived they were limiting their demographic reach with the judgy-sounding, clunky old name, they opted for a rebrand back in 2015, becoming the National Center on Sexual Exploitation. Suddenly, with the flip of a logo, they sounded less like angry Bible thumpers out to cancel your favorite sitcom and more like a serious nongovernmental agency out to prevent real harm.

You know what didn’t change when MIM became NCOSE? The president of the organization. Patrick A. Trueman ran the joint on both sides of the rebrand, from 2010 to 2023. Before that, Trueman was a prosecutor at the U.S. Department of Justice during the administration of George H.W. Bush, which also happens to be the last time federal prosecutors aggressively enforced the nation’s obscenity laws. Trueman remains the President Emeritus of NCOSE to this day.

Just as I doubt Trueman lost his zest for cleaning up American media when his organization rebranded, I don’t buy that a lot of the organizations most strenuously supporting various age verification mandates at the state and federal level are really in it to protect minors from harmful materials online – unless one happens to define “harmful” the same way they do, of course.

Referencing remarks recently made by Rep. Leigh Finke, a transgender member of the Minnesota Legislature who has criticized elements of her state’s proposed age verification law, Rindala Alajaji, Associate Director of State Affairs at the Electronic Frontier Foundation (EFF), and Molly Buckley, one of the organization’s legislative analysts, call attention not only to the impact of the Supreme Court’s ruling in Free Speech Coalition v. Paxton, but also to the nature of the organizations supporting Texas in the case.

“The Paxton case, and the coalition behind it, illustrates exactly how these laws can be weaponized,” Alajaji and Buckley write. “They weren’t there just to stand up for young people’s privacy online—they were there to argue that the state has a compelling interest in shielding minors from material that, in practice, often includes LGBTQ content. Ultimately, these groups would like to age-gate not just porn sites, but also any content that might discuss sex, sexuality, gender, reproductive health, abortion, and more.”

Alajaji and Buckley add that the “coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control.”

“After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates,” the authors note. “Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.”

The thing about appealing to people’s desire to protect children is that it works – and for a good reason. It’s a good thing to want to protect your kids. God knows they need protection, including from themselves. Parents should do all the reasonable, rational, normal things they can do to protect their kids.

But if you’re denying a gay or trans kid access to information from people who have been through the same things that kid is going through and can offer guidance, support and maybe a little solace for the kid, you’re not protecting that kid; you’re stifling, aggravating and alienating that kid. Shit, you might be killing that kid – even if you earnestly believe you’re helping.

I can also understand why the idea of age-gating the internet might sound good to people, especially frightened people raising kids who are online much more than their parents ever were. But fear is a state of mind that can make people suggestible – and that’s when opportunists don their superhero capes and make a dramatic entrance, promising to make the world (wide web) a safer, better place for you and your kids—without really mentioning the part about how they’re actually in this to keep The Gays from enacting their Sinister Agenda, or whatever it is that animates some of these zealots.

I guess what I’m saying is this: You can’t save your kid from drowning by throwing someone else’s kid into the deep end of the pool with lead boots on. And some of the people promising to provide your kid a life jacket are heavily invested in lead.

Brazil Issues Initial Framework for New Age-Verification Rules

BRASÍLIA, Brazil — President Luiz Inácio Lula da Silva on Wednesday signed a decree setting out how Brazil will move forward with new rules requiring adult websites to verify the ages of users accessing content from within the country.

The decree follows the Digital Statute for Children and Adolescents (Digital ECA), which took effect Tuesday. The law is aimed at strengthening protections for minors online and requires adult content providers to implement age verification measures that go beyond simple self-declaration, regardless of where those platforms operate.

The scope extends beyond traditional websites. Marketplaces and delivery applications offering adult or erotic products and services must also verify the age of customers and block minors from accessing those products.

Enforcement authority rests with the National Data Protection Authority (ANPD), which was recently elevated to the status of a regulatory agency and contributed to drafting the decree.

The ANPD has also released a question-and-answer document outlining how the law is expected to function in practice. According to that guidance, platforms must verify a user’s age before granting access to adult material. If explicit content is visible prior to verification, it must be hidden or blurred by default. The rules also require platforms to prevent minors from creating or maintaining accounts.

Penalties for noncompliance begin with a warning and a 30-day window to correct violations. After that, regulators may impose fines of up to 10% of a company’s revenue in Brazil or up to 1,000 Brazilian reais (approximately $195) per registered user, capped at a total of 50 million reais (approximately $9.73 million).
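
The interaction of those two bases is easy to misread, so here is a minimal sketch of the ceiling described above. The function name, the choice of taking the larger basis as the upper bound, and the example figures are illustrative assumptions, not language from the decree itself.

```python
# Illustrative sketch of the penalty ceiling described above.
# Assumption: the regulator may apply either basis, so the larger of the
# two is treated here as the upper bound, subject to the R$50M global cap.

PER_USER_BRL = 1_000            # up to R$1,000 per registered user
GLOBAL_CAP_BRL = 50_000_000     # total fine capped at R$50 million
REVENUE_SHARE = 0.10            # up to 10% of revenue in Brazil

def max_fine_brl(brazil_revenue_brl: float, registered_users: int) -> float:
    """Largest fine the decree's two bases appear to allow, after the cap."""
    by_revenue = REVENUE_SHARE * brazil_revenue_brl
    by_users = PER_USER_BRL * registered_users
    return min(max(by_revenue, by_users), GLOBAL_CAP_BRL)

# Example: R$20M of Brazilian revenue and 80,000 registered users.
# The per-user basis gives R$80M, which the global cap trims to R$50M.
print(max_fine_brl(20_000_000, 80_000))  # -> 50000000.0
```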

The ANPD has not yet issued a formal compliance timeline or detailed technical standards for age verification systems. The agency indicated in its guidance that additional rules and best-practice recommendations will be released at a later stage.

Industry response has already begun to take shape. The Brazilian Association of Adult Entertainment Industry Professionals (ABIPEA), launched in September, has offered to provide technical and institutional guidance to companies operating both inside and outside Brazil as they adapt to the new framework.

ABIPEA is also preparing to host a dedicated space at the Intimi Expo trade show, scheduled for March 20–22 in São Paulo, focused on “educating and guiding the adult industry regarding the Digital Statute for Children and Adolescents, its practical implications and compliance strategies.”

For now, the framework is in place. What comes next will depend on how it’s applied — and how the industry adjusts once the rules move from paper into practice.

Senate Panel Examines Potential Reforms to Section 230

WASHINGTON — The U.S. Senate Committee on Commerce, Science, and Transportation held a hearing Wednesday on potential changes to Section 230 of the Communications Decency Act, which protects online platforms — including adult websites — from liability for user-generated content.

Three bills proposing a full repeal of Section 230 are currently pending in Congress. However, those measures were not addressed during the hearing. Instead, lawmakers focused on possible reforms to the law in a session titled “Liability or Deniability? Platform Power as Section 230 Turns 30.”

The push to revisit Section 230 stems from two primary concerns.

First, lawmakers from both parties have criticized major technology companies for allegedly profiting from harmful or illegal content while avoiding responsibility. Some argue that increased liability would encourage stronger moderation. During the hearing, Sen. Marsha Blackburn said, “Big Tech has proven they are incapable of regulating or policing themselves. They will not do it.”

Second, some conservative lawmakers argue that platforms use Section 230 protections to justify restricting certain viewpoints, particularly conservative speech. Sen. Eric Schmitt cited efforts by the Biden administration to limit the reach of COVID-19 misinformation and 2020 election claims, describing those actions as violations of the First Amendment.

Sen. Ted Cruz, who chairs the committee, referenced both issues, stating that Congress should act “to prevent social media from harming Americans, especially children, while not incentivizing Big Tech censorship.”

Cruz did not advocate for a full repeal of Section 230.

“I’m concerned that a full repeal or sunset would lead platforms to engage in worse behavior — to engage in more censorship to protect themselves from litigation,” Cruz said. “But we should consider whether reform of Section 230 is needed.”

Sen. Brian Schatz, the committee’s ranking Democrat present, also supported revisiting the law.

“We can work together and fix the law,” Schatz said. “This idea that we can’t touch it, otherwise internet freedom incinerates, is preposterous.”

Possible Impact on Adult

Current proposals to reform Section 230 are not specifically directed at adult platforms, but they could have implications for the industry.

Much of the hearing focused on issues involving minors, including cases where individuals encountered harmful content or online predators. Lawmakers also discussed whether algorithmic systems and AI-generated content should be covered under Section 230 protections.

Industry attorneys and advocates have raised concerns that changes to the law could lead to targeted exemptions, similar to those created under FOSTA/SESTA, which removed liability protections for platforms found to “unlawfully promote and facilitate” prostitution or sex trafficking.

Such exemptions could expose adult platforms to increased civil litigation related to user-generated content.

While many cases could ultimately be dismissed on First Amendment grounds, Section 230 currently allows defendants to avoid prolonged litigation. As Techdirt’s Mike Masnick has written, the law “provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.”

Without those protections, larger companies may still be able to manage legal costs, but smaller platforms could face greater challenges.

Testifying at the hearing, Stanford Law School expert Daphne Keller said eliminating Section 230 would create legal and financial burdens that disproportionately affect smaller companies.

A world without the law, she said, “would impose legal uncertainty and expense that today’s incumbent giants could survive but their smaller rivals could not.”

Keller also noted that under other regulatory systems, platforms often receive high volumes of complaints seeking removal of lawful content.

“We have a lot of data to predict what happens when platforms are held liable for the speech of their users,” Keller said. “Platforms receive huge numbers of false allegations under laws like the DMCA here or the Digital Services Act in Europe, from people demanding the removal of perfectly legal speech. Governments do this, companies do this against their competitors — and platforms have strong incentives to simply comply.”

During the hearing, Sen. Tammy Baldwin warned against indirect government pressure on platforms.

She cautioned against “informal, often coercive efforts by government officials to pressure private companies into moderating or removing content that they cannot legally censor directly.”

Keller, in written testimony, cited actions by Federal Communications Commission chair Brendan Carr, including pressure directed at ABC that temporarily affected comedian Jimmy Kimmel’s program.

Carr also contributed to Project 2025’s “Mandate for Leadership,” which calls for changes to Section 230 and argues that pornography should not be protected under the First Amendment.

“Pornography should be outlawed,” the document states. “The people who produce and distribute it should be imprisoned.”

The document has been cited as a policy framework for the current administration. Other officials associated with the administration have also expressed support for restrictions on adult content. Trump advisor Russell Vought has discussed limiting pornography through indirect regulatory approaches, while Vice President Vance has called for a ban.

Australia’s Porn Age-Verification Law Sparks Debate Over Safety and Shift to “Darker Corners”

Something changed overnight — not just on adult sites, but in how people moved through the internet itself.

When major porn platforms began blocking access for Australians, it didn’t stop there. X also started requiring age checks before users could view adult content. And for some, that meant something far more intrusive: being asked to submit a video selfie just to look at a single post.

“Almost every post on my alt account has a content warning and asks me [for a] selfie for age verification,” one Australian porn consumer, Joe*, said. “It’s maddening.”

Others described pulling back entirely, choosing to walk away rather than comply.

“I’m honestly no longer engaging with any of the sites and platforms I used to use because not only is the verification process really invasive, but some of them even give you the option to sign in with Google … and that’s the last platform I’d trust with any sensitive data,” Jethro said.

“The choices are: link your perversions to your government ID, or submit your face into the AI slop machine,” Chris* said.

It’s still early days. Aside from several Aylo-owned sites like RedTube blocking Australians outright, and Pornhub restricting logged-out users to safe-for-work content, most of the top free adult platforms have yet to fully implement age verification.

Data from the SEO firm Semrush suggested that only one site in the country’s top 20 — Thisvid — had complied so far. But with potential fines reaching $49.5 million for violations, more platforms are expected to follow. Users have already begun to react.

Search interest in porn-related terms has climbed to its highest level since pandemic lockdowns ended in 2022. At the same time, searches for virtual private networks — tools that allow users to appear as though they’re browsing from outside Australia — have surged to levels not seen since 2015, when website blocking laws targeting piracy were introduced.

Sex workers say none of this is surprising. For years, they warned that regulations developed between the eSafety commissioner and industry stakeholders could drive users away from regulated spaces and into less controlled environments.

“We’ve already warned that these laws will funnel traffic away from platforms that do have moderation safeguards in place and towards sites that profit from non-consensual and stolen porn, including the unpaid work of sex workers,” said Mish Pony, chief executive of Scarlet Alliance.

“So driving people off mainstream services, such as Pornhub, does not stop porn consumption, it just pushes it into darker corners of the internet. It makes it harder to address real harms.”

Andy Conboi, an OnlyFans creator based in Sydney, said he has already seen the effects firsthand. Engagement on his posts has dropped.

“People don’t really want to send a photo of themselves or their licence or whatever to these platforms, particularly Twitter [X],” he said.

“In the group chats I do have with creators, people are just frustrated and annoyed, their engagement is down [and] it’s much more difficult to put stuff out there and be seen a lot of the time.”

Some creators, he added, are pivoting. They’re shifting toward safe-for-work content on platforms like Instagram and TikTok just to maintain visibility — a move he described as ironic, given the presence of underage users on those services.

For longtime opponents of pornography, however, the changes mark a milestone.

After earlier attempts at internet filtering fell short under previous governments, and opt-out filtering proposals were abandoned before the 2013 election, regulators have gradually expanded their authority over online content. The eSafety commissioner’s role has grown significantly over the past decade.

Advocacy groups that have campaigned for tighter controls welcomed the developments.

“This day was hard fought for,” said Melinda Tankard Reist, movement director for Collective Shout. “Collective Shout and our partners and allies worked hard to bring it to fruition.”

“It is a relief to know proof-of-age protections are now in place as one obstacle in the way of young people being exposed to rape porn, torture porn, incest porn and extreme violence and degradation of women.”

The Australian Christian Lobby also supported the outcome.

“The fact that P*rnhub have ceased operating in Australia is already proof of its effectiveness,” said chief executive Michelle Pearse.

Questions remain about whether those outcomes will hold — or simply shift behavior elsewhere.

Researchers studying similar laws in parts of the United States found that when major sites restricted access, users didn’t necessarily stop searching. They redirected.

“We saw very large substitution effects for search traffic for XVideos, which is the second largest porn website in the states,” said David Lang, a Stanford University researcher and lead author of the report.

“It’s a sufficiently large change that the No 2 site is now the No 1 site in states that passed those laws.”

Tracking VPN use proved more difficult, researchers noted, since users often disappear from local data once they connect through external servers.

For digital rights advocates, the concern isn’t just where people go — it’s what they leave behind.

Tom Sulston, head of policy at Digital Rights Watch, warned that age-verification systems could create centralized pools of highly sensitive personal data.

“It would be absolutely trivial for a criminal to set up porn sites as honeytraps to capture Australians’ identities and sexual interests; and then use that material for blackmail, similar to existing sextortion schemes,” Sulston said.

“Foreign intelligence services looking to trap Australian targets could easily do the same. The age-verification regime puts Australians at greater risk of harm, not less.”

And that’s the uneasy part of it all. The behavior doesn’t disappear — it just moves.

Starmer Government Pushes Back on MPs’ Bid to Ban Taboo Porn in U.K.

LONDON — U.K. Prime Minister Keir Starmer is facing the prospect of dissent within his own Labour Party if the government does not support a proposed ban on certain categories of pornography included in the Crime and Policing Bill.

The pressure follows a narrow vote in the House of Lords earlier this month, where peers approved an amendment by 144 to 143 to prohibit simulated incest pornography, step-relationship content and depictions of acts such as consensual strangulation.

Several Labour backbenchers, many of them women, have raised concerns about the availability of so-called “step-incest” material online and its potential impact on victims of child sexual abuse. Some lawmakers argue that such content could contribute to harm, according to reports from U.K. media outlets.

One unnamed Labour member of Parliament described “step-incest” pornography as a “gateway drug” to illegal material. Lawmakers from Labour have also worked with Conservative MPs on efforts to criminalize depictions of step-family sexual relationships, even when they are fictional.

Data from Pornhub’s 2025 Year in Review shows that “step mom” remains among the most frequently searched terms on the platform.

If enacted, the law would make a range of currently legal pornography depicting step-relationships subject to potential prosecution by the Crown Prosecution Service, as well as enforcement by agencies including the Metropolitan Police Service and regional police forces.

Baroness Gabby Bertin, who led an independent parliamentary review on the harms of pornography, urged peers to support restrictions on what she described as taboo content, including material portraying “intercourse with a step-child.”

Bertin said online pornography often includes scenes “with settings in children’s bedrooms, with actors in children’s clothes, braces, toys, pigtails, and other markers of childhood. Millions of videos and images are then tagged as ‘little,’ ‘tiny,’ ‘age gap,’ ‘mommy,’ ‘daddy,’ or ‘teen.’”

The government has also drafted provisions to ban the possession or publication of pornography depicting sex between relatives.

The inclusion of step-relationship content in the proposed restrictions prompted debate within the government. Justice minister Baroness Alison Levitt said that while such material is controversial, these relationships are “not illegal in real life.”

Levitt also raised concerns about a separate amendment to the bill involving consent withdrawal. The measure would allow individuals appearing in adult content to withdraw consent at any time, with producers facing potential imprisonment and fines if they fail to comply.

Under the proposal, initial consent to publication would no longer be considered sufficient. If consent is withdrawn, platforms and studios would be required to remove the material within 24 hours of notification.

Ofcom Calls on Major Tech Platforms to Implement Age-Verification Requirements

LONDON — A quiet warning landed this week on the desks of some of the biggest technology companies in the world. It didn’t come with fireworks or spectacle. Just a deadline — and a clear message.

The United Kingdom’s digital regulator, Ofcom, told major technology firms Thursday that they should begin putting real age-verification systems in place or face potential penalties under the country’s Online Safety Act.

The move arrives as governments around the world wrestle with the same uneasy question: how do you keep children safe online without reshaping the internet itself? The debate has spread well beyond Britain, with similar age-verification efforts underway across Western Europe, Australia and parts of the United States.

According to the regulator, letters were sent to government relations and compliance teams at the parent companies behind platforms including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Those companies have until April 30 to report back on what progress they’ve made toward deploying stronger age-verification tools.

Regulators say they will review those responses and later publish an assessment outlining how well the companies are complying.

Ofcom Chief Executive Melanie Dawes said the platforms’ public commitments to child safety have not always translated into meaningful protections.

“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” Dawes said. “There is a gap between what tech companies promise in private and what they’re doing publicly to keep children safe on their platforms.”

Dawes added, “Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”

Regulators outlined four specific expectations for the companies.

The first calls for “effective minimum-age policies.” The second requires “failsafe grooming protections.” The third focuses on creating “safer feeds for children.” And the fourth calls for “an end to product testing on children.”

Together, the measures are intended to help meet the Online Safety Act’s broader requirement that platforms adopt “age-appropriate design” and prevent minors from accessing services that are not meant for them.

Chris Sherwood, head of the child-protection charity National Society for the Prevention of Cruelty to Children, said stronger oversight has been overdue.

“For too long, social media giants have looked the other way while harmful and addictive content floods children’s feeds, undermining their safety and wellbeing,” Sherwood said.

“That’s why Ofcom’s demand for far greater transparency about the risks children face online, and how tech companies plan to protect them, is absolutely essential,” he added. “We’ve long called for minimum age limits to be properly enforced on social media, so it’s encouraging to see Ofcom confront this head-on.”

The regulator’s push also coincides with a separate warning from the U.K.’s data-privacy authority, the Information Commissioner’s Office, which sent a letter to “social media and video sharing platforms operating in the U.K.”

The letter stated, “We understand that most services are relying on self-declaration to identify whether children are 13 or over, with a limited number also utilising some form of profiling to enforce minimum age requirements.”

“As currently deployed, we don’t think that these tools are effective and therefore they should not continue to be relied upon to prevent access to under-13s.”

The letter was signed by Paul Arnold, whose agency oversees information rights, transparency in public bodies and personal data protections across the United Kingdom.

The regulator’s latest demands arrive just days after lawmakers in the U.K. Parliament declined to adopt an Australia-style proposal that would have barred all social media use for anyone under the age of 16.

FTC Requests Public Comment on Proposed ‘Click to Cancel’ Regulations

WASHINGTON — The Federal Trade Commission this week called for public comment on whether it should revise its Negative Option Rule to address deceptive or unfair practices.

The move is the latest step in the agency’s renewed rulemaking effort on negative option plans, after a federal court last year struck down a “click to cancel” rule intended to make it easier for consumers to end online subscriptions. Opponents of that rule argued the FTC exceeded its authority and failed to follow required procedures by not issuing a preliminary regulatory analysis.

In January, the FTC submitted a draft Advance Notice of Proposed Rulemaking, or ANPRM, on its Negative Option Rule to the Office of Information and Regulatory Affairs for review.

This week’s announcement seeks input on that ANPRM, stating, “The ANPRM asks the public: to weigh in on the current Rule; whether proposed amendments are needed; and about potential regulatory alternatives to address deceptive or unfair negative option practices.”

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said the agency believes new rulemaking may be warranted.

“Negative option subscriptions can offer procompetitive features to consumers and the marketplace more broadly by lowering transaction costs and ensuring consumers receive uninterrupted service,” Mufarrige said. “The Commission’s enforcement track record suggests, however, that negative option subscriptions continue to be plagued by difficult cancellation processes, unlawful retention tactics, and a suite of other impediments that prevent consumers from easily switching or ending subscription services. Neither consumers nor competition are protected when consumers are enrolled in programs that they either do not want or cannot cancel.”

The Negative Option Rule was first adopted in the 1970s to protect consumers from being automatically enrolled in subscription plans without their consent. As amended in 2024, the rule would have applied to nearly all negative option programs, including automatic renewal and free-to-pay offers. If the update had remained in effect, website operators likely would have been required to make substantial changes to their sign-up and cancellation practices.

The restarted rulemaking process could result in the FTC proposing the same changes again or advancing a similar set of revisions.

With the ANPRM now published in the Federal Register, the public comment period will remain open through April 13.

Utah’s Proposed Porn Tax Raises Major Civil Liberties Concerns

SALT LAKE CITY — Utah lawmakers are again stepping into the middle of the long-running debate over how far governments should go when regulating online adult content. This time, the focus is a proposed tax on pornography purchased through digital platforms.

Senate Bill 73, introduced earlier this year by Republican lawmakers in the Utah Legislature, would impose what the bill calls a “material harmful to minors” tax on revenue generated from the sale of online pornography. The rate is currently set at 2 percent, after originally being proposed at 7 percent.

After several amendments, the measure passed the state Senate with broad support and now awaits further consideration in the House of Representatives. If approved there, it would head to the desk of Gov. Spencer Cox, who has publicly supported policies aimed at restricting access to online pornography.

The legislation was introduced by Republican state Sen. Calvin R. Musselman and state Rep. Steve Eliason, both of whom have supported previous efforts in Utah to regulate online adult content.

Under the proposal, revenue generated by the tax would be directed toward several state programs. The bill specifies that funds could be used to support enforcement efforts tied to Utah’s existing age verification laws for social media and adult websites, among other regulatory initiatives.

During the legislative process, lawmakers added language addressing virtual private networks (VPNs) and similar technologies used to bypass location-based restrictions. The revised bill would make it illegal to intentionally circumvent content blocks implemented by platforms as part of age verification compliance, with violations subject to civil penalties.

The measure also includes provisions aimed at limiting how websites communicate with users in Utah about these tools. Specifically, the bill states that platforms covered by age verification requirements may not provide instructions or guidance that would allow users to bypass those restrictions.

The current version of Senate Bill 73 states:

“A commercial entity that operates a website that contains a substantial portion of material harmful to minors may not facilitate or encourage the use of a virtual private network, proxy server, or other means to circumvent age verification requirements, including by providing: (a) instructions on how to use a virtual private network or proxy server to access the website; or (b) means for individuals in this state to circumvent geofencing or blocking.”

Measures regulating adult content have appeared in several states in recent years. Alabama, for example, enacted legislation that imposes a 10 percent tax on pornography-related revenue generated within the state, alongside additional legal requirements for adult performers involving notarized consent documentation.

Utah’s proposal does not include those record-keeping provisions, but it does expand the scope of enforcement mechanisms connected to age verification and online access controls.

The tax itself would function similarly to what policymakers often describe as a “sin tax,” a type of levy commonly applied to products such as alcohol, tobacco and gambling. In this case, the tax would apply to companies that generate revenue from online adult content through methods including clip sales, subscriptions and fan-based platforms.

Under the proposal, entities meeting the bill’s definition of “covered entities” would calculate the portion of revenue generated from Utah-based users and pay the 2 percent tax to the state on an annual basis.
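
As a rough illustration of that apportionment, the sketch below computes the annual liability under stated assumptions. The bill does not specify how the Utah share must be measured, and the function and variable names here are hypothetical.

```python
# Hypothetical sketch of SB 73's annual tax computation as summarized above.
# How a platform attributes revenue to Utah-based users is an assumption.

TAX_RATE = 0.02  # 2 percent, reduced from the originally proposed 7 percent

def utah_tax_due(annual_revenue_usd: float, utah_revenue_share: float) -> float:
    """Tax on the portion of covered revenue attributed to Utah users."""
    return TAX_RATE * annual_revenue_usd * utah_revenue_share

# Example: $4M in covered annual revenue, 1.5% attributed to Utah users.
print(utah_tax_due(4_000_000, 0.015))  # -> 1200.0
```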

If the measure becomes law, larger online platforms could likely absorb the additional compliance costs with relative ease. For smaller companies operating in the adult content market, however, the administrative and regulatory requirements could prove more challenging.

The bill’s future now depends on the outcome of deliberations in the Utah House. Should it pass there and receive the governor’s signature, the measure would add Utah to a growing list of states experimenting with new approaches to regulating digital adult content.

Whether the proposal ultimately reshapes how online platforms operate — or instead becomes the subject of courtroom challenges — may become clear only after the legislative process runs its course.

Spain Imposes €950,000 Fine on Yoti Over Biometric Data and Consent Violations

MADRID — Spain’s data protection authority has imposed a total fine of €950,000 on Yoti Ltd, the British digital identity and age verification company, after determining that the company committed three separate violations of the General Data Protection Regulation (GDPR) in connection with the operation of its Digital ID application.

The decision, issued under file reference EXP202317887, was signed by Lorenzo Cotino Hueso, president of the Agencia Española de Protección de Datos (AEPD). The ruling provides a detailed examination of the regulatory obligations that apply to age verification providers operating in Spain.

The three penalties consist of €500,000 for unlawful processing of biometric data under Article 9 of the GDPR; €200,000 for obtaining invalid consent for research and development processing in violation of Article 7; and €250,000 for excessive data retention in breach of the storage limitation principle set out in Article 5.1(e). In addition to the financial penalties, the authority ordered Yoti to implement corrective measures within six months after the resolution becomes final.

Yoti Ltd, registered in the United Kingdom with tax identification number 08998951, provides age verification services used by platform operators across multiple markets. According to the resolution, all of the company’s verification methods — including facial age estimation, document-based verification, credit card checks, mobile number matching and the Digital ID application — are available for use in Spain. The company’s most recent published revenue figure, cited in the resolution as of March 2025, is €15,029,907, which the authority used as a reference point in determining proportionate and dissuasive penalties.

How Yoti’s technology works

The Digital ID application is the service at the center of the enforcement action. According to documentation submitted during the investigation, the application allows users to create a verified identity account by uploading a government-issued identity document and capturing a selfie image.

The technology uses deep neural networks to process the facial image. The image is converted into pixels treated as numerical values and analyzed through a layered network of mathematical nodes. A typical run through the system produces an estimated age in approximately 1 to 1.5 seconds.
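
For readers unfamiliar with that phrasing, the toy sketch below shows the general shape of such a pipeline: pixel values are flattened and pushed through stacked layers to a single age output. It is schematic only; the resolution does not disclose Yoti’s actual architecture, preprocessing or weights, and the random weights here produce a meaningless number.

```python
# Toy illustration of the kind of pipeline described above: pixels in, age out.
# Layer sizes and weights are arbitrary stand-ins, not Yoti's model.
import numpy as np

rng = np.random.default_rng(0)

def estimate_age(image: np.ndarray, layers: list) -> float:
    """Flatten pixel values and pass them through stacked dense layers."""
    x = image.astype(np.float32).ravel() / 255.0   # pixels as numbers in [0, 1]
    for w in layers[:-1]:
        x = np.maximum(x @ w, 0.0)                 # linear map plus ReLU nodes
    return float(x @ layers[-1])                   # single scalar: estimated age

# Random stand-in weights for a 64x64 grayscale input.
weights = [rng.normal(size=(64 * 64, 128)),
           rng.normal(size=(128, 32)),
           rng.normal(size=(32,))]
face = rng.integers(0, 256, size=(64, 64))
print(estimate_age(face, weights))  # arbitrary output with random weights
```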

Yoti describes its services to business clients as comprising eight verification methods. According to the company’s data protection impact assessment (DPIA), these include facial age estimation, verification through the Digital ID application, document identification, credit card verification, mobile number verification, database checks, electronic identity systems used in Switzerland, Denmark and Finland, and a U.S. mobile driver’s license option. When these services are offered on a software-as-a-service basis, client companies act as data controllers while Yoti acts as a processor. Within the Digital ID application itself, however, Yoti acts as the controller.

The facial age estimation model was trained using 12 age range categories (0-1, 2-3, 4-6, 7-9, 10-12, 13-15, 16-17, 18-24, 25-29, 30-39, 40-49 and 50-60), four gender groupings, and three skin tone groups based on the Fitzpatrick scale, producing 144 demographic combinations. According to a company white paper referenced in the resolution and updated in September 2024, the model demonstrated accuracy within 1.28 years across gender and skin tone categories.
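
The 144 figure is simply the product of those three training dimensions, which the snippet below checks. The labels for the gender and skin tone groupings are assumed, since the resolution does not name them.

```python
# Sanity check of the 144-combination figure: 12 age bands x 4 x 3.
from itertools import product

age_bands = ["0-1", "2-3", "4-6", "7-9", "10-12", "13-15",
             "16-17", "18-24", "25-29", "30-39", "40-49", "50-60"]
genders = ["g1", "g2", "g3", "g4"]               # labels not in the resolution
fitzpatrick_groups = ["I-II", "III-IV", "V-VI"]  # grouping labels assumed

print(len(list(product(age_bands, genders, fitzpatrick_groups))))  # -> 144
```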

Training images were collected through an online portal that required adult consent, as well as through a South African family welfare organization, Be In Touch, working with schools. The United Kingdom’s Information Commissioner’s Office, which previously included Yoti in a regulatory sandbox program, advised against the South African collection method due to potential data protection implications.

The Digital ID application also applies age restrictions based on jurisdiction. According to Yoti, “the Digital ID app cannot be used by persons under the digital age of consent, i.e. 13 years in the United Kingdom and 14 years in Spain.” During account creation the application detects a user’s location and, in Spain, presents two options: “I am 14 or over” or “I am 13 or under.” The registration process continues only if the user selects the first option. No technical mechanism verifies the accuracy of the declaration.

For repeated verification, Yoti implemented a cookie-based age token system. These tokens remain valid for 30 days, allowing users who have verified their age once to reuse the result across participating platforms. The company also provides an “age account” feature that stores tokens in a username-and-password account accessible across devices.
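
The resolution does not describe the token format itself, but a signed, time-limited token of the kind described above might look something like this minimal sketch. The signing scheme, field layout and key are assumptions for illustration.

```python
# Minimal sketch of a 30-day age token; format and signing are assumed.
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"   # hypothetical signing key
TOKEN_TTL = 30 * 24 * 3600           # tokens remain valid for 30 days

def issue_age_token(user_id: str) -> str:
    """Mint a token asserting this user has already passed an age check."""
    issued = int(time.time())
    payload = f"{user_id}|{issued}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def token_is_valid(token: str) -> bool:
    """Accept only if the signature matches and the token is under 30 days old."""
    user_id, issued, sig = token.rsplit("|", 2)
    payload = f"{user_id}|{issued}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and \
        time.time() - int(issued) < TOKEN_TTL

token = issue_age_token("user-123")
print(token_is_valid(token))  # -> True until the 30-day window lapses
```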

First violation: biometric special category data

The AEPD’s primary finding concerns the processing of biometric data without a valid legal basis under Article 9 of the GDPR. The regulation prohibits the processing of special category data — including biometric data used for identification — unless specific exemptions apply.

Yoti maintained during the investigation that the facial scans generated by its system should not be considered special category biometric data because they are intended to authenticate users rather than uniquely identify them. The authority rejected this interpretation.

According to the resolution, data qualifies as biometric special category data under Article 4.14 of the GDPR when it relates to physical or behavioral characteristics of an individual, is used to confirm unique identification and undergoes specific technical processing to generate biometric templates. The AEPD determined that Yoti’s system meets all three criteria.

The authority found that the facial scan produces a biometric template stored while the user account remains active. When users modify their PIN or recover their account, the system captures a new facial scan and compares it with the stored template through a 1:1 matching process.
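
In biometric terms this is verification (1:1), comparing one fresh scan against one stored template, rather than identification (1:N) against a database. A minimal sketch, with an assumed embedding size and similarity threshold:

```python
# Schematic 1:1 template match; the threshold and 128-dim embedding size
# are illustrative assumptions, not details from the resolution.
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed; real systems tune this empirically

def is_same_person(stored_template: np.ndarray, new_scan: np.ndarray) -> bool:
    """Cosine similarity between the stored template and a new facial scan."""
    sim = float(np.dot(stored_template, new_scan) /
                (np.linalg.norm(stored_template) * np.linalg.norm(new_scan)))
    return sim >= MATCH_THRESHOLD

rng = np.random.default_rng(1)
template = rng.normal(size=128)
new_scan = template + rng.normal(scale=0.1, size=128)  # slightly varied rescan
print(is_same_person(template, new_scan))  # -> True
```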

According to the decision, “despite repeatedly asserting — both during account creation and in the privacy policy — that the purpose of processing the biometric facial pattern is to guarantee user identification, Yoti does not consider itself to be processing special category personal data,” a position the authority described as demonstrating “particular negligence.”

The fine for this violation was set at €500,000. The authority cited the involvement of minors and the international processing of data — including servers outside the European Union — as aggravating factors.

For transfers between the United Kingdom and India, where Yoti operates a Security Centre providing manual verification support, the company relies on EU standard contractual clauses with a UK addendum. According to the DPIA, personnel at this center can access document images and selfies through remote connections to UK servers using “thin terminals,” while no other staff outside the center can view the information. The AEPD noted that the cross-border dimension further limits users’ practical control over their data.

Second violation: pre-ticked consent boxes for R&D

The second violation concerns the mechanism used to obtain user consent for internal research and development.

According to the investigation, the application displayed a pre-selected checkbox allowing users’ biometric data to be used to train and improve Yoti’s facial age estimation algorithms unless users manually deselected the option.

Yoti’s documentation confirms this design. The company stated, “In the Digital ID app, the default value is that data can be used for R&D. Yoti has taken steps to make this clear to users. Users can opt out, preventing their data being used for R&D, by using the app settings.”

The AEPD determined that this approach does not meet GDPR requirements. Article 4.11 defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” expressed through a clear affirmative action. Pre-ticked checkboxes do not constitute such an action.

The authority cited the European Data Protection Board’s Guidelines 05/2020, which state that consent obtained through default settings cannot be considered valid even if users later have the ability to withdraw it. The resolution notes: “Yoti consciously notes that consent granted by default can be revoked, without taking into account that there should not be a subsequent revocation at the data subject’s request, but rather that consent should be obtained in accordance with the safeguards and guarantees established by the GDPR.”

According to the DPIA, the data used for research processing may include facial images with timestamps, dates of birth derived from identity documents, gender information, document type details, country codes, video and audio recordings, device information, behavioral data, health-related information and race or ethnicity estimates derived from the Fitzpatrick scale. Data from users aged 13 to 18 is also included.

The fine for this violation was set at €200,000, with the authority again citing the involvement of minors and processing on servers outside the EU as aggravating circumstances.

Third violation: retention periods beyond stated purposes

The third infringement relates to the retention of personal data — including biometric data — for longer than necessary.

According to Yoti’s DPIA, Digital ID data and age tokens may be retained while a user account remains active or for three years after the last activity. The biometric facial template is stored throughout that period. The AEPD found this disproportionate.

The authority determined that the liveness check, which verifies that a real person is present during registration, is completed during account creation. Once this verification occurs, the purpose of the biometric capture is fulfilled. Retaining the biometric pattern beyond that moment cannot be justified by reference to that completed purpose.

The resolution also noted that additional uses of the biometric template — such as PIN modification or account recovery — may never occur during the account’s lifetime, meaning the storage of biometric data for possible future events fails to meet the storage limitation principle.

The authority also identified concerns regarding geolocation data. According to Yoti, the company collects users’ country code, city and state derived from their IP address and retains this information for five years. The stated purpose is to determine which jurisdiction’s age restrictions apply. The AEPD concluded that once jurisdiction is determined during account creation, extended retention of the location data is unnecessary.

Another retention issue involved fraudulent identity documents. Yoti indicated that documents identified as fraudulent may be stored for up to two years to train fraud detection systems. The authority determined that improving software constitutes a separate purpose not directly related to the original identity verification objective.

Video recordings created during liveness checks were also examined. The company’s terms state that such recordings “will be permanently deleted within 30 days of the date it was recorded, unless we are required to retain it for regulatory reasons.” The authority concluded that once liveness is confirmed, retention beyond that moment exceeds the legitimate purpose of the recording.

The fine for this infringement was set at €250,000, reflecting the large number of affected users and the involvement of special category data.

Corrective measures and timeline

The AEPD ordered Yoti to implement three corrective measures within six months after the decision becomes final:

• Demonstrate that the processing of biometric special category data complies with GDPR requirements.

• Demonstrate that consent-based processing meets the standards established by the regulation.

• Demonstrate that personal data retention is limited to the period strictly necessary for each processing purpose under Article 5.1(e).

The decision becomes final once the one-month period for filing an administrative appeal before the AEPD presidency has passed without action, or once the resolution is formally notified if no appeal is filed. Yoti may also challenge the ruling before the Contentious-Administrative Chamber of the National Court within two months of notification.

Failure to comply with the corrective measures could constitute a separate administrative violation under Articles 83.5 and 83.6 of the GDPR, potentially resulting in further enforcement proceedings.

Regulatory context

The Yoti ruling forms part of a broader series of enforcement actions by Spain’s data protection authority. The AEPD previously imposed a €500,000 fine on FC Barcelona for deficiencies in a data protection impact assessment related to biometric facial and voice data from approximately 143,000 members. The authority also issued a €1.8 million fine against airport operator AENA over the deployment of facial recognition systems, and a €1.8 million penalty against Informa D&B for processing personal data without a valid legal basis.

The decision also references the European Data Protection Board’s Statement 1/2025, adopted in February 2025, which outlines ten principles for GDPR-compliant age assurance systems. These include requirements that age verification technologies use the least intrusive methods available, avoid enabling tracking or profiling and implement short retention periods.

The ruling highlights ongoing differences in how European regulators interpret biometric data rules. While previous guidance from the United Kingdom’s Information Commissioner’s Office indicated that facial age estimation may fall outside biometric identification rules when used only for categorization, Spain’s AEPD concluded that persistent facial templates used for matching operations constitute biometric processing under Article 9.

The decision underscores increasing scrutiny of age verification technologies across Europe as regulators examine both their effectiveness and the privacy implications of the systems used to implement them.
