The War on Porn

OpenAI Advisory Panel Opposed ‘Adult Mode’ for ChatGPT

SAN FRANCISCO — OpenAI is expected to limit its planned “adult mode” feature so that it does not generate deepfakes or synthetic NSFW images, instead restricting the tool to sexually explicit text, according to a report published Sunday.

The report cited an unnamed company spokesperson who said the change is necessary, adding that a rollout timeline has not yet been finalized. The feature had already been delayed as the company reviews concerns related to mental health risks and other potential uses of the technology.

Members of an advisory council of mental health experts selected by OpenAI warned the company in January that the proposed “adult mode” could pose significant risks to minors. One council member, who was not identified, said the feature could potentially lead to the creation of what they described as a “sexy suicide coach.”

The report also indicated that OpenAI’s internal age verification efforts had been considered “spotty.” Chief executive officer and co-founder Sam Altman said in October that the company planned to deploy age assurance and estimation tools to identify users aged 18 and older. In January, OpenAI expanded those efforts by implementing technology from an online identity provider called Persona, which has faced criticism from some observers who described it as invasive and prone to errors.

Utah Enacts New Law Imposing Adult Site Tax and Expanding VPN Accountability

SALT LAKE CITY — The conversation around online access just took another turn in Utah, where the rules are getting a little tighter—and, depending on who you ask, a lot more complicated.

Governor Spencer Cox on Thursday signed a bill into law that will tax adult websites and hold them liable if minors bypass geolocation safeguards.

In addition to updating how the state investigates and enforces age-verification requirements, SB 73 introduces a 2% excise tax on adult sites operating in Utah. The tax applies to transactions for “access to digital images, digital audio-visual works, digital audio works, digital books, or gaming services,” including streaming and subscription-based content.

Industry attorneys have pointed to potential legal challenges the tax could face. Still, Utah isn’t entirely alone here—Alabama passed a similar tax at 10% last year, and lawmakers in Virginia and Pennsylvania have begun exploring comparable measures.

Revenue from the new tax will be directed to a state fund supporting “(a) mental health treatment programs for minors affected by material harmful to minors; (b) educational programs for parents, guardians, educators, and minors on the mental health risks associated with material harmful to minors; (c) early prevention and intervention programs for minors at risk of mental health harm from material harmful to minors; and (d) research and public awareness campaigns addressing mental health harm to minors caused by material harmful to minors.”

VPN Requirements

The legislation also includes a provision addressing how location is determined. It states: “An individual is considered to be accessing the website from this state if the individual is actually located in the state, regardless of whether the individual is using a virtual private network, proxy server, or other means to disguise or misrepresent the individual’s geographic location to make it appear that the individual is accessing a website from a location outside this state.”

That language reflects a growing concern among lawmakers about how easily age-verification systems can be bypassed. VPNs, proxies—tools that once felt niche—are now part of everyday digital life, and policymakers are clearly trying to catch up.

In Ohio, a bill known as the “Innocence Act” would require adult websites to “utilize a geofence system maintained and monitored by a licensed location-based technology provider” to track a user’s physical location and determine whether they are in the state and subject to age-verification requirements.

Meanwhile, in Washington, the proposed Kids Internet and Digital Safety (KIDS) Act—aimed at establishing a federal age-verification standard—would require platforms to take “reasonable measures” to prevent users from circumventing those safeguards.

In Indiana, the state has taken a more direct route, filing a lawsuit against Aylo and its affiliates. Officials allege the company failed to prevent access by users who employed VPNs to bypass geolocation, even though current state law does not explicitly require platforms to account for intentional circumvention.

The VPN provision in Utah’s SB 73 could influence how the state enforces its own age-verification laws, and it may also raise broader legal questions about whether websites can be held responsible for users who actively try to work around location restrictions.

The law is scheduled to take effect on Oct. 1.

Three Tourists Detained in Bali Over Alleged Filming of Porn

BALI, Indonesia — Three tourists have been arrested in Bali after local police accused them of filming pornographic content on the island.

A 23-year-old French woman, identified as Melisa Mireille Jeanine, was arrested on March 13 alongside a 24-year-old Italian man, reported as Nadir ben Said, as the pair attempted to leave Indonesia through Denpasar airport en route to Thailand.

Another man, a 26-year-old French national known only as ERB, was arrested in Canggu, Bali, on Monday. Police described him as the woman’s “manager.”

Police chief Joseph Edward Purba of Bali’s Badung district said the trio were being held on suspicion of creating and distributing pornographic content for profit.

“Their motivation to do the (alleged) crime is seeking profit from pornographic video content,” Purba told a press conference on Tuesday.

“All the three suspects are now facing Indonesian electronic information and transaction laws for making and spreading the content.”

Purba said police seized three mobile phones, a camera, a MacBook laptop and a motorcycle taxi vest from the suspects.

Pornographic content is illegal in Indonesia, and those convicted can face up to 10 years in prison on pornography charges, along with an additional six years for online distribution.

Although Bali is predominantly Hindu, Indonesia is a Muslim-majority country with strict laws regarding pornography.

On January 2, Indonesia implemented a new Criminal Code that introduced and revised laws criminalizing premarital sex, cohabitation and public drunkenness.

Under the code, adultery, premarital sex and cohabitation can carry penalties ranging from six months to one year in prison.

Legal experts say these provisions require a formal complaint from certain parties before authorities can take action.

“These alleged crimes cannot be processed by the police without a complaint which can only be filed by the legal husband or wife, parents or children of the perpetrator,” said Retno Murni, a legal expert and founder of the People’s Law Centre.

“Therefore, foreign tourists cannot be arrested, raided, or prosecuted simply for staying or residing with a partner, unless there is a valid complaint from these parties.”

Murni added that tourists who follow local laws and customs have no reason for concern.

The arrests follow a separate case in December 2025 involving British adult content creator Bonnie Blue, who was detained and later deported from Bali.

The 26-year-old was subsequently barred from entering Indonesia for at least 10 years, according to immigration authorities.

During a press conference outside Bali’s Ngurah Rai Immigration Office, Immigration chief Heru Winarko said the British national and her team had violated the terms of their visas.

“They have misused the visa they have to make content in Bali,” Winarko said.

“They will be black-listed from entering Indonesia for at least 10 years (that) could be extended.”

The performer, whose real name is Tia Billinger, was arrested along with 17 male tourists during a raid at a studio in Badung, Bali.

Fourteen of the men, all Australian nationals, were released without charge while authorities continued their investigation into Billinger and three others.

After two days of interviews, Badung Police said they had not identified pornographic elements during the raid, and Billinger was released without charge.

Officials said those present at the studio told investigators they were participating in the production of reality show content.

Beware Opportunists in Superhero Capes by Stan Q. Brick

Some folks who favor suppression of sexually explicit materials are more forthright than others about what gives life to their censorious zeal. Say what you will about the old “Morality in Media” brand: back when the organization went by that moniker, everybody knew where they were coming from just by reading the sign on their door.

Perhaps because the folks at Morality in Media perceived they were limiting their demographic reach with the judgy-sounding, clunky old name, they opted for a rebrand back in 2015, becoming the National Center on Sexual Exploitation. Suddenly, with the flip of a logo, they sounded less like angry Bible thumpers out to cancel your favorite sitcom and more like a serious nongovernmental agency out to prevent real harm.

You know what didn’t change when MIM became NCOSE? The president of the organization. Patrick A. Trueman ran the joint on both sides of the rebrand, from 2010 to 2023. Before that, Trueman was a prosecutor at the U.S. Department of Justice during the administration of George H.W. Bush, which also happens to be the last time federal prosecutors aggressively enforced the nation’s obscenity laws. Trueman remains the President Emeritus of NCOSE to this day.

Just as I doubt Trueman lost his zest for cleaning up American media when his organization rebranded, I don’t buy that a lot of the organizations most strenuously supporting various age verification mandates at the state and federal level are really in it to protect minors from harmful materials online – unless one happens to define “harmful” the same way they do, of course.

Referencing remarks recently made by Rep. Leigh Finke, a transgender member of the Minnesota Legislature who has criticized elements of her state’s proposed age verification law, Rindala Alajaji, Associate Director of State Affairs at the Electronic Frontier Foundation (EFF), and Molly Buckley, one of the organization’s legislative analysts, call attention not only to the impact of the Supreme Court’s ruling in Free Speech Coalition v. Paxton, but also to the nature of the organizations that supported Texas in the case.

“The Paxton case, and the coalition behind it, illustrates exactly how these laws can be weaponized,” Alajaji and Buckley write. “They weren’t there just to stand up for young people’s privacy online—they were there to argue that the state has a compelling interest in shielding minors from material that, in practice, often includes LGBTQ content. Ultimately, these groups would like to age-gate not just porn sites, but also any content that might discuss sex, sexuality, gender, reproductive health, abortion, and more.”

Alajaji and Buckley add that the “coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control.”

“After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates,” the authors note. “Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.”

The thing about appealing to people’s desire to protect children is that it works – and for a good reason. It’s a good thing to want to protect your kids. God knows they need protection, including from themselves. Parents should do all the reasonable, rational, normal things they can do to protect their kids.

But if you’re denying a gay or trans kid access to information from people who have been through the same things that kid is going through and can offer guidance, support and maybe a little solace for the kid, you’re not protecting that kid; you’re stifling, aggravating and alienating that kid. Shit, you might be killing that kid – even if you earnestly believe you’re helping.

I can also understand why the idea of age-gating the internet might sound good to people, especially frightened people who are raising kids who are online much more than their parents. But fear is a state of mind that can make people suggestible – and that’s when opportunists don their superhero capes and make a dramatic entrance, promising to make the world (wide web) a safer, better place for you and your kids—without really mentioning the part about how they’re actually in this to keep The Gays from enacting their Sinister Agenda, or whatever it is that animates some of these zealots.

I guess what I’m saying is this: You can’t save your kid from drowning by throwing someone else’s kid into the deep end of the pool with lead boots on. And some of the people promising to provide your kid a life jacket are heavily invested in lead.

Brazil Issues Initial Framework for New Age-Verification Rules

BRASÍLIA, Brazil — President Luiz Inácio Lula da Silva on Wednesday signed a decree setting out how Brazil will move forward with new rules requiring adult websites to verify the ages of users accessing content from within the country.

The decree follows the Digital Statute for Children and Adolescents (Digital ECA), which took effect Tuesday. The law is aimed at strengthening protections for minors online and requires adult content providers to implement age verification measures that go beyond simple self-declaration, regardless of where those platforms operate.

The scope extends beyond traditional websites. Marketplaces and delivery applications offering adult or erotic products and services must also verify the age of customers and block minors from accessing those products.

Enforcement authority rests with the National Data Protection Authority (ANPD), which was recently elevated to the status of a regulatory agency and contributed to drafting the decree.

The ANPD has also released a question-and-answer document outlining how the law is expected to function in practice. According to that guidance, platforms must verify a user’s age before granting access to adult material. If explicit content is visible prior to verification, it must be hidden or blurred by default. The rules also require platforms to prevent minors from creating or maintaining accounts.

Penalties for noncompliance begin with a warning and a 30-day window to correct violations. After that, regulators may impose fines of up to 10% of a company’s revenue in Brazil or up to 1,000 Brazilian reais (approximately $195) per registered user, capped at a total of 50 million reais (approximately $9.73 million).

The ANPD has not yet issued a formal compliance timeline or detailed technical standards for age verification systems. The agency indicated in its guidance that additional rules and best-practice recommendations will be released at a later stage.

Industry response has already begun to take shape. The Brazilian Association of Adult Entertainment Industry Professionals (ABIPEA), launched in September, has offered to provide technical and institutional guidance to companies operating both inside and outside Brazil as they adapt to the new framework.

ABIPEA is also preparing to host a dedicated space at the Intimi Expo trade show, scheduled for March 20–22 in São Paulo, focused on “educating and guiding the adult industry regarding the Digital Statute for Children and Adolescents, its practical implications and compliance strategies.”

For now, the framework is in place. What comes next will depend on how it’s applied — and how the industry adjusts once the rules move from paper into practice.

Senate Panel Examines Potential Reforms to Section 230

WASHINGTON — The U.S. Senate Committee on Commerce, Science, and Transportation held a hearing Wednesday on potential changes to Section 230 of the Communications Decency Act, which protects online platforms — including adult websites — from liability for user-generated content.

Three bills proposing a full repeal of Section 230 are currently pending in Congress. However, those measures were not addressed during the hearing. Instead, lawmakers focused on possible reforms to the law in a session titled “Liability or Deniability? Platform Power as Section 230 Turns 30.”

The push to revisit Section 230 stems from two primary concerns.

First, lawmakers from both parties have criticized major technology companies for allegedly profiting from harmful or illegal content while avoiding responsibility. Some argue that increased liability would encourage stronger moderation. During the hearing, Sen. Marsha Blackburn said, “Big Tech has proven they are incapable of regulating or policing themselves. They will not do it.”

Second, some conservative lawmakers argue that platforms use Section 230 protections to justify restricting certain viewpoints, particularly conservative speech. Sen. Eric Schmitt cited efforts by the Biden administration to limit the reach of COVID-19 misinformation and 2020 election claims, describing those actions as violations of the First Amendment.

Sen. Ted Cruz, who chairs the committee, referenced both issues, stating that Congress should act “to prevent social media from harming Americans, especially children, while not incentivizing Big Tech censorship.”

Cruz did not advocate for a full repeal of Section 230.

“I’m concerned that a full repeal or sunset would lead platforms to engage in worse behavior — to engage in more censorship to protect themselves from litigation,” Cruz said. “But we should consider whether reform of Section 230 is needed.”

Sen. Brian Schatz, the committee’s ranking Democrat present, also supported revisiting the law.

“We can work together and fix the law,” Schatz said. “This idea that we can’t touch it, otherwise internet freedom incinerates, is preposterous.”

Possible Impact on Adult

Current proposals to reform Section 230 are not specifically directed at adult platforms, but they could have implications for the industry.

Much of the hearing focused on issues involving minors, including cases where individuals encountered harmful content or online predators. Lawmakers also discussed whether algorithmic systems and AI-generated content should be covered under Section 230 protections.

Industry attorneys and advocates have raised concerns that changes to the law could lead to targeted exemptions, similar to those created under FOSTA/SESTA, which removed liability protections for platforms found to “unlawfully promote and facilitate” prostitution or sex trafficking.

Such exemptions could expose adult platforms to increased civil litigation related to user-generated content.

While many cases could ultimately be dismissed on First Amendment grounds, Section 230 currently allows defendants to avoid prolonged litigation. As Techdirt’s Mike Masnick has written, the law “provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.”

Without those protections, larger companies may still be able to manage legal costs, but smaller platforms could face greater challenges.

Testifying at the hearing, Stanford Law School expert Daphne Keller said eliminating Section 230 would create legal and financial burdens that disproportionately affect smaller companies.

A world without the law, she said, “would impose legal uncertainty and expense that today’s incumbent giants could survive but their smaller rivals could not.”

Keller also noted that under other regulatory systems, platforms often receive high volumes of complaints seeking removal of lawful content.

“We have a lot of data to predict what happens when platforms are held liable for the speech of their users,” Keller said. “Platforms receive huge numbers of false allegations under laws like the DMCA here or the Digital Services Act in Europe, from people demanding the removal of perfectly legal speech. Governments do this, companies do this against their competitors — and platforms have strong incentives to simply comply.”

During the hearing, Sen. Tammy Baldwin warned against indirect government pressure on platforms.

She cautioned against “informal, often coercive efforts by government officials to pressure private companies into moderating or removing content that they cannot legally censor directly.”

Keller, in written testimony, cited actions by Federal Communications Commission chair Brendan Carr, including pressure directed at ABC that temporarily affected comedian Jimmy Kimmel’s program.

Carr also contributed to Project 2025’s “Mandate for Leadership,” which calls for changes to Section 230 and argues that pornography should not be protected under the First Amendment.

“Pornography should be outlawed,” the document states. “The people who produce and distribute it should be imprisoned.”

The document has been cited as a policy framework for the current administration. Other officials associated with the administration have also expressed support for restrictions on adult content. Trump advisor Russell Vought has discussed limiting pornography through indirect regulatory approaches, while Vice President Vance has called for a ban.

Australia’s Porn Age-Verification Law Sparks Debate Over Safety and Shift to “Darker Corners”

Something changed overnight — not just on adult sites, but in how people moved through the internet itself.

When major porn platforms began blocking Australians from access, it didn’t stop there. X also started requiring age checks before users could view adult content. And for some, that meant something far more intrusive: being asked to submit a video selfie just to look at a single post.

“Almost every post on my alt account has a content warning and asks me [for a] selfie for age verification,” one Australian porn consumer, Joe*, said. “It’s maddening.”

Others described pulling back entirely, choosing to walk away rather than comply.

“I’m honestly no longer engaging with any of the sites and platforms I used to use because not only is the verification process really invasive, but some of them even give you the option to sign in with Google … and that’s the last platform I’d trust with any sensitive data,” Jethro said.

“The choices are: link your perversions to your government ID, or submit your face into the AI slop machine,” Chris* said.

It’s still early days. Aside from several Aylo-owned sites like RedTube blocking Australians outright, and Pornhub limiting access to safe-for-work content for users who aren’t logged in, most of the top free adult platforms have yet to fully implement age verification.

Data from the SEO firm Semrush suggests that only one site in the country’s top 20 — Thisvid — has complied so far. But with potential fines reaching $49.5 million for violations, more platforms are expected to follow. Users have already begun to react.

Search interest in porn-related terms has climbed to its highest level since pandemic lockdowns ended in 2022. At the same time, searches for virtual private networks — tools that allow users to appear as though they’re browsing from outside Australia — have surged to levels not seen since 2015, when website blocking laws targeting piracy were introduced.

Sex workers say none of this is surprising. For years, they warned that regulations developed between the eSafety commissioner and industry stakeholders could drive users away from regulated spaces and into less controlled environments.

“We’ve already warned that these laws will funnel traffic away from platforms that do have moderation safeguards in place and towards sites that profit from non-consensual and stolen porn, including the unpaid work of sex workers,” said Mish Pony, chief executive of Scarlet Alliance.

“So driving people off mainstream services, such as Pornhub, does not stop porn consumption, it just pushes it into darker corners of the internet. It makes it harder to address real harms.”

Andy Conboi, an OnlyFans creator based in Sydney, said he has already seen the effects firsthand. Engagement on his posts has dropped.

“People don’t really want to send a photo of themselves or their licence or whatever to these platforms, particularly Twitter [X],” he said.

“In the group chats I do have with creators, people are just frustrated and annoyed, their engagement is down [and] it’s much more difficult to put stuff out there and be seen a lot of the time.”

Some creators, he added, are pivoting. They’re shifting toward safe-for-work content on platforms like Instagram and TikTok just to maintain visibility — a move he described as ironic, given the presence of underage users on those services.

For longtime opponents of pornography, however, the changes mark a milestone.

After earlier attempts at internet filtering fell short under previous governments, and opt-out filtering proposals were abandoned before the 2013 election, regulators have gradually expanded their authority over online content. The eSafety commissioner’s role has grown significantly over the past decade.

Advocacy groups that have campaigned for tighter controls welcomed the developments.

“This day was hard fought for,” said Melinda Tankard Reist, movement director for Collective Shout. “Collective Shout and our partners and allies worked hard to bring it to fruition.”

“It is a relief to know proof-of-age protections are now in place as one obstacle in the way of young people being exposed to rape porn, torture porn, incest porn and extreme violence and degradation of women.”

The Australian Christian Lobby also supported the outcome.

“The fact that P*rnhub have ceased operating in Australia is already proof of its effectiveness,” said chief executive Michelle Pearse.

Questions remain about whether those outcomes will hold — or simply shift behavior elsewhere.

Researchers studying similar laws in parts of the United States found that when major sites restricted access, users didn’t necessarily stop searching. They redirected.

“We saw very large substitution effects for search traffic for XVideos, which is the second largest porn website in the states,” said David Lang, a Stanford University researcher and lead author of the report.

“It’s a sufficiently large change that the No 2 site is now the No 1 site in states that passed those laws.”

Tracking VPN use proved more difficult, researchers noted, since users often disappear from local data once they connect through external servers.

For digital rights advocates, the concern isn’t just where people go — it’s what they leave behind.

Tom Sulston, head of policy at Digital Rights Watch, warned that age-verification systems could create centralized pools of highly sensitive personal data.

“It would be absolutely trivial for a criminal to set up porn sites as honeytraps to capture Australians’ identities and sexual interests; and then use that material for blackmail, similar to existing sextortion schemes,” Sulston said.

“Foreign intelligence services looking to trap Australian targets could easily do the same. The age-verification regime puts Australians at greater risk of harm, not less.”

And that’s the uneasy part of it all. The behavior doesn’t disappear — it just moves.

Starmer Government Pushes Back on MPs’ Bid to Ban Taboo Porn in U.K.

LONDON — U.K. Prime Minister Keir Starmer is facing the prospect of dissent within his own Labour Party if the government does not support a proposed ban on certain categories of pornography included in the Crime and Policing Bill.

The pressure follows a narrow vote in the House of Lords earlier this month, where peers approved an amendment by 144 to 143 to prohibit simulated incest pornography, step-relationship content and depictions of acts such as consensual strangulation.

Several Labour backbenchers, many of them women, have raised concerns about the availability of so-called “step-incest” material online and its potential impact on victims of child sexual abuse. Some lawmakers argue that such content could contribute to harm, according to reports from U.K. media outlets.

One unnamed Labour member of Parliament described “step-incest” pornography as a “gateway drug” to illegal material. Lawmakers from Labour have also worked with Conservative MPs on efforts to criminalize depictions of step-family sexual relationships, even when they are fictional.

Data from Pornhub’s 2025 Year in Review shows that “step mom” remains among the most frequently searched terms on the platform.

If enacted, the law would make a range of currently legal pornography depicting step-relationships subject to potential prosecution by the Crown Prosecution Service, as well as enforcement by agencies including the Metropolitan Police Service and regional police forces.

Baroness Gabby Bertin, who led an independent parliamentary review on the harms of pornography, urged peers to support restrictions on what she described as taboo content, including material portraying “intercourse with a step-child.”

Bertin said online pornography often includes scenes “with settings in children’s bedrooms, with actors in children’s clothes, braces, toys, pigtails, and other markers of childhood. Millions of videos and images are then tagged as ‘little,’ ‘tiny,’ ‘age gap,’ ‘mommy,’ ‘daddy,’ or ‘teen.’”

The government has also drafted provisions to ban the possession or publication of pornography depicting sex between relatives.

The inclusion of step-relationship content in the proposed restrictions prompted debate within the government. Justice minister Baroness Alison Levitt said that while such material is controversial, these relationships are “not illegal in real life.”

Levitt also raised concerns about a separate amendment to the bill involving consent withdrawal. The measure would allow individuals appearing in adult content to withdraw consent at any time, with producers facing potential imprisonment and fines if they fail to comply.

Under the proposal, initial consent to publication would no longer be considered sufficient. If consent is withdrawn, platforms and studios would be required to remove the material within 24 hours of notification.

Ofcom Calls on Major Tech Platforms to Implement Age-Verification Requirements

LONDON — A quiet warning landed this week on the desks of some of the biggest technology companies in the world. It didn’t come with fireworks or spectacle. Just a deadline — and a clear message.

The United Kingdom’s digital regulator, Ofcom, told major technology firms Thursday that they should begin putting real age-verification systems in place or face potential penalties under the country’s Online Safety Act.

The move arrives as governments around the world wrestle with the same uneasy question: how do you keep children safe online without reshaping the internet itself? The debate has spread well beyond Britain, with similar age-verification efforts underway across Western Europe, Australia and parts of the United States.

According to the regulator, letters were sent to government relations and compliance teams at the parent companies behind platforms including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Those companies have until April 30 to report back on what progress they’ve made toward deploying stronger age-verification tools.

Regulators say they will review those responses and later publish an assessment outlining how well the companies are complying.

Ofcom Chief Executive Melanie Dawes said the platforms’ public commitments to child safety have not always translated into meaningful protections.

“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” Dawes said. “There is a gap between what tech companies promise in public and what they’re doing in private to keep children safe on their platforms.”

Dawes added, “Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”

Regulators outlined four specific expectations for the companies.

The first calls for “effective minimum-age policies.” The second requires “failsafe grooming protections.” The third focuses on creating “safer feeds for children.” And the fourth calls for “an end to product testing on children.”

Together, the measures are intended to help meet the Online Safety Act’s broader requirement that platforms adopt “age-appropriate design” and prevent minors from accessing services that are not meant for them.

Chris Sherwood, head of the child-protection charity the National Society for the Prevention of Cruelty to Children, said stronger oversight is long overdue.

“For too long, social media giants have looked the other way while harmful and addictive content floods children’s feeds, undermining their safety and wellbeing,” Sherwood said.

“That’s why Ofcom’s demand for far greater transparency about the risks children face online, and how tech companies plan to protect them, is absolutely essential,” he added. “We’ve long called for minimum age limits to be properly enforced on social media, so it’s encouraging to see Ofcom confront this head-on.”

The regulator’s push also coincides with a separate warning from the U.K.’s data-privacy authority, the Information Commissioner’s Office, which sent a letter to “social media and video sharing platforms operating in the U.K.”

The letter stated, “We understand that most services are relying on self-declaration to identify whether children are 13 or over, with a limited number also utilising some form of profiling to enforce minimum age requirements.”

“As currently deployed, we don’t think that these tools are effective and therefore they should not continue to be relied upon to prevent access to under-13s.”

The letter was signed by Paul Arnold, whose agency oversees information rights, transparency in public bodies and personal data protections across the United Kingdom.

The regulator’s latest demands arrive just days after lawmakers in the U.K. Parliament declined to adopt an Australia-style proposal that would have barred all social media use for anyone under the age of 16.

FTC Requests Public Comment on Proposed ‘Click to Cancel’ Regulations

WASHINGTON — The Federal Trade Commission this week called for public comment on whether it should revise its Negative Option Rule to address deceptive or unfair practices.

The move is the latest step in the agency’s renewed rulemaking effort on negative option plans, after a federal court last year struck down a “click to cancel” rule intended to make it easier for consumers to end online subscriptions. Opponents of that rule argued the FTC exceeded its authority and failed to follow required procedures by not issuing a preliminary regulatory analysis.

In January, the FTC submitted a draft Advance Notice of Proposed Rulemaking, or ANPRM, on its Negative Option Rule to the Office of Information and Regulatory Affairs for review.

This week’s announcement seeks input on that ANPRM, stating, “The ANPRM asks the public: to weigh in on the current Rule; whether proposed amendments are needed; and about potential regulatory alternatives to address deceptive or unfair negative option practices.”

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said the agency believes new rulemaking may be warranted.

“Negative option subscriptions can offer procompetitive features to consumers and the marketplace more broadly by lowering transaction costs and ensuring consumers receive uninterrupted service,” Mufarrige said. “The Commission’s enforcement track record suggests, however, that negative option subscriptions continue to be plagued by difficult cancellation processes, unlawful retention tactics, and a suite of other impediments that prevent consumers from easily switching or ending subscription services. Neither consumers nor competition are protected when consumers are enrolled in programs that they either do not want or cannot cancel.”

The Negative Option Rule was first adopted in the 1970s to protect consumers from being automatically enrolled in subscription plans without their consent. As amended in 2024, the rule would have applied to nearly all negative option programs, including automatic renewal and free-to-pay offers. If the update had remained in effect, website operators likely would have been required to make substantial changes to their sign-up and cancellation practices.

The restarted rulemaking process could result in the FTC proposing the same changes again or advancing a similar set of revisions.

With the ANPRM now published in the Federal Register, members of the public may submit comments through April 13.
