Business Attacks

Banks, Payment Giants Face Scrutiny Over Growing Role in Online Speech Crackdowns

A board with the word debanking.

Money doesn’t just talk anymore. Increasingly, it decides who gets heard at all.

Banks are combing through adult websites searching for flagged words, questionable scenes, and content they think might create legal exposure. Payment processors are making judgment calls about alleged misinformation tied to wars and politics. In some cases, even a donation to a cannabis advocacy organization can trigger scrutiny. Quietly, steadily, financial institutions have become gatekeepers in places most people probably never imagined.

Rainey Reitman saw an early version of this more than a decade ago while helping support imprisoned whistleblower Chelsea Manning. The fundraising campaign she worked on operated through an organization called Courage to Resist. Then, in 2011, its PayPal account was suddenly frozen. Reitman and the group’s leadership struggled to get a straight answer beyond vague references to the PATRIOT Act. Repeated attempts to resolve the issue with PayPal representatives went nowhere. Public attention, however, changed things fast.

“When PayPal reversed their decision so quickly in response to the publicity surrounding our press release, it was clear to me…. We really had done nothing wrong,” Reitman writes in Transaction Denied: Big Finance’s Power to Punish Speech. “If there had been any legal requirement for PayPal to suspend our account, they wouldn’t have changed their mind just because people were tweeting at them.”

That experience pushed Reitman deeper into what she now describes as “financial censorship.” In her book, the term refers to banks, credit card companies, and payment processors restricting or shutting down accounts belonging to “controversial or marginalized speakers who haven’t violated any laws,” effectively turning financial systems into “a tool to pressure dissenting and marginalized voices” into silence.

What Is Financial Censorship?

“It is a form of privatized censorship where banks and payment intermediaries act as censors in ways the government couldn’t do directly without violating the First Amendment,” writes Reitman, a longtime civil liberties advocate and co-founder of the Freedom of the Press Foundation.

And no, Reitman isn’t especially interested in debates over whether “censorship” technically applies only to governments. “I think that’s a pedantic and unhelpful distinction,” she writes.

Transaction Denied traces how financial censorship — sometimes called “financial exclusion” or “debanking” — has affected people and organizations over the last 15 years. Protesters, journalists, gun-rights activists, adult creators, Muslim business owners, cannabis advocates, erotica writers, religious liberty groups, and even naked yoga instructors all appear in its pages. Strange mix, honestly. But that’s partly the point.

Legally speaking, banks and payment processors generally have broad discretion over who they do business with, provided they are not discriminating against protected groups based on characteristics such as race, religion, or sex. A financial institution can usually close an account for almost any other reason, whether it’s reputational concerns, moral objections, or simple risk avoidance.

Reitman acknowledges that reality while also arguing the system may need reform. “People today cannot survive on wads of cash stuffed under a mattress; they need access to payment and banking services to exist in society,” she writes. Among her proposals are laws preventing financial institutions from denying services based on constitutionally protected speech, stronger enforcement of antidiscrimination protections, and greater transparency around account closures and appeals.

Still, even without embracing all of Reitman’s proposals, the broader concerns she raises are difficult to ignore. Financial exclusion, she argues, often overlaps with more traditional forms of speech suppression in ways that are increasingly hard to separate.

Government ‘Censorship by Proxy’

Reitman argues that financial companies are not always acting independently when accounts are closed over controversial speech or politically sensitive activity. In many cases, she says, institutions are responding to direct or indirect government pressure.

Sometimes that pressure is explicit, as in disputes involving the National Rifle Association, Backpage, and WikiLeaks.

In Illinois, Cook County Sheriff Tom Dart sent letters to credit card companies urging them to “cease and desist from allowing your credit cards to be used to place ads” on Backpage, a classified advertising platform widely associated with sex-work listings. In New York, financial regulators under then-Gov. Andrew Cuomo warned banks that maintaining ties to the NRA could pose “reputational risk,” language financial institutions often interpret as a warning sign for increased regulatory scrutiny. Following the publication of leaked State Department cables by WikiLeaks, then-Sen. Joe Lieberman publicly accused the organization of criminal conduct and suggested companies maintaining relationships with the group risked aiding illegal activity.

Other times, the pressure is less direct. Reitman points to efforts like Operation Choke Point, as well as regulatory systems used by agencies such as the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation, where vague assessments of “reputational risk” can influence how banks handle customers. Financial institutions, eager to avoid regulatory headaches, often respond by adopting aggressive risk-management systems.

“Often in cases of financial censorship, there’s a whiff of government involvement but it’s hard to prove,” Reitman notes.

In other instances, the issue stems from compliance systems tied to anti-money laundering and anti-terrorism laws. “Know your customer” requirements, which force banks to verify customer identities and monitor transactions, expanded significantly after passage of the USA PATRIOT Act. Reitman argues that these systems frequently sweep up lawful customers who have done nothing illegal.

Economic sanctions create another layer of pressure. Financial institutions are expected to help enforce sanctions programs, and the penalties for mistakes can be severe. As a result, many institutions adopt broad enforcement policies to avoid scrutiny.

That environment contributed to a bizarre but revealing situation involving New York City Councilwoman Shahana Hanif. A $14 Venmo payment she sent to reimburse a friend for lunch at a Bronx Bangladeshi restaurant called Al Aqsa was reportedly blocked after the restaurant’s name triggered automated systems tied to sanctions enforcement. While “Al Aqsa” is a common Arabic term used by many businesses, an unrelated organization with a similar name appears on U.S. sanctions lists.

For many Americans — particularly Muslim communities and people connected to certain foreign regions — such incidents can create recurring financial obstacles, even when no wrongdoing exists.
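The Al Aqsa incident illustrates how automated sanctions screening can misfire. As a rough sketch (not any bank's actual system), imagine a screening filter that matches payment memos against a name list by simple substring overlap; the list entry and matching rule below are hypothetical, but they show how a lawful business sharing a common Arabic phrase with a listed entity gets swept up:

```python
# Illustrative sketch of naive sanctions-style name screening.
# The list entry and matching logic are hypothetical, not any
# real institution's system.

SCREENING_LIST = ["al aqsa martyrs brigade"]  # hypothetical listed entity

def flags_payment(memo: str) -> bool:
    """Flag a memo if it shares the leading words of any listed name."""
    memo_l = memo.lower()
    # Overly broad rule: match on just the first two words of the entry,
    # so any business called "Al Aqsa" trips the filter.
    return any(" ".join(name.split()[:2]) in memo_l for name in SCREENING_LIST)

print(flags_payment("Al Aqsa Restaurant — lunch reimbursement"))  # True: false positive
print(flags_payment("Corner Deli — lunch reimbursement"))         # False
```

The false positive here is structural: a matcher tuned to never miss a listed entity will inevitably catch unrelated names that share common words.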

Culture and Courts Encourage Financial Censorship

Government pressure may play a significant role, but Reitman argues cultural and political activism have also shaped the current environment.

Activists across the political spectrum increasingly pressure financial institutions and digital intermediaries to sever ties with people or organizations viewed as objectionable. Over time, providing neutral services has become framed less as infrastructure and more as implicit endorsement.

That shift, critics argue, creates a dangerous precedent.

“Just because it might make you happy today to see a person that you don’t agree with losing access to their money and suffering, that doesn’t mean that that same mechanism might not be turned against you down the road,” Lia Holland of Fight for the Future told Reitman.

The legal consequences of this evolving mindset are already surfacing in court.

One closely watched case involves Visa, Pornhub, and Serena Fleites, who alleges that videos she recorded as a teenager were uploaded to Pornhub without her consent. Fleites argues that Pornhub facilitated sex trafficking and that Visa, by processing payments connected to the platform, participated in that venture. Reitman notes that there is no allegation Visa processed payments tied specifically to the videos in question, and that the overwhelming majority of Pornhub’s content involved legal adult material. Still, a judge declined to dismiss the claims against Visa.

The implications could be enormous.

“If credit card companies are held liable for the potential illegal content hosted by websites that have any kind of payment or advertising service, it creates an untenable burden on credit card companies to review and police every piece of content on any aspect of the web that has any form of payment,” Reitman writes. “It is hard to overstate how far-reaching and dangerous it would be for the courts to hold Visa liable because users decided to upload illegal content onto Pornhub.”

Critics warn that such liability standards could pressure payment processors to aggressively police speech and content across vast portions of the internet, especially on platforms hosting user-generated material or controversial discussions.

Bankers as Sex Police

Faced with legal exposure and reputational concerns, many financial institutions have adopted increasingly strict oversight of adult content platforms.

“Bankers are making sweeping decisions about what types of sexual speech should exist online today,” Reitman writes.

Mike Stabile of the Free Speech Coalition told Reitman that adult websites sometimes provide banks with passwords allowing direct review of platform content. According to Stabile, banks routinely flag specific words, categories, and scenes that sites must remove in order to maintain payment processing services.

Meanwhile, Cathy Beardsley, CEO of Segpay, said banks and payment companies also use automated scanning systems to monitor merchant sites.

“Use spiders, and they’ll go through the websites monthly looking for terms and words that will get flagged, that we have to then have our merchants clean up,” Beardsley told Reitman.
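The kind of term scan Beardsley describes can be pictured as a simple keyword sweep. The sketch below is illustrative only; the flagged-term list and page text are invented, since the actual lists banks use are not public:

```python
# Illustrative sketch of a periodic content scan like the one Beardsley
# describes: crawl merchant pages, flag listed terms for removal.
# FLAGGED_TERMS and the sample text are hypothetical.

FLAGGED_TERMS = {"examplebannedword", "anotherflaggedterm"}

def scan_page(text: str) -> set[str]:
    """Return any flagged terms found in a page's text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return FLAGGED_TERMS & words

hits = scan_page("Listing title contains examplebannedword in the description.")
print(sorted(hits))  # terms the merchant must remove to keep processing
```

A real scan would crawl whole sites on a schedule, but the core mechanism is the same: a private word list, applied monthly, deciding what content may stay up.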

Mastercard receives particular attention in the book for what critics describe as broad and often vague standards governing acceptable content. Banks and payment processors are frequently left interpreting those standards on their own, creating inconsistent enforcement and uncertainty across the adult industry.

Whether driven by liability fears, public pressure, or institutional conservatism, large financial companies wield enormous influence over who can participate online. Reitman argues that limited competition within banking only magnifies that power.

“Banks enjoy special privileges and benefits (like government backed insurance), and there are lots of barriers to entry for start-up companies wanting to enter the financial space,” she notes.

More competition, she suggests, could reduce some of the pressure points currently shaping the industry. But heavy regulation also makes new entrants difficult.

A Section 230 for Banks?

Among Reitman’s more notable proposals is legislation modeled loosely after Section 230 protections for internet platforms.

“We need legislation to make it clear that payment intermediaries, banks, and credit card companies are not liable for the activities of the people and institutions who use their services,” she writes.

The book also explores the potential — and limitations — of alternatives such as cash and cryptocurrency.

More than anything, though, Transaction Denied frames financial exclusion as a growing systemic issue rather than a series of isolated incidents. Cases involving WikiLeaks, Backpage, Pornhub, or the NRA are often easy for the public to dismiss because they involve polarizing organizations or industries. But Reitman argues the underlying mechanisms extend far beyond any one political movement, industry, or ideology.

By tracing stories across a wide range of communities and viewpoints, the book presents financial exclusion as the product of overlapping political, regulatory, cultural, and corporate pressures. The institutions enforcing these restrictions may be private, but the incentives shaping their behavior often originate elsewhere.

And that may be what makes people uneasy now: not just who loses access to financial systems, but how ordinary the process has started to feel.


Court Orders Lead to Restoration of Playboy Germany’s Facebook Page by Meta

Playboy logo

DÜSSELDORF, Germany — For two months, the Facebook page for Playboy Germany sat in limbo. Then, just like that, it was back — restored after a court stepped in and told Meta to reverse course.

A regional court in Düsseldorf issued an injunction against the company, finding that the decision to block the page wasn’t lawful.

The page, followed by roughly 1.8 million people, had gone dark on Feb. 17.

In a statement, Kouneli Media — the company behind Playboy Germany — said Meta justified the takedown by pointing to alleged violations of its community standards, including nudity and sexual content. But according to the company, no specific posts were identified. Instead, the notice referenced activity that only “seemed” to break the rules.

The situation echoes a broader pattern that’s been hard to ignore. Across Meta’s platforms, moderation decisions often land without much warning — or clarity. Accounts tied to adult content, even those operating within legal bounds, can disappear overnight. Earlier this month, the Instagram account of sex tech company Bellesa was also taken down.

Kouneli Media has since filed a complaint with Germany’s Federal Network Agency, an independent regulator overseeing telecommunications and digital infrastructure.


OCC, FDIC Bar Regulators From Using ‘Reputation Risk’ in Bank Oversight


WASHINGTON — Federal banking regulators on Tuesday finalized a rule removing “reputation risk” as a factor in supervising financial institutions.

Under the new rule, the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation are barred from “criticizing or taking adverse action against an institution on the basis of reputation risk.” The rule also bars the agencies from “requiring, instructing, or encouraging an institution to close an account, to refrain from providing an account, product, or service, or to modify or terminate any product or service” based on a customer’s political, social, cultural or religious views, constitutionally protected speech, or lawful business activity viewed as presenting reputation risk.

The action follows an Aug. 7, 2025, executive order directing financial institutions not to deny or limit services to customers engaged in lawful activities on political grounds.

After that order, the OCC released a report on debanking that identified several sectors facing account closures or service restrictions, including the adult entertainment industry, citing concerns among banks about alignment with internal standards.

In March, Federal Trade Commission Chairman Andrew Ferguson issued warnings to payment processors such as PayPal, Stripe, Visa and Mastercard regarding practices that restrict access to services based on lawful but higher-risk activities.

The impact of the rule on industries that have reported difficulties accessing banking services remains uncertain. Although the OCC report identified adult entertainment as one of the affected sectors, regulators have not provided additional detail on how the new rule will be applied in practice.

While the rule prevents the OCC and FDIC from penalizing institutions for serving customers engaged in “politically disfavored but lawful business activities perceived to present reputation risk,” it does not limit banks’ ability to make decisions based on other supervisory considerations, including “safety and soundness.” Institutions may continue to restrict services under those criteria.

The Free Speech Coalition submitted comments in support of the proposed rule and recommended expanding its scope to apply more directly to banks. Those proposals were not adopted in the final version.

“The rule removes a key driver of banking discrimination against the adult industry,” said Free Speech Coalition Executive Director Alison Boden. “Federal examiners can no longer pressure banks to close accounts or deny services to lawful businesses based on reputation risk. It’s not going to solve all of our problems, but it’s a necessary piece of securing fair banking access for our industry.”


FTC Cautions PayPal, Stripe, Visa and Mastercard on Debanking Practices

Bank account closed

WASHINGTON — Federal Trade Commission Chairman Andrew Ferguson sent letters Thursday to the chief executives of PayPal, Stripe, Visa and Mastercard, warning against debanking practices, including denying access to financial services based on a customer’s lawful business activities.

“It is inconsistent with American values to deny law-abiding individuals the ability to run their legitimate businesses and feed their families because they attracted the ire of rogue American officials, overzealous activists, or, more worryingly, foreign governments seeking to control public discourse,” the letters state. “That is why President Trump’s August 7, 2025, Executive Order on debanking makes clear that it is unacceptable to debank law-abiding citizens due to ‘political affiliations, religious beliefs, or lawful business activities.’”

The executive order prohibits banks, savings associations, credit unions and other financial service providers from restricting access to accounts, loans or other services based on a customer’s lawful business activities that the institution may disagree with or view unfavorably for political reasons.

Following the order, the Office of the Comptroller of the Currency issued a report on debanking that identified several sectors, including adult entertainment, as facing potential discrimination due to activities considered inconsistent with certain financial institutions’ values.

Ferguson’s letters state that companies engaging in deplatforming or denying services to such customers could face Federal Trade Commission investigations and possible enforcement action.

Possible Pressure on Banks via Card Brands

The letters to Visa and Mastercard also reference the role of payment networks and providers, noting concerns about financial institutions that restrict access to services for these reasons. Ferguson wrote that it is “critical” for card networks not to allow unlawful debanking by member institutions, including banks that process transactions on their systems.

“Consumers cannot reasonably avoid this harm, particularly where, as is almost always the case, the First Amendment-protected activity that triggered the adverse action against them had no logical connection to, or material bearing on, their commercial relationship with the payment provider or network,” Ferguson wrote.

The letters suggest that payment networks may play a role in addressing practices by financial institutions that deny services under these circumstances.

The potential for additional oversight comes as questions remain about the extent of regulatory action from banking agencies, including the Federal Deposit Insurance Corporation and the National Credit Union Administration.

Proposed rules would restrict those agencies from taking action against institutions for providing services to individuals or businesses engaged in lawful activities that may be viewed as presenting reputational risk. However, those rules would still allow banks to make decisions regarding customers based on considerations tied to safety and soundness.

It remains unclear how enforcement priorities will be applied across different industries, including those identified in prior regulatory reports.


OpenAI Advisory Panel Opposed ‘Adult Mode’ for ChatGPT

ChatGPT Logo

SAN FRANCISCO — OpenAI is expected to limit its planned “adult mode” feature so that it does not generate deepfakes or synthetic NSFW images, instead restricting the tool to sexually explicit text, according to a report published Sunday.

The report cited an unnamed company spokesperson who said the change is necessary, adding that a rollout timeline has not yet been finalized. The feature had already been delayed as the company reviews concerns related to mental health risks and other potential uses of the technology.

Members of an advisory council of mental health experts selected by OpenAI warned the company in January that the proposed “adult mode” could pose significant risks to minors. One council member, who was not identified, said the feature could potentially lead to the creation of what they described as a “sexy suicide coach.”

The report also indicated that OpenAI’s internal age verification efforts had been considered “spotty.” Chief executive officer and co-founder Sam Altman said in October that the company planned to deploy age assurance and estimation tools to identify users aged 18 and older. In January, OpenAI expanded those efforts by implementing technology from an online identity provider called Persona, which has faced criticism from some observers who described it as invasive and prone to errors.


Debanking Explained: How Politics, Policy, and Perception Shape Account Closures


The Cato Institute has published a report on debanking. You can find it here: https://www.cato.org/policy-analysis/understanding-debanking-evaluating-governmental-operational-political-religious

Here’s what the report is all about:

Debanking can be a frustrating and deeply unsettling experience. One day everything seems fine, and the next, a notice arrives giving just 30 days to withdraw funds and find a new financial institution. Confusion quickly turns into anxiety. Bills were paid, nothing appeared out of the ordinary — so what changed? A call to the bank rarely brings clarity, and the response is often the same: no further details can be provided. Customers are left with more questions than answers.

On the other side of the conversation, bankers are frequently constrained by strict confidentiality requirements. Even frontline staff may not have access to the underlying reasons for an account closure. Financial institutions operate within a framework of anti–money laundering, know your customer, and countering the financing of terrorism regulations — commonly referred to as AML, KYC, and CFT. While these rules are standard practice within the industry, they remain largely invisible to the public, creating a disconnect that fuels frustration on both sides.

For those looking to address the growing concern around debanking, some argue that meaningful change will require greater transparency. That could mean reconsidering the confidentiality that surrounds account closures, removing reputational risk as a regulatory factor, and reevaluating the Bank Secrecy Act framework that effectively places financial institutions in the role of investigative gatekeepers.


When Women’s Wellness Gets Labeled “Adult,” the Bank Account Disappears


It starts with a quiet email. No warning. No phone call. Just a notice that your account is under review—or worse, closed.

That’s what’s happening to women’s health and sexual wellness companies across the United Kingdom and Europe. Not because they’ve broken laws. Not because they’ve done anything shady. But because somewhere in a compliance department, someone ticked the wrong box and labeled them “adult content.”

New research from two U.K.-based advocacy groups, CensHERship and The Case For Her, digs into what’s really going on. And it’s uncomfortable reading. The report argues that women’s health innovators are being misclassified in financial systems, triggering debanking and account shutdowns that can stall or even sink young companies.

“What we find is that misclassification, over-compliance, cultural discomfort, and outdated policy language combine to create structural barriers for women’s health innovation, and that the identified structural barriers tend to fall into two forms,” the research explains.

Those two forms? “Misclassification” and “misunderstanding.”

Misclassification is described as “where women’s health and sexual wellbeing are misread as adult content. This is the most visible and well-documented form of bias.”

Misunderstanding, on the other hand, is “where women’s health is overlooked as too new, complex, or unfamiliar to fit existing risk templates. These cases are harder to surface because they are often resolved quietly or never formally recorded.”

That second one hits differently. It’s the kind of bias that doesn’t shout. It just shrugs. Too new. Too complex. Too awkward. Next.

Some of the companies affected are household names in the space, including SheSpot, a widely recognized brand. These aren’t fringe operators. They’re mainstream wellness businesses trying to build products around bodies that, frankly, have been underserved for generations.

The report goes further: “Because most financial institutions have never explicitly defined women’s health or FemTech within their risk frameworks, systems default to the nearest analogue—typically adult content, vice categories, or other ‘sensitive’ sectors.”

That line sticks. Systems default. That’s how it happens. No villain twirling a mustache. Just outdated templates and risk models built in eras when women’s sexual health was barely discussed in polite company. So instead of creating a new category, institutions drag these companies into old ones—adult content, vice, high-risk.

It’s lazy. It’s structural. And it’s expensive.

Both CensHERship and The Case For Her argue that outdated classification systems, cultural discomfort, and unconscious bias are creating real barriers to growth. Being labeled “adult content” or “adult services” doesn’t just sound insulting—it places these businesses in a “high-risk sector” alongside firearms manufacturing and tobacco cultivation and marketing.

Think about that for a second. A company developing pelvic health tools or hormone-tracking tech ends up sitting in the same risk bucket as cigarette production.

This isn’t just semantics. Risk labels determine whether you can open a bank account, process payments, attract investors, or scale internationally. When the system quietly decides your innovation is morally adjacent to vice, you feel it everywhere.

Honestly, it raises a bigger question: if financial institutions can’t tell the difference between sexual wellness and adult entertainment, what does that say about the frameworks we’re still operating under?

Maybe the real issue isn’t that women’s health is “too new.” Maybe it’s that the systems judging it are too old.


Adult Creators Keep Getting Debanked — And the Fallout Goes Far Beyond Them

Financial discrimination

Your bank may never send you a memo about it, but it’s quietly shaping your life.

Every time you click “buy now,” a small army of institutions decides whether that purchase gets to exist. And for adult creators, that army has been steadily tightening its grip. For years, people in the industry have been warning about financial discrimination and debanking — the sudden closure of accounts, the polite but devastating “we can no longer do business with you.” It’s happening more often now. And it’s happening quietly.

“I don’t know what could happen next or when it might happen,” says adult VTuber, journalist, and activist Ana Valens. In just two weeks last November, nearly every platform she relied on either removed her content or suspended her outright. “While my Patreon and Ko-fi were reinstated, I’ve spent the past two months waiting for the other shoe to drop — another Patreon ban, my PayPal deactivated, and so on.” She reached out for explanations. Most platforms couldn’t clearly articulate how she’d violated their terms. Ko-fi didn’t respond until repeated messages finally led to reinstatement.

That kind of uncertainty lingers. It’s like walking on ice that might crack at any moment.

“Deplatforming and debanking are an occupational hazard for any adult content creator,” says Gina, a co-founder of PeepMe, a startup that set out to build a worker-owned creator marketplace. PeepMe was imagined as an alternative to OnlyFans and Patreon — a space where creators could hold equity, elect a democratic board, and receive quarterly profit-sharing dividends.

Gina requested that a pseudonym be used, given her continued work adjacent to the adult industry and the very real fear of financial fallout. “Even still, I’ve never seen someone banned on so many sites before [as Ana has been],” she says.

And it’s not just adult creators feeling the pressure. Companies in oil and gas, cryptocurrency, tobacco, and firearms have also raised concerns about politically motivated debanking. The pushback has grown loud enough that U.S. regulators are now stepping in, attempting to rein in financial discrimination.

Who’s Blocking My Buying?

When you make an online purchase, your money doesn’t travel in a straight line. It passes through layers of gatekeepers. The pipeline often looks like this:

  1. Platform (merchant) websites: where creators earn income — YouTube, Patreon, Etsy, DoorDash, Steam.

  2. Payment processors: companies that route the transaction between card networks and banks — PayPal, Stripe.

  3. Card networks: Visa, American Express, Mastercard — the rule-makers that standardize how buyers and sellers interact.

  4. Your bank and the seller’s bank: Wells Fargo, Bank of America, and so on.

Each step has discretion. Beyond preventing illegal activity, these institutions can decide what kinds of money they’re willing to touch.
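That chain of discretion can be sketched as a pipeline where any single layer can veto a transaction. The layer names are from the list above, but the policies below are invented for illustration; no real company's rules are being quoted:

```python
# Illustrative model of the payment pipeline described above: each layer
# applies its own policy, and one "no" anywhere blocks the transaction.
# All policies here are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Gatekeeper:
    name: str
    allows: Callable[[dict], bool]  # each layer's own acceptance policy

def settle(txn: dict, pipeline: list) -> str:
    for layer in pipeline:
        if not layer.allows(txn):
            return f"declined by {layer.name}"  # a single veto ends it
    return "settled"

pipeline = [
    Gatekeeper("platform",     lambda t: t["category"] != "banned_on_platform"),
    Gatekeeper("processor",    lambda t: t["category"] != "adult"),  # hypothetical policy
    Gatekeeper("card_network", lambda t: t["amount"] < 10_000),
    Gatekeeper("bank",         lambda t: True),
]

print(settle({"category": "adult", "amount": 25}, pipeline))  # declined by processor
print(settle({"category": "books", "amount": 25}, pipeline))  # settled
```

The point of the model: a buyer and seller who both want the transaction still need four separate yeses, and the strictest policy in the chain wins.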

“The rules set by card networks are sometimes vague,” says Dr. Val Webber, a postdoctoral researcher at Dalhousie University’s Sexual Health and Gender Research Lab. Mastercard’s June 2025 rules restrict “any Transaction that […] in the sole discretion of [Mastercard], may damage the goodwill of [Mastercard] or reflect negatively on the [brand].”

“In the sole discretion” is doing a lot of work there.

Last summer, Steam and itch.io removed or deindexed adult games after pressure from payment processors and card networks. Steam cited pressure from Mastercard, conveyed through processors like Stripe. Stripe told itch.io, “Stripe is currently unable to support sexually explicit content due to restrictions placed on them by their banking partners, despite card networks generally supporting adult content.” Stripe’s prohibited business list includes “pornography and other mature audience content (including literature, imagery, and other media) designed for the purpose of sexual gratification.”

Mastercard later denied involvement. In August 2025, the company stated, “Mastercard has not evaluated any game or required restrictions of any activity on game creator sites and platforms, contrary to media reports and allegations.”

Meanwhile, Valens saw her articles disappear from Vice. “My suspicion is that it was easy for a financial company to flag me as high risk as a punitive measure for my content, or my activism work,” she says. Attempts to obtain comment from Vice were unsuccessful.

Who Can Get Debanked?

“We have lots of data to show that people in the adult industry face financial discrimination in the form of their accounts being closed, being denied mortgages, business loans, and other banking services — despite banks often not being able to substantiate legal reasons related to these individual accounts,” says Maggie MacDonald, a PhD researcher at the University of Toronto.

The tension escalated in December 2020 when Visa and Mastercard cut ties with Pornhub, citing child sexual abuse material (CSAM). “Our adult content standards allow for legal adult activity created by consenting individuals or studios,” Mastercard said at the time. “Merchants must have controls to monitor, block and remove unlawful content from being posted.” Pornhub denied hosting illegal content and emphasized the harm to “the hundreds of thousands of models who rely on [their] platform for their livelihoods.”

But here’s the inconsistency that nags at people: X continues to process payments despite widespread reports of CSAM and non-consensual deepfake content. No sweeping financial freeze there.

Watching major platforms lose payment relationships makes smaller startups tread lightly. “We just can’t afford to lose our ability to do business with these financial companies,” Gina says. “Stripe takes only 2.9 percent from businesses they’re willing to work with, while high-risk processors willing to take on adult content can charge up to 15 percent.”

That difference can sink a company before it starts.
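To put rough numbers on that gap (a sketch only: the 2.9 percent and 15 percent rates are the figures Gina cites, while the $50,000 monthly card volume is a made-up illustration):

```python
def processing_fees(volume: float, rate: float) -> float:
    """Card-processing fees owed on a given payment volume."""
    return volume * rate

# Hypothetical $50,000/month in card payments (illustrative figure).
volume = 50_000.0
mainstream = processing_fees(volume, 0.029)  # the 2.9% rate Gina cites
high_risk = processing_fees(volume, 0.15)    # the up-to-15% high-risk rate

print(f"mainstream: ${mainstream:,.2f}")                      # $1,450.00
print(f"high-risk:  ${high_risk:,.2f}")                       # $7,500.00
print(f"extra cost: ${high_risk - mainstream:,.2f} / month")  # $6,050.00
```

On those assumed numbers, the high-risk processor takes roughly five times as much off the top, before any other costs of running the business.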

“Losing a relationship with card networks is a risk payment processors can’t afford, and losing relationships with payment processors is a risk that platform websites can’t afford,” explains Webber. “In the end, the responsibility of ensuring their content stays within the lines of these oftentimes unclear rules trickles down to each individual creator. Because ultimately, content creators are more expendable to platforms than payment processors and card networks.”

One justification often cited is chargebacks — when customers reverse credit card transactions. Gina isn’t convinced.

“Locking out entire industries makes less and less sense as fraud detection technology advances,” she says. “Payment processors and card networks already have processes to step in when an individual business has a high rate of chargebacks, there’s no reason to block out a whole industry.” Mastercard recently announced expanded generative AI fraud-detection tools, building on already sophisticated monitoring systems.

“We also haven’t seen the claim of high-chargebacks in adult content substantiated anywhere in terms of measured data,” adds MacDonald. “As a researcher, that makes me suspicious of the criteria these companies are using behind the scenes.”

The Evolving Landscape of Banking Regulations

In February 2025, the Free Speech Coalition filed a statement with the U.S. House Committee on Financial Services, calling for due process protections, objective risk assessments, and explicit recognition that lawful adult businesses do not inherently present financial crime risk. Blocking entire industries without individualized evaluation, the statement argued, is regulatory overreach with serious implications for free speech.

Multiple efforts are underway in the United States to limit financial institutions from denying service for reasons beyond legal violations. In August 2025, President Donald Trump issued an executive order directing regulators to investigate and reverse politically motivated debanking. Bank regulators have begun removing “reputational risk” from compliance criteria, and proposed Senate legislation would impose civil fines on banks and card networks that avoid entire categories of customers.

“Card networks and payment processors began by blocking pornography, but they’ve moved into other online industries as well,” says Webber. “The line in the sand continues to shift, and it has recently expanded to video game creators and streamers as well. We don’t know how these rules might evolve, and what type of online content might be next.”

Valens has spent months urging customers to call Mastercard, Visa, PayPal, and Stripe to question purchase restrictions and account freezes. Visa points to its policies for combating illegal activity; PayPal requires pre-approval for adult materials, similar to tobacco; Stripe states it does not support adult content.

“Private companies have been deputized to decide how we can earn and spend our money,” says MacDonald. “Anyone who is ideologically misaligned with any of these companies faces the risk of losing their livelihood.”

That’s the part that lingers.

It’s not just about porn, or games, or activism. It’s about the invisible committee that votes on your transactions — and whether one day, without warning, they decide you don’t get a vote at all.


Discord Plans Mandatory Age Verification for All Users in 2026

Something quietly fundamental is about to change on one of the internet’s most familiar hangouts. Discord’s senior leadership confirmed this week that age verification will become mandatory for all users starting in March 2026, alongside a shift to what the company calls “teen-by-default” settings across the entire platform.

The expanded safety rollout, according to the company, is meant to create “a safer and more inclusive experience for users over the age of 13.” On paper, it sounds tidy. In practice, it signals a pretty big cultural shift for a platform that’s long felt like the digital equivalent of a messy, unlocked group chat.

“As part of this update, all new and existing users worldwide will have a teen-appropriate experience by default, with updated communication settings, restricted access to age-gated spaces, and content filtering that preserves the privacy and meaningful connections that define Discord,” the company said. It’s a careful balance they’re trying to strike — safety without sanding off the personality that made people show up in the first place.

“Nowhere is our safety work more important than when it comes to teen users, which is why we are announcing these updates in time for Safer Internet Day,” said Savannah Badalich, Discord’s head of product policy, referencing the February 10 awareness initiative. “Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility.”

Badalich added, “We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long-term wellbeing for teens on the platform.” It’s the kind of statement you’d expect — earnest, forward-looking, and clearly written with regulators peeking over shoulders.

Under the new “teen-by-default” framework, users will have to go through age-verification steps to access channels and servers labeled as “age-restricted.” That includes spaces run by adult content creators, online sex workers, sexually themed communities, sexual animation hubs, and certain fan communities that live closer to the edges of the platform.

There’s also an unspoken tension here that’s hard to ignore. Discord has been down this road before, and not without bruises.

The platform previously experienced a data breach involving one of its age-verification vendors, exposing sensitive verification materials, including government-issued identification. For users who already feel uneasy about handing over personal documents online, that memory hasn’t exactly faded.

That incident stemmed from mistakes by a customer experience vendor, 5CA, which outsources work to customer service agents in countries including the Philippines. Discord’s primary age-verification partner, K-ID, later stated that it had no involvement in the breach tied to its standard verification systems.

So here we are again — a platform promising better protection, safer defaults, and stronger guardrails, while carrying the weight of past missteps. Maybe this time the systems hold. Maybe trust rebuilds. Or maybe the internet does what it always does and asks the same old question, just in a new tone: how much safety is worth how much control?


App Meant to Help Users Quit Porn Leaked Their Masturbation Habits

There’s something quietly disturbing about discovering that a tool meant to help people wrestle with their most private habits accidentally left the blinds wide open. An app that claims to help users stop consuming pornography ended up exposing intensely sensitive personal data — the kind of stuff most people wouldn’t even admit to a close friend. Ages. Masturbation frequency. Emotional triggers. How porn makes them feel afterward. And tucked inside that data were a lot of minors, which makes your stomach drop a little when you really sit with it.

One user profile, for instance, listed their age as “14.” Their “frequency” showed porn use “several times a week,” sometimes up to three times a day. Their “triggers” were logged as “boredom” and “Sexual Urges.” The app had even assigned a “dependence score” and listed their “symptoms” as “Feeling unmotivated, lack of ambition to pursue goals, difficulty concentrating, poor memory or ‘brain fog.’” It reads less like analytics and more like a vulnerable diary entry — something that was supposed to stay locked away.

The app isn’t being named because the developer still hasn’t fixed the issue. The problem was uncovered by an independent security researcher who asked to remain anonymous. He first flagged it to the app’s creator back in September. The creator said he’d fix it quickly. That didn’t happen.

The flaw comes from a misconfiguration in how the app uses Google Firebase, a popular mobile app development platform. By default, Firebase can make it surprisingly easy for anyone to become an “authenticated” user and access backend storage — the digital attic where all the private boxes tend to live if you’re not careful.
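For illustration only — these are generic Firebase Realtime Database security rules, not the app’s actual configuration — the pattern the researcher describes usually comes from rules that treat any signed-in account as trusted. Since anyone can self-register an account, that is effectively public access:

```json
// INSECURE (a common "authenticated users only" setup): any signed-in
// user — including one who just self-registered — can read everyone's data.
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}

// SAFER: each user can only read and write their own record.
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The difference is scoping: the first rule set gates access on being logged in at all, while the second ties each record to the user ID that owns it.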

Overall, the researcher said he could access information belonging to more than 600,000 users of the porn-quitting app, with roughly 100,000 identifying as minors. That number lands heavy. It’s not abstract. It’s classrooms. It’s school buses. It’s kids who probably assumed they were talking into a void, not a wide-open window.

The app also invites users to write confessions about their habits. One of them read: “I just can’t do this man I honestly don’t know what to do know more, such a loser, I need serious help.” You can almost hear the frustration in that sentence — the messy spelling, the emotional spill. That’s not data. That’s a human having a rough night.

When reached by phone, the creator of the app said he had spoken with the researcher but denied that the app had ever exposed user data through a misconfigured Google Firebase. He suggested the researcher may have fabricated the data that was reviewed.

“There is no sensitive information exposed, that’s just not true,” the founder said. “These users are not in my database, so, like, I just don’t give this guy attention. I just think it’s a bit of a joke.”

When asked why he previously thanked the researcher for responsibly disclosing the misconfiguration and said he would rush to fix it, he wished me a good day and hung up. One of those conversations that ends abruptly, leaving a strange quiet buzzing in the room.

After the call, an account was created on the app. The researcher was then able to see that new account appear inside the misconfigured Google Firebase environment — confirmation that user information was still exposed and accessible. Sometimes reality has a way of answering arguments faster than any debate ever could.

This type of Google Firebase misconfiguration isn’t new. Security researchers have been talking about it for years, and it continues to surface today. It’s one of those problems that feels boring until it suddenly isn’t — until someone’s real life data is sitting out in the open.

Dan Guido, CEO of cybersecurity research and consulting firm Trail of Bits, said in an email that this Firebase issue is “a well known weakness” and easy to find. He recently noted on X that Trail of Bits was able to build a tool using Claude to scan for this vulnerability in just 30 minutes.
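Tools like the one Guido describes typically boil down to an unauthenticated read against a project’s database over Firebase’s REST interface. A minimal sketch of that check, using only the Python standard library (the project name is hypothetical, and this covers only the Realtime Database, one of several Firebase backends):

```python
import json
import urllib.error
import urllib.request


def rtdb_url(project: str) -> str:
    """REST endpoint for reading the root of a Firebase Realtime
    Database; shallow=true keeps the response small."""
    return f"https://{project}.firebaseio.com/.json?shallow=true"


def is_publicly_readable(project: str, timeout: float = 10.0) -> bool:
    """True if the database answers an unauthenticated read,
    i.e. its security rules allow public access."""
    try:
        with urllib.request.urlopen(rtdb_url(project), timeout=timeout) as resp:
            json.load(resp)  # a 200 with a JSON body: the read succeeded
            return True
    except urllib.error.HTTPError:
        # A locked-down database rejects the request ("Permission denied").
        return False


# Example (hypothetical project name):
# print(is_publicly_readable("some-project"))
```

A real scanner would enumerate candidate project names extracted from app binaries and also probe Firestore and Cloud Storage, but the core test is this simple: ask for data without credentials and see whether the server hands it over.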

“If anyone is best positioned to implement guardrails at scale, it is Google/Firebase themselves. They can detect ‘open rules’ in a user’s account and warn loudly, block production configs, or require explicit acknowledgement,” he said. “Amazon has done this successfully for S3.” S3 is a cloud storage product from AWS that previously struggled with similar data exposure issues due to misconfigurations.

The researcher who uncovered the app’s vulnerability added that this insecure setup is often the default in Google Firebase. He also pointed a finger at Apple, arguing that apps should be reviewed for backend security issues before being allowed into the App Store.

“Apple will literally decline an app from the App Store if a button is two pixels too wide against their design guidelines, but they don’t, and they don’t check anything to do with the back end database security you can find online,” he said. It’s one of those comments that lands with an uncomfortable kind of truth — polished surfaces, shaky foundations.

Apple and Google did not respond to requests for comment.

And that’s the part that lingers. People trusted this app with their most awkward truths, their late-night regrets, their quiet attempts at self-control. Some of them were kids. They weren’t posting for an audience. They were whispering into what they thought was a locked room. Turns out the door was never really closed.
