Legal Attacks

xHamster Ends Texas AV Lawsuit With $120K Settlement


Sometimes a legal fight doesn’t end with a dramatic ruling—just a quiet deal and a check. That’s what happened in Texas, where Hammy Media, the company behind xHamster, agreed to settle a lawsuit over alleged violations of the state’s age verification law with a $120,000 payment.

Texas Attorney General Ken Paxton launched the suit in 2024. The complaint painted the site’s early verification screen as little more than a digital speed bump, arguing, “Minors can simply click almost anywhere on the webpage away from the ‘I’m 18 or older’ button, including the ‘X’ in the top right corner of the message, to dismiss the pop-up message and proceed to the Defendant’s pornographic website … The age verification methods used by the Defendant on its websites cannot be said to verify anything at all.”

The state didn’t start small. Texas initially asked a district court to impose penalties of up to $1.67 million, plus another $10,000 for every day after the filing date—a financial threat large enough to make most companies blink.

Those cases stalled for a while as everyone waited for the U.S. Supreme Court to decide whether these types of laws even hold up under the Constitution. The case—FSC v. Paxton, brought by the Free Speech Coalition—became the legal bellwether. In June 2025, the court sided with Texas, declaring the law constitutional and effectively giving other states a green light to move forward with similar efforts. Once that happened, dormant lawsuits snapped back to life.

According to the agreed final order filed Nov. 7, the company made changes quickly. “Promptly after suit was filed, on March 21, 2024, Hammy Media restricted access to its website,” and it has now rolled out the kind of age verification Texas requires. The order also “resolves any and all claims based on the facts alleged in the State’s Petition” and specifies that the settlement isn’t an admission of wrongdoing—just a resolution.

Texas didn’t stop at xHamster. The state filed similar lawsuits in 2024 against Multi Media, the company behind Chaturbate, and Aylo, which operates Pornhub. Chaturbate settled in April; the Aylo case is still moving through the courts.


Supreme Court Lets New York Push Forward With Adult-Store Zoning Crackdown


There’s a strange finality in watching a 30-year legal fight end not with a bang, but with a single line from the highest court in the country. The U.S. Supreme Court has refused to hear an appeal of a lower court ruling that lets New York City enforce a zoning law from 2001—one designed to quietly squeeze adult retail shops out of almost the entire city.

Justice Sonia Sotomayor denied the application, closing the door on a legal battle that began back when Times Square was still transforming from peep booths and neon sex ads into a place where tourists buy oversized M&M’s.

When the law first passed in 1995, the city’s mission felt almost moralistic: push out adult businesses in most neighborhoods, especially in midtown, as part of the larger effort to “clean up” Times Square. The rule was mathematical, too—if 40% or more of your inventory or floor space involved sexual content, you were officially labeled an adult establishment.

But the city later decided that loophole was too easy to dance around. A 2001 amendment replaced the 40% rule with a broader mandate targeting any business that “primarily” markets adult entertainment, whether that meant a bookstore, strip club, or video shop.

What followed was two decades of litigation. Adult businesses argued the change trampled their constitutional rights—free speech, equal protection, the basics. The city argued the law was simply a zoning measure, not a moral crusade. Courtrooms kept the fight alive longer than most of the original storefronts survived.

In 2024, a district court judge finally sided with the city, deciding New York could enforce the amendment. Suddenly, even those who had meticulously complied with the old 60/40 rule were told to pack up and move to the tiny slivers of the city that still permit adult businesses—or to leave entirely.

The plaintiffs appealed to the U.S. Court of Appeals for the 2nd Circuit. That effort fell flat in July, when a three-judge panel rejected it.

They tried again, asking the full 2nd Circuit for a rehearing. In August, that request was denied as well.

By October, the 2nd Circuit wasn’t just done hearing arguments—it declined to issue a temporary stay and issued its mandate, effectively clearing the way for the city to start enforcing the law.

So the plaintiffs went to the final stop: the Supreme Court. On Oct. 22, attorneys requested review. Nine days later, Justice Sotomayor shut the door.

With the Supreme Court refusing to step in, the legal journey is over. No more appeals. New York City now has full authority to enforce the zoning restrictions—and the businesses affected have to deal with what that means in the real world, not just on paper.

And that’s where things get messy. Even in the rare parts of the city where zoning technically allows adult shops, there are layered restrictions requiring a set distance from residential areas, schools, places of worship, and even other adult businesses. Put all those rules together and you start to wonder how many legal locations actually exist. Maybe none. Maybe a handful on the outskirts where nobody walks. It’s a bit like being told you’re “free to speak”—but only in an empty room no one’s allowed to enter.


NetChoice Pushes Back, Suing Virginia Over Youth Social-Media Rules


There’s something strangely jarring about a state telling people how long they’re allowed to look at a screen. Maybe it’s because most of us grew up sneaking extra time on whatever device we had — the old family computer, a flip phone under the covers, whatever — and now here we are, watching lawmakers try to ration screen time like supplies during a storm.

That’s the backdrop for a new fight in Virginia, where a major tech trade group just hit the state with a lawsuit over its sweeping new social-media law. NetChoice — the group that represents some of the biggest names in the digital world — filed its challenge in federal court in Alexandria, arguing that the state has stepped way, way over the line with Senate Bill 854. The suit names outgoing attorney general Jason Miyares and doesn’t pull punches about what’s at stake: the First Amendment rights of both adults and minors.

“Virginia must leave the parenting decisions where they belong: with parents,” said Paul Taske, co-director of NetChoice’s Litigation Center. “By asserting that authority for itself, Virginia not only violates its citizens’ rights to free speech but also exposes them to increased risk of privacy and security breaches.”

And then he doubled down: “We look forward to defending Virginians’ First Amendment rights in court.”

The law they’re talking about was signed back in May by Gov. Glenn Youngkin — who won’t be around when it actually kicks in on January 1, 2026. And once it does, things change fast. Anyone in Virginia who wants to access protected speech online has to verify their age. Adults, minors, parents — everyone gets checked at the door. No ID, no entry.

But the age-checks are only half the story. There’s also a hard, government-imposed time limit: one hour per day for anyone under 16. If an adult wants more time? They have to confirm that they’re the ones asking for it. It’s like a digital permission slip from the state.

Taske didn’t mince words about the absurdity he sees in that: “Virginia’s government cannot force you to read a book in one-hour chunks, and it cannot force you to watch a movie or documentary in state-preferred increments. That does not change when the speech in question happens online.”

Then there’s the kicker — the law also bans cellphones and mobile devices in schools. Not just a tweak, not a pilot program, but an across-the-board prohibition framed as a public-health move to protect kids. It’s sweeping, dramatic, and almost guaranteed to reshape daily life for families if it survives the courts.

NetChoice’s membership list reads like a who’s who of the digital universe: Meta Platforms, Netflix, Google, X, Etsy. The people building the platforms most of us touch every day. And they’re staring down a law that tries to regulate not just what people can see online, but how long they’re allowed to see it.

The whole thing feels like one of those moments where technology and policy collide in a way that makes you stop and wonder which part of the future we’re actually building — and who gets to hold the timer.


Porn and Politicians: Still Reliable Clickbait by Stan Q. Brick


And here I thought being outraged by the things politicians say and do had become passé. Apparently not, if your transgressions include (please, those with delicate sensibilities, cover your eyes) following OnlyFans models and escorts on social media.

“‘Devout Christian’ Dem caught following prostitutes, OnlyFans models on social media,” proclaimed the New York Post, which, as a publication that backs Donald Trump, clearly demands a higher standard of social media decorum than following an OnlyFans model. As I’m sure the discerning editors of the Post would tell him, James Talarico should stick to more family-friendly online activities, like sharing videos of well-informed patriots who can enlighten us with important, well-established facts – like the news that Osama bin Laden is still alive.

Naturally, Talarico’s publicity flacks had to deny any meaningful personal interest in the eyebrow-raising follows on the part of the good would-be Senator. After his timeline became news, they quickly fired off a statement saying the campaign’s “social media team – including James – follows back and engages with supporters who have large followings and does not investigate their backgrounds.”

To be fair, that explanation is plausible enough, as far as these things go. I can’t help but wonder, though: is part of the reason some of our elected officials are so inclined to support laws restricting and regulating all manner of sex-related things a need to distance themselves from their own desires? Or, maybe more to the point, a need to be seen as standing against certain “immoral behaviors,” regardless of whether they truly are against those behaviors?

If you’re James Talarico, I suppose you must put out a statement like the semi-denial offered by his team. Your name isn’t Donald Trump, so you can’t simply say “Those social media follows were planted on my timeline by the Deep State” and expect half the country to believe you.

And I suppose if you’re the New York Post, you can’t go around taking the position that it doesn’t matter if some Democrat from Texas likes and/or follows an OnlyFans model, just because the guy you endorsed for President paid hush money to a porn star. The same can be said for the rest of the media that seized on the story; they all have bills to pay, and sex-related scandals are reliable eyeball magnets.

But would the world (or the truth) truly suffer if we were to give a story like Talarico’s little social media snafu the sort of mundane headline it arguably deserves? Would people miss the momentary rush of self-righteous glee that accompanies such a story if we crowned it with something like: “Semi-Famous Texan Likes Pictures of Attractive Women”? Or how about “Guy Who Believes in God Sometimes Also Thinks About Sex”? Or perhaps “Stunner: Would-Be Senator Has Actual Blood in Veins”?

Either way, maybe James Talarico should look at the bright side: at least he isn’t a politician in the UK who was, say, following someone who made porn that depicts strangulation, or he could have much bigger problems on his hands.


Pornhub, Stripchat Challenge EU’s VLOP Tag, Calling the Data Faulty


There’s something strangely surreal about watching two adult platforms square off against one of the most powerful regulatory bodies in Europe. It’s like seeing the quiet kid in class suddenly challenge the teacher on the grading rubric — bold, a little chaotic, and honestly pretty fascinating. That’s what played out this week in Luxembourg, where attorneys for Pornhub and Stripchat told the EU’s General Court that the European Commission misjudged them based on shaky data.

Both cases — Aylo Freesites v. Commission for Pornhub and Technius v. Commission for Stripchat — orbit around the same issue: whether these sites really belong in the Digital Services Act’s “very large online platform” category, the VLOP bucket reserved for players with at least 45 million monthly EU users. It’s a label that comes with heavy regulatory weight, not to mention a fee structure that can make a CFO sweat.

But on Friday, according to reporting from MLex, something interesting happened. Christopher Thomas, Aylo’s lawyer, basically asked the court the digital equivalent of: Why are you trusting the substitute teacher but not the person who actually runs the classroom? He questioned why the commission brushed off Pornhub’s own methodology as noncompliant with DSA rules — yet readily accepted Similarweb’s data, even though no one seemed to know “what underlying data Similarweb used, or the maths applied.”

Pornhub’s internal numbers reportedly fell below the VLOP threshold. Similarweb’s outside numbers? Above it.

A small difference in theory, a massive difference in regulatory reality.
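To make the stakes concrete, here is a toy sketch of the threshold test in Python. The figures are invented, not either party’s actual numbers; the only real constant is the DSA’s 45 million mark.

    # Toy illustration of the VLOP threshold dispute (all figures hypothetical).
    # A platform is designated a VLOP when its average monthly active EU
    # recipients reach 45 million.
    VLOP_THRESHOLD = 45_000_000

    estimates = {
        "provider's own methodology": 41_000_000,   # hypothetical: below the line
        "third-party traffic estimate": 47_000_000, # hypothetical: above the line
    }

    for source, monthly_eu_users in estimates.items():
        verdict = "VLOP" if monthly_eu_users >= VLOP_THRESHOLD else "below threshold"
        print(f"{source}: {monthly_eu_users:,} -> {verdict}")

Same test, same site, opposite regulatory outcomes — which is exactly why the provenance of the numbers is the whole fight.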

Thomas drove the point home with a simple comparison:

“If a provider said they had purchased an estimate below 45 million, but didn’t know the data or methodology on which that was based and couldn’t explain it to the commission, it would obviously be unacceptable.”

It’s one of those arguments that’s hard to un-hear once it’s been said.

From the commission’s side, attorney Paul-John Loewenthal offered something that sounded like a quiet confession about the digital universe we all live in. He acknowledged that user counts are, at the end of the day, approximations. There is “no golden method to calculate user numbers; it’s not possible.”

And with that, the case hung in the air — unresolved, unsettled, and oddly revealing.

Stripchat’s situation wasn’t far off. Their VLOP label also came from Similarweb data, which originally pinned the site above the 45-million mark. Then came the twist: Similarweb later revised the estimate downward, dropping Stripchat below the threshold entirely.

On Thursday, Stripchat operator Technius asked the court to undo the European Commission’s 2023 decision that labeled it a VLOP in the first place — not just reverse it going forward, but scrap it retroactively. And even though the commission already revoked the VLOP status earlier this year, Technius still has reasons to chase a clean erasure.

According to MLex, attorney Tobias Bosch explained that a retroactive annulment could do more than tidy the record. It could help Stripchat recover the supervisory fee it paid. It could shift the jurisdiction over who gets to police them. And it could influence an active investigation into whether the platform violated the DSA in the past.

It’s wild how much hinges on something as mundane-sounding as user-count methodology. But maybe that’s the real tension here — in a world obsessed with data, the people interpreting the numbers often have more power than the numbers themselves.


Strike 3 Fires Back, Dismissing Meta’s “Personal Use” Claim in AI Battle


There’s a strange kind of déjà vu when a major tech company gets accused of dipping into the piracy pool — that uneasy feeling like we’ve been here before, even though each case somehow feels bigger than the last. That’s the energy swirling around the latest clash between Vixen Media Group owner Strike 3 Holdings and Meta, after Meta asked a federal judge in Northern California to toss their lawsuit.

Strike 3 didn’t pull any punches in its original complaint. They say Meta didn’t just accidentally stumble across their movies online — they accuse the company of deliberately scooping up VMG content from pirate sites to help train its AI systems. And they’re specific about it. BitTorrent, they say, is where those downloads came from. It’s the kind of accusation that makes you imagine a massive tech company hunched over a torrent client, pretending to be invisible.

Meta’s response in October? A very different story — one that almost sounds like a shrug. They told the court that if videos were downloaded, the timing and the number of those downloads “point solely to personal use, not AI training.” And then came the eyebrow-raising line:

“Meta contractors, visitors, or employees may have used Meta’s internet access over the years to download videos for personal use.”

In other words: Hey, people do stuff on our Wi-Fi. Not our problem.

Meta insisted the download activity — the volume, the pattern — just didn’t look like anything tied to large-scale AI development and “is plainly indicative of private personal use.”

They also brushed aside Strike 3’s attempt to link non-Meta IP addresses to the company, calling it a stretch.

Strike 3 fired back this week, and you can almost hear the disbelief between the lines. Their response to the “employees were just downloading porn” defense was blunt:

“Meta’s excuse that employees must be infringing Plaintiffs’ copyrights for ‘personal use’ does not fit the facts.”

What they say does fit the facts?

Something far more coordinated — something algorithmic. They point to what they call “unique patterns of Meta’s piracy,” arguing those patterns don’t look like a person casually searching for adult videos. Instead, Strike 3 claims the behavior “not only indicate[s] the use of a universal ‘script’ (likely developed, supplied, or encouraged by Meta), but also show[s] that the BitTorrent downloads are ‘for AI training data and not for personal use.’”

They bolster that argument with another recent copyright case: Kadrey v. Meta Platforms, Inc.

In that suit, Meta “admitted to using BitTorrent to obtain digital copies” of books for AI training — and Strike 3 alleges that the same IP addresses used in the book-related downloads also appear in their own infringement logs. According to them, Meta even acknowledged in that case that it masked its corporate IPs to avoid detection.

And then comes one of the more striking accusations (no pun intended):

“Plaintiffs were able to correlate Meta’s infringement through its Corporate IPs to six hidden data centers that were used to engage in massive infringement of both Plaintiffs’ and the Kadrey plaintiffs’ works as well. The scale of these infringements is staggering and are ‘beyond what a human could consume.’”

Strike 3 says activity from those masked IP addresses mirrors the activity from Meta’s known corporate IP blocks — a kind of digital fingerprinting that, in their telling, points to one conclusion: these downloads were never about personal viewing habits. “Those ends,” they write, “were to train Meta’s AI and not for the personal use of its employees.”
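For flavor, the core of a correlation claim like this is simple set arithmetic: which addresses appear in both cases’ logs, and whether they fall inside publicly attributed corporate ranges. Here is a minimal sketch in Python, using RFC 5737 documentation addresses rather than anything from the actual filings.

    import ipaddress

    # Hypothetical log data; the real filings rely on far richer telemetry.
    kadrey_ips = {"203.0.113.7", "203.0.113.9", "198.51.100.21"}   # book-download IPs
    strike3_ips = {"203.0.113.9", "198.51.100.21", "192.0.2.44"}   # Strike 3's log IPs
    corporate_blocks = [ipaddress.ip_network("203.0.113.0/24")]    # attributed range

    # Step 1: find addresses that show up in both cases.
    overlap = kadrey_ips & strike3_ips
    print("IPs appearing in both cases:", sorted(overlap))

    # Step 2: check each overlapping address against known corporate ranges.
    for ip in sorted(overlap):
        addr = ipaddress.ip_address(ip)
        attributed = any(addr in block for block in corporate_blocks)
        print(f"{ip}: {'inside' if attributed else 'outside'} a known corporate block")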

And if the court believes that interpretation?

Meta wouldn’t just be facing another infringement spat. The numbers involved are enormous — the complaint lists at least 2,396 allegedly infringed movies. Multiply that by the maximum statutory damages of $150,000 per work, and you land at a number that barely feels real: $359.4 million.
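The arithmetic behind that headline number is simple, and worth a little context: under 17 U.S.C. § 504(c), $150,000 per work is the ceiling for willful infringement, while the ordinary statutory range runs $750 to $30,000 per work. A quick back-of-the-envelope:

    # Back-of-the-envelope statutory damages under 17 U.S.C. § 504(c).
    works = 2_396                              # allegedly infringed movies in the complaint

    willful_max = 150_000                      # per-work ceiling for willful infringement
    ordinary_min, ordinary_max = 750, 30_000   # ordinary statutory range per work

    print(f"Willful-infringement ceiling: ${works * willful_max:,}")   # $359,400,000
    print(f"Ordinary range: ${works * ordinary_min:,} to ${works * ordinary_max:,}")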

But the money, oddly enough, may not be the part that echoes the loudest. This lawsuit drops the adult industry directly into the center of the broader fight over whether training an AI model on copyrighted material counts as “fair use” — or whether it’s just a slicker version of old-school piracy wearing a futuristic badge.

No one really knows yet how courts will treat AI training that leans on protected works. Everyone’s waiting for the first big ruling — the one that will set the tone for all the cases piling up behind it. And maybe that’s the real story here: not just whether Meta downloaded some videos, but whether we’re watching the early days of a legal shift that’s going to redraw the boundaries of ownership, creativity, and machine learning.

Because if AI is the future, then this messy, uncomfortable question is coming along for the ride whether Silicon Valley likes it or not.


Open The Age Verification Floodgates!

Ben Suroeste opines on a new age verification service. Here’s a summary:

Brady Mills Agency just rolled out AgeWallet, their shiny new age-verification system they say will help small adult-sector merchants survive the avalanche of new compliance rules. They’re leaning hard on the idea that the Supreme Court’s FSC v. Paxton ruling kicked the industry into fast-forward, and honestly, they’re not wrong—tools like this are going to keep popping up like mushrooms after rain.

What stands out, though, is the quieter warning baked into the announcement: the beginning of a serious crackdown on VPNs. AgeWallet claims it can detect proxies, masked locations, all of it—and force users to re-verify if anything looks off. Great for merchants trying to stay legal. Not so great if you’re living under a government that already decides what you’re allowed to read or watch. For those of us who remember the scrappy, boundary-free internet, this feels less like progress and more like another brick in the wall.

 


Safe Bet: Soon, They’ll Try to Ban VPN Use by Stan Q. Brick


Over on Forbes.com right now, there’s an article making the point that when you read somewhere that traffic from the UK to Pornhub is down 77%, you might want to take that figure with a grain of salt. Or maybe a pillar of the stuff.

Writing for Forbes, Zak Doffman goes further still, suggesting “you can completely ignore” such a claim because “it’s not true.”

“What’s actually happening is that U.K. adults are turning to VPNs to mask their locations,” Doffman writes. “Just as residents of U.S. states affecting bans now pretend to be someplace else. Pornhub makes this as easy as possible.”

The article goes on to cite (perhaps accurately – I’m certainly no expert on VPNs) a variety of reasons why this sudden expansion in VPN use may not be a good thing, including the eye-catching assertion that “VPNs are dangerous.”

“You are trusting all your content to a third-party provider who can see where you are and the websites you visit,” Doffman writes. “At a minimum. There are plenty of reports of rogue VPNs doing much worse than that. In particular, you must avoid free VPNs and Chinese VPNs. Stick to bluechip options.”

Doffman is probably right, and his advice on sticking to the name-brand VPNs probably makes good sense. But as a guy who misses the era of what people call the “open internet,” my concern isn’t so much rogue VPN operators as it is rogue legislators.

As I read Doffman’s piece, I couldn’t help but imagine some elected official somewhere reading the same piece and saying to himself/herself: “OH. MY. GOD. This VPN thing MUST be stopped, whatever it is.” The manner of legislation that follows this sort of epiphany typically tries to solve one problem by creating another. Or maybe several others.

The thing is, it’s not Doffman’s warning about the potential dangers of VPN use that will drive the concern of my hypothetical legislator; not the potential security threat, not the nefarious actors out there offering free VPNs.

No, what will get the legislators all fired up and ready to wield their pens again will be the part about the ease of using VPNs to get around their precious, legally mandated age verification walls.

I don’t expect too many legislators will seek to ban VPN use altogether, although doubtless there will be some bright bulb somewhere who proposes exactly that. More likely, they’ll add something to an existing age verification statute that prohibits adult sites from “facilitating the use of technology to circumvent state law,” or that mandates adult sites do what a lot of paywalled sites already do for their own reasons: try to detect and defeat VPN use.

As Doffman notes, websites can “look at your browser settings or cellular settings or recognize you from previous visits…. That’s why it’s harder to watch live sports from your usual provider when you’re away from home, their market restrictions try to catch you out. Porn sites do not.”
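For the curious, detection along the lines Doffman describes is mostly heuristics. Here is a minimal sketch, assuming a site subscribes to some feed of datacenter and VPN address blocks; the ranges below are RFC 5737 placeholders, and real systems layer on commercial IP-intelligence data and browser fingerprinting.

    import ipaddress

    # Hypothetical feed of datacenter/VPN address blocks (RFC 5737 placeholders).
    DATACENTER_BLOCKS = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def looks_like_vpn(client_ip: str, browser_tz: str, geoip_tz: str) -> bool:
        """Crude heuristic: flag traffic from known datacenter ranges, or sessions
        where the browser-reported timezone disagrees with the IP's geolocation."""
        addr = ipaddress.ip_address(client_ip)
        in_datacenter = any(addr in block for block in DATACENTER_BLOCKS)
        return in_datacenter or browser_tz != geoip_tz

    print(looks_like_vpn("203.0.113.50", "Europe/London", "Europe/London"))  # True: datacenter IP
    print(looks_like_vpn("192.0.2.10", "Europe/London", "America/Chicago"))  # True: timezone mismatch
    print(looks_like_vpn("192.0.2.10", "Europe/London", "Europe/London"))    # False: nothing suspicious

Both signals are easy to spoof and easy to trip by accident, which is part of why mandated VPN-blocking would snag ordinary travelers along with determined circumventors.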

For the sake of adults in the UK and elsewhere who would rather not hand over their sensitive personal information to a third party just to exercise their right to look at sexually explicit images, here’s hoping porn sites aren’t soon forced to do what they’re currently choosing not to do.


AI Porn Triggers Some Very Tricky Debates by Morley Safeword


There’s been a lot of discussion of AI-generated porn lately, particularly in the days since OpenAI announced that starting in December, the firm would allow “mature content” to be generated by ChatGPT users who have verified their age on the platform. Understandably, much of that discussion has centered on consent—or the lack of such—in the context of AI content generation, given the proliferation of “deepfake” content in recent years.

Concern over publicly available images being used to create AI porn without the consent of the people being depicted is also driving legislative bodies everywhere to consider passing new laws that specifically forbid the practice. In South Dakota, for example, Attorney General Marty Jackley wants the legislature to craft a new law making it a felony to create AI-generated porn from an image of a non-consenting adult, which would mirror a law passed in the state last year making it a crime to do so using images of a minor.

You can certainly understand why this sort of law appeals to people, even if there are some potentially tricky First Amendment questions raised by such a prohibition. I don’t think any of us like the idea of someone grabbing our old yearbook photos and creating ‘porn doubles’ of us to be distributed willy-nilly on the internet. But that very understandable and sensible concern doesn’t make the potential First Amendment questions magically disappear.

For one, if it’s not possible to make it illegal to create, say, a painting of a public figure without that person’s permission (and it isn’t), can it be made illegal to use AI to create an image of that same person? If it’s OK to create a non-pornographic image of that person, can a pornographic image of them be illegal only if it is also considered legally “obscene”?

While a lot of the questions around AI porn pertain to its potential for abuse, there’s a flipside to it, as well. For example, if one’s primary objection to the creation of pornography is rooted in its impact on the performers—the risks to their health and safety, the oft-cited potential for human trafficking being involved, etc.—then isn’t it better if the only “actors” involved are entirely digital beings?

On the other hand, if you’re someone who creates adult content, particularly in a performing capacity, the prospect of being replaced by a competitor who doesn’t need to travel, sleep, undergo STD screening or pay any bills is a frightening one, I should think—especially if there’s no legal mechanism preventing unscrupulous third parties from profiting by effectively pirating your very likeness. Getting replaced in a job by anyone sucks; just imagine what it would be like to get replaced by a counterfeit of yourself!

To sort all this out and craft effective legislation and regulation of AI porn is going to take a lot of careful, deliberate, rational thought. Unfortunately, I’m not sure there’s a lot of that to be found within the halls of Congress or any other legislative body. So, in all likelihood, states around the country and countries around the world will continue to struggle to get their heads wrapped around AI porn (and AI more generally) the same way they’ve struggled with the internet itself for the last several decades.

In the meantime, the rest of us will try to muddle through, as best we can. Personally, I have no plans to either create or consume AI porn… but will I even know I’m doing so, if it happens?

Add that to the list of thorny questions, I suppose.


Mandatory Age Verification Is Creating a New Security Crisis by John Johnson – Cybersecurity Expert


There’s a quiet rule that’s floated around cybersecurity circles for years: don’t hold onto more data than you’re capable of protecting. Simple, elegant, almost parental in its logic — if you can’t safeguard it, don’t collect it.

But the world doesn’t care about that rule anymore.

Laws around identity and age verification are spreading fast, and they’re forcing companies—whether they’re ready or not—to gather and store the most intimate, high-risk documents a person can hand over. Passports. Driver’s licenses. National IDs. All the things you’d rather keep in your own pocket, not scattered across the servers of whoever happens to run the website you’re using.

And then something like the Discord breach happens.

The breach surfaced in early October 2025, and it didn’t hit Discord’s internal systems—it hit one of the partners handling the company’s customer support. Hackers got access to support-ticket data: names, emails, IP addresses, billing info, conversation logs… the usual mess. But tucked inside that mess was something far more sensitive: government-issued IDs.

Those IDs were collected for one reason: to prove a user was old enough to be there when appealing an underage ban. And suddenly, the private documents people reluctantly handed over “just to get their account back” were sitting in someone else’s hands entirely.

The Trap These Laws Create

Discord didn’t wake up one day deciding it wanted a folder full of driver’s licenses. Companies aren’t hungry for that kind of liability. But regulators have been ramping up age-verification mandates, and the penalties for non-compliance are steep enough to make anyone comply.

You can see the logic in the laws. Protect kids. Keep platforms accountable. Reasonable goals.

But look closely at the side effects:

We’ve built a system where organizations must stockpile some of the most breach-sensitive personal data in existence — even when they have no business storing it, no infrastructure built to protect it, and no desire to be holding it at all.

The old rule of “collect as little as possible” dies the moment a legal mandate requires collecting everything.
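Some verification flows try to salvage part of that rule by persisting only the outcome of a check, never the document itself. Here is a minimal sketch of that pattern; the field names and salting scheme are illustrative, not any vendor’s actual schema, it assumes the birthdate has already been extracted from the scan upstream, and it only works where the law doesn’t mandate retaining the document.

    import hashlib
    from datetime import date, datetime, timezone

    PER_DEPLOYMENT_SALT = b"rotate-me"  # illustrative; keep out of source control

    def verify_and_discard(user_id: str, id_scan: bytes, birthdate: date) -> dict:
        """Check the document, persist only the outcome; the raw scan is never stored."""
        today = date.today()
        age = today.year - birthdate.year - (
            (today.month, today.day) < (birthdate.month, birthdate.day)
        )
        return {
            "user_id": user_id,
            "over_18": age >= 18,
            "verified_at": datetime.now(timezone.utc).isoformat(),
            # A salted hash can catch the same document being reused across
            # accounts without retaining anything recoverable from the scan.
            "doc_fingerprint": hashlib.sha256(PER_DEPLOYMENT_SALT + id_scan).hexdigest(),
        }

The Discord incident is a reminder of the alternative: when the scan itself lands in a ticketing system, it sits there waiting for the breach.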

One Breach Becomes Everyone’s Problem

And once a company becomes responsible for storing IDs, the risk spreads. Healthcare portals, schools, banks, e-commerce shops, SaaS platforms — anyone providing service to the general public could end up in the same situation.

Every new database of passport scans is a future headline waiting to happen.

And when it happens, the fallout isn’t just personal. It’s financial. Legal. Reputational. You lose customer trust once — and you don’t get it back.

For small companies, one breach can simply end the business.

The MSPs Get Pulled Into the Storm

Managed service providers—MSPs—don’t get to sit this one out. They inherit the problem from every client they support. One MSP breach doesn’t just hit one organization. It hits all of them at the same time.

And the typical MSP environment? It’s a patchwork quilt of tools stitched together over time:

  • One for backups

  • One for endpoint protection

  • Another for vulnerability scanning

  • A different one for patching

  • Another for monitoring

  • And maybe one more to try and tie it all together

Every tool is another doorway. Another password. Another integration that can fail silently. Another shadow corner where data can slip unencrypted or unmonitored.

In an age when MSPs are being asked to guard government IDs, medical files, financial records, and entire networks, you can’t afford those shadows.

The Fix Isn’t “More Tools” — It’s Fewer

The only real path forward is simplification.

Not by removing security controls, but by merging them. Consolidation. Native integration. One platform where backup, protection, monitoring, and recovery exist inside the same ecosystem, speaking the same language, managed from the same place.

When everything runs through a single agent with one control plane:

  • There are fewer gaps.

  • There are fewer weak handoffs.

  • There are fewer places for attackers to slip in unnoticed.

  • And the attack surface shrinks dramatically.

You trade chaos for clarity.

You trade complexity for protection.

The New Reality

That old cybersecurity rule—don’t collect more data than you can protect—wasn’t wrong. It’s just that following it is no longer up to you.

The Discord breach isn’t a one-off story. It’s a preview. A warning shot.

Organizations are being legally pushed into storing the exact type of data that attracts attackers the most. And MSPs are being put in charge of securing it at scale.

So the question shifts:

If you no longer get to choose how much data you collect…

you have to be very deliberate about how you protect it.

And that means rethinking the entire structure of how we secure systems—not by addition, but by alignment.

Because now the stakes aren’t abstract. They are literal: your identity, my identity, everyone’s identity.

And someone is always watching for the first loose thread.
