Meta's Scam Factory!


Online fraud used to be just an annoying part of life online, but it has gotten much worse. It's now a vast criminal industry.

Online fraud isn't just a scattering of one-off scams anymore. Much of it is run by big, organised groups, often out of 'scam hubs' in places like Southeast Asia. By all accounts these compounds are more like prisons than normal offices, staffed by trafficked workers forced to run investment scams, romance frauds, and identity theft around the clock.

What's really concerning is that these scams rely on mainstream tech platforms. You see fake ads and videos not just on the dark web but on YouTube, Facebook, and Instagram, sitting right alongside legitimate content.


Platform Incentives and Blind Spots

Reuters has reportedly seen internal Meta documents showing that about 10% of the company's yearly income, around $16 billion, comes from ads for scams or banned items. A figure like that suggests a serious conflict of interest between Meta's bottom line and protecting consumers: that's a lot of money to lose by clamping down harder on scams, which Meta could obviously afford to do given its sheer financial weight.

And they clearly are not that interested in protecting consumers...

Take Meta’s rule for removing ads: they only take down ads their systems are at least 95% certain are fraudulent. That's a very high bar for removal, and it means a ton of fishy or harmful stuff stays up, because to Meta, losing ad money by falsely flagging a legitimate ad weighs more heavily than the harm caused to users. What happens then? Scammers get free rein to operate, adapt, and improve, while the platforms keep insisting they're committed to fighting fraud.
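To see why a 95% certainty bar lets so much through, here's a minimal sketch (nothing to do with Meta's actual system; all scores and numbers are invented for illustration) of how a removal threshold interacts with a detector's confidence scores:

```python
# Illustrative only: a removal policy that takes an ad down solely when
# the fraud-detection model is at least `threshold` confident.
def kept_fraud_share(fraud_scores, threshold=0.95):
    """Fraction of genuinely fraudulent ads that survive moderation,
    given the model's confidence score for each fraudulent ad."""
    kept = [s for s in fraud_scores if s < threshold]
    return len(kept) / len(fraud_scores)

# Hypothetical confidences for 10 fraudulent ads: most look suspicious,
# but only a couple clear the 95% certainty bar.
scores = [0.99, 0.97, 0.90, 0.88, 0.85, 0.80, 0.75, 0.70, 0.60, 0.50]

print(kept_fraud_share(scores, 0.95))  # 0.8 -> 80% of the fraud stays up
print(kept_fraud_share(scores, 0.70))  # 0.2 -> a lower bar removes far more
```

The flip side, of course, is that a lower bar also removes more legitimate ads that merely look suspicious, which is exactly the revenue the platform is protecting.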

This isn't really a tech problem; it's about how things are governed. Platforms have all the data, tools, and analysis power to do more. They just don't have enough legal and financial pressure to make their business goals match the public's best interests.

Liability as the Clear Answer

There is a simple solution: make platforms accountable for the costs of the fraud they allow. If social media companies had to pay back victims, reimburse banks, or fund efforts to catch criminals, their approach to moderation would change instantly. They’d check ads more carefully, get rid of bad accounts sooner, and wouldn’t tolerate borderline fraud as an acceptable way to make money.
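The incentive argument above can be made concrete with some invented numbers: compare the expected profit from running one suspicious ad when the platform bears none of the fraud cost versus when it must reimburse victims. Every figure here is an assumption for illustration.

```python
# Hedged illustration of how liability flips the moderation calculus.
def expected_profit(p_fraud, ad_revenue, victim_loss, liability_share):
    """Expected profit from running one suspicious ad.
    liability_share: fraction of victim losses the platform must repay."""
    return ad_revenue - p_fraud * victim_loss * liability_share

# Assumed figures: an ad worth $100, with a 60% chance it's a scam
# that costs victims $1,000 in total.
no_liability   = expected_profit(0.6, 100, 1000, 0.0)  # 100.0  -> run the ad
full_liability = expected_profit(0.6, 100, 1000, 1.0)  # -500.0 -> remove it
print(no_liability, full_liability)
```

With zero liability, every suspicious ad is pure upside; with full liability, running it becomes a losing bet even though the fraud detector is far from 95% certain.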

Or at least contribute to the extent that they profit from these scams.

There’s a precedent for this thinking. Banks already have strict rules about money laundering because they control access to the financial system. Similarly, tech platforms now control access to our attention, trust, and how we're persuaded. It’s not believable to think of them as neutral anymore, especially with deepfakes and algorithms amplifying things.

Final Thoughts...

OK, clamping down more harshly carries a real risk of catching legitimate ads too, but honestly, given the sheer volume of material that is obviously dodgy, these companies should be doing better.



3 comments

I've seen plenty of scammy ads on FB, often with fake celebrity endorsements. I've reported some, but doubt it helps. Plenty of fake accounts too, trying to lead you astray.

I think we'll see more automated scams using AI. Some phone calls I get are obviously not from a person, but they will improve.
