Meta’s $16 billion scam economy: what leaked docs reveal about tech company self-regulation
Social media’s scam economy
An overarching question that matters for governance of artificial intelligence is how much we can trust technology companies to do the right thing in cases where it becomes seriously costly for them to do so.
Well, some internal documents leaked from Meta (the company behind Facebook, Instagram, and WhatsApp) are a really important piece of evidence in this debate — and one that, in my opinion, far too few people have heard anything about.
These documents, which were leaked to Reuters by a frustrated staff member, show that Meta itself estimated that 10% of all its revenue came from running ads for scams and for goods it had itself banned: around $16 billion a year. That's 10% of all revenue derived, in many cases, from fundamentally enabling crimes.
Meta also estimated that its platforms were involved in initiating a third of all successful scams taking place in the entire United States. How much might that actually be costing ordinary people, on average? Well, the FTC estimates that Americans lose about $158 billion a year to fraud. If Meta is really leading to a third of that, we're talking roughly $50 billion a year in losses connected to Meta's platforms, which, spread across roughly 335 million Americans, works out to about $160 per person per year.
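To make the arithmetic explicit, here's a quick back-of-the-envelope sketch. The $158 billion figure and the one-third share come from the sources above; the US population number is my own round figure:

```python
# Back-of-the-envelope: per-person fraud losses connected to Meta's platforms.
# $158B/year and the one-third share are from the FTC and the leaked documents;
# the population figure is an assumption (roughly 335 million Americans).

total_us_fraud_losses = 158e9  # dollars per year, FTC estimate
meta_share = 1 / 3             # share of successful US scams initiated via Meta
us_population = 335e6          # assumed

meta_linked_losses = total_us_fraud_losses * meta_share
per_person = meta_linked_losses / us_population

print(f"Meta-linked losses: ${meta_linked_losses / 1e9:.1f}B/year")  # ~$52.7B
print(f"Per person:         ${per_person:.0f}/year")                 # ~$157
```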
Now, not everyone inside Meta loved this state of affairs, as you can imagine. And the documents also show that a Meta anti-fraud team came up with a screening method that, when tested, successfully halved the prevalence of scams being operated from China — at the time, one of their worst fraud hotspots. The only trouble? Meta was making $3 billion a year from these fraudulent Chinese ads.
And according to the documents, after Zuckerberg was briefed on the team’s work, he instructed them to pause. Meta’s spokesperson disputes this characterisation. He says that Zuckerberg’s instruction “was to redouble efforts to reduce them all across the globe, including in China.” But whatever he said, what actually happened is that the China-focused anti-fraud team was disbanded entirely, and the freeze on giving new Chinese ad agencies access to the platform was lifted. Naturally then, within months, fraud from China bounced back to nearly its previous level.
More broadly, the documents show that managers overseeing anti-fraud efforts were told that they couldn't take any action that would cost Meta more than 0.15% of total revenue. Meta says this wasn't a hard limit, but the maths kind of speaks for itself. If fraud accounts for roughly 10% of your revenue, and your anti-fraud team is capped at affecting 0.15%, their hands are pretty much tied; there's just not a lot they're going to be able to do.
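To see just how binding that cap is, here's a minimal sketch of the maths. The 10% scam share and the 0.15% cap come from the documents; the total revenue figure is an approximation (Meta reported roughly $165 billion for 2024):

```python
# How much of the scam problem could the anti-fraud team touch under the cap?
# The 10% scam share and 0.15% cap are from the documents; total revenue is
# an approximation (Meta reported ~$165B for 2024).

total_revenue = 165e9   # assumed annual revenue
scam_share = 0.10       # scam/banned-goods ads as a share of revenue
action_cap = 0.0015     # maximum revenue impact anti-fraud actions could have

scam_revenue = total_revenue * scam_share  # ~$16.5B
max_action = total_revenue * action_cap    # ~$0.25B

print(f"Scam-linked revenue:         ${scam_revenue / 1e9:.1f}B")
print(f"Max allowed disruption:      ${max_action / 1e9:.2f}B")
print(f"Cap as share of the problem: {max_action / scam_revenue:.1%}")  # 1.5%
```

In other words, the team was permitted to disrupt at most about 1.5% of the scam-linked revenue stream.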
A particularly grim detail is that Meta's ad-targeting algorithm naturally identifies people vulnerable to fraud, like the elderly or the desperate. And if they click one scam ad, it learns to feed them more and more until their feed is practically stuffed with them.
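To illustrate the dynamic, here's a toy simulation of a generic engagement-optimising recommender (emphatically not Meta's actual system, just the feedback loop the documents describe, with made-up click rates):

```python
import random

# Toy model of an engagement-optimising ad ranker (not Meta's actual system).
# A vulnerable user clicks scam ads more often than ordinary ads; each click
# upweights that category, so scams gradually crowd out everything else.

weights = {"normal": 1.0, "scam": 1.0}        # equal exposure to start
click_prob = {"normal": 0.02, "scam": 0.10}   # made-up click rates
BOOST = 1.5                                   # weight multiplier per click

random.seed(0)
scam_impressions = 0
N = 500
for _ in range(N):
    # Show an ad from one category, in proportion to the current weights
    p_scam = weights["scam"] / sum(weights.values())
    category = "scam" if random.random() < p_scam else "normal"
    if category == "scam":
        scam_impressions += 1
    # If the user clicks, the ranker learns to show more of that category
    if random.random() < click_prob[category]:
        weights[category] *= BOOST

print(f"Scam ads' share of the feed: {scam_impressions / N:.0%}")
```

Because the vulnerable user clicks scam ads five times as often in this sketch, the scam weight compounds much faster, and scams come to dominate the simulated feed.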
Now, this all sounds pretty outrageous. And Meta’s own documents suggest the company realised that it was legally exposed here. They anticipated fines of up to a billion dollars for what they were doing, but the documents show them doing a cold calculation: “A billion dollars in fines. But wait, we’re making $3.5 billion every six months from high-risk scam ads in the United States alone.” That figure, one document noted, almost certainly exceeds the cost of any regulatory settlement involving scam ads. So those fines to Meta just became another cost of doing business. Rather than voluntarily cracking down, the same document finds Meta’s leadership deciding to act only if faced with impending regulatory action.
Three fixes for social media’s scam problem
So what are the takeaways for policymakers?
First, and most obviously, penalties have to scale with expected profits and the probability of catching a lawbreaker in the first place. Meta could look at $1 billion in expected fines and shrug, because they were just making so much more money from running the fraudulent ads. If someone were to steal $1,000, we don’t fine them $100 and let them keep the other $900 — but that’s effectively how the law was written to operate here.
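One standard way to formalise this point is Gary Becker's classic deterrence condition: a rational profit-maximiser is only deterred if the fine times the probability of being caught exceeds the illicit gain. Here's a minimal sketch, using the $3.5 billion per half-year figure from the documents and an assumed 50% chance the penalty is ever actually imposed:

```python
# Becker-style deterrence: a fine only deters a profit-maximiser if
#     fine * p_caught >= illicit_gain,
# so the minimum deterrent fine is illicit_gain / p_caught.

def min_deterrent_fine(illicit_gain: float, p_caught: float) -> float:
    """Smallest fine making the expected penalty outweigh the illicit gain."""
    return illicit_gain / p_caught

gain = 7e9   # $3.5B per half-year of US high-risk scam-ad revenue (documents)
p = 0.5      # assumed probability the penalty is ever imposed

print(f"Minimum deterrent fine:      ${min_deterrent_fine(gain, p) / 1e9:.0f}B")  # $14B
print(f"Expected cost of a $1B fine: ${1e9 * p / 1e9:.1f}B")                      # $0.5B
```

On those illustrative numbers, a deterrent fine would need to be at least $14 billion per year of offending, fourteen times what Meta actually anticipated paying.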
Second, voluntary commitments should be taken with an enormous bucket of salt. Meta repeatedly made them strategically, to buy time and stall binding regulation, with no honest intention of following through. The documents are quite explicit about that being a deliberate strategy.
Another Reuters investigation revealed that Meta developed a global playbook for neutralising regulators, including manipulating its own ad library so that when regulators searched it, scam ads were quietly scrubbed from the results, giving a misleading impression of how common they were.
So third, I think regulators need genuine access to internal information. If the only window regulators have into a company is one the company completely controls, they’re going to see whatever the company wants them to see.
Thinking about all this has brought me back to an idea I keep mulling over with respect to artificial intelligence governance. Compared to social media, AI technology changes faster, the potential consequences are larger, and the gap between what companies know and what outsiders can verify is wider still. With AI systems, model outputs often have to be kept secret for good reason, far more so than social media posts.
And regulators usually can't evaluate what a model is doing and why; often even the company itself only vaguely understands. These systems are literally the first invention humans have ever made that can independently determine when they're being tested, and they have been demonstrated to strategically shift their behaviour to mislead the very company that made them about their personality and likely behaviour once released. It's honestly bonkers: truly something new under the sun.
In general, it's a really complex situation to handle, and we don't yet know everything we're going to want companies to do to address it sensibly at a reasonable cost.
We should regulate AI companies as strictly as banks
That's why some of the smartest people working in AI governance, from across the political spectrum, are converging on the idea that AI regulation should draw in some ways on how the Federal Reserve and other central banks watch over massive financial institutions. We'll link to a major report on this that I can definitely recommend reading. But in brief, the Fed does not wait around for banks to collapse and then issue them a fine. It embeds supervisors directly inside systemically important banks: people with deep technical expertise who remain there full time, watching how decisions get made, reviewing internal risk models, and having constant conversations with leadership about emerging risks long before they become crises.
Dean Ball, who previously served in the Trump White House Office of Science and Technology Policy, and who is nobody's idea of a heavy-handed AI regulator, told me on this podcast that bank supervision is actually quite analogous to what we might ultimately need for AI.
Now, as you're probably thinking, this model isn't perfect. The 2008 financial crisis happened despite embedded Fed supervision existing back then. But the status quo is a combination of no oversight, oversight by people who don't understand what they're dealing with, and oversight by technical experts who do understand but can't access the information they need in time for it to actually matter.
The leaks from Meta are "fun" in a perverse way, because it's always interesting to look inside a company and have our very worst suspicions about human nature confirmed. But waiting around for powerful tech companies to cause disasters before building an understanding of what they're up to is a choice, and one we as a society don't have to make.
We'll be trusting AI companies with something much more important than targeted advertising, and in my view, this time around we should aim to be sophisticated actors, not total muppets the companies can easily run rings around.
And with that, I’ll speak to you again soon.
Learn more
- The Reuters investigation underpinning this episode:
    - Meta is earning a fortune on a deluge of fraudulent ads, documents show by Jeff Horwitz
    - Meta tolerates rampant ad fraud from China to safeguard billions in revenue by Jeff Horwitz and Engen Tham — covers the shelved anti-fraud screening method and Zuckerberg's response
    - Meta created 'playbook' to fend off pressure to crack down on scammers, documents show by Jeff Horwitz — includes evidence of manipulated ad library results shown to regulators
- Reports from the Centre for the Governance of AI:
    - Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies by Miles Brundage et al.
    - Frontier AI Regulation: Managing Emerging Risks to Public Safety by Markus Anderljung et al.
- Our podcast episode with Dean Ball on how AI is a huge deal — but we shouldn't regulate it yet