According to Silicon Republic, the European Parliament and Council have agreed on a new law that makes social media platforms directly liable for financial scams originating on their services. Platforms like Meta will now have to reimburse banks and payment service providers that cover losses for defrauded customers. This applies specifically when platforms are informed about fraudulent content but fail to remove it. The regulation builds on the Digital Services Act, which already allows penalties of up to 6% of a company’s global annual revenue. Financial service advertisers will also need to prove they are legally authorized to offer those services. The law still needs formal adoption, but it represents a major shift in accountability for online platforms.
Why this matters
Here’s the thing – we’ve all seen those suspicious investment ads or fake celebrity endorsements flooding our feeds. For years, platforms treated this as someone else’s problem. Now they’re being put directly on the hook financially. When banks have to refund scam victims, they can turn around and bill the platform that hosted the fraud. That changes the economic calculation dramatically.
And let’s be honest – this isn’t just theoretical. Meta has been under particular scrutiny for being “rife with fraudulent advertising” according to the report. We’re talking about fake investments, scam purchases, fraudulent loans – the whole gamut. When platforms make money from advertising, they’ve had little incentive to aggressively police financial scams. Now the cost of ignoring them just got real.
Broader implications
This is part of a much bigger trend in EU tech regulation. The DSA is becoming a framework that keeps expanding, and we’re seeing personal liability discussions too. MEPs are reportedly considering making “senior managers” like Mark Zuckerberg and Elon Musk personally liable for harm. That’s a nuclear option that would really get executives’ attention.
But here’s what I find interesting – the burden sharing. Banks now have stronger obligations too. If they fail to implement proper fraud prevention, they’re on the hook. Strong customer authentication, risk assessments, spending limits – these aren’t optional anymore. Basically, everyone in the chain has skin in the game now.
What comes next
So when does this actually happen? The law still needs formal adoption, but the direction is clear. Platforms will need to ramp up their fraud detection dramatically. We’ll probably see more automated systems, more human moderators specifically for financial content, and definitely more paperwork for financial advertisers.
The customer support requirement is sneakily important too. Ever tried to report a financial scam to a platform? Good luck finding a human. Now they’ll have to provide actual human support for reporting fraudulent financial content. That alone could make a huge difference in how quickly scams get taken down.
Look, this is messy and complicated. But after years of watching people get ripped off while platforms shrugged, it feels like regulators are finally saying enough is enough. The question is whether this approach actually works or just pushes the scammers to find new loopholes. What do you think – will this actually clean up our feeds?
