According to the Financial Times, major UK mobile networks including BT EE, VodafoneThree and Virgin Media O2 have committed to upgrading their networks to block number spoofing within the next year using artificial intelligence. The telecoms charter with the government will deploy AI to identify and block suspicious calls and texts while boosting data sharing and call tracing technology to help police track scammers. Fraud now accounts for over 40% of all reported crime in the UK, with telecom-enabled scams making up 17% of cases but causing 29% of total losses. Criminals defrauded UK consumers out of £629 million in just the first half of 2025, while banks prevented an additional £682 million using their own AI systems. Lord Hanson, minister for fraud, said the government intends to make the UK “the hardest place in the world for scammers to operate.”
About damn time
Here’s the thing – Ofcom has been pushing carriers to fix this for three years. Three. Years. And they’re only now getting around to it? The UK’s telephone network still leans on legacy signalling in which the caller ID is simply a field the originating system asserts – nothing downstream verifies it – so criminals can easily make a call appear to come from your bank or the police. Other countries have already addressed this through network upgrades and caller-ID authentication, while we’ve kept running technology that’s wide open to the attack.
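To see why spoofing is so easy, it helps to know that on legacy networks the display number is unauthenticated: whoever originates the call just writes it in. The fix other countries have adopted (STIR/SHAKEN in the US and Canada) has the originating carrier cryptographically sign the caller identity so the terminating network can verify it before showing it to you. Here’s a deliberately simplified sketch of that sign-and-verify idea – it uses a shared HMAC key as a stand-in for the certificate-based ES256 signatures real deployments use, and the function names are mine, not from any standard:

```python
import base64
import hashlib
import hmac
import json
import time

# Stand-in for the originating carrier's signing credentials; real
# STIR/SHAKEN uses ES256 over a "PASSporT" token (RFC 8225), not HMAC.
CARRIER_KEY = b"demo-carrier-signing-key"

def sign_caller_id(orig_number: str, dest_number: str) -> str:
    """Originating network attests the caller ID it puts on the wire."""
    claims = {"orig": orig_number, "dest": dest_number, "iat": int(time.time())}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(CARRIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_caller_id(token: str, claimed_number: str) -> bool:
    """Terminating network checks the signature before trusting the display number."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(CARRIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # unsigned or tampered: treat the caller ID as spoofed
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["orig"] == claimed_number

token = sign_caller_id("+442079460000", "+447700900123")
print(verify_caller_id(token, "+442079460000"))  # genuine attested number
print(verify_caller_id(token, "+442079460999"))  # display number doesn't match
```

The point of the sketch is the asymmetry: a scammer can still *claim* any number, but without the originating carrier’s signature the terminating network can flag or block the call instead of blindly displaying it.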
And let’s talk about that timeline. “Within the next year” feels suspiciously vague. Scam expert Nick Stapleton isn’t wrong when he questions why this can’t happen tomorrow. If banks are already using AI to stop £682 million in fraud, as UK Finance data shows, what’s taking the telecom giants so long to implement similar protections?
The human cost of delay
Stapleton makes a crucial point that often gets lost in these discussions. Sure, new reimbursement rules mean most victims will eventually get their money back. But what about the psychological harm? The trauma of realizing you’ve been scammed, the financial stress, the embarrassment – that stuff doesn’t get reimbursed.
Think about it: 96% of mobile users decide whether to answer based on the number displayed. When scammers can make it look like your bank is calling, that trust gets completely eroded. And once that trust is gone, it’s incredibly difficult to rebuild. The delay in implementing these fixes means more people will experience that violation.
The AI paradox
Now here’s where it gets interesting. The same technology that’s being deployed to protect us – AI – is also being used against us. UK Finance notes that criminals are increasingly using AI to create convincing “deepfake” videos for investment and romance scams. So we’re essentially in an AI arms race.
But wait – if banks can use AI to scan real-time payments and stop suspicious transactions, why can’t telecom companies do the same with calls? The technology clearly exists. The will? That seems to be the question. Carriers have had years to address this vulnerability, and only now, with government pressure and public outrage over fraud statistics, are they finally acting.
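The bank-side analogy is worth making concrete. Real-time payment screening boils down to scoring each transaction against risk signals and blocking above a threshold, and the same shape applies to call metadata. Below is a toy rule-based scorer to illustrate the idea – every feature, weight, and threshold here is invented for the sketch, and production systems would use trained models rather than hand-picked rules:

```python
# Illustrative only: a toy risk scorer for incoming-call metadata, loosely
# mirroring how banks score payments in real time. Features and weights
# are assumptions for the sketch, not from any carrier or bank.
from dataclasses import dataclass

@dataclass
class CallMetadata:
    caller_id_verified: bool          # did the originating network attest the number?
    origin_is_overseas: bool          # international route presenting a domestic number
    calls_from_number_last_hour: int  # burst dialling is a classic scam signal
    number_age_days: int              # recently allocated numbers are higher risk

def risk_score(call: CallMetadata) -> float:
    """Accumulate a 0.0-1.0 risk score from simple rules."""
    score = 0.0
    if not call.caller_id_verified:
        score += 0.5
    if call.origin_is_overseas:
        score += 0.2
    if call.calls_from_number_last_hour > 100:
        score += 0.2
    if call.number_age_days < 30:
        score += 0.1
    return score

def should_block(call: CallMetadata, threshold: float = 0.7) -> bool:
    return risk_score(call) >= threshold

suspicious = CallMetadata(False, True, 500, 7)    # unverified, overseas, burst dialling
legit = CallMetadata(True, False, 3, 2000)        # attested, domestic, established number
print(should_block(suspicious))  # True
print(should_block(legit))       # False
```

Nothing in this sketch is hard: the signals (attestation status, route, call velocity) are data carriers already have. Which rather supports the column’s point that the gap has been will, not technology.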
Will this actually work?
I’m cautiously optimistic but deeply skeptical. Major telecom collaborations have a history of moving at glacial speeds. And let’s be honest – scammers are nothing if not adaptable. The moment number spoofing becomes harder, they’ll find another vulnerability.
The data sharing between networks and law enforcement sounds promising in theory. More intelligence for police to track down scammers could make a real difference. But overseas call centers operating in jurisdictions that don’t cooperate with UK authorities will remain a challenge.
So while this announcement is definitely a step in the right direction, the real test will be in the execution. Will carriers actually deliver within that one-year timeframe? And when they do, will it be enough to stay ahead of the scammers? Given that fraud has become such a massive chunk of UK crime, we’d better hope so.
