According to Business Insider, there’s been a massive surge in AI risk warnings appearing in SEC filings this year. Some 418 publicly traded companies valued at over $1 billion have cited AI-related risk factors tied to reputational harm in their 2025 reports – a 46% jump from 2024 and roughly nine times the 2023 figure. Companies like Take-Two Interactive, Visa, Clorox, and ELF Beauty are specifically warning about AI producing biased information, compromising security, or infringing on rights. Take-Two CEO Strauss Zelnick acknowledged the company is using AI more than ever, even as its risk disclosure ballooned to more than double last year’s word count. And according to Bain & Co., the average company’s AI spend roughly doubled in 2024 to $10.3 million, underscoring the massive investment happening alongside these warnings.
The AI risk disclosure reality
Here’s the thing about SEC filings – companies are legally required to update risk factors when new material risks emerge. And according to University of Chicago law professor M. Todd Henderson, the goal is actually to “scare investors” into understanding what could really harm the business. He makes a fascinating comparison to the internet boom of the late 1990s, when disclosures were more about “cautious optimism.” But with AI? The warnings are much starker across virtually every industry. Henderson puts it bluntly: “Losing your data in a hack is one thing. But if you deploy AI and it hallucinates, that’s potentially a much bigger problem.” Think about bad medical diagnoses or faulty engineering conclusions – mistakes that could threaten the core of the business itself.
The employee AI misuse problem
And it’s not just about the technology itself – it’s about how people use it. A massive global survey of more than 32,000 workers, conducted between November 2024 and January 2025, revealed some concerning patterns: about 66% of workers have relied on AI output without critically evaluating the information, and 72% have put less effort into their work because of AI. That’s a recipe for trouble when you’re dealing with systems that can confidently produce completely wrong information. Basically, we’ve got humans trusting machines that sometimes make stuff up, while simultaneously checking out mentally. What could possibly go wrong?
Can’t live with it, can’t live without it
The real tension here is that despite all these warnings, companies feel they have no choice but to adopt AI. Take-Two’s Zelnick perfectly captures the dilemma: “A failure to adopt new technology could put you at risk.” He’s seeing real productivity gains, like AI creating entire levels in mobile games – something that wasn’t happening just a couple years ago. “What we’re doing is reducing the weight of mundane work and allowing people to spend their time and brainpower on more interesting work,” he says. So we’re stuck in this weird position where using AI creates massive risks, but not using it creates competitive risks.
Welcome to the AI tightrope
We’re basically watching every major company walk this AI tightrope in real time. They’re spending millions on implementation while simultaneously warning investors it could blow up in their faces. The SEC filings are becoming this fascinating window into corporate anxiety – everyone’s racing forward because their competitors are, but they’re legally required to document all the ways it might backfire. It’s like watching someone build a rocket while reading the safety manual aloud. The question isn’t whether companies will keep adopting AI – they clearly will. The question is whether the safeguards and critical thinking can keep pace with the breakneck implementation speed.
