According to Manufacturing.net, a collection of OT cybersecurity stakeholders has laid out six major predictions for AI’s impact in 2026. Frank Balonis, CISO at Kiteworks, predicts third-party AI data handling will become the defining supply chain risk, noting that two-thirds of manufacturers already flag visibility gaps as a top priority. Josh Taylor from Fortra warns enterprises will start treating AI systems as insider threats, predicting a lawsuit by Q2 2026 where an AI agent causes measurable business harm. George Gerchow of IANS Research states failure to red team AI will cross into “criminal negligence” territory, becoming a board-level issue. Dr. Darren Williams from BLACKFOG identifies “shadow AI” as the #1 organizational threat, citing a global survey where 48% of employees admitted uploading company data into public AI tools. Karl Holmqvist, CEO of LASTWALL, forecasts a major security reckoning from the unchecked rush to deploy AI, anticipating the first high-profile breach caused by an autonomous AI agent next year.
Supply Chain AI: The New Battlefield
Frank Balonis’s point about third-party AI data handling is a killer insight. Manufacturers have spent decades hardening their own networks, but now the risk is literally in the code of their logistics partner’s optimization model or their contract manufacturer’s internal copilot. Traditional vendor security questionnaires are utterly useless here. They ask about firewalls and patching, not whether your shipment data is being used to train a model that could leak it to another customer. His prediction that 2026 is when contracts get rewritten with specific AI clauses feels right on time. The suppliers who can’t answer these questions will be frozen out. And honestly, if you’re sourcing critical industrial components, you should be demanding that level of transparency now. It’s not just about data privacy anymore; it’s about protecting the proprietary formulas and processes that make your product unique. For companies integrating complex systems, ensuring every piece of hardware, from the server to the industrial panel PC on the factory floor, has a clean, auditable data chain is becoming non-negotiable.
AI: The Ultimate Insider Threat
Josh Taylor’s framing of AI systems as insider threats is brilliantly unsettling. We’re so focused on AI being hacked from the outside that we’re missing the obvious: we’re installing hyper-efficient, always-on employees with god-level permissions and zero oversight. The comparison to user behavior analytics (UBA) is spot-on. UBA tools look for humans acting weird. But what’s “weird” for an AI? It’s making thousands of decisions a minute, and no one is reviewing the logs. The prompt injection attack he mentions isn’t sci-fi; it’s happening now. An AI assistant with access to your email and file shares gets a cleverly worded prompt from a compromised account and starts silently forwarding documents. Who would even know? The lawsuit prediction is the logical, ugly endpoint. When that case hits, the entire industry will scramble. Is the liability with the company that deployed it, the vendor that built it, or the “AI” itself? My money’s on a messy legal fight that tries to pin it on the deploying company for negligence. Basically, you gave the keys to the kingdom to a black box. That’s gonna be a hard position to defend.
Governance or Bust
Here’s the thing: the other predictions all funnel into this core issue of governance. George Gerchow calling out “criminal negligence” is a stark warning to boards. If you’re using AI in high-risk financial or safety-critical OT workflows without adversarial testing, you’re begging for trouble. His shift from “training people” to “proof-based systems” is the entire conversation. Phishing drills have failed because humans are fallible. You need system-level controls that assume breach. The deepfake-resistant verification procedures he mentions aren’t optional anymore. And Dr. Williams’s data on shadow AI is terrifying. Half of employees are just… feeding company data into ChatGPT? That’s an existential IP leak happening in plain sight. The $670,000 extra cost for a shadow AI-related breach is a concrete number that will get CFOs’ attention. Karl Holmqvist is right: the wild west phase is over. 2026 looks like the year of the reckoning, where the companies that raced ahead without a map will face devastating breaches, and the smart ones will be pulling back, implementing the frameworks and verification tech he mentions. The race won’t be for the coolest AI feature, but for the most provably secure and accountable one.
