According to TechRepublic, cloud security firm Zscaler has acquired AI security startup SPLX to integrate AI protection directly into its Zero Trust Exchange platform. Financial details weren’t disclosed, but the timing aligns with massive AI spending projections: companies are forecast to spend $375 billion on AI infrastructure alone in 2025, a 67 percent jump from the prior year. SPLX, founded in 2023, had raised roughly $9 million from LAUNCHub Ventures and Rain Capital before the acquisition. Zscaler CEO Jay Chaudhry said the combined technology will secure the entire AI lifecycle by classifying, governing, and preventing data loss across prompts, models, and outputs. The integration creates a dedicated AI protection layer within Zscaler’s existing zero-trust framework, specifically targeting what the industry calls “shadow AI”: unauthorized AI tools that employees use without IT’s knowledge.
The shadow AI problem is real
Here’s the thing about AI adoption in enterprises: it’s happening faster than security teams can track. Employees are spinning up unauthorized AI apps and workflows just to get work done faster. They’re not trying to cause problems, but they’re creating massive blind spots. Attackers love these open doors. And traditional security tools? They simply can’t see this stuff. SPLX launched AI Asset Management earlier this year specifically to find AI models and autonomous workflows that enterprises didn’t even know existed. That’s the scale of the visibility gap we’re talking about.
Why this acquisition actually matters
This isn’t just another security vendor buying a startup. Zscaler is building AI protection directly into the zero-trust architecture that thousands of enterprises already use. Think about that: instead of adding another standalone tool to your security stack, you’re getting AI security baked into the platform that already controls access to everything. The standout feature is SPLX’s automated red-teaming, with over 5,000 purpose-built attack simulations. That flips the script from waiting for breaches to happen to continuously testing and hardening AI systems. As these systems become more autonomous and interconnected, that proactive approach becomes essential. Can you really afford to just hope your AI won’t leak sensitive data?
What this means for enterprises
Enterprises can’t just bolt old security playbooks onto AI systems. Traditional tools struggle with AI’s unique challenges: protecting sensitive data inside prompts, defending machine learning models from targeted attacks, and governing who can use what, and when. The acquisition addresses growing AI governance concerns by shifting toward proactive controls that move at the pace of AI adoption. SPLX CEO Kristian Kamber says joining forces will secure “AI innovation at the speed organizations are adopting it.” Basically, if your company is racing into AI (and let’s be honest, everyone is), this gives security teams a fighting chance to keep up without slowing innovation to a crawl.
