According to Thurrott.com, the privacy-focused Brave browser is launching a new AI browsing mode today as an early preview. The feature, which lets its built-in Leo assistant perform web-based tasks, is only available on the browser’s Nightly channel and must be manually enabled via a feature flag. When invoked, it opens in an isolated browsing profile with its own cookies and site data, completely separate from the user’s main session. The Brave team explicitly designed this in response to recent warnings from research firm Gartner, which labeled AI browsers a major security risk due to threats like prompt injection attacks. They’ve added protections like a second AI model to check the main agent’s actions and block access to non-HTTPS or flagged sites. Despite these safeguards, the team warns that risks like prompt injection are not eliminated, even for early testers.
Brave’s Security Play
Here’s the thing: Brave isn’t just jumping on the AI bandwagon. They’re trying to build a safer wagon. The entire architecture of this feature screams “damage control.” An isolated profile? A secondary AI watchdog? Blocking non-HTTPS sites? That’s all textbook containment. It shows they’ve actually read Gartner’s scary report and are trying to engineer around the nightmare scenarios. But let’s be real—this also makes the feature a bit clunky. You’re basically opening a separate, sanitized browser window just for the AI to play in. It’s secure, sure, but is it seamless? Probably not yet. This feels like a direct, calculated counter-argument to the likes of ChatGPT Atlas and Perplexity Comet: “You can have agentic AI, but *this* is how you do it without being reckless.”
The Agentic AI Dilemma
So what’s the big deal with “agentic” AI? Basically, it’s the difference between asking a question and giving a command. Asking “What’s the best laptop?” is one thing. Telling an AI, “Go find me the best price for this specific laptop model and summarize the reviews” is agentic. It has to click links, navigate pages, and extract data. That’s where the danger lies. A malicious prompt hidden on a webpage could hijack that process. Brave’s approach of sandboxing the whole operation is a logical first step, but it’s a mitigation, not a solution. I think the bigger question is whether users will tolerate the friction of a separate session for complex tasks. For quick research, maybe. For anything involving your logged-in accounts? That’s a much harder sell, even with these guards up.
A Cautious Roadmap
Now, by putting this only in the Nightly build and behind a flag, Brave is being incredibly cautious. And they should be. This isn’t a consumer feature; it’s a research project with real users. The warnings about rate limits interrupting complex tasks and the need to manually click “continue” show this is still very much in the plumbing stage. They’re soliciting feedback from security researchers, which is smart. You can check out their announcement blog, grab the Nightly build, and follow the support guide if you want to test it. But this rollout tells us the era of fully autonomous, trustworthy AI agents isn’t here. We’re still in the “walled garden with a security detail” phase. And honestly, for something that controls your browser, that’s exactly where we should be.
