ActiveFence Raises Red Flag on How AI Browsers Handle Trust
ActiveFence warns Perplexity's Comet browser could be manipulated by hidden prompts embedded in webpages.

In the arms race to make browsing smarter, AI is quickly becoming the web’s next default layer. From summarizing search results to auto-completing code, these models are moving beyond chatbots into the browsers we use every day. It’s the latest stage of “agentic computing,” where artificial intelligence doesn’t just answer questions, but acts on our behalf.
That shift is redefining what cybersecurity means. Traditional threats target users through malicious links and phishing emails. But when the software itself begins interpreting content and making decisions, attackers have a new opening: the AI layer. That’s the concern behind ActiveFence’s latest research, which spotlights how hidden prompts can manipulate Perplexity’s new AI-powered Comet browser to perform unintended actions.
The Promise and Peril of Comet
Perplexity built its reputation on credibility, providing summarized, source-backed answers that minimize hallucinations. Comet, its newly launched AI browser, extends that reliability into a more immersive experience. Users can browse, ask, and get context directly on a webpage, without switching tabs or copying links. It’s a smooth, hands-free approach to web exploration.
That convenience, however, comes with risk. ActiveFence’s research shows that the same trust users extend to Comet’s summaries can be exploited. Under certain conditions, the browser’s assistant can be influenced by external instructions embedded in the content it’s meant to interpret, effectively allowing hidden prompts to steer its behavior.
Testing the Boundaries of Trust
ActiveFence researchers conducted a controlled series of tests to understand how Comet processes and prioritizes instructions. Instead of deep technical exposition, their findings focused on outcomes: the browser’s AI assistant occasionally followed commands that originated from webpage content, rather than the user.
While that behavior might sound nice, the implications aren’t. In the hands of attackers, this could translate into subtle manipulations, like displaying misleading upgrade messages, redirecting users to imitation pages, or embedding persuasive phishing elements that appear as legitimate parts of the browsing experience.
When “Helpful” Becomes Handful
AI agents are built to be helpful. They’re trained to summarize, interpret, and assist. However, as ActiveFence notes, this helpfulness becomes dangerous when the AI cannot distinguish between what the user wants and what the content instructs it to do.
In this case, Comet wasn’t broken. It behaved the way it was designed to: reading the webpage, understanding it, and providing guidance. The flaw lies in design assumptions that all instructions within a page are trustworthy. However, in an era where every HTML element can convey meaning, that assumption is no longer valid.
Broader Implications for AI Platforms
The Comet findings underscore a challenge that extends far beyond Perplexity. As major tech companies embed AI into productivity tools, browsers, and even operating systems, the number of potential entry points for prompt manipulation grows exponentially.
ActiveFence highlights that such vulnerabilities aren’t always about malicious code. They’re about social engineering through the same language models are trained to understand and obey. That makes mitigation harder: security teams aren’t just defending against scripts or malware, but against cleverly crafted text.
Security Can’t Be a Premium Feature
Another takeaway from ActiveFence’s report is the disparity between user tiers. Comet’s paid “Pro” users appear to be less affected by the issue, potentially because their version uses models with stricter guardrails. Free-tier users, meanwhile, rely on configurations that may be less protected.
That gap raises a fundamental question for AI platform builders: should core security depend on subscription level? ActiveFence’s stance is that trust and protection should be baseline, not gated by pricing.
A Call for Skeptical Design
The discovery serves as a warning: as AI assistants take on greater autonomy, the security mindset must evolve with them. ActiveFence’s research reframes the growing reality of the AI era that trust is not a given, but rather an attack surface. Whether in browsers, email assistants, or document tools, the next frontier of digital safety won’t hinge on stronger passwords or faster updates. It will depend on designing AI systems that question the instructions they’re given because the next prompt might not be coming from you.
About the Creator
TVC
Tech Journalism, Product Reviews, Startups, Investing, FinTech



Comments
There are no comments for this story
Be the first to respond and start the conversation.