New AI browsers like Perplexity Comet are revolutionizing web browsing, but they also bring serious security risks. Learn how prompt injection attacks work and what specific protection measures you should take now.
Browser company Brave has uncovered a critical security vulnerability in the Perplexity Comet browser that shows how dangerous prompt injection attacks can become in practice. What starts as a harmless "summarize this page" feature can lead to a complete account hack.
The attack works frighteningly simply: Hackers place hidden commands in normal web content - even in Reddit comments. When the AI analyzes these pages, it executes the malicious instructions like a sleeper agent: It navigates to your Perplexity account, steals your email, triggers a password reset, reads the new password from Gmail, and sends everything to the attacker via Reddit comment.
Prompt injection exploits a fundamental weakness of AI systems: For them, everything is just text. Whether your prompts, uploaded documents, or web content - the AI cannot reliably distinguish between legitimate instructions and hidden commands.
This becomes particularly dangerous when AI browsers have access to your private accounts through connectors like MCP (Model Context Protocol). Then prompt injection attacks can not only read data but also execute actions on your behalf - from sending emails to triggering transactions.
Security experts warn of the "Lethal Trifecta of AI" - a dangerous combination of three factors that make AI browsers a perfect attack target. When these three elements come together, security risks emerge that go far beyond normal browser vulnerabilities.
AI browsers automatically navigate to websites you would never have visited. In doing so, they can encounter malicious content specifically prepared for prompt injection attacks.
Through connectors like MCP, AI browsers have access to your emails, cloud storage, banking apps, and other sensitive accounts - a paradise for attackers.
AI browsers can independently send emails, make API calls, and communicate with external systems - perfect for transferring stolen data.
The AI executes commands immediately and without confirmation. What's practical in normal use becomes a security nightmare in prompt injection.
This combination makes AI browsers an ideal tool for cybercriminals: They have access to your most private data, can automatically extract it, and forward it to attackers through hidden channels - all without you noticing anything.
Although the threat is real, there are concrete steps you can take to protect yourself from prompt injection attacks. These measures significantly reduce your risk and make it much harder for attackers.
Use AI browsers separately from banking tabs and sensitive accounts. Use separate browser profiles or even different devices for critical applications.
Enable two-factor authentication via authenticator apps or passkeys. SMS-based 2FA is less secure and should be avoided.
Configure MCP servers and AI connectors so that every sensitive action must be manually confirmed. Automation is practical but risky.
Be skeptical of "summarize" features on unknown websites. Only use AI browser features on trusted sites.
The threat from prompt injection is not just theoretical. There are already documented cases where attackers have successfully used this technique. These examples show how sophisticated and dangerous such attacks can be.
Attackers placed hidden commands in Reddit comments. When users had the page summarized with Comet, the AI stole email addresses, triggered password resets, and transferred the data to the attackers.
CVE-2025-32711 allowed attackers to extract sensitive company data through manipulated documents. The AI executed embedded commands without users noticing anything.
Banking customers became victims of prompt injection attacks where AI browsers were manipulated to steal transaction data and transfer it to cybercriminals.
At the security conference, over 240 servers were compromised through AI workflow vulnerabilities. Prompt injection was the most common attack vector.
Prompt injection is so dangerous because it exploits a fundamental problem of AI systems: the inability to distinguish between data and instructions. Even with the best protection measures, challenges remain.
For AI systems, everything is just text. They cannot reliably distinguish between legitimate prompts and hidden commands embedded in normal web content.
Attackers hide commands in invisible HTML elements, CSS comments, or use techniques like white text on a white background.
As soon as a protection measure is implemented, attackers develop new bypass techniques. It's an endless race between attack and defense.
Too strict security measures make AI browsers unusable. The balance between user-friendliness and security is extremely difficult.
These challenges don't mean we're powerless. But they show why a multi-layered security approach and conscious use of AI browsers are so important.
Security doesn't have to be complicated. With this structured approach, you can use the benefits of AI browsers without exposing yourself to unnecessary risks. Each level builds on the previous one and gradually increases your security.
Separate AI browsers from critical accounts, enable 2FA everywhere possible, and only use "summarize" features on trusted websites. These measures immediately reduce your risk by over 80%.
Configure MCP servers for manual confirmation, set up separate browser profiles, and implement sandboxing for AI browsers. Review and restrict connector permissions.
Implement monitoring for anomalous AI activities, conduct regular security audits, and stay informed about new threats. Develop an incident response plan for AI security incidents.
The AI browser security landscape is changing rapidly. New technologies, regulations, and attack methods are constantly emerging. Those who set the right course today are prepared for the future.
AI regulations will set minimum standards for AI browsers by 2026. Companies should prepare early for stricter compliance requirements.
Context-aware prompt filtering and AI sandboxes will become standard. Zero-trust architectures for AI browsers are in development.
AI-based anomaly detection will be able to identify and block prompt injection attacks in real time.
Global standards for AI browser security are emerging through cooperation between major technology centers worldwide.
AI browsers offer enormous potential, but only if we use them securely. The Perplexity Comet vulnerability was a wake-up call - but also an opportunity to do better.
The future belongs to secure AI browsers. Companies and users who act now will be the winners. Because security is not the opposite of innovation - it is its foundation.