Datamation content and product recommendations are
editorially independent. We may make money when you click on links
to our partners.
Learn More
Security researchers have uncovered alarming vulnerabilities in OpenAI’s new Atlas browser that could let hackers turn your AI assistant against you.
LayerX security experts found critical flaws that enable attackers to inject persistent malicious code directly into ChatGPT’s memory system.
The discovery follows Atlas launching last week, a speed that underscores how quickly the holes were spotted.
These are not ordinary browser bugs. They exploit Atlas’s AI-powered features to embed harmful instructions that survive browser restarts, device switches, and session changes. That code can quietly redirect you to phishing sites or siphon sensitive data.
The hidden attack
The vulnerability stems from a Cross-Site Request Forgery, or CSRF, flaw in how Atlas handles the browser’s omnibox feature, which integrates AI for summarizing web content and generating responses. Attackers can craft fake URLs that mimic legitimate ones, tricking the system into processing malicious prompts that lodge themselves in ChatGPT’s memory.
The real gut punch is persistence. Unlike hit-and-run exploits, these injected commands stay active across devices and browsers tied to your ChatGPT account. LayerX’s testing found Atlas blocked only 5.8% of phishing attempts, compared to 47–53% for Chrome and Edge, a gap that can make users up to 90% more vulnerable to attacks.
The attack also leans on Atlas’s default login to ChatGPT, which keeps credentials within reach and streamlines exploitation without extra token theft. Once the malicious directives are in, they trigger during normal queries and can push ChatGPT to generate harmful outputs, including remote code downloads from attacker-controlled servers.
The AI browser industry
OpenAI’s problem is not an isolated incident. It signals a broader security breakdown across AI-powered browsers. Research by Brave points to similar weaknesses in Perplexity’s Comet and other AI browsers.
At the core is how AI browsers interpret natural language prompts. Malicious actors can hide deceptive instructions in web content, leading to unauthorized transactions, data leaks, or accidental exposure of sensitive information.
Current defenses fall short. Script blockers and code analysis help with traditional threats, but they do little when the system reads and acts on human language.
Detection and protection
OpenAI has acknowledged the issue and is reportedly working on patches, but experts urge immediate caution. The safest move is to avoid AI browsers entirely until security measures improve.
If you must use Atlas, fence it off from sensitive accounts that hold personal health or financial data. Use OpenAI’s “logged out mode” to keep credentials out of reach, disable the memory feature, and clear any stored data.
OpenAI Chief Information Security Officer Dane Stuckey acknowledged that prompt injection remains “a frontier, unsolved security problem,” with adversaries poised to spend significant resources exploiting these gaps.
Until sturdier safeguards arrive, treat AI browsers as potential risks, not trusted copilots.
This article was reviewed by Antony Peyton.