Blog December 9, 2025

AI Browsers Are Here—But Enterprises Aren’t Ready. Why Obrela Advises Extreme Caution

George Daglas, EVP & Chief Strategy

The cybersecurity landscape is changing at a pace we haven’t experienced since the dawn of cloud computing. The newest disruptor, the rise of AI browsers such as Perplexity Comet and OpenAI’s ChatGPT Atlas, promises to revolutionize user interaction with the web. But behind the innovation lies a long list of risks that enterprises cannot afford to ignore.

At Obrela, where our mission is to keep your business in business by making cybersecurity predictable, we view AI browsers as an emerging capability with immense potential, yet carrying dangers significant enough to warrant a firm, caution-first stance.

What Makes AI Browsers Different and Risky?

Traditional browsers are passive windows into the internet. AI browsers, however, are active participants.

We are witnessing a shift from direct manipulation to Agentic AI. This transforms the browser from a tool you use into a worker you command. When a user tells an AI browser to “Book a flight” or “Summarize my emails,” the browser autonomously navigates portals, parses DOM elements, fills forms, and submits transactions.

This capability, known as agentic transactions, creates a “Black Box of Execution” where the browser acts on its own logic. This is groundbreaking but creates a “confused deputy” problem: a compromised agent doesn’t just leak data; it can execute unauthorized business logic on your behalf.


These browsers can:

  • Bypass traditional controls
  • Extract sensitive page content
  • Interact within authenticated sessions
  • Perform erroneous or unintended actions
  • Leak internal data to cloud-based AI back ends

In other words, your browser is no longer a viewer. It is an actor, and one you do not fully control.

Five Immediate Risks Enterprises Must Confront

Enterprises should treat AI browsers as high-risk “Shadow IT” due to the following:

1. They Misalign With Enterprise Risk Tolerance

AI browsers are early-stage technologies: immature, untested at scale, and lacking enterprise-grade guardrails such as centralized Group Policy management. Unlike mature enterprise tools, they provide no administrative console for CISOs to audit activity. They effectively function as an unapproved operating system for the web. CISOs should block these consumer variants for the foreseeable future.

2. Sensitive Data Leakage Is Built Into Their Design

To function, AI browsers transmit page content (including text, emails, form fields, and browsing history) to cloud-based inference engines. For example, to “summarize” a page, the browser reads its content and sends it to third-party servers (e.g., Perplexity or OpenAI). This creates an uncontrolled, continuous flow of sensitive data outside the corporate “perimeter”, often without a Data Processing Agreement (DPA) in place.

3. Erroneous or Rogue Transactions Are Inevitable

AI reasoning is fallible and susceptible to manipulation. AI browsers can:

  • Submit incorrect internal forms
  • Order the wrong goods
  • Complete mandatory training on behalf of employees
  • Navigate to phishing pages and surrender credentials

Researchers have demonstrated attacks such as “CometJacking”, where a malicious webpage contains hidden instructions (indirect prompt injection) that hijack the browser’s agent.

Without the user knowing, the browser could be tricked into:

  • Reading sensitive data from other open tabs (e.g., email or CRM).
  • Base64-encoding that data to evade DLP.
  • Exfiltrating it to an attacker-controlled server.

While vendors are introducing reactive scanners like “BrowseSafe” to detect these prompts, these are probabilistic defenses in a game where attackers only need to win once.
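The Base64-evasion step described above is detectable in principle. Below is a minimal sketch, assuming a Python-based inspection hook in a DLP or SWG pipeline, of a heuristic that flags long base64 runs in outbound payloads which decode to high-entropy content. The regex, entropy threshold, and example payload are illustrative assumptions, not any vendor’s actual detection logic.

```python
import base64
import binascii
import math
import re

# Candidate base64 runs: 40+ chars of the base64 alphabet, optional padding.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of `data`."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def flag_suspect_payload(body: str, min_entropy: float = 3.0) -> bool:
    """Return True if `body` carries a long base64 run that decodes to rich content."""
    for run in B64_RUN.findall(body):
        # Pad to a multiple of 4 so near-miss runs still decode.
        padded = run + "=" * (-len(run) % 4)
        try:
            decoded = base64.b64decode(padded, validate=True)
        except binascii.Error:
            continue
        if shannon_entropy(decoded) >= min_entropy:
            return True
    return False

secret = base64.b64encode(b"Q3 board minutes: acquisition target is ACME Corp").decode()
print(flag_suspect_payload(f"q={secret}"))             # True: encoded exfil
print(flag_suspect_payload("q=summarize+this+page"))   # False: benign query
```

Like the vendor scanners the text mentions, this is probabilistic: long encoded blobs occur in legitimate traffic too, so hits should feed an alert queue, not an automatic block.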

Automation amplifies impact: errors happen faster and at scale.

4. Default Settings Prioritize User Experience, Not Security

Consumer AI browsers often retain user data to “improve models” or maintain “Browser Memories” indefinitely. Privacy is often an optional toggle hidden in settings, leading to “opt-out fatigue” among employees. In an enterprise environment, relying on individual users to manage privacy settings is a failed control strategy.

5. Critical Vulnerabilities Are Already Being Discovered

The rush to market has led to severe security regressions. Security researchers recently discovered that ChatGPT Atlas on macOS bypassed critical operating system sandboxing. It stored sensitive user conversations and OAuth access tokens in unencrypted plain-text files within the ~/Library/Application Support/ directory. This allowed any malware running on the device to harvest high-privilege credentials without user interaction. This flaw could allow account compromise at scale. Consumer AI browsers are moving fast, too fast for enterprise trust.
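Defenders can audit their own fleet for this class of flaw. The sketch below walks a directory tree and flags files containing OAuth-style bearer tokens or JWTs in the clear; the token patterns are generic illustrations, not the actual Atlas file formats or paths.

```python
import os
import re

# Generic token shapes: a JWT header prefix ("eyJ...") or a "Bearer <token>"
# string. Real audits should use patterns for the credentials you issue.
TOKEN_PATTERN = re.compile(rb"eyJ[A-Za-z0-9_-]{20,}|Bearer\s+[A-Za-z0-9._-]{20,}")

def find_plaintext_tokens(root: str) -> list[str]:
    """Return paths under `root` whose first 1 MiB contains a token-like string."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    if TOKEN_PATTERN.search(fh.read(1 << 20)):
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip rather than crash the audit
    return hits
```

Run it against application-support directories (e.g., under ~/Library/Application Support/ on macOS) and review any hits by hand before acting on them.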


Obrela’s Position: Control First. Adoption Later.

As a Cyber Risk management provider, we view AI browsers through a lens of operational risk and adversarial opportunity. The “Browser Wars” have returned, but this time, the stakes are the integrity of your decision-making loop.

Immediate Recommendations

1. Block AI Browser Installation

Use endpoint detection and response (EDR) to flag installation binaries (e.g., Comet.exe, Atlas.app). Update Secure Web Gateways (SWG) to inspect User-Agent strings and block traffic to consumer AI API endpoints (e.g., api.perplexity.ai) originating from unmanaged processes. This is especially critical for organizations in:

  • Financial services
  • Healthcare
  • Critical national infrastructure
  • Government and defense
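The blocking logic above can be sketched as a simple two-part filter on User-Agent and destination host. The User-Agent markers and endpoint hosts below are assumptions for demonstration; real deployments should rely on vendor-published indicators and keep the lists current.

```python
# Hypothetical UA tokens and example hosts only -- not verified indicators.
AI_BROWSER_UA_MARKERS = ("Comet/", "Atlas/")
BLOCKED_AI_ENDPOINTS = {"api.perplexity.ai", "chatgpt.com"}

def should_block(user_agent: str, dest_host: str) -> bool:
    """Block traffic from a suspected AI browser or to a consumer AI API endpoint."""
    if any(marker in user_agent for marker in AI_BROWSER_UA_MARKERS):
        return True
    return dest_host.lower() in BLOCKED_AI_ENDPOINTS

print(should_block("Mozilla/5.0 ... Comet/1.2", "example.com"))     # True
print(should_block("Mozilla/5.0 Chrome/120", "api.perplexity.ai"))  # True
print(should_block("Mozilla/5.0 Chrome/120", "intranet.corp"))      # False
```

Note that User-Agent strings are trivially spoofable, which is why the recommendation pairs this with EDR checks on the originating process.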

2. Update AI Usage and Browser Acceptable Use Policies

Explicitly prohibit the use of “Agentic” or “Autonomous” browsing tools that perform actions without per-click user confirmation. Clarify that “Shadow AI” tools violate data handling policies for internal classification levels. The risk is not worth the productivity gain.

3. Treat AI Browser Traffic as High-Risk

Update your SOC playbooks to:

  • Identify AI browser user agents
  • Detect anomalous automation patterns, such as high-frequency requests to diverse domains (indicative of automated scraping)
  • Monitor for unexplained session actions, such as OAuth token generation linked to corporate emails on non-standard applications
  • Flag unexpected outbound data transfers to AI inference clouds
  • Enforce MFA challenges on suspicious browser-driven events
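The “anomalous automation” check above can be prototyped as a sliding-window heuristic: flag a session that issues many requests across many distinct domains in a short interval. This is a minimal sketch; all thresholds are illustrative assumptions and should be tuned against your own baseline traffic.

```python
from collections import defaultdict

def is_automated(events, window_s=60.0, req_threshold=120, domain_threshold=25):
    """`events` is a list of (timestamp_seconds, domain) tuples for one session."""
    events = sorted(events)
    domains = defaultdict(int)  # domain -> hits inside the current window
    start = 0
    for end, (ts, domain) in enumerate(events):
        domains[domain] += 1
        # Shrink the window from the left until it spans at most window_s.
        while ts - events[start][0] > window_s:
            old = events[start][1]
            domains[old] -= 1
            if domains[old] == 0:
                del domains[old]
            start += 1
        if end - start + 1 > req_threshold and len(domains) > domain_threshold:
            return True
    return False

# A burst of 200 requests over ~20 seconds across 50 domains looks automated;
# 30 requests spread over an hour to a single domain looks human.
print(is_automated([(i * 0.1, f"d{i % 50}.example") for i in range(200)]))  # True
print(is_automated([(i * 120.0, "mail.corp") for i in range(30)]))          # False
```

In production this logic would live in SIEM correlation rules rather than standalone code, but the shape of the signal (request rate combined with domain diversity) is the same.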

4. Maintain Human-in-the-Loop for Sensitive Actions

Obrela firmly believes that the fusion of Strength, Discipline, and Intelligence – our human-plus-AI philosophy – is essential to safe AI operationalization. Where browsers remove human oversight, risk skyrockets.


When Could AI Browsers Become Safe Enough?

AI browsers will eventually become a transformative enterprise tool. In fact, enterprise-specific browsers like Microsoft Edge for Business are already bridging the gap by offering “Enterprise Data Protection”, where data is not used for training and admins can manage “Agent Mode”.

However, for consumer tools (Atlas, Comet) to be ready, they will need:

  • Centralized policy control: GPO/MDM integration to enforce configuration settings.
  • Local inference: Processing sensitive data with on-device LLM inference rather than in the cloud.
  • Proven Resistance to Prompt Injection: Deterministic isolation of untrusted content.
  • Auditable Agentic Workflows: Logs that distinguish between human and AI actions.
  • Zero-trust Integration: Seamless tie-ins with corporate Identity Providers (IdP).
  • Mature security baselines enforced at scale
  • Robust vendor security assurance and enterprise SLAs

We estimate this is at least 2 years away for regulated sectors—if vendors prioritize security over speed. Until then, the risk profile remains unacceptable.


How Obrela Helps Clients Navigate This New Threat Surface

Whether organizations choose to block AI browsers outright or pilot them in limited sandboxes, Obrela supports clients through:

  • MDR for Hybrid AI Environments
    Detecting anomalous browser behavior, automation patterns, and unauthorized AI integrations.
  • Managed Risk & Controls (MRC)
    Monitoring of third-party AI exposure, vendor risk, and policy compliance.
  • Incident Response Services
    Rapid response to credential leaks, API misuse, or automation-driven incidents originating from AI browsers.
  • SWORDFISH® Platform
    Unified visibility and real-time risk tracking across assets, identities, and AI-integrated workloads.

AI changes the battlefield. Obrela ensures you are not fighting blind.


Innovation Requires Discipline

AI browsers represent one of the most significant shifts in end-user computing since the introduction of JavaScript. But where technology leaps ahead, security must and will push back.

Today, unrestricted AI browser adoption is a strategic risk – a form of Shadow IT. Tomorrow, with the right controls, it may become an operational advantage.

At Obrela, our role is to ensure you operate in a world where cyber risk becomes predictable, not a by-product of unchecked innovation.