Anthropic’s Claude Chrome browser extension rolls out – how to get early access



ZDNET’s key takeaways:

  • Claude is incorporating AI into a Chrome web browser extension.
  • The closed beta allows users to chat with Claude in a side panel.
  • Anthropic warned early users to use the extension carefully.

Claude, Anthropic’s AI model, is following Perplexity’s Comet web browser and Dia in bringing AI into the browser. Anthropic’s first effort is a closed beta of a Chrome web browser extension.

With this extension, you’ll be able to chat with Claude in a persistent side panel that maintains context from active browser sessions. Beyond conversational AI, the extension can read, navigate, and take actions within websites. These actions can include tasks such as locating listings on Zillow, summarizing documents, or adding items to shopping carts — directly from the browser sidebar.

Also: Some teachers are using AI to grade their students, Anthropic finds – why that matters

The company said it’s taking this approach because it views “browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you’re looking at, click buttons, and fill forms will make it substantially more useful.”

How to try it 

Most users won’t be able to use this extension anytime soon, though. In its initial release, “Claude for Chrome” will only be available to 1,000 Claude Max plan subscribers. Max has two tiers, and neither is cheap: the $100 plan gives subscribers five times more usage per session, while the $200 plan gives them 20 times more. Subscribers can also sign up for a waitlist to try Claude for Chrome.

Also: Anthropic agrees to settle copyright infringement class action suit – what it means

While many users are eager to try Claude for Chrome, others are suspicious. As one Y Combinator commenter suggested:

Claude for Chrome seems to be walking right into the “lethal trifecta.” 

The lethal trifecta of capabilities is:

  • Access to your private data—one of the most common purposes of tools in the first place!
  • Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
  • The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration,” but I’m not confident that term is widely understood.)

If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker.
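The commenter's argument reduces to a simple capability check: the danger arises only when all three properties coexist in one agent. A minimal sketch of that logic (hypothetical names; this is an illustration of the concept, not Anthropic's code):

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    """Hypothetical capability flags for a browser-using agent."""
    reads_private_data: bool      # e.g., logged-in pages, emails, documents
    sees_untrusted_content: bool  # e.g., arbitrary attacker-authored web pages
    can_communicate_out: bool     # e.g., form submissions, outbound requests

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    # All three together let instructions injected into untrusted content
    # steer the agent into sending private data to an attacker.
    return (caps.reads_private_data
            and caps.sees_untrusted_content
            and caps.can_communicate_out)

# A general-purpose browser agent ticks all three boxes.
browser_agent = AgentCapabilities(True, True, True)
print(has_lethal_trifecta(browser_agent))  # True: exfiltration risk
```

Dropping any one capability — read-only browsing, no private data, or no outbound communication — breaks the chain, which is why mitigations tend to restrict one of the three rather than trying to make the model itself injection-proof.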

Proceed with caution

In fairness, Anthropic is aware of the dangers. Indeed, the company said: “We conducted extensive adversarial prompt injection testing, evaluating 123 test cases representing 29 different attack scenarios. Browser use without our safety mitigations showed a 23.6% attack success rate when deliberately targeted by malicious actors.” 

Also: The best VPN services (and how to choose the right one for you)

To block these attacks, Claude for Chrome implements a permission system. Users must grant explicit permission for each website or specific action, with heightened security for sensitive tasks such as purchases or account changes. The extension offers customizable controls, letting users choose when Claude can act autonomously and when human approval is required.
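The per-site, per-action gating described above might look something like the following sketch. All names here are hypothetical — this is an inference from Anthropic's description, not the extension's real implementation:

```python
# Hypothetical sketch of a per-site, per-action permission gate,
# modeled on the behavior described above (not Anthropic's actual code).
SENSITIVE_ACTIONS = {"purchase", "account_change", "delete"}

class PermissionGate:
    def __init__(self):
        self.granted_sites = set()  # sites the user has explicitly approved

    def grant_site(self, site: str) -> None:
        self.granted_sites.add(site)

    def may_act(self, site: str, action: str, human_approved: bool = False) -> bool:
        if site not in self.granted_sites:
            return False              # no blanket access: each site is opt-in
        if action in SENSITIVE_ACTIONS:
            return human_approved     # sensitive tasks need explicit sign-off
        return True                   # routine actions may run autonomously

gate = PermissionGate()
gate.grant_site("zillow.com")
print(gate.may_act("zillow.com", "browse"))          # True
print(gate.may_act("zillow.com", "purchase"))        # False until the user approves
print(gate.may_act("zillow.com", "purchase", True))  # True
```

The design choice worth noting is the two-level check: site access is a standing grant, while sensitive actions require fresh human approval every time, which limits what an injected prompt can do even on an approved site.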

Additionally, Claude for Chrome can’t be used with websites from high-risk categories, such as financial services, adult content, and pirated content. 

Also: Claude wins high praise from a Supreme Court justice – is AI’s legal losing streak over?

However, even with all those defenses in place, attacks still succeeded 11.2% of the time. That’s not good.

Therefore, Anthropic warned users to use the extension carefully and not to trust it with private information or real work. With all those caveats in mind, you can apply to give the extension a try. You can’t say that Anthropic didn’t warn you that you’ll be heading into dangerous territory. Good luck. 
