Nobody talks about the disasters. Everyone posts their success stories: "I built a full SaaS in one weekend with Claude." But behind the scenes, teams are learning expensive lessons about what happens when you give an AI agent full access to a real computer.

A Perfectly Normal Request

This story comes from a growth marketing team at an Amsterdam-based ecommerce brand doing about €1.8 million a month in ad spend across Meta, Google, and TikTok. They had been experimenting with Claude Code to automate their creative workflows: writing ad copy, generating landing page variations, building reporting scripts. It was working beautifully.

Then they hired a summer intern named Sofie to help with campaign management. Sofie was sharp, tech-savvy, and excited about AI. Her manager told her to "use Claude Code to build a script that pulls daily spend data from our ad accounts." Standard task. Nothing unusual.

The Intern and the API Keys

Here's what nobody thought about: Sofie's work laptop had the company's production API keys stored in a local .env file. Meta Business API tokens. Google Ads credentials. TikTok marketing API keys. She had them because she sometimes ran reporting scripts locally. Every developer and marketer at the company had a similar setup.

"We never thought about what an AI agent could do with the same credentials we use every day. It seemed completely harmless."

Sofie opened Claude Code and gave it a reasonable prompt: "Read the .env file for API credentials, connect to our Meta Ads account, pull yesterday's campaign data, and format it as a CSV." Claude did exactly what it was told. It read the .env file. It found the API keys. Then it connected to the live Meta Ads API.
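For context, the script Sofie asked for is genuinely mundane. A minimal sketch of what "pull yesterday's spend and write a CSV" looks like against the Meta Graph API is below; the environment variable names, API version, and output filename are illustrative, not taken from the incident.

```python
# Sketch of the intended reporting script, assuming credentials sit in a
# local .env file under hypothetical names like META_ACCESS_TOKEN.
import csv
import os

import requests
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # copies the .env file's contents into os.environ
TOKEN = os.environ["META_ACCESS_TOKEN"]
ACCOUNT_ID = os.environ["META_AD_ACCOUNT_ID"]  # e.g. "act_1234567890"

# Campaign-level spend for yesterday from the Graph API insights endpoint.
resp = requests.get(
    f"https://graph.facebook.com/v19.0/{ACCOUNT_ID}/insights",
    params={
        "level": "campaign",
        "date_preset": "yesterday",
        "fields": "campaign_name,spend",
        "access_token": TOKEN,
    },
)
resp.raise_for_status()

with open("daily_spend.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["campaign_name", "spend"])
    writer.writeheader()
    for row in resp.json().get("data", []):
        writer.writerow({"campaign_name": row["campaign_name"], "spend": row["spend"]})
```

Nothing in that task requires write access to the ad account. The credentials in the .env file, however, carried it anyway.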

Two Hours, Zero Alerts

But somewhere in the process of testing the connection, Claude sent API calls that reactivated paused campaigns. Specifically, it called the campaign update endpoint while trying to read campaign status, and a malformed request flipped several paused campaigns back to active.
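The post-mortem didn't publish the exact request, so the following is only one plausible shape of the failure. On the Graph API, the same campaign URL reads the object on a GET and updates it on a POST, which is all it takes for a "connection test" to become a write.

```python
# Illustrative only: one way a read-intent call can mutate a campaign.
# Not the literal request from the incident.
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # example API version


def read_campaign_status(campaign_id: str, token: str) -> str:
    # Safe: a GET with ?fields=status only reads the campaign node.
    resp = requests.get(
        f"{GRAPH}/{campaign_id}",
        params={"fields": "status", "access_token": token},
    )
    resp.raise_for_status()
    return resp.json()["status"]


def check_connection_badly(campaign_id: str, token: str) -> dict:
    # Unsafe: a POST to the same campaign node is an *update* call, so any
    # writable field it carries (here a status value copied from example
    # payloads) flips a paused campaign back to ACTIVE.
    resp = requests.post(
        f"{GRAPH}/{campaign_id}",
        data={"status": "ACTIVE", "access_token": token},
    )
    resp.raise_for_status()
    return resp.json()
```

With a token scoped for campaign management, the API happily accepts the second call. There is no confirmation step between "test the connection" and "spend money."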

These were campaigns that had been paused because they were losing money. High-budget test campaigns from the previous quarter with daily budgets of €4,000 to €12,000 each.

€50,000
burned in under two hours on reactivated campaigns nobody was monitoring

Nobody noticed for almost two hours. By the time the finance team flagged unusual charges, seven previously paused campaigns had burned through €50,000 in ad spend. The ads were running to audiences that had already been tested and rejected. The landing pages some of them pointed to had already been taken down. Pure waste.

The Post-Mortem

The company did a full post-mortem. The conclusion wasn't that Claude was broken or that Sofie made a mistake. The conclusion was that giving an AI agent unrestricted access to a machine containing production credentials is fundamentally unsafe. It doesn't matter how good the prompt is. It doesn't matter how smart the AI is. When an agent can read any file on your computer and make any API call, you are one ambiguous instruction away from a very bad day.

What We Changed

The fix was obvious in hindsight. AI agents need to run in isolated environments where they cannot access production credentials, corporate files, or sensitive infrastructure. This is exactly the problem a sandboxed workspace solves. Each user gets their own container with nothing in it except the tools they need. No .env files full of production keys. No access to the host filesystem. No ability to reach internal APIs unless explicitly configured.
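To make the principle concrete, here is a minimal sketch of that kind of isolation using the Docker SDK for Python. The image, container name, and host path are illustrative, and this is a sketch of the idea rather than the specific setup the company adopted.

```python
# Sketch: an agent workspace with no inherited credentials, no network,
# and only one project folder mounted. Names and paths are hypothetical.
import docker  # pip install docker

client = docker.from_env()

workspace = client.containers.run(
    "python:3.12-slim",        # clean base image, no company files baked in
    command="sleep infinity",  # keep the container alive for the agent session
    name="agent-workspace",
    detach=True,
    network_mode="none",       # no network access unless explicitly configured
    volumes={
        # Mount only the project folder; the host filesystem and any .env
        # files full of production keys stay out of reach.
        "/srv/sandboxes/sofie/project": {"bind": "/workspace", "mode": "rw"},
    },
    working_dir="/workspace",
    environment={},            # no inherited env vars, so no stray API keys
)
```

The point isn't this particular tooling. It's that whatever the agent can read or call inside the workspace is an explicit, reviewable decision instead of whatever happens to be lying around on a laptop.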

Sofie still works at the company. She still uses Claude Code every day. But now she uses it inside a sandboxed workspace where the worst thing Claude can do is mess up a project folder that gets restored from a version snapshot in two clicks.

"The team sleeps better knowing their AI agent can't accidentally spend €50,000 before anyone's had their morning coffee."