stream
Nobody gets fired for buying Anthropic
"Nobody gets fired for buying IBM" used to mean: pick the vendor nobody will question. It was a decision driven by fear, not conviction. I think Anthropic is becoming the equivalent for AI, but for a better reason.
At Pactum, we didn't choose Anthropic through structured evaluation. We started with Gemini because we're on Google Workspace, and it was just there. It was underwhelming. I had been using ChatGPT, Claude, and Gemini side by side for months, so I had a baseline for relative strengths. When we decided to go all-in on AI agents for the company, Claude Code was the strongest coding agent, and that's where we started.
From there we kept going deeper. We built skills on top of Claude Code and wired them into internal systems, then created a shared company handbook that every agent session loads automatically. Once we centralized configuration, IT could set boundaries for what 150 people's agents can and cannot do. Each new layer of investment made the next one easier to justify, and harder to reverse.
Somewhere along the way, the governance question flipped. I had expected it to be a blocker: how do you get agents adopted if you can't audit what they do, constrain what they access, or defend the setup to security and the board? But the enterprise controls we needed were already in place or shipping fast. Observability that shows what agents are doing across the org. Hooks that intercept agent actions before they execute so IT can enforce policy. A managed product (Cowork) that wraps the whole experience in something easier to govern.
With the wrong vendor, your choices are bad: deploy agents you can't audit or constrain, or keep a holding pattern until you can. We picked Anthropic because it let us ship agents to 150 people without choosing between speed and governance.
OpenAI and Google may close this gap. But right now, when someone asks how you govern agents across your company, you can point to real controls, not a roadmap.