
My agent use cases, part 2

1 Mar 2026 · 4 min read

In part 1 I covered the straightforward agent use cases: copyediting, onboarding, expense reports, Jira management. Those are about doing existing tasks faster. The five below are different. They share a thread: the agent isn't replacing someone's work; it's gathering context I wouldn't have gathered on my own, so I can think better.

Weekly review as a chief of staff

Every Monday morning, I ask Claude to look through my daily notes, calendar, emails, Slack, and meeting transcripts from the past week, and give me an update on what I did. It pulls together threads I'd forgotten about, surfaces follow-ups I missed, and reminds me what's still open.

@Taivo: Look through my last 2 weeks of daily notes, CTO plans, maybe weekly reviews. I am considering what I have missed or not acted on, or what topics have been in the air, to inform decision of where I should focus.

From there, I review my open projects and priorities, and ask it to help me think through where to focus this week.

I also use a variation at the end of the day:

@Taivo: Out of what I worked on today, consider what I should be communicating to others.

It's a good forcing function for visibility.

The pattern here is using the agent as a chief of staff who has read access to all your systems. It doesn't make decisions for you, but it gathers the context you need to make good ones faster.

Meeting preparation

Before a day of meetings, I ask Claude to look through my calendar and consider how I might want to prepare for each conversation.

@Taivo: Look through my calendar for today, consider how I might want to prepare for each convo.

For a customer meeting, it'll pull recent activity from our CRM and call recordings. For a 1:1 with a direct report, it'll check our shared notes and recent Slack context.

The most useful part is when it surfaces connections I wouldn't have looked up myself: an attendee I haven't met before, a thread from two weeks ago that's relevant to today's topic, or a decision from last quarter that I'd forgotten about.

This takes about two minutes and often saves me from walking into a meeting cold.

Writing role descriptions interactively

Most people use AI to generate a first draft. For documents where I've done lots of thinking but haven't organized it, I flip the interaction: instead of asking the agent to produce content, I ask it to interview me.

Last month I was writing a role description for a new hire. I gave the agent the high-level framing and it asked me about 20 questions over the course of an hour.

It read our Confluence docs for context, and drafted sections as we went. The result was much more specific than what I'd have produced on my own, because the Q&A format forced me to articulate assumptions I'd been carrying around.

This works for any document where the knowledge exists in your head but hasn't been structured yet.

Building skills for your agents

I use agents to build their own capabilities. For example:

@Taivo: Look through emails I've sent; describe the tone / approach / communication, in roughly 5-10 bullets. Put these bullets into a new Claude skill, which is about speaking / communicating on my behalf.

I turned those patterns into a skill that helps the agent write messages in my voice. I built a skill that connects to our call recording platform, so agents can search customer conversations. I'm working on one for Salesforce, so account executives can keep CRM data updated just by chatting.

@Taivo: Following the approach in google-workspace skill, confluence and jira skills, I want to make a salesforce skill that uses its CLI to act in user permissions, to take actions in salesforce. Help me plan it out.

The interesting part is the feedback loop. I notice a gap ("I wish Claude could check Gong transcripts"), build the skill in an afternoon (often with Claude's help), and start using it the same day. A few iterations later it's ready to share with the team. The gap between "I wish it could do this" and "it can do this" has collapsed from months to hours.
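For readers who haven't built one: a Claude skill at its simplest is a folder containing a SKILL.md file, with YAML frontmatter naming and describing the skill and a body of instructions the agent loads when the skill is relevant. The sketch below shows the shape only; the skill name and the bullets are illustrative placeholders, not the actual content of my communication skill.

```markdown
---
name: taivo-voice
description: Draft messages on Taivo's behalf, matching his tone and style.
---

When drafting messages as Taivo:

- Lead with the conclusion; keep sentences short.
- Prefer concrete examples over abstractions.
- (...remaining tone bullets distilled from sent emails...)
```

The frontmatter `description` matters more than it looks: it's what the agent reads when deciding whether to pull the skill into context at all.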

Market intelligence

We needed to understand which of our prospective customers use certain procurement platforms. Customer lists aren't public, so you can't just look this up. But answers can be pieced together from public sources: press releases, supplier portals, job postings, conference presentations.

I had Claude systematically research about 50 companies, checking four or five data sources for each one and compiling the results into a table. What would have been a week of manual research by an analyst took 20 minutes of Claude running in the background.
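The shape of that research pass is simple enough to sketch. Below is a minimal Python illustration of the loop, not what Claude actually ran; the source-check functions, company names, and platform names are stubbed placeholders standing in for real lookups against press releases, job postings, and the rest.

```python
import csv
import io

# Hypothetical source checks. In practice each would query a real public
# source (press releases, supplier portals, job postings, conference
# talks); here they return canned answers for illustration.
def check_press_releases(company):
    return {"Acme Corp": "Coupa"}.get(company)

def check_job_postings(company):
    return {"Acme Corp": "Coupa", "Globex": "SAP Ariba"}.get(company)

SOURCES = {
    "press_releases": check_press_releases,
    "job_postings": check_job_postings,
}

def research(companies):
    """Check each company against every source and tabulate the hits."""
    rows = []
    for company in companies:
        hits = {name: fn(company) for name, fn in SOURCES.items()}
        # Keep per-source evidence so a human can audit each claim later.
        rows.append({"company": company, **hits})
    return rows

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = research(["Acme Corp", "Globex", "Initech"])
print(to_csv(rows))
```

Keeping one column per source, rather than a single merged answer, is what makes the output auditable: when the agent is confidently wrong, you can see which source misled it.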

The accuracy wasn't great on the first pass. It confidently stated things that turned out to be outdated, and missed signals that a domain expert would have caught. But it gave us a starting point good enough to prioritize outreach, and the method is repeatable. Each round gets better as we learn which sources are reliable and which aren't.

What ties these together

The common pattern here isn't "AI does my work for me." It's that agents are best at the parts I'd otherwise skip: the context gathering before a decision, the synthesis across systems, the structured thinking I'd shortcut under time pressure. The weekly review, the meeting prep, the interview-style drafting, the market research: they all follow the same shape. The agent does the legwork that makes my thinking better.

The skills-building use case is the exception, and it's the one I'm most excited about. That's not about augmenting one task. It's about compounding: every skill I build makes all future interactions more capable.