A real-world case study

What I Learned Using Claude Code for a Month

212 messages. 29 sessions. Zero lines of code written by hand. What a business consultant learned about turning an AI coding tool into a full operations platform.


What I Actually Use It For

Claude Code isn't just for writing software. Here are the five categories of real business work I run through it every week — no IDE required.

🎬
AI Video Generation & Editing
Creating intro videos, editing AI-generated clips, and cutting long-form content into shorts using Veo, Sora, Riverside, and Seedance — all orchestrated through Claude controlling the browser.
~6 sessions
🌐
Web Development & Deployment
Building and deploying client websites on Replit, Vercel, and Netlify. Full Next.js sites, landing pages, and conference workshop sites — without writing code directly.
~7 sessions
✍️
Content Creation & Marketing
Newsletters in ConvertKit, marketing strategies with buyer personas, Notion database management, Google Docs editing, and Facebook posts — full content pipeline.
~8 sessions
📊
Business Strategy & Documents
Strategic plans, cap table models, client tutorials, and business launch strategies. Claude researches, writes, and formats comprehensive deliverables in single sessions.
~5 sessions
📁
File Management & Tech Support
Organizing 133 files into 14 subfolders, experimenting with AI-assisted document review workflows, debugging crashed MCP servers, and fixing Excel add-in issues.
~3 sessions
💡
The Key Insight
Claude Code is a browser-controlling virtual assistant that can operate any web application. If you can click it, Claude can automate it. The tool's power extends far beyond its name.
1,243 browser actions

How I Work

Not the way a developer uses it. This is the operational pattern of someone running client deliverables, not shipping code.

The Workflow Pattern

Give the full scope upfront
List every deliverable, every platform, every output format in one message — but describe each one as an outcome, not a procedure. Don't drip-feed tasks one at a time.
Let it run autonomously
Step away. Average response time: ~100 seconds. Long sessions run for hours with minimal check-ins. Claude uses TodoWrite to track its own progress.
Check in at milestones
Review outputs when Claude pauses for approval. Don't micromanage every click — trust the task queue and verify the results.
Pivot when things break
Browser disconnected? Platform misbehaving? Don't retry the same thing. Change tools, change approach, or take the manual step yourself.

What Makes It Work

TodoWrite as command center
186 task-tracking calls across 29 sessions. Every complex project gets broken into a checklist that Claude works through systematically. This is the single biggest factor in session success.
High-level goals, not specs
"Build Danny a restoration business website" beats a 50-line requirements doc. Describe the outcome you want and let Claude Code decide how to get there.
Evening work sessions
108 of 212 messages sent between 6 PM and midnight. The workflow fits around a busy schedule — launch tasks in the evening, review results by morning.
Best model, max effort
Boris Cherny (Claude Code's creator) always uses the most capable model with maximum effort enabled. A less capable model seems cheaper but often costs more tokens through extra correction and hand-holding.
"You're essentially using Claude as an intelligent virtual assistant to handle tedious multi-step browser workflows that would otherwise eat up your time." — from the /insights command in Claude Code
A note from the creator: Boris Cherny's biggest advice is don't box the model in. Strict step-by-step workflows made sense a year ago, but today's models do better when you give them tools and a goal, not a script. Use guardrails (CLAUDE.md rules, plan mode) to catch problems — not to micromanage execution.

What Went Wrong (And What I Learned)

31 wrong approaches. 11 buggy code instances. 8 tool failures. Every mistake became a rule. Here's what I'd tell past me.

Lesson 01

The Problem

Browser automation is your biggest asset and your biggest liability. Chrome extension disconnections, unclickable UI elements, and MCP tool failures stalled or killed entire sessions. One Riverside video-editing session was almost entirely wasted because the extension disconnected repeatedly.

The Fix

Verify the Chrome extension is connected before starting. If a tool fails twice consecutively, stop and refresh — don't let Claude retry 10 times. Break browser-dependent tasks into smaller sessions. Always have a fallback plan that doesn't require the browser.

Lesson 02

The Problem

Claude tries wrong approaches before the right one. This was the #1 friction source — 31 events. When an Excel add-in disappeared, Claude suggested several inapplicable fixes before landing on the simple cache clear that actually worked.

The Fix

Make Claude state its planned approach before executing. The "plan-first" pattern — tell me your approach in 2-3 bullets, what tools you'll use, and what you won't do — catches bad plans before they waste 20 minutes.

Lesson 03

The Problem

API and service failures block entire sessions with no recovery. Facebook post creation and markdown-to-Word conversion both produced zero output due to repeated ECONNRESET errors. One Canva session failed three ways at once: Canva AI errors, a Chrome disconnection, and Google Apps Script failures.

The Fix

Check connectivity before starting complex tasks. If two consecutive API errors occur, restart the session instead of retrying. Always have an offline or alternative workflow ready for deadline-critical work.

Lesson 04

The Problem

Claude removes or changes content you didn't ask it to touch. Table columns disappeared, presentation content was modified, and edits went beyond what was requested — requiring restoration work.

The Fix

Add an explicit rule to your CLAUDE.md: "Never remove, delete, or modify content that wasn't explicitly requested." This single instruction dramatically reduces unwanted changes across all sessions.

Lesson 05

The Problem

Replit Agent interference derails deployments. When using Claude to control Replit via browser, Replit's own AI agent would intercept and interfere. URL link previews blocked text input fields. Simple git operations became multi-retry ordeals.

The Fix

Disable Replit Agent before starting. Batch all Replit instructions into a single message instead of sending them piecemeal. Be aware of URL preview overlays blocking input fields and close them manually if needed.

The meta-lesson: Most friction comes not from Claude being incapable, but from the environment being unreliable. The fix is almost always a guardrail (a CLAUDE.md rule, a preflight check, a fallback plan) rather than a better prompt.

The Setup That Improved the Way I Work

These are the exact configurations, rules, and custom setups I now use. Copy what's useful, adapt the rest.

Browser Automation Guardrails

The #1 rule for anyone doing browser-heavy work. This alone saves hours of wasted retry loops.

When using browser automation (Claude in Chrome MCP), expect frequent disconnections. Always verify the extension is connected before starting multi-step browser workflows. If a tool fails twice consecutively, pause and ask the user to refresh the Chrome extension before continuing.
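These rules live in a CLAUDE.md file at the project root, which Claude Code reads at the start of every session. A minimal sketch of adding the guardrail from the command line — the heading and condensed wording below are my own:

```shell
# Append the browser guardrail to the project's CLAUDE.md, which Claude Code
# loads automatically at session start. Creates the file if it doesn't exist.
cat >> CLAUDE.md <<'EOF'

## Browser automation guardrails
- Verify the Chrome extension is connected before any multi-step browser workflow.
- If a tool fails twice consecutively, pause and ask me to refresh the extension.
EOF
```

Once the rule is in CLAUDE.md, you never have to restate it per session — every new session starts with the guardrail already in place.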

Content Protection

Prevents Claude from removing or modifying things you didn't ask it to change.

Never remove, delete, or modify content that wasn't explicitly requested to be changed. When editing documents, presentations, or web pages, preserve all existing content unless the user specifically asks for removal.

Error Handling

Stops Claude from silently retrying broken connections. Saves you from waiting on sessions that will never complete.

When encountering API connection errors (ECONNRESET, timeout), immediately inform the user rather than retrying silently. If two consecutive API errors occur, suggest the user restart the session rather than continuing to attempt work.

Replit Compatibility

Specific rules for working with Replit via browser automation.

When working with Replit via browser automation:
1) Disable or work around Replit Agent auto-interference by instructing the user to turn it off first
2) Be aware that URL link previews can block text input fields
3) Batch all instructions to Replit Agent in a single message rather than sending them piecemeal

AI Video Generation

Prevents Claude from silently falling back to inferior tools when video generation fails.

For AI video generation tasks (Veo, Sora, Seedance): always confirm tool/MCP access works before promising AI-generated output. If AI video tools are unavailable, immediately tell the user and offer alternative approaches rather than falling back to ffmpeg without asking.

Content Defaults

Keeps professional deliverables focused and US-specific unless told otherwise.

When creating business or professional content, default to US-only compliance and regulations unless the user specifies otherwise. Do not add editorial opinions about how to frame topics internally within organizations. Keep content focused on what was requested.

What Are Custom Skills?

Reusable prompt workflows triggered by a single /command. If you have a process you repeat across sessions, turn it into a skill so Claude starts with the right context instead of guessing.

Example: /plan-first

Forces Claude to present its approach before executing. Catches wrong approaches before they waste time. Boris Cherny's full pattern: plan mode → iterate until the plan is solid → then auto-accept edits and let Claude one-shot the execution.

Before starting any work on this task, present your planned approach:
1. State your approach in 2-3 bullet points
2. List what tools you'll use
3. State what assumptions you're making
4. Explicitly state what you will NOT do
5. Wait for the user's approval before proceeding

Example: /multi-platform

Stage-gated workflow for tasks that span multiple web platforms.

This task involves multiple platforms. Work in phases and do not advance without user approval:
Phase 1: Verify all tool access works (test each platform briefly)
Phase 2: Do the core work on the primary platform
Phase 3: Deploy/publish to destination platforms
After each phase, summarize what's done and what's next. Don't move to the next phase without my go-ahead.

Example: /browser-task

Pre-flight checklist for any session involving browser automation.

Before starting this browser task, follow these steps:
1. Take a screenshot to verify Chrome MCP tools are connected
2. If they fail, tell me immediately
3. Break the task into checkpoints (every 3-4 browser actions)
4. After each checkpoint, confirm progress before continuing
5. If the extension disconnects, pause and ask me to reconnect

What Are Hooks?

Shell commands that auto-run at specific lifecycle events (pre-edit, post-edit, etc.). They catch errors automatically before they cascade into multi-step debugging sessions.
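Under the hood, each hook command receives a JSON description of the tool call on stdin. A minimal sketch of pulling out the edited file's path with jq — the exact payload shape here is an assumption for illustration:

```shell
# Simulate the JSON a post-edit hook receives on stdin and extract the edited
# file's path. The payload shape below is assumed for illustration.
payload='{"tool_name":"Edit","tool_input":{"file_path":"src/app.tsx"}}'
file=$(printf '%s' "$payload" | jq -r '.tool_input.file_path')
echo "$file"
```

This is the pattern the auto-lint hook below relies on: parse stdin, check the file extension, then decide whether to run a check.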

Auto-lint after edits

Runs type checking after every file edit for TypeScript projects. Catches errors before they compound.

Add to .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "file=$(jq -r '.tool_input.file_path'); if [[ \"$file\" == *.ts || \"$file\" == *.tsx ]]; then npx tsc --noEmit 2>&1 | head -20; fi"
          }
        ]
      }
    ]
  }
}

What Is Headless Mode?

Run Claude non-interactively from scripts for batch operations. Perfect for repetitive tasks like organizing files, processing documents in bulk, or generating reports — no babysitting required.

Batch Process Client Documents

Process an entire folder of documents with consistent review criteria and structured output.

for file in documents/*.pdf; do
  claude -p "Review this document. Summarize key points and flag any issues. Save the review to reviews/$(basename "$file" .pdf)_review.md" \
    --allowedTools "Read,Write,Bash" < "$file"
done

Batch File Organization

Organize a folder of files into a logical structure automatically.

claude -p "Organize all files in ~/Downloads/project-files into logical subfolders by type and topic. Create a manifest.md listing what was moved where." \
  --allowedTools "Read,Write,Bash,Glob"

Workflow Templates You Can Copy

Three battle-tested prompts for the most common multi-step workflows. Paste them directly into Claude Code.

Browser Automation

Resilient Browser Automation

For any multi-step browser task. Builds in failure recovery, checkpointing, and automatic retry with alternative strategies.

I need you to complete a multi-step browser automation task with built-in resilience.

Before starting, create a TodoWrite checklist of every discrete step. For each step: attempt the action, verify it succeeded by reading the page state, and if it fails, try up to 3 alternative approaches (different selectors, page reload, re-navigation). After each successful step, mark the todo complete. If the Chrome extension disconnects, pause and instruct me to reconnect, then resume from the last incomplete todo.

The task is: [DESCRIBE YOUR BROWSER TASK]

After completion, generate a summary of what succeeded, what required retries, and any steps that couldn't be completed.
Content Production

Document-to-Deployment Content Factory

For research-to-publish workflows. Stage-gated pipeline with approval checkpoints at every phase transition.

Run an autonomous content production pipeline for the following deliverable: [DESCRIBE THE CONTENT NEEDED]

Follow this pipeline with stage gates where you pause for my approval:
STAGE 1 - RESEARCH: Use browser tools to gather relevant context and reference material. Summarize findings and pause for approval.
STAGE 2 - DRAFT: Create the full document in Markdown. Pause for review.
STAGE 3 - EXPORT: Convert to all needed formats using pandoc and save to organized folders.
STAGE 4 - DEPLOY: Push to destination platform via browser automation and verify it renders correctly.
STAGE 5 - QA: Screenshot the final version, compare against the original spec, and report any discrepancies.

Use TodoWrite to track every substep. If any stage fails, diagnose the issue and propose 2-3 alternatives before asking me to intervene.
Video Production

Parallel Video Production Director

For AI video generation sessions. Generates multiple prompt variations, scores them against a rubric, and narrows to the best candidate.

Act as my video production director. I need a short video clip with these specifications: [DESCRIBE SCENE, TEXT, CHARACTER DETAILS]

Your workflow:
1) Write 5 different prompt variations optimized for the target platform (Veo/Sora/Seedance), each emphasizing different aspects.
2) Create a scoring rubric with weighted criteria based on my specs (text accuracy: 30%, character accuracy: 25%, visual quality: 25%, timing: 20%).
3) Generate all prompts and track which variations I should try in parallel.
4) After I share results, score each against the rubric and recommend the best one or suggest refined prompts for a second round.

Track everything in a todo list so we never lose progress.

Claude Code vs. Cowork — Which One Do You Need?

Both run the same Claude agent underneath and both can automate browser tasks, manage files, and run multi-step workflows. The difference is in how much control you want vs. how quickly you want to get started. Here's how to choose.

The evolution: Claude Code launched as a terminal tool for developers. Then Anthropic added it as a tab in the Claude Desktop app — same agent, visual interface. Weeks later, Cowork arrived as a separate tab in Claude Desktop, designed for non-technical users with a sandboxed VM and pre-built plugins. Boris Cherny (Claude Code’s creator) now uses Cowork for project management, Slack messages, email responses, and even paying parking tickets. Everything in this guide was done through Claude Code Desktop’s visual interface — but today, much of it could also be done in Cowork.

Choose Claude Code Desktop when

You want deep customization — CLAUDE.md rules, custom slash commands (/plan-first, /multi-platform), and hooks that shape how Claude behaves across every session. Cowork doesn’t have this level of configuration.

You deploy websites or run code — Git, Replit, Vercel, and code execution happen natively on your host machine. Claude Code was built for this.

You need direct system access — Claude Code runs directly on your machine with full terminal access. No VM sandbox means faster execution and more flexibility.

You run complex multi-step workflows — Claude Code is more battle-tested and reliable on long, multi-step operations. Cowork can stall on very complex tasks.

Choose Cowork when

You want to start fast — Cowork is the most approachable entry point for non-technical users. Same powerful agent, friendlier interface, zero configuration required.

Your work is file and document-centric — processing receipts into spreadsheets, organizing directories, synthesizing reports from multiple PDFs. Built-in Skills handle Excel, PowerPoint, Word, and PDF natively.

You want pre-built integrations — Slack, Canva, Figma, Box, and Clay plugins are ready out of the box. Browser automation is also available via the Claude in Chrome extension.

You prefer sandboxed safety — Cowork runs in an isolated VM by default, so it can only touch the folder you grant access to. Less risk of unintended changes to your system.

Why I chose Claude Code Desktop

It shipped first
Claude Code Desktop was available weeks before Cowork launched. By the time Cowork arrived, I was already 20+ sessions deep with a working workflow. Timing made the choice for me.
First mover
Too invested to switch
29 sessions of CLAUDE.md rules, custom slash commands, and deployment pipelines. Switching to Cowork would mean rebuilding all of that infrastructure from scratch.
Momentum
Deployment built in
Git, Replit, and Vercel workflows run natively in Claude Code. Cowork is adding connectors like Vercel, but Claude Code’s deployment workflows are more mature and deeply integrated — and that’s a big part of what I do.
Right tool for the job
I didn’t evaluate both tools and pick one. Claude Code Desktop shipped first, I went deep, and by the time Cowork arrived I’d already built my whole workflow. Both tools run the same agent underneath — if you’re starting fresh today, try both and see which fits how you work.

The Numbers Behind the Workflow

All of this data came from running the /insights command in Claude Code — a built-in feature that analyzes your usage patterns and gives you a personalized report.

212
Total Messages
29
Sessions
10.6
Msgs / Day
19/29
Successful Sessions
113
Files Touched
31
Wrong Approaches

Tool Usage

What Claude actually does under the hood — overwhelmingly browser automation.

Browser (MCP)
1,243
Bash
371
TodoWrite
186
Edit
100
Read
98
Write
93

Session Outcomes

65% of sessions fully or mostly achieved their goals — despite heavy reliance on fragile browser automation.

Fully Achieved
10
Mostly Achieved
9
Partially Achieved
7
Not Achieved
3

Friction Sources

Where time gets wasted. Wrong approaches dominate — Claude guessing wrong costs more than tools breaking.

Wrong Approach
31
Buggy Code
11
Tool Failure
8
Tool Errors
5
API Errors
4

When I Work

Peak productivity happens in the evening. Launch tasks after dinner, review results before bed.

Morning (6-12)
54
Afternoon (12-18)
46
Evening (18-24)
108
Night (0-6)
4