Field Manual
/ v1.0 — March 2026

The AI Transformation Playbook

About the author

I've spent the last six months deploying Claude Code in three settings: a 15-person solar company, the largest in its regional market (end-to-end AI operations); a profitable 30-person seed-stage SaaS marketplace preparing for Series A (Head of Product + AI Transformation Office); and several 0-to-1 product builds. This is my field report — layered on top of Hilary Gosher's AI Transformation Office framework at Insight Partners — adapted for growth companies and SMBs doing this without a dedicated AI team.

— John Hashem / john@hashbuilds.com

88% of organizations say they're using AI. Only 6% are getting value from it (McKinsey, 2025). The gap isn't technology — BCG's 10-20-70 rule attributes 10% of AI success to algorithms, 20% to technology and data, and 70% to people and processes. This playbook is about that 70%.

The closest analogy I have is coaching football. I was part of a high school program that won five straight provincial championships and became the number two team in the country. On paper, our talent got worse each year — if you compared rosters to their eventual college and pro results, you'd expect a decline. Instead we averaged 10 yards per snap. The reason was one counterintuitive decision: our linebackers played offensive line. Other programs would move a linebacker to running back or fullback. We put them at tackle. Unheard of. But it meant we could run schemes that scaled — a single play concept that branched into seven progressions.

The hard part wasn't the scheme. It was the buy-in. We were already winning. "Why change anything?" I had the head coach's support, but navigating the other coaches and players required earning trust through results, not authority. That's the exact dynamic in AI transformation. You probably won't have someone with built-in credibility walking in to push this change. So the plays in this playbook are designed for that reality — how you build trust tactically when you can't just decree it.

This playbook assumes everyone is aligned on the goal. If the goal is shared, all of this is just about how you get there without breaking trust along the way.

Before we go further

The terms you need

Most confusion in early AI deployment comes from people talking about different things. Several key terms here come from Gosher's Reimagine / Renew / Redo framework. I've adopted her language where it fits and added terms from my own work where it doesn't.

AI Transformation Office

Insight Partners' term for the dedicated function responsible for AI adoption across a company. Not a committee. Not a working group. A dedicated execution engine with its own mandate, reporting directly to leadership. At enterprise scale, this is a team. At seed-stage and SMBs, this is one person — but the function is the same. The key insight from Insight's framework: this cannot be bolted onto an existing role. It needs to be its own thing.

Reimagine / Renew / Redo

Gosher's three-phase framework for AI transformation. Reimagine: make hard strategic choices about where AI changes the game. Renew: upgrade people and capabilities — hiring, training, incentivizing experimentation. Redo: rebuild operations to decouple growth from headcount. This playbook follows the same progression adapted for companies without a 200-person engineering org.

Decouple growth from headcount

The core economic thesis of AI transformation — another term from the Insight Partners framework. Instead of hiring linearly as the company grows, AI makes existing people more productive and builds capabilities that scale without additional staff. This isn't about cutting jobs — it's about growing without proportionally growing the team. One person carrying two workstreams. Operations that used to scale linearly with headcount no longer needing to.

Claude Code

The system that gives you cross-company leverage. Not a great name — it sounds like it's only for writing code, but it's not. It's an AI operator that can read your entire codebase, connect to your existing tools, and execute work across any department. Whether you're a software company or not, this is the system that enables knowledge sharing, context management, and cross-functional execution. It's the base layer everything else in this playbook runs on.

Context files

Structured documents that capture how your company works — who does what, how decisions get made, what your brand sounds like, how your code is organized. These are the single most important asset you'll build. They're what makes AI output go from generic to company-specific. Think of them as the difference between hiring a smart stranger and hiring someone who's worked at your company for two years.

MCP (Model Context Protocol)

An open protocol that lets AI systems like Claude Code work with your existing tools — Gmail, Notion, Slack, your CRM, your database, your HR platform. You've probably used products built on MCP without knowing it. When we talk about MCPs in this playbook, we mean connecting Claude Code to the tools your team already uses so it can read, search, and act across them. Common first MCPs: Gmail (scan for receipts, invoices, client communication), Slack (search history, summarize channels), and your company's own API.

RAG system

A searchable knowledge base that AI can query. Most companies have enormous amounts of valuable information trapped in videos, documents, Slack threads, and people's heads. A RAG system makes all of that searchable and queryable — the CEO can ask 'what are our margins on project X?' and get an answer pulled from real data, with role-based access so sensitive information stays protected.

Sandbox environment

A controlled space where someone can use Claude Code without risk. They can experiment freely, but they can't touch production code or break anything outside their directory. All work that leaves the sandbox goes through a handoff document. This is how you give people the aha moment without the risk.

Context architecture

The overall system of organized knowledge across your company. Org charts, DRI databases, brand voice files, process documentation, codebase maps — all structured so any AI tool can reference them. This is the permanent foundation. The tools change every month. The context architecture stays.

"We went from eight different SaaS tools to one unified system in weeks. The AI knows our business now — it pulls real photos from our job sites, writes in our voice, and the CEO can ask it questions about margins. It changed how we operate as a company."

— SMB client, 15 employees, solar industry

Part 1

What you can do today

Before you hire anyone. Before you pick an AI provider. Before you set up a single tool. These are the things a CEO can start right now that will make every future AI deployment dramatically more effective. If someone asks me "what should we be doing now?" — this is the answer.

Exercise 1

The org chart that AI can read

This is the single thing I wish I had done first. I spent months without it and lost efficiency on everything.

This is not the org chart on your website. This is the org chart from the perspective of an AI system that needs to know: where do I find information about X? Who is the person responsible for Y? Who do I go to when I need context on Z?

Have each leader or manager provide the following for every person who reports to them:

For each person
  • Name and role
  • What they do day-to-day
  • What they do exceptionally well
  • What they do because they have to (not because they want to)
  • Who they report to and who reports to them

This seems simple. It's not. Most companies don't have this written down anywhere — it lives in people's heads. Getting it into a structured document gives any AI system (or any new hire, or any consultant) the ability to navigate the company in hours instead of months.

It also gives leadership a reason to involve their team early. The exercise itself is non-threatening — you're not asking about AI, you're asking about people. That matters.

Accelerator — how to do this with AI right now

If you already have Slack or email access, you can get a head start before anyone fills out a form. Connect the Slack MCP and prompt: "Go through the last 6 months of messages. Every time someone asks 'who handles X?' or 'who should I talk to about Y?' — log the question, the answer, and the person who answered. Build me an org chart from the perspective of who actually owns what."

This won't be perfect, but it gives you 70% of the picture in an hour. Cross-reference against the formal org chart and you'll immediately see where the real structure diverges from the official one. That gap is where the most valuable context lives.
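To make the accelerator concrete, here's a minimal sketch of the heuristic the prompt describes — mining "who handles X?" questions and treating the next reply as the answer. It assumes a Slack export flattened to simple {user, text} dicts; the function name and data shape are mine, not a real Slack API.

```python
import re

def mine_ownership(messages):
    """Scan chronological messages for 'who handles/owns X?' questions
    and treat the next reply as the answer — a rough first-pass heuristic."""
    pattern = re.compile(
        r"who (?:handles|owns|should i talk to about)\s+(.+?)\?", re.I
    )
    ownership = {}
    for i, msg in enumerate(messages):
        match = pattern.search(msg["text"])
        if match and i + 1 < len(messages):
            topic = match.group(1).strip().lower()
            # The person who answers is often the owner, or knows who is.
            ownership[topic] = messages[i + 1]["user"]
    return ownership

# Toy data shaped like a flattened Slack export (real exports carry more fields).
history = [
    {"user": "dana", "text": "Who handles refund requests?"},
    {"user": "sam", "text": "That's me — send them over."},
    {"user": "lee", "text": "who owns the content calendar?"},
    {"user": "ana", "text": "Ana does, ping her channel."},
]
print(mine_ownership(history))
# {'refund requests': 'sam', 'the content calendar': 'ana'}
```

This is exactly the kind of pass the AI runs for you — the value is in cross-referencing its output against the formal org chart, not in the heuristic itself.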

Exercise 2

The DRI database

DRI = Directly Responsible Individual. For every function your company performs, who owns it?

Ask each department lead to do two things over the next week:

1. Audit every function

List everything your department does. Not job descriptions — actual functions. "Process refund requests." "Score brand health in dashboards." "Review campaign performance weekly." "Respond to inbound partnership inquiries." Every "hey, who handles this?" email that's ever been sent — that's a function that needs a DRI.

2. Assign a name to each function

Not a team — a person. "Marketing handles that" is not a DRI. "Sarah owns campaign performance reporting" is a DRI. If nobody owns it, that's important to know too.

What you get: a complete map of what your company does and who's responsible. When the AI Transformation Office shows up (whether that's a hire, a consultant, or an internal champion), this is the first document they read. It tells them where the leverage is — which functions are consuming the most time, which have no owner, which are the "important but never urgent" work that AI can unlock.

Why this works as a first move

It's not about AI at all. It's about organizational clarity. Any CEO can justify this exercise without even mentioning AI — it's just good management. But it happens to be the exact input that makes AI deployment 10x more effective. And it gives people a chance to participate in the process before anyone feels threatened.

I learned this the hard way. I focused too much on top-down work early on — vision docs, mission statements, OKRs. That's not wrong, but it's not where you start. The org chart and DRI database are the foundation everything else builds on. Every month I didn't have it was a month of less efficient work.

Accelerator — how to do this with AI right now

Connect the Gmail MCP and prompt: "Search the last 12 months for every email where someone asks who handles something, who owns something, or who to talk to about something. Also flag any email that delegates a task or assigns responsibility. Build me a DRI database from what you find."

Same with Slack: "Search all channels for responsibility assignments, role clarifications, and 'who does X' questions. Cross-reference with the org chart context file and flag any functions that have no clear owner."

A note on security: Connecting email and Slack to AI is a real decision. Some companies start with Slack only (lower sensitivity), some go straight to email (maximum speed). Know the tradeoff and make it consciously.
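The "flag any functions that have no clear owner" step is just a set difference between the audited function list and the DRI assignments. A minimal sketch, with made-up data:

```python
# Toy data: functions surfaced by the department audit vs. DRI
# assignments recovered from email/Slack. Names are illustrative.
functions = {
    "process refund requests",
    "score brand health in dashboards",
    "review campaign performance weekly",
    "respond to inbound partnership inquiries",
}
dri = {
    "process refund requests": "Sam",
    "review campaign performance weekly": "Sarah",
}

# Set difference: every audited function with no named owner.
unowned = sorted(functions - dri.keys())
print(unowned)
# ['respond to inbound partnership inquiries', 'score brand health in dashboards']
```

The unowned list is usually the most valuable output of the whole exercise — it's where the "important but never urgent" work hides.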

The bigger point

Context-first means documentation-first

Most companies are behind on documentation. This approach eliminates that problem entirely.

When the org chart, the DRI database, the brand voice, the scoring systems, the roadmap all live in structured context files — you've inherently become a documentation-first company. The context files that power your AI workflows are also the living documentation that new hires read, that leadership references, that the whole company navigates by. Most companies never get there because writing docs is tedious and always gets deprioritized. Context-first solves both problems at once.

Part 2

The big decisions

Gosher's Reimagine phase starts with hard strategic choices. At enterprise scale, that's platform strategy and pricing models. At your scale, the strategic choices are more structural — and getting them wrong means fighting the org chart for months.

Decision 1

Internal champion vs. external hire

Both work. Both have traps. Know which one you're dealing with.

Before choosing: be clear about the purpose. This role exists to decouple growth from headcount. Not to cut jobs. Not to replace people. To make the existing team capable of output that would have required twice the staff a year ago. Every conversation about this role should start there. If the team doesn't understand that framing, every win you deliver will feel like a threat instead of an unlock.

Before you start — a prerequisite

Decoupling growth from headcount only works when your roster is already your roster. You should be in the position of saying: we have a great team, these are A-players, and instead of expanding the team for growth we want this team to do more. That's when it's time.

If you have culture problems, underperformance issues, or people who aren't invested in the company succeeding — sprinkling AI on top is not going to fix it. Context-first development only works when you have people who are constantly learning, who care about finding the best ways to solve problems, and who want to be around long-term. This initiative is going to be the most collaborative thing your team has ever done. It requires people who are ready for that.

The message has to be clear: the whole point is to get the team using AI together — not individuals using AI in isolation. The Transformation Office unifies that into one system where everyone's contributions make everyone else's output better.

A note on the tool itself

I typically work with software companies or companies where the website is the center of the business. In that context, Claude Code is the superior tool because your app or website is enormous leverage — having your context files alongside your codebase, accessed through VS Code, is the most powerful setup I've found for getting A-players across the company to contribute to the roadmap. There is a product called Claude Coworker for non-technical users, but it's not nearly as powerful.

Right now I work across six live codebases — all generating revenue, all with real users — as a solo developer who hasn't written a line of code by hand. But the point isn't what I can do. It's that the people I've onboarded — a COO who used to be a roofer, a videographer, a growth marketer — are doing meaningful work in these codebases too. The barrier is dramatically lower than people think.

Brenden — COO of a solar company, former roofer

Three one-hour sessions. That's what it took. He's now using Claude Code comfortably — querying old sales data from Pipedrive, understanding the database schema, knowing what he can safely read versus what he shouldn't touch yet. He understands the importance of the context system and knows that in the future he'll be able to do even more with it. His entire workflow has changed.

A videographer with a leads website

One hour. One video session. Before the session was over he said: "I don't even need the second session. Once it's set up, I can just ask Claude Code for the things I need to know." He got it immediately.

People need the push to download VS Code, clone the repo, and write their first prompt. Once they do, they get it. The sandbox (Part 3) gives them guardrails. This is for anyone who understands the business — not just developers.

Internal champion

Someone already at the company who gets chosen (or volunteers) to lead AI adoption. They know the people, the politics, the history. Nobody's threatened by them.

Advantage: Trust is pre-built. People will help them because they're "one of us."
Trap: Less authority to push changes. Can get stuck in their existing role's gravity. Needs explicit air cover from the CEO.

External hire / consultant

Someone brought in specifically for AI transformation. They have the skills and the mandate. But they're the new person.

Advantage: Clear mandate. No existing role conflict. Can move fast because they were hired to.
Trap: Everyone thinks you're the Office Space consultant. They assume you're there to cut jobs. You have to overcome that before you can do anything.

I've been both. The plays are different. When you're internal, you can help people help you — the trust is there, you just need the mandate. When you're external, you need the big win fast. You need to prove you're there to make the team more powerful, not to replace them. The Eisenhower quadrant play (Part 3) is your best friend in that first week.

The person who runs it

What makes someone good at this role

This role is new. There's no credential for it, no career path that obviously leads here, no professional designation that signals "this person knows how to deploy AI across an organization." If someone has already built a massive AI-transformed company, they're probably the CTO or CEO of that company now — they're not looking for this job. So you're hiring for qualities, not a resume.

Trust — above everything

This is the single most important quality. The person running the AI Transformation Office is going to touch every department, access sensitive systems, and change how people work. If the team doesn't trust them, none of the plays in this playbook matter. They either need to come with trust already built (internal champion who's earned credibility) or be the kind of person who earns it fast (authentic, transparent about what they're doing and why).

Cross-functional fluency

They need to speak engineering, design, marketing, and ops — not fluently in every one, but enough to understand what each team values and what each team fears. The plays in Part 3 all require reading the room in a specific department. Someone who only understands one discipline will win in that department and stall everywhere else.

Builder instinct

The entire approach in this playbook is show, don't tell. That requires someone who can actually build — prototypes, context files, integrations, working demos. Not necessarily a senior engineer, but someone who can use Claude Code to produce real things in a real codebase. The person who writes a proposal when they could build a prototype is the wrong person.

Patience with people, impatience with process

They need to move fast on the technical side — building, prototyping, validating — while moving slowly on the human side. Rushing adoption breaks trust. Rushing builds creates momentum. The best person for this role is comfortable holding both speeds at once.

Systems thinking

The compound intelligence in Part 4 only works if each piece is built knowing it will feed the others. A context file isn't just for one department — it's part of a system that every future build will reference. The person who builds isolated tools that don't connect to anything will create AI islands, not AI transformation.

The trust problem unique to this moment

Back to the football analogy. I played professionally, and when I came back to coach high school, trust was immediate — everyone knew my background, nobody questioned whether I knew what I was talking about. That let me create massive change quickly because nobody was filtering my suggestions through skepticism.

AI transformation doesn't have that yet. There's no "professional credential" that instantly communicates "this person has done this before and knows what works." The field is too new. So the trust has to come from somewhere else — and the most reliable way to manufacture it is to use the tools themselves. Build something real in the first week. Show results that can't be ignored. The plays in Part 3 — the Eisenhower win, prototype don't present — are specifically designed to create trust fast when you don't have the luxury of a credential doing it for you.

In a few years there will be people with track records of doing this at multiple companies. Right now, you're hiring for the qualities above — and the best interview is asking them to build something in your codebase with Claude Code in front of you. If they can do that and explain what they're doing in terms the team would understand, they're the right person.

Decision 2

Keep the Transformation Office separate from people management

This is non-negotiable. I learned it the hard way.

The person running the AI Transformation Office cannot also be the person responsible for growing the team. As Head of Product, my job was to grow a team, build people up, make them feel invested. As the AI transformation person, I was in the next meeting talking about decoupling growth from headcount. People aren't stupid — they see both sides. And the moment they do, the trust you need for transformation evaporates.

The Transformation Office needs to be its own function — an execution engine focused purely on making the existing team more powerful. Not someone who also makes staffing or roadmap decisions. The two functions require fundamentally different kinds of trust.

Decision 3

What you're actually building

Most companies want "AI." That's not specific enough. Here are the actual categories.

1. Context architecture

The foundation. Everything else depends on this. A structured file system that tells AI how your company works.

CLAUDE.md                        ← Master index (AI reads this first)
│
├── context/
│   ├── org-chart.md             ← Who does what, reports to whom
│   ├── dri-database.md          ← Every function → responsible person
│   ├── brand-voice.md           ← How the company sounds
│   └── scoring-systems.md       ← How decisions get made
│
├── departments/
│   ├── engineering/
│   │   ├── DEPARTMENT.md        ← Index for engineering context
│   │   ├── architecture.md      ← System design, key services
│   │   ├── api-patterns.md      ← How APIs are built here
│   │   └── tech-debt.md         ← Known issues, priorities
│   │
│   ├── marketing/
│   │   ├── DEPARTMENT.md        ← Index for marketing context
│   │   ├── campaign-patterns.md ← Ad buyer's reusable structures
│   │   ├── content-calendar.md  ← What's planned, what's live
│   │   └── brand-guidelines.md  ← Design system, voice, assets
│   │
│   └── customer-success/
│       ├── DEPARTMENT.md
│       ├── health-scoring.md    ← How brand health is measured
│       └── escalation-paths.md  ← Who handles what, when
│
└── operations/
    ├── roadmap.md               ← Current priorities
    ├── hiring-criteria.md       ← What "good" looks like per role
    └── vendor-relationships.md  ← Key partners, contracts, contacts

Every file is living documentation. When the marketing lead updates their campaign patterns, the AI immediately produces better work. The context files ARE the documentation.
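For reference, here's a sketch of what the master index might contain. This is hypothetical content — the company name, paths, and rules are placeholders you'd replace with your own:

```markdown
# Acme Solar — Master Index

Read this first. Every path below is relative to the repo root.

## Start here
- `context/org-chart.md` — who does what; check before assigning any task
- `context/dri-database.md` — one named owner per function
- `context/brand-voice.md` — required reading for anything customer-facing

## Rules
- Customer-facing copy must match `context/brand-voice.md`.
- Never change files under `departments/` without flagging the DRI.
- When a question has no documented answer, say so — don't guess.
```

The point isn't the exact format — it's that the index tells the AI (or a new hire) where to look before doing anything else.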

2. Internal RAG system

A searchable knowledge base built from everything your company has ever produced. Role-based access controls who sees what.

┌─────────────────┐     ┌──────────────┐     ┌─────────────────┐
│  SOURCE FILES   │     │  INGESTION   │     │  KNOWLEDGE BASE │
│                 │     │              │     │                 │
│ Videos ─────────┼────▶│ Transcribe   │     │ Chunks with     │
│ Documents ──────┼────▶│ Classify     ├────▶│ embeddings      │
│ Slack threads ──┼────▶│ Tag          │     │                 │
│ Meeting notes ──┼────▶│ Chunk        │     │ Semantic search │
│ Email history ──┼────▶│ Embed        │     │ enabled         │
└─────────────────┘     └──────────────┘     └────────┬────────┘
                                                      │
                                             ┌────────▼────────┐
                                             │  ROLE-BASED     │
                                             │  ACCESS LAYER   │
                                             │                 │
                                             │ CEO: full access│
                                             │ Mgr: dept only  │
                                             │ Team: filtered  │
                                             └────────┬────────┘
                                                      │
                                                      ▼
                                             "What are our margins
                                              on the Johnson project?"

Every video, document, and conversation becomes queryable. The CEO asks a question and gets an answer from real data — not from someone's memory of a meeting three months ago.
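The critical design point in the diagram is that access control filters before retrieval. Here's a deliberately tiny sketch of that shape — retrieval here is naive word overlap, where a production system would use embeddings and semantic search, but the access layer works the same way. All names and data are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    dept: str  # which department the source material belongs to

# Toy knowledge base; a real one holds embedded chunks of transcripts,
# documents, and Slack threads.
KB = [
    Chunk("Johnson project margin came in at 22% after permitting delays.", "finance"),
    Chunk("Q3 campaign: solar-rebate keywords outperformed brand terms.", "marketing"),
]

ACCESS = {"ceo": {"finance", "marketing"}, "marketing_mgr": {"marketing"}}

def query(question: str, role: str) -> str:
    """Return the best-matching chunk the role is allowed to see."""
    # Filter BEFORE retrieval so restricted chunks never enter scoring.
    allowed = [c for c in KB if c.dept in ACCESS[role]]
    words = set(question.lower().split())

    def score(c):  # naive overlap stand-in for semantic similarity
        return len(words & set(c.text.lower().split()))

    best = max(allowed, key=score, default=None)
    return best.text if best and score(best) > 0 else "No accessible answer."

print(query("what were margins on the Johnson project", "ceo"))
print(query("what were margins on the Johnson project", "marketing_mgr"))
```

The CEO gets the margin figure; the marketing manager gets "No accessible answer" — not because the model refused, but because the finance chunk was never retrievable for that role.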

3. MCP integrations

Connections between Claude Code and your existing systems. These let AI read, search, and act across the tools your team already uses.

                  ┌──────────────────┐
                  │   Claude Code    │
                  │  + CLAUDE.md     │
                  │  + Context files │
                  └────────┬─────────┘
                           │
        ┌──────────┬───────┴─────┬─────────────┐
        │          │             │             │
   ┌────┴────┐ ┌───┴───┐    ┌────┴───┐   ┌─────┴──────┐
   │  Gmail  │ │ Slack │    │ Notion │   │ Company    │
   │  MCP    │ │ MCP   │    │ MCP    │   │ API (MCP)  │
   └────┬────┘ └───┬───┘    └────┬───┘   └─────┬──────┘
        │          │             │             │
   Scan emails   Search       Read docs     Query CRM,
   for receipts, history,     identify      read analytics,
   invoices,     map DRIs,    gaps,         act on company-
   client comms  find         cross-ref     specific data
                 patterns     vs code

  ┌─────────┐   ┌──────────┐   ┌────────────────┐
  │ Keywords│   │ BambooHR │   │ Google Search  │
  │ MCP     │   │ MCP      │   │ Console MCP    │
  └────┬────┘   └────┬─────┘   └──────┬─────────┘
       │             │                │
  Keyword        Org chart,       Indexing status,
  research,      employee data,   search performance,
  competitor     hiring pipeline  crawl monitoring
  analysis

Every MCP is a bridge to a system your team already uses. You're not replacing tools — you're connecting them to an intelligence layer that can work across all of them.
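In Claude Code, MCP connections are typically declared in a project-level config file. The sketch below shows the general shape — check the current Claude Code documentation for the exact schema, and note that the server package names, script paths, and env variable names here are placeholders, not real packages:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "your-slack-mcp-server"],
      "env": { "SLACK_TOKEN": "${SLACK_TOKEN}" }
    },
    "company-api": {
      "command": "node",
      "args": ["./mcp/company-api-server.js"]
    }
  }
}
```

Checking a file like this into the repo is what makes the integrations shared: everyone who clones the codebase gets the same bridges, not their own private setup.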

4. Automated pipelines

Scheduled systems that run without human intervention. These are the "set and forget" capabilities — cron jobs, loops, and event-triggered workflows.

Example: Daily SEO Article Generation

┌──────────┐     ┌──────────────┐     ┌──────────────┐
│ CRON JOB │     │ Keywords MCP │     │ Brand Voice  │
│ 2 AM     ├────▶│              ├────▶│ Context File │
│ daily    │     │ Research     │     │              │
└──────────┘     │ trending     │     │ Match tone,  │
                 │ keywords     │     │ style, pillar│
                 └──────────────┘     └──────┬───────┘
                                             │
                                    ┌────────▼────────┐
                                    │ GENERATE BRIEF  │
                                    │                 │
                                    │ Topic + keywords│
                                    │ + brand voice   │
                                    │ + target length │
                                    └────────┬────────┘
                                             │
                 ┌──────────────┐    ┌───────▼─────────┐
                 │ Media System │    │ WRITE ARTICLE   │
                 │              │◀───┤                 │
                 │ Find real    │    │ Pull matching   │
                 │ photos from  │    │ images from     │
                 │ job sites    │    │ media library   │
                 └──────────────┘    └────────┬────────┘
                                              │
                                     ┌────────▼────────┐
                                     │ PUBLISH / NOTIFY│
                                     │                 │
                                     │ → Auto-publish  │
                                     │ → Email PMM for │
                                     │   approval      │
                                     └─────────────────┘

Example: Interview Screening Pipeline

┌──────────┐     ┌──────────────┐     ┌──────────────┐
│ LOOP     │     │ BambooHR MCP │     │ Company      │
│ every    ├────▶│              ├────▶│ Context      │
│ 6 hours  │     │ Check for    │     │              │
└──────────┘     │ new          │     │ Role reqs,   │
                 │ applicants   │     │ team needs,  │
                 └──────────────┘     │ culture fit  │
                                      └──────┬───────┘
                                             │
                                    ┌────────▼────────┐
                                    │ SCREEN RESUMES  │
                                    │                 │
                                    │ Full company    │
                                    │ context, not    │
                                    │ keyword matching│
                                    └────────┬────────┘
                                             │
                                    ┌────────▼────────┐
                                    │ 60 → 5          │
                                    │ candidates      │
                                    │                 │
                                    │ → Ranked list   │
                                    │ → Send to       │
                                    │   hiring mgr    │
                                    └─────────────────┘

The job itself might not be Claude Code — it could be a cron job or a scheduled function. But it calls your MCPs, references your context files, and produces output that would have required a person to do manually every day.
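The orchestration layer is usually a small script. Here's a sketch of the SEO example's skeleton — every function is a hypothetical stand-in for an MCP call, a context-file read, or a model call, so the names and return values are mine, not a real API. A one-line cron entry (e.g. `0 2 * * *`) would trigger it daily.

```python
def research_keywords():
    """Stand-in for a Keywords MCP call."""
    return ["solar rebate 2026", "panel cleaning cost"]

def load_context(path):
    """Stand-in for reading a context file like context/brand-voice.md."""
    return "Plainspoken, contractor-to-homeowner tone."

def generate_article(keyword, voice):
    """Stand-in for a model call that takes the brief plus brand voice."""
    return f"[{voice[:12]}...] Draft article targeting '{keyword}'."

def run_daily_seo_job(auto_publish=False):
    voice = load_context("context/brand-voice.md")
    drafts = [generate_article(kw, voice) for kw in research_keywords()]
    # Ship directly, or queue for a human to approve — same pipeline either way.
    return {"published" if auto_publish else "pending_approval": drafts}

result = run_daily_seo_job()
print(result["pending_approval"][0])
```

The interview-screening pipeline has the identical skeleton — swap the keyword research for a BambooHR poll and the article generation for resume scoring against the company context file. That's the point of the category: one orchestration pattern, many jobs.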

5. Bespoke tools — playing offense

Categories 1–4 make your existing company smarter. Category 5 is about going on offense. Once you have context architecture and MCPs, the question becomes "what is the most amount of intelligence we can give to the greatest number of people?"

This is the ultimate decoupling of growth from headcount. Building things that wouldn't have existed at all — products that were never on the roadmap because they seemed too niche or "we don't have the bandwidth." Post-AI, you already have the distribution and the domain knowledge in your context files. The only thing that was missing was the ability to build fast enough to make the bet rational.

Example — A SaaS company with existing distribution

Before: Customers keep asking for a feature that's not on the roadmap. Engineering says it's a 6-month build with two dedicated engineers. PM says no. "We're focused on the core platform."

After: The transformation lead has the context architecture — domain knowledge, data schemas, requirements from support tickets via MCP. They build an MVP in two weeks. Real product, same database, same auth, same brand guidelines. Ships on a subdomain. Sales shows it to customers the same month.

Nothing about headcount changed. Nothing about the engineering roadmap. A new product line from distribution they already had.

This is why bespoke tools are category 5, not category 1. You need the context architecture, the MCPs, and the adoption foundation before you can play offense. But once you have it — you're not just making your company more efficient. You're leveraging your existing distribution to ship products that your competitors can't even resource.

Breaking "AI" into these categories changes how teams think about what to build. A CEO who says "we need AI" doesn't know what they're asking for. A CEO who says "we need a context architecture and two MCP integrations before we build the RAG system" — that person can sequence a deployment.

Decision 4

What the transformation lead needs on day one

When the consultant shows up or the internal champion gets the green light, these are the access requirements. Have them ready.

Codebase access
Risk: Low

Full clone, not read-only. AI tools need to read every file locally.

Slack access token
Risk: Medium

Search message history, identify DRIs, map how the company actually operates.

Notion / Confluence access token
Risk: Low

Read existing docs, identify gaps, cross-reference against the codebase.

Gmail / email access (OAuth)
Risk: High — discuss explicitly

Most powerful and most sensitive. Email reveals responsibility assignments, client communication, and internal priorities.

Company API access (if applicable)
Risk: Varies

Custom MCPs can be built for any system with an API — CRM, analytics, internal tools.

HR platform access (read-only)
Risk: Medium

BambooHR, Rippling, or equivalent. Automates the org chart exercise.

Every system you don't connect is context the AI doesn't have. Some companies give full access immediately. Others start with codebase and Slack, then expand as trust builds. The tradeoff is always speed vs. exposure.

Part 3 — Renew

The plays

Gosher's Renew phase is about people and capabilities — hiring AI natives, changing mindsets, incentivizing experimentation. At enterprise scale, that's training programs and hackathons. At your scale, it's personal. These are specific moves for specific situations — plays you run when you encounter them.

The Eisenhower win

First week. Especially if you're the external hire and need credibility fast.

The move

Find the feature or improvement that's been on the roadmap for quarters but nobody can justify pulling engineers for. Build it in the shadows. Demo it as a working prototype — not a deck, not a mockup. A functioning thing in the real codebase.

Why it works

You're not replacing anyone's work. You're doing work that wasn't getting done. That's the safest possible first win — nobody loses, the company gains, and everyone sees what's possible. After this, the question flips from 'should we use AI?' to 'where do we deploy it next?'

From the field

There was a critical feature at risk of never getting touched because leadership was focused on the main AI experience. I built the entire thing as a live prototype — front-end components, UI flows, mobile-responsive, all in the real codebase. Leadership could see it working, which resolved most open questions about what to ship. When it hits engineering, it's properly scoped. When it hits design, they focus on what matters for launch. It moved an entire quarter of discussion into a working thing people could react to.

Make it their idea

You need a skeptic to engage with AI, but asking directly would trigger resistance.

The move

Set up a problem in their domain on your machine. Ask for their help casually — 'Hey, I'm trying to make these pages look better, you're the design expert, what would you change?' Let them start talking naturally. Then turn on the AI transcription so it captures their input. Show them the output. Watch the lightbulb turn on.

Why it works

The moment someone sees their own expertise producing better output through AI, the framing shifts from 'this replaces me' to 'this amplifies me.' You can't lecture someone into that realization. They have to experience it. And it works best when they think they were just helping you.

From the field

Need the designer to engage with prototyping? Don't schedule a training session. Set up a prototype on your screen, ask 'can you help me with something — I'm trying to make this layout work and I'm stuck.' They start giving feedback, you show them how the AI implements it in real-time. They walk away thinking about what else they could do with this.

Show it, then share it

You need the team's expertise flowing into the context system, but you can't just ask for it. You need to earn it.

The situation

You can get surprisingly far on your own. Claude Code can grep the entire codebase to identify patterns. Slack and email (via MCP) show how things actually work. You can reverse-engineer a decent baseline of any department's patterns from what already exists.

But the most valuable knowledge isn't in files. It's in how people think — the designer's philosophy, the engineer's vision for where the architecture is heading, the marketer's intuition about what converts. You have two layers: the baseline you can build yourself, and the depth that requires people to opt in.

The move

Pick a cross-functional project that proves the system works. The best first proof is something simple: generate a blog post that uses the engineering standards to build the page, the design principles to style it, and the brand voice to write it — all from context files you've assembled yourself. When the output looks right, reads right, and is built in the company's actual codebase using their actual patterns — that's your benchmark. You just proved, between you and the CEO, that the context system works.

Now show it. Not in an all-hands — in a one-on-one. "Look at this. It used your design patterns. It used your tone of voice. It was built in our codebase following our engineering standards. I assembled what I could from existing work. But imagine how much better this would be with your actual input on where things are heading."

The ethos is always the same: show results so good and so quickly that they cannot be ignored. That's the approach to every situation in this playbook. Not asking for permission. Not scheduling a training session. Showing something real and letting the quality of the output create the pull.

Why it works

Once people see the output, the conversation shifts. They stop thinking "this person is trying to automate my job" and start thinking "this system is good, but it's missing something I know." That's when they opt in — not because you asked, but because they want the output to be better. It becomes a barter: each role contributes their expertise, and in return gets a system that multiplies their own output. Everyone creates more value. Nobody loses anything.

The reframe that matters

If someone isn't engaging with the context system, that's rarely a people problem. It's almost always a positioning problem. The AI Transformation Office didn't put them in a position to see the upside. They didn't get their own win. They didn't see what they could build with the system that they couldn't build without it. The job is to figure out which play creates that moment for each person — and how the CEO wants to balance speed of adoption with the culture they're building.

What each role contributes — and gets back

The shape of the context file depends on the role, and the depth depends entirely on where the company is heading. What you can reverse-engineer from existing artifacts is the starting point. What the talented people contribute is what takes it from functional to genuinely intelligent.

Design & Brand

What only they can give you: The philosophy behind every design decision — the paradigm that guides which patterns get used and which get retired. That's what turns a context file from a style guide into a design intelligence system.

What they get back: The ability to prototype in working code. Their expertise becomes code without them writing a line.

Engineering

What only they can give you: Where the architecture is going. The vision that doesn't exist in the current code.

What they get back: Better alignment across the team on every commit. Back-end engineers moving to full-stack more confidently. Internal UI that actually matches the design system.

Product & Marketing

What only they can give you: The intuition. Why certain campaigns work and others don't in ways the data doesn't fully explain. The judgment calls that make the difference between competent output and output that actually converts.

What they get back: Content generation that's already on-brand. The ability to produce at a volume that would have required a full team — while they focus on strategic decisions.

The counterintuitive part

Most people's first instinct is: "Great, now our engineers can build nicer-looking software because the designer gave them a brand file." That's true, but it's not the highest-leverage move.

Claude Code is better at writing code than it is at visual design right now. Engineers gain incrementally from context files — better alignment, better internal tooling, smoother cross-stack work. Those gains are real but predictable.

The highest leverage is the other direction. This is the offensive line analogy again. Developers are like offensive linemen in a company — they move everything forward. But you don't win championships by only investing in the biggest kids. The people with the most to gain are the non-technical people who deeply understand the business — product managers, designers, ops leads. They're the athletes. They're fast, flexible, and they understand the game. They know the customer. They know the market. They know what should exist. What they've never had is the ability to build it themselves.

Context files plus Claude Code gives them that ability. You don't need them to write syntax or memorize frameworks — you need to put them in a position where they can do things they never thought possible. The person closest to the customer builds the thing the customer needs.

Example — the mobile app that wasn't on the roadmap

A SaaS company has a large front-end application and a separate API. One side of their marketplace has been asking for a mobile app for months. Engineering says it's a Q3 project at the earliest — they'd need to hire a mobile developer.

But the product manager who owns that side of the marketplace understands every user flow, every edge case, every business rule. With engineering context files that document the API patterns and auth system, plus design context files that capture the brand and interaction standards, that PM can prototype a working mobile app in a sandbox. Not a mockup — a real app that talks to the real API, follows the real design system, handles the real edge cases.

That's decoupling growth from headcount again. The intelligence was in the PM's head. The engineering standards were in the context files. The code was written by Claude Code. No mobile developer hired. No engineering roadmap disrupted. A new product surface shipped by the person who understood it best.

This is why the engineering context files need to be tight when non-engineers are building. If the goal is just rapid prototyping for engineering to take over, a loose context file works fine. But if the goal is for product managers and designers to build production-quality software — and it should be, because they're the ones with the domain knowledge — then the engineering standards need to be precise enough that anything built in a sandbox naturally aligns with the existing codebase. The prototype doesn't need to be rewritten. It's already built right.

What developers do instead — playing defense

This isn't all offense. As more non-technical people commit to the codebase, the developer's role shifts — and it shifts to higher-leverage work. More contributors means more need for automated test pipelines, code quality checks, security hardening, CI/CD infrastructure, and merge workflows. The PM prototyping in a sandbox can't set up AWS credentials, can't configure deployment pipelines, can't architect the security layer. That's the new bottleneck — and it's where developers become more valuable, not less.

Think of it as clearing capacity from feature work so developers can build the infrastructure that makes everyone else's contributions safe and shippable. More QA automation. Better DevOps. Stronger guardrails. The developer isn't doing less — they're playing defense so the rest of the team can play offense.

The typical path: product managers and designers become the first "product engineers." Not because they learned to code — because the context system made coding a byproduct of understanding the business. That's the knowledge-sharing flywheel. Everyone contributes their expertise, everyone gets more capability back. The system gets smarter with every contribution, and every person in the company becomes more dangerous.

Prototype, don't present

You've spotted an opportunity — a new product arm, a campaign type nobody's explored, a feature that could open a new market. This is especially powerful in your first weeks when you're looking at the business with fresh eyes and seeing things the team has been too busy to notice.

The move

Don't bring it up in a meeting. Don't write a proposal. Validate it yourself first — with Claude Code and the codebase, you can test whether the existing data supports it in hours. If it doesn't, find out what's missing and whether a third-party API can fill the gap. Get a test account, wrap the API, validate the data. Then build it as an actual button in the actual app — connected to real data, styled with the real design system, functioning inside the real product. When you demo it to the CEO, there are no questions about feasibility. They can see it. It's already built.
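The "wrap the API, validate the data" step usually amounts to one thin normalization layer: map the provider's raw payload onto your product's schema, then sanity-check it before any UI work. A minimal sketch, where the provider response, field names, and target schema are all hypothetical:

```python
# Sketch of wrapping a third-party data provider before building the UI.
# The provider payload, field names, and schema here are all invented
# for illustration, not a real API.

RAW_PROVIDER_RESPONSE = {
    "results": [
        {"biz_name": "Acme Solar", "score_pct": "87", "region_code": "NE"},
        {"biz_name": "Beta Homes", "score_pct": "62", "region_code": "SW"},
    ]
}

def normalize(raw: dict) -> list[dict]:
    """Map the provider's field names and types onto our schema."""
    return [
        {
            "name": row["biz_name"],
            "score": int(row["score_pct"]),   # provider sends numbers as strings
            "region": row["region_code"],
        }
        for row in raw["results"]
    ]

def validate(rows: list[dict]) -> bool:
    """Cheap sanity check before the data touches the real product."""
    return all(0 <= r["score"] <= 100 for r in rows)

rows = normalize(RAW_PROVIDER_RESPONSE)
assert validate(rows)
```

Once the normalized data passes validation, wiring it into a real button in the real app is the easy part — the risk was in the data, and it has already been retired.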

Why it works

The typical path: someone raises an idea, it gets discussed, a brief gets written, meetings get scheduled, a PRD gets drafted, it goes on the roadmap for next quarter. Six weeks before anyone sees anything real. The prototype path: you validate it in parallel with your other work, the CEO sees a working feature inside the product, and the conversation skips straight to cost versus return.

From the field

In my first week at a SaaS company, I noticed a campaign type we weren't pursuing. Our data couldn't support it — so I found a third-party provider, got a test account, wrapped their API, and validated the signals. Then I built it as an actual feature inside the platform — real button, real data, styled to match. When I demoed it to the CEO, his response wasn't 'let's schedule a meeting' — it was 'can you put this in our AI product?' The whole thing happened in parallel with everything else. That's decoupling growth from headcount.

The sandbox moment

Someone important (CEO, team lead, key contributor) is ready to try, but you need to contain the risk.

The move

Set them up with Claude Code in a sandboxed environment — a dedicated directory with hard boundaries. They can experiment freely but can't touch anything outside their space. Every piece of work that leaves the sandbox goes through a handoff document that a developer reviews.
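One way to make the hard boundary enforceable is a small check that runs before any handoff: every changed file must resolve inside the sandbox directory. A minimal sketch, assuming a hypothetical `sandbox/jane` directory and a list of changed file paths:

```python
from pathlib import Path

# Hypothetical sandbox root: the dedicated directory you created
# for the new contributor.
SANDBOX_ROOT = Path("sandbox/jane").resolve()

def inside_sandbox(path: str) -> bool:
    """Return True if `path` resolves to somewhere under SANDBOX_ROOT."""
    resolved = Path(path).resolve()
    return resolved == SANDBOX_ROOT or SANDBOX_ROOT in resolved.parents

def check_handoff(changed_files: list[str]) -> list[str]:
    """Return the changed files that escape the sandbox.

    An empty result means the handoff is safe to pass to developer review.
    """
    return [f for f in changed_files if not inside_sandbox(f)]

# Example: one file escapes the boundary and gets flagged.
violations = check_handoff([
    "sandbox/jane/prototype/app.py",
    "src/billing/invoices.py",   # outside the sandbox -> flagged
])
```

A check like this can sit in a pre-commit hook or in the handoff script itself, so the developer reviewing the handoff document only ever sees work that stayed inside the lines.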

Why it works

This is the aha moment. The first time someone builds something useful with AI — something that would have taken them weeks or never happened at all — they become a permanent advocate. You need this to happen in a controlled space so the excitement doesn't create chaos. The sandbox is what lets you say 'yes' to everyone without risking the codebase.

From the field

A coaching client I set up with a sandboxed environment told me: 'once I got going, I could just talk to Claude Code about the mentorship I needed.' Self-sufficient within a week. That's the trajectory you want for every person you onboard.

The first ceremony

You have a handful of people who want in. They've seen the demos, they've had their aha moment, and they're ready to try it themselves. This is the moment you formalize it.

The move

Prepare an onboarding guide first — GitHub account, VS Code, Claude Code extension, cloning the repo, running locally. You only need to get this working on one person's machine; then Claude Code helps you write the spec for everyone else. Test it clean: someone should be able to pull the code and see something working.

Then run the ceremony. Everyone in a room with machines set up. Run the exercise live: "Marketing lead, change the brand voice file. Now." Everyone watches. "Designer, change this design token. Go." Pull the changes. See what happened. Not a lecture. Not a demo they watch. An exercise they do. The best version has real problems queued up from the roadmap. When someone solves a real problem in their first session, they're hooked.

Why it works

Individual aha moments are powerful but fragile. The ceremony turns scattered interest into shared momentum. Everyone sees everyone else doing it. The skeptic watches the designer change a component. The designer watches the marketing lead update brand voice and sees it ripple through generated content. It stops being 'that AI thing John is doing' and becomes 'how we work now.' The energy in the room when people realize they can actually do this — that's what you can't manufacture one-on-one.

From the field

The onboarding will hit snags — git authentication, environment variables, someone's machine missing a dependency. That's fine. That's why you do it together. The AI Transformation Office is there to unblock in real time. The goal isn't perfection — it's getting everyone past the setup friction in one session so they never have to face it alone. After the ceremony, they have a working environment and the muscle memory of having done something real. Everything after that is iteration.

Find your champions early

First two weeks. Before you try to go company-wide.

The move

Identify 2–3 people across different departments who are naturally curious. Give them the sandbox, the onboarding guide, the context files. Let them run. Their enthusiasm will do more for adoption than any all-hands presentation you could give.

Why it works

Top-down mandates create compliance, not adoption. You want pull, not push. The champion in customer success who automates their dashboard review becomes the best advertisement for the marketing team to try it. Organic spread beats forced rollout every time.

From the field

The growth marketing lead saw the SEO content engine running — articles being generated on-brand, published automatically, ranking within weeks. He didn't ask 'how does it work?' He asked 'can you train me on Claude Code?' He realized that if the SEO side could be automated, he could redirect his time entirely to ad strategy — the high-leverage work he'd been wanting to focus on. Within a week he was using it himself. I didn't have to sell adoption. The output sold it for me.

Speak their language, not yours

You're meeting with a specific department for the first time about AI.

The move

Map AI capabilities to their specific pain points using their terminology. Don't talk about 'context files' and 'MCP integrations' — talk about 'making your best campaign patterns reusable' or 'automating the dashboard reviews that eat 5 hours a day.' Every department has different fears and different incentives. Learn them before you walk in.

Why it works

Every department has a different fear — and none of them sound like "I'm afraid of AI." They sound like "you need to involve [person] because that's their job." Each fear requires a different play.

From the field

When Customer Success was spending 5 hours a day manually scoring brand health, I didn't say 'let's build an AI automation.' I said 'I read the database, I think I understand your scoring criteria — can I show you something?' Built the algorithm, validated it matched their manual output, then handed engineering a working prototype. The team felt supported, not replaced.

Part 4 — Redo

Compound intelligence

Gosher's Redo phase: rebuild operations. At enterprise scale, she describes decoupling growth from headcount across entire business units. At your scale, this is where individual plays start feeding each other — and AI stops being a tool and becomes an operational capability.

Every piece you build should increase the intelligence of the overall system. The RAG system makes the SEO engine smarter because it has access to real company knowledge. The media pipeline makes the content engine better because it can pull real photos. The CRM feeds the knowledge base which improves the AI's answers. You get two-for-ones constantly.

The progression is always the same: system of record → system of intelligence → system of action. (Insight Partners calls a similar progression "Augment → Automate → Anticipate" — AI as thought partner, then AI running processes in the background, then AI constantly optimizing and learning from its own performance. Same idea, different vocabulary.)

Record → Intelligence → Action breakdown
Record

CRM, project management, documents, media files. Where data lives today — probably scattered across eight different SaaS tools.

Intelligence

RAG systems, knowledge bases, context files. Data becomes queryable, searchable, and AI-accessible. The CEO can ask questions and get real answers with role-based access.

Action

Content engines, automation pipelines, reporting systems. The intelligence layer feeds automated outputs — articles written in the brand voice using real company photos, dashboards updated automatically, reports generated from live data.
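The three layers can be sketched in a few lines. Everything below is illustrative — the CRM records, field names, and report format are invented — but it shows the shape: raw records, a query layer over them, and an automated output fed by queries rather than by hand:

```python
# A minimal sketch of the record -> intelligence -> action progression.
# All names and data are invented for illustration.

# Record: where raw data lives (a plain list standing in for a CRM).
crm_records = [
    {"customer": "Acme Solar", "stage": "installed", "kw": 12.4},
    {"customer": "Beta Homes", "stage": "quoted",    "kw": 8.0},
]

# Intelligence: the same data made queryable. A real system would use a
# RAG index or database; a filter function is the smallest stand-in.
def query(stage: str) -> list[dict]:
    return [r for r in crm_records if r["stage"] == stage]

# Action: an automated output fed by the intelligence layer.
def weekly_report() -> str:
    installed = query("installed")
    total_kw = sum(r["kw"] for r in installed)
    return f"{len(installed)} install(s) this period, {total_kw:.1f} kW total"

print(weekly_report())  # -> "1 install(s) this period, 12.4 kW total"
```

The point of the sketch is the direction of the arrows: action never reads the raw records directly, it reads the intelligence layer — which is why improving that layer makes every downstream output better at once.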

For a 15-person solar company, this replaced eight separate SaaS tools — about $10K/month — with a unified system running on usage-based infrastructure at roughly $300/month. That's not optimization. That's a category change.

The old advice was "focus on one thing." That was pre-AI survival advice. Post-AI, the math flips. You can build a CRM, a RAG system, an SEO engine, and a media pipeline — four products that would have required four dev teams. Maximum value decoupled from maximum overhead. Each piece makes the others more valuable.

Part 5

What not to do

Everything here is something I either did wrong or watched someone else do wrong. The pattern is almost always the same: moving too fast on the technical side without reading the room on the human side.

Don't start with vision and mission.

Vision and mission are actively being revised — that's the most unstable input. Start with the org chart and DRI database. Those are stable and immediately useful.

Don't ask people to document their expertise.

The moment you ask, you've made it about replacement. Reverse-engineer it from their existing work instead. Or create situations where great work naturally produces the documentation as a side effect.

Don't hand out AI access without the sandbox.

A non-technical person building in isolation — disconnected from the codebase, creating work that can't be integrated — is worse than no AI at all. The sandbox should exist before anyone touches the tool. Guardrails first, keys second.

Don't lead with a diagnosis.

When you have more AI experience than the team, you see problems they don't. But "here's what's wrong" lands very differently than "here's what it could look like" with a working prototype. You'll have credibility to diagnose later. Earn it with demos first.

Don't combine the Transformation Office with people management.

This one is structural, not tactical. The person running AI transformation cannot be the person who grows the team. People see both roles — and the trust required for transformation evaporates. Dedicated function or nothing.

Don't invest in tools. Invest in skills.

The tooling will change three times this quarter. The context files — brand voice, campaign patterns, design systems, workflow documentation — those are permanent. Build the foundation, swap the furniture. Skills are the asset. Tools are the rental.

Don't do an all-hands rollout.

People need to experience a win before they'll believe it's a multiplier and not a replacement. Find 2–3 champions, let them spread the word organically. Pull beats push.

Don't let everyone run their own AI workflow.

This is probably happening at your company right now: people take meeting transcripts, paste them into ChatGPT, update their own Notion, manage their own action items. It feels productive. It's actually creating busywork and burning tokens on redundant prompts. Worse — token costs are subsidized right now. When pricing normalizes, a company with 30 people running ad-hoc AI prompts against the same information will get crushed on cost. The context architecture approach is the hedge: structured files that the AI reads once, shared across every workflow, instead of 30 people independently re-prompting the same context. Organized context isn't just better output — it's cost protection.
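The cost argument is easy to make concrete with back-of-the-envelope arithmetic. All numbers below are illustrative assumptions — prompt counts, context size, and per-token price are invented, not real usage data or real pricing:

```python
# Back-of-the-envelope: 30 people re-pasting the same background context
# into every prompt vs. a shared context file loaded once per person per
# day. Every number here is an illustrative assumption.

PEOPLE = 30
PROMPTS_PER_DAY = 10          # assumed ad-hoc prompts each person runs
CONTEXT_TOKENS = 5_000        # assumed background context per prompt
COST_PER_M_TOKENS = 3.00      # assumed input price per million tokens

# Ad-hoc: every prompt re-sends the full context.
adhoc_daily = PEOPLE * PROMPTS_PER_DAY * CONTEXT_TOKENS
adhoc_cost = adhoc_daily / 1_000_000 * COST_PER_M_TOKENS

# Shared architecture: each person's workflow loads the context once a day.
shared_daily = PEOPLE * CONTEXT_TOKENS
shared_cost = shared_daily / 1_000_000 * COST_PER_M_TOKENS

print(f"ad-hoc: {adhoc_daily:>9,} tokens/day (${adhoc_cost:.2f})")
print(f"shared: {shared_daily:>9,} tokens/day (${shared_cost:.2f})")
```

Under these assumptions the ad-hoc pattern burns ten times the tokens for the same information. The exact multiple depends on the numbers you plug in; the structure of the saving does not.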

Part 6

How you know it's working

"Someone submits a request unprompted"

The intake process is working. People are pulling, not being pushed.

"Someone updates their own context file"

They've internalized the system. The skill is theirs now, not yours.

"A skeptic asks 'can it do my thing?'"

Curiosity replacing defensiveness. The champion network is working.

"We've been wanting that for months"

The Eisenhower win landed. The aha moment is spreading.

"You can't remove one system without degrading three others"

Compound intelligence is working. The systems are feeding each other.

"Wait, we can just... build that?"

The cost-of-building collapse has clicked. The team is thinking in new product arms.

This playbook is a living document. It was built from engagements across a one-person operation, a 15-person SMB, and a 30-person seed-stage SaaS company. The patterns scale — the difference at a bigger company isn't complexity, it's surface area. More teams, more expertise to capture, more systems to connect. The playbook is the same.

This framework builds on Hilary Gosher's work at Insight Partners: How Cloud-Native CEOs Can Pivot to Move at the Speed of AI.

Questions or want to deploy this at your company? john@hashbuilds.com

John Hashem builds AI systems and deploys AI workflows for teams ranging from solo founders to growth-stage SaaS companies.