The terms you need
AI transformation has a vocabulary problem. People hear "AI" and each picture something different. These are the terms this playbook uses — get them straight before going further, because most of the confusion in early deployment comes from people talking about different things.
Several of the most useful terms here come from Hilary Gosher and the Insight Partners team — specifically their Reimagine / Renew / Redo framework and their work on AI Transformation Offices across 550+ portfolio companies. They're probably the furthest ahead on the terminology for this. I've adopted their language where it fits and added terms from my own work where theirs doesn't cover the ground-level detail.
AI Transformation Office
Insight Partners' term for the dedicated function responsible for AI adoption across a company. Not a committee. Not a working group. A dedicated execution engine with its own mandate, reporting directly to leadership. At enterprise scale, this is a team. At seed stage and in SMBs, this is one person — but the function is the same. The key insight from Insight's framework: this cannot be bolted onto an existing role. It needs to be its own thing.
Reimagine / Renew / Redo
Gosher's three-phase framework for AI transformation. Reimagine: make hard strategic choices about where AI changes the game. Renew: upgrade people and capabilities — hiring, training, incentivizing experimentation. Redo: rebuild operations to decouple growth from headcount. This playbook follows the same progression adapted for companies without a 200-person engineering org.
Decoupling growth from headcount
The core economic thesis of AI transformation — another term from the Insight Partners framework. Instead of hiring linearly as the company grows, AI makes existing people more productive and builds capabilities that scale without additional staff. This isn't about cutting jobs — it's about growing without proportionally growing the team. One person carrying two workstreams. Operations that used to scale linearly with headcount no longer needing to.
Claude Code
The system that gives you cross-company leverage. Not a great name — it sounds like it's only for writing code, but it's not. It's an AI operator that can read your entire codebase, connect to your existing tools, and execute work across any department. Whether you're a software company or not, this is the system that enables knowledge sharing, context management, and cross-functional execution. It's the base layer everything else in this playbook runs on.
Context files
Structured documents that capture how your company works — who does what, how decisions get made, what your brand sounds like, how your code is organized. These are the single most important asset you'll build. They're what makes AI output go from generic to company-specific. Think of them as the difference between hiring a smart stranger and hiring someone who's worked at your company for two years.
MCPs (Model Context Protocol)
The protocol that lets Claude Code work with your existing systems — Gmail, Notion, Slack, your CRM, your database, your HR platform. You've probably used products built on MCP without knowing it. When we talk about MCPs in this playbook, we mean connecting Claude Code to the tools your team already uses so it can read, search, and act across them. Common first MCPs: Gmail (scan for receipts, invoices, client communication), Slack (search history, summarize channels), and your company's own API.
RAG system
Short for retrieval-augmented generation: a searchable knowledge base that AI can query. Most companies have enormous amounts of valuable information trapped in videos, documents, Slack threads, and people's heads. A RAG system makes all of that searchable and queryable — the CEO can ask 'what are our margins on project X?' and get an answer pulled from real data, with role-based access so sensitive information stays protected.
Sandbox
A controlled space where someone can use Claude Code without risk. They can experiment freely, but they can't touch production code or break anything outside their directory. All work that leaves the sandbox goes through a handoff document. This is how you give people the aha moment without the risk.
Context architecture
The overall system of organized knowledge across your company. Org charts, DRI databases, brand voice files, process documentation, codebase maps — all structured so any AI tool can reference them. This is the permanent foundation. The tools change every month. The context architecture stays.
"We went from eight different SaaS tools to one unified system in weeks. The AI knows our business now — it pulls real photos from our job sites, writes in our voice, and the CEO can ask it questions about margins. It changed how we operate as a company."
— SMB client, 15 employees, solar industry
What you can do today
Before you hire anyone. Before you pick an AI provider. Before you set up a single tool. These are the things a CEO can start right now that will make every future AI deployment dramatically more effective. If someone asks me "what should we be doing now?" — this is the answer.
The org chart that AI can read
This is the single thing I wish I had done first. I spent months without it and lost efficiency on everything.
This is not the org chart on your website. This is the org chart from the perspective of an AI system that needs to know: where do I find information about X? Who is the person responsible for Y? Who do I go to when I need context on Z?
Have each leader or manager provide the following for every person who reports to them:
- Name and role
- What they do day-to-day
- What they do exceptionally well
- What they do because they have to (not because they want to)
- Who they report to and who reports to them
This seems simple. It's not. Most companies don't have this written down anywhere — it lives in people's heads. Getting it into a structured document gives any AI system (or any new hire, or any consultant) the ability to navigate the company in hours instead of months.
It also gives leadership a reason to involve their team early. The exercise itself is non-threatening — you're not asking about AI, you're asking about people. That matters.
If you already have Slack or email access, you can get a head start before anyone fills out a form. Connect the Slack MCP and prompt: "Go through the last 6 months of messages. Every time someone asks 'who handles X?' or 'who should I talk to about Y?' — log the question, the answer, and the person who answered. Build me an org chart from the perspective of who actually owns what."
This won't be perfect, but it gives you 70% of the picture in an hour. Cross-reference against the formal org chart and you'll immediately see where the real structure diverges from the official one. That gap is where the most valuable context lives.
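The mining step itself is simple enough to sketch. Here is a minimal, hypothetical version of the pattern the prompt asks Claude Code to apply; the message shape and regex are illustrative, not the Slack MCP's real schema:

```python
import re
from collections import defaultdict

# Hypothetical message shape: {"user": ..., "text": ..., "thread_replies": [...]}.
# A sketch of the pattern-matching the org-chart prompt asks for, not a
# real Slack client.
OWNERSHIP_QUESTION = re.compile(
    r"\bwho (?:handles|owns|is responsible for|should i talk to about)\s+(.+?)[?.]",
    re.IGNORECASE,
)

def mine_ownership(messages):
    """Collect topic -> people who answered ownership questions about it."""
    owners = defaultdict(list)
    for msg in messages:
        match = OWNERSHIP_QUESTION.search(msg["text"])
        if not match:
            continue
        topic = match.group(1).strip().lower()
        # Whoever replies in-thread is a candidate owner for the topic.
        for reply in msg.get("thread_replies", []):
            owners[topic].append(reply["user"])
    return dict(owners)

messages = [
    {"user": "amy", "text": "Who handles refund requests?",
     "thread_replies": [{"user": "sarah", "text": "That's me"}]},
    {"user": "ben", "text": "who owns the billing dashboard? It's down",
     "thread_replies": [{"user": "raj", "text": "on it"}]},
]
print(mine_ownership(messages))
# {'refund requests': ['sarah'], 'the billing dashboard': ['raj']}
```

The AI version is fuzzier and better at this than a regex, but the shape of the output is the same: topics, answerers, and a tally of who actually owns what.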
The DRI database
DRI = Directly Responsible Individual. For every function your company performs, who owns it?
Ask each department lead to do two things over the next week:
1. List everything your department does. Not job descriptions — actual functions. "Process refund requests." "Score brand health in dashboards." "Review campaign performance weekly." "Respond to inbound partnership inquiries." Every "hey, who handles this?" email that's ever been sent — that's a function that needs a DRI.
2. Name a DRI for each function. Not a team — a person. "Marketing handles that" is not a DRI. "Sarah owns campaign performance reporting" is a DRI. If nobody owns it, that's important to know too.
What you get: a complete map of what your company does and who's responsible. When the AI Transformation Office shows up (whether that's a hire, a consultant, or an internal champion), this is the first document they read. It tells them where the leverage is — which functions are consuming the most time, which have no owner, which are the "important but never urgent" work that AI can unlock.
It's not about AI at all. It's about organizational clarity. Any CEO can justify this exercise without even mentioning AI — it's just good management. But it happens to be the exact input that makes AI deployment 10x more effective. And it gives people a chance to participate in the process before anyone feels threatened.
I learned this the hard way. I focused too much on top-down work early on — vision docs, mission statements, OKRs. That's not wrong, but it's not where you start. The org chart and DRI database are the foundation everything else builds on. Every month I didn't have it was a month of less efficient work.
Connect the Gmail MCP and prompt: "Search the last 12 months for every email where someone asks who handles something, who owns something, or who to talk to about something. Also flag any email that delegates a task or assigns responsibility. Build me a DRI database from what you find."
Same with Slack: "Search all channels for responsibility assignments, role clarifications, and 'who does X' questions. Cross-reference with the org chart context file and flag any functions that have no clear owner."
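The cross-referencing step at the end is mechanical. A small illustrative sketch, with invented function names and data shapes rather than any fixed format:

```python
# Merge mined assignments with the department's self-reported function
# list and flag the gaps. Data shapes here are illustrative.
def build_dri_gaps(functions, mined_assignments):
    """Return (dri_database, functions with no clear owner)."""
    dri = {}
    for func in functions:
        owner = mined_assignments.get(func)
        dri[func] = owner or "UNASSIGNED"
    gaps = [f for f, o in dri.items() if o == "UNASSIGNED"]
    return dri, gaps

functions = ["process refund requests", "weekly campaign review",
             "inbound partnership inquiries"]
mined = {"process refund requests": "sarah", "weekly campaign review": "marcus"}

dri, gaps = build_dri_gaps(functions, mined)
print(gaps)
# ['inbound partnership inquiries']
```

The UNASSIGNED rows are the point of the exercise: each one is either a function to assign or a function AI can absorb.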
A note on security: Connecting email and Slack to AI is a real decision. Some companies start with Slack only (lower sensitivity), some go straight to email (maximum speed). Know the tradeoff and make it consciously.
Context-first means documentation-first
Most companies are behind on documentation. This approach eliminates that problem entirely.
When the org chart, the DRI database, the brand voice, the scoring systems, the roadmap all live in structured context files — you've inherently become a documentation-first company. The context files that power your AI workflows are also the living documentation that new hires read, that leadership references, that the whole company navigates by. Most companies never get there because writing docs is tedious and always gets deprioritized. Context-first solves both problems at once.
The big decisions
Gosher's Reimagine phase starts with hard strategic choices. At enterprise scale, that's platform strategy and pricing models. At your scale, the strategic choices are more structural — and getting them wrong means fighting the org chart for months.
Internal champion vs. external hire
Both work. Both have traps. Know which one you're dealing with.
Before choosing: be clear about the purpose. This role exists to decouple growth from headcount. Not to cut jobs. Not to replace people. To make the existing team capable of output that would have required twice the staff a year ago. Every conversation about this role should start there. If the team doesn't understand that framing, every win you deliver will feel like a threat instead of an unlock.
Decoupling growth from headcount only works when your roster is already your roster. You should be in the position of saying: we have a great team, these are A-players, and instead of expanding the team for growth we want this team to do more. That's when it's time.
If you have culture problems, underperformance issues, or people who aren't invested in the company succeeding — sprinkling AI on top is not going to fix it. Context-first development only works when you have people who are constantly learning, who care about finding the best ways to solve problems, and who want to be around long-term. This initiative is going to be the most collaborative thing your team has ever done. It requires people who are ready for that.
The message has to be clear: the whole point is to get the team using AI together — not individuals using AI in isolation. The Transformation Office unifies that into one system where everyone's contributions make everyone else's output better.
We need to address something that holds companies back before they even start: the name "Claude Code" makes it sound like a developer tool. It's not. There is a product called Claude Coworker that's designed for non-technical users, but it's not nearly as powerful. Claude Code is the system that reads your entire codebase, connects to your tools via MCP, and executes work across every department. It's for anyone who cares about the business succeeding — not just people who write code.
I've built seven applications this year without writing a single line of code myself. Zero production errors. The barrier is lower than people think.
Three one-hour sessions. That's what it took. He's now using Claude Code comfortably — querying old sales data from Pipedrive, understanding the database schema, knowing what he can safely read versus what he shouldn't touch yet. He understands the importance of the context system and knows that in the future he'll be able to do even more with it. His entire workflow has changed.
One hour. One video session. Before the session was over he said: "I don't even need the second session. Once it's set up, I can just ask Claude Code for the things I need to know." He got it immediately.
People need the push to download VS Code, clone the repo, and write their first prompt. Once they do, they get it. The sandbox (Part 3) gives them guardrails. This is for anyone who understands the business — not just developers.
The internal champion. Someone already at the company who gets chosen (or volunteers) to lead AI adoption. They know the people, the politics, the history. Nobody's threatened by them.
The external hire. Someone brought in specifically for AI transformation. They have the skills and the mandate. But they're the new person.
I've been both. The plays are different. When you're internal, you can help people help you — the trust is there, you just need the mandate. When you're external, you need the big win fast. You need to prove you're there to make the team more powerful, not to replace them. The Eisenhower quadrant play (Part 3) is your best friend in that first week.
What makes someone good at this role
This role is new. There's no credential for it, no career path that obviously leads here, no professional designation that signals "this person knows how to deploy AI across an organization." If someone has already built a massive AI-transformed company, they're probably the CTO or CEO of that company now — they're not looking for this job. So you're hiring for qualities, not a resume.
This is the single most important quality. The person running the AI Transformation Office is going to touch every department, access sensitive systems, and change how people work. If the team doesn't trust them, none of the plays in this playbook matter. They either need to come with trust already built (internal champion who's earned credibility) or be the kind of person who earns it fast (authentic, transparent about what they're doing and why).
They need to speak engineering, design, marketing, and ops — not fluently in every one, but enough to understand what each team values and what each team fears. The plays in Part 3 all require reading the room in a specific department. Someone who only understands one discipline will win in that department and stall everywhere else.
The entire approach in this playbook is show, don't tell. That requires someone who can actually build — prototypes, context files, integrations, working demos. Not necessarily a senior engineer, but someone who can use Claude Code to produce real things in a real codebase. The person who writes a proposal when they could build a prototype is the wrong person.
They need to move fast on the technical side — building, prototyping, validating — while moving slowly on the human side. Rushing adoption breaks trust. Rushing builds creates momentum. The best person for this role is comfortable holding both speeds at once.
The compound intelligence in Part 4 only works if each piece is built knowing it will feed the others. A context file isn't just for one department — it's part of a system that every future build will reference. The person who builds isolated tools that don't connect to anything will create AI islands, not AI transformation.
The closest analogy I have is coaching football. I played professionally and when I came back to coach high school, trust was immediate — everyone knew my background, nobody questioned whether I knew what I was talking about. That let me create massive change quickly because nobody was filtering my suggestions through skepticism.
AI transformation doesn't have that yet. There's no "professional credential" that instantly communicates "this person has done this before and knows what works." The field is too new. So the trust has to come from somewhere else — and the most reliable way to manufacture it is to use the tools themselves. Build something real in the first week. Show results that can't be ignored. The plays in Part 3 — the Eisenhower win, prototype don't present — are specifically designed to create trust fast when you don't have the luxury of a credential doing it for you.
In a few years there will be people with track records of doing this at multiple companies. Right now, you're hiring for the qualities above — and the best interview is asking them to build something in your codebase with Claude Code in front of you. If they can do that and explain what they're doing in terms the team would understand, they're the right person.
Keep the Transformation Office separate from people management
This is non-negotiable. I learned it the hard way.
The person running the AI Transformation Office cannot also be the person responsible for growing the team. As Head of Product, my job was to grow a team, build people up, make them feel invested. As the AI transformation person, I was in the next meeting talking about decoupling growth from headcount. People aren't stupid — they see both sides. And the moment they do, the trust you need for transformation evaporates.
The Transformation Office needs to be its own function — an execution engine focused purely on making the existing team more powerful. Not someone who also makes staffing or roadmap decisions. The two functions require fundamentally different kinds of trust.
What you're actually building
Most companies want "AI." That's not specific enough. Here are the actual categories.
1. Context architecture
The foundation. Everything else depends on this. A structured file system that tells AI how your company works.
CLAUDE.md ← Master index (AI reads this first)
│
├── context/
│   ├── org-chart.md ← Who does what, reports to whom
│   ├── dri-database.md ← Every function → responsible person
│   ├── brand-voice.md ← How the company sounds
│   └── scoring-systems.md ← How decisions get made
│
├── departments/
│   ├── engineering/
│   │   ├── DEPARTMENT.md ← Index for engineering context
│   │   ├── architecture.md ← System design, key services
│   │   ├── api-patterns.md ← How APIs are built here
│   │   └── tech-debt.md ← Known issues, priorities
│   │
│   ├── marketing/
│   │   ├── DEPARTMENT.md ← Index for marketing context
│   │   ├── campaign-patterns.md ← Ad buyer's reusable structures
│   │   ├── content-calendar.md ← What's planned, what's live
│   │   └── brand-guidelines.md ← Design system, voice, assets
│   │
│   └── customer-success/
│       ├── DEPARTMENT.md
│       ├── health-scoring.md ← How brand health is measured
│       └── escalation-paths.md ← Who handles what, when
│
└── operations/
    ├── roadmap.md ← Current priorities
    ├── hiring-criteria.md ← What "good" looks like per role
    └── vendor-relationships.md ← Key partners, contracts, contacts

Every file is living documentation. When the marketing lead updates their campaign patterns, the AI immediately produces better work. The context files ARE the documentation.
2. The RAG system
A searchable knowledge base built from everything your company has ever produced. Role-based access controls who sees what.
┌─────────────────┐ ┌──────────────┐ ┌─────────────────┐
│ SOURCE FILES │ │ INGESTION │ │ KNOWLEDGE BASE │
│ │ │ │ │ │
│ Videos ─────────┼────▶│ Transcribe │ │ Chunks with │
│ Documents ──────┼────▶│ Classify ├────▶│ embeddings │
│ Slack threads ──┼────▶│ Tag │ │ │
│ Meeting notes ──┼────▶│ Chunk │ │ Semantic search │
│ Email history ──┼────▶│ Embed │ │ enabled │
└─────────────────┘ └──────────────┘ └────────┬────────┘
│
┌────────▼────────┐
│ ROLE-BASED │
│ ACCESS LAYER │
│ │
│ CEO: full access│
│ Mgr: dept only │
│ Team: filtered │
└────────┬────────┘
│
▼
"What are our margins
on the Johnson project?"

Every video, document, and conversation becomes queryable. The CEO asks a question and gets an answer from real data — not from someone's memory of a meeting three months ago.
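The last two boxes, semantic search plus the role-based access layer, can be sketched in a few lines. This is a toy: a word-overlap score stands in for real embeddings so the access-control logic stays visible, and the roles and chunk tags are invented for the example:

```python
from collections import Counter
import math

def score(query, text):
    """Toy relevance score: word overlap (real systems use embeddings)."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    overlap = sum((q & t).values())
    return overlap / math.sqrt(len(q) * len(t) or 1)

def search(query, chunks, role):
    """Filter chunks by role FIRST, then rank what's left."""
    visible = [c for c in chunks if role in c["allowed_roles"]]
    return max(visible, key=lambda c: score(query, c["text"]), default=None)

chunks = [
    {"text": "Johnson project margins came in at 31 percent",
     "allowed_roles": {"ceo"}},
    {"text": "Johnson project kickoff is scheduled for March",
     "allowed_roles": {"ceo", "team"}},
]

print(search("margins on the Johnson project", chunks, role="ceo")["text"])
# Johnson project margins came in at 31 percent
print(search("margins on the Johnson project", chunks, role="team")["text"])
# Johnson project kickoff is scheduled for March
```

The design choice worth copying is the ordering: access filtering happens before ranking, so sensitive chunks never even enter the candidate set for the wrong role.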
3. MCP integrations
Connections between Claude Code and your existing systems. These let AI read, search, and act across the tools your team already uses.
┌──────────────────┐
│ │
┌───────────▶│ Claude Code │◀───────────┐
│ │ │ │
│ │ + CLAUDE.md │ │
│ │ + Context files │ │
│ └──┬────┬────┬──┘ │
│ │ │ │ │
┌────┴────┐ ┌──────┴┐ ┌─┴──────┐ ┌────────┴───┐
│ Gmail │ │ Slack │ │ Notion │ │ Company │
│ MCP │ │ MCP │ │ MCP │ │ API (MCP) │
└────┬────┘ └──┬────┘ └──┬─────┘ └─────┬──────┘
│ │ │ │
Scan emails Search Read docs Query CRM,
for receipts, history, identify read analytics,
invoices, map DRIs, gaps, act on company-
client comms find cross-ref specific data
patterns vs code
┌─────────┐ ┌──────────┐ ┌────────────────┐
│ Keywords│ │ BambooHR │ │ Google Search │
│ MCP │ │ MCP │ │ Console MCP │
└────┬────┘ └────┬─────┘ └──────┬─────────┘
│ │ │
Keyword Org chart, Indexing status,
research, employee data, search performance,
competitor hiring pipeline crawl monitoring
analysis

Every MCP is a bridge to a system your team already uses. You're not replacing tools — you're connecting them to an intelligence layer that can work across all of them.
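Concretely, wiring an MCP into Claude Code is a small config entry. A sketch of the shape of a project-level `.mcp.json`, where the server package names, env values, and the company API server are placeholders; check each server's current documentation, since package names change often:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": { "SLACK_BOT_TOKEN": "xoxb-..." }
    },
    "company-api": {
      "command": "node",
      "args": ["./mcp/company-api-server.js"],
      "env": { "API_BASE_URL": "https://api.example.com" }
    }
  }
}
```

Checking a file like this into the repo means every person who clones it gets the same integrations, which is exactly the "one system, everyone's contributions" point from earlier.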
4. Automations
Scheduled systems that run without human intervention. These are the "set and forget" capabilities — cron jobs, loops, and event-triggered workflows.
Example: Daily SEO Article Generation
┌──────────┐ ┌──────────────┐ ┌──────────────┐
│ CRON JOB │ │ Keywords MCP │ │ Brand Voice │
│ 2 AM ├────▶│ ├────▶│ Context File │
│ daily │ │ Research │ │ │
└──────────┘ │ trending │ │ Match tone, │
│ keywords │ │ style, pillar│
└──────────────┘ └──────┬───────┘
│
┌────────▼────────┐
│ GENERATE BRIEF │
│ │
│ Topic + keywords│
│ + brand voice │
│ + target length │
└────────┬────────┘
│
┌──────────────┐ ┌───────▼─────────┐
│ Media System │ │ WRITE ARTICLE │
│ │◀───┤ │
│ Find real │ │ Pull matching │
│ photos from │ │ images from │
│ job sites │ │ media library │
└──────────────┘ └────────┬────────┘
│
┌────────▼────────┐
│ PUBLISH / NOTIFY│
│ │
│ → Auto-publish │
│ → Email PMM for │
│ approval │
└─────────────────┘
Example: Interview Screening Pipeline
┌──────────┐ ┌──────────────┐ ┌──────────────┐
│ LOOP │ │ BambooHR MCP │ │ Company │
│ every ├────▶│ ├────▶│ Context │
│ 6 hours │ │ Check for │ │ │
└──────────┘ │ new │ │ Role reqs, │
│ applicants │ │ team needs, │
└──────────────┘ │ culture fit │
└──────┬───────┘
│
┌────────▼────────┐
│ SCREEN RESUMES │
│ │
│ Full company │
│ context, not │
│ keyword matching│
└────────┬────────┘
│
┌────────▼────────┐
│ 60 → 5 │
│ candidates │
│ │
│ → Ranked list │
│ → Send to │
│ hiring mgr │
└─────────────────┘

The job itself might not be Claude Code — it could be a cron job or a scheduled function. But it calls your MCPs, references your context files, and produces output that would otherwise have required a person every day.
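A sketch of what the scheduled half of the article job can look like. It assumes the Claude Code CLI's non-interactive print mode (`claude -p`); the prompt contents, file paths, and MCP names are illustrative:

```python
import subprocess
from datetime import date

def build_prompt(today):
    """Assemble the daily instruction; paths and steps are illustrative."""
    return (
        f"Daily SEO article for {today}. "
        "1) Use the Keywords MCP to find one trending topic in our niche. "
        "2) Write a brief, then a full article matching context/brand-voice.md. "
        "3) Pull matching photos via the media system. "
        "4) Save the draft to content/drafts/ and email the PMM for approval."
    )

def run_daily_job():
    prompt = build_prompt(date.today().isoformat())
    # Headless run; Claude Code reads CLAUDE.md and the context files itself.
    subprocess.run(["claude", "-p", prompt], check=True)

if __name__ == "__main__":
    run_daily_job()
```

Scheduling is whatever you already use, e.g. a crontab entry such as `0 2 * * * cd /srv/company-repo && python daily_article.py` (path hypothetical). The intelligence lives in the context files, not in this script, which is why the script stays this short.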
5. Bespoke tools
Categories 1–4 make your existing company smarter. Category 5 is about going on offense. Once you have context architecture and MCPs, the question becomes "how much intelligence can we give to the greatest number of people?"
This is the ultimate decoupling of growth from headcount. Building things that wouldn't have existed at all — products that were never on the roadmap because they seemed too niche or "we don't have the bandwidth." Post-AI, you already have the distribution and the domain knowledge in your context files. The only thing missing is the ability to build fast enough to make the bet rational.
Core product: Fleet management SaaS for trucking companies.
Pre-AI reality: The sales team keeps hearing from customers that they need a driver compliance tracker. It's not on the roadmap — the engineering team is 18 months out, and it looks like a 6-month build with two dedicated engineers. PM says no. CEO agrees. "We're focused on the core platform."
Post-AI reality: The transformation lead already has the context architecture — company domain knowledge, customer data schemas, compliance requirements from support tickets ingested via MCP. They build a compliance tracker MVP in two weeks. Not a toy. A real product that reads from the same database, uses the same auth, follows the same brand guidelines (from the context files), and ships on a subdomain.
Sales starts showing it to customers the same month. Three sign up. It's not a side project anymore — it's a new product line that came from the distribution they already had.
What changed: Nothing about the company's headcount. Nothing about the engineering roadmap. The intelligence was already there in the context files. The distribution was already there in the customer base. The only thing that was missing was the ability to build fast enough to make the bet rational.
This is why bespoke tools are category 5, not category 1. You need the context architecture, the MCPs, and the adoption foundation before you can play offense. But once you have it — you're not just making your company more efficient. You're leveraging your existing distribution to ship products that your competitors can't even resource.
Breaking "AI" into these categories changes how teams think about what to build. A CEO who says "we need AI" doesn't know what they're asking for. A CEO who says "we need a context architecture and two MCP integrations before we build the RAG system" — that person can sequence a deployment.
What the transformation lead needs on day one
When the consultant shows up or the internal champion gets the green light, these are the access requirements. Have them ready.
Codebase access. Full clone, not read-only. The transformation lead needs to run the codebase locally with AI tools that can read every file. This is how specs reference reality instead of memory.
Slack admin access. For the Slack MCP integration. Allows AI to search message history, identify DRIs, map communication patterns, and understand how the company actually operates vs. how the org chart says it does.
Documentation access. For the documentation MCP. Allows AI to read existing docs, identify gaps, and cross-reference against what's actually in the codebase.
Email access. For the Gmail MCP. This is the most powerful and most sensitive access. Email reveals what's really happening — responsibility assignments, client communication, vendor relationships, internal priorities. Requires Google OAuth setup (a GCP project with an OAuth consent screen).
Company API access. If the company has its own API or internal tools, MCP integration lets AI read and act on company-specific data. Custom MCPs can be built for any system with an API.
HR platform access. For building org context automatically. BambooHR, Rippling, or equivalent. Enables the org chart exercise to be partially automated rather than manual.
The security conversation matters. Some companies will give full access immediately — those are the ones that move fastest. Others will start with codebase and Slack only, then expand as trust builds. Both approaches work. But know that every system you don't connect is context the AI doesn't have, which means slower ramp-up and less accurate output. The tradeoff is always speed vs. exposure.
The plays
Gosher's Renew phase is about people and capabilities — hiring AI natives, changing mindsets, incentivizing experimentation. At enterprise scale, that's training programs and hackathons. At your scale, it's personal. These are specific moves for specific situations — plays you run when you encounter them.
The Eisenhower win
First week. Especially if you're the external hire and need credibility fast.
Find the feature or improvement that's been on the roadmap for quarters but nobody can justify pulling engineers for. Build it in the shadows. Demo it as a working prototype — not a deck, not a mockup. A functioning thing in the real codebase.
You're not replacing anyone's work. You're doing work that wasn't getting done. That's the safest possible first win — nobody loses, the company gains, and everyone sees what's possible. After this, the question flips from 'should we use AI?' to 'where do we deploy it next?'
There was a critical feature — not the flagship AI experience, but something equally important for the product to feel complete. Lots of design decisions, lots of validation needed, and it was at risk of never getting touched because leadership's focus was always on the main agentic experience. I built the entire thing as a live prototype — front-end components, UI flows, mobile-responsive design, all wired together in the real codebase. Leadership could actually see it working, which resolved most of the open questions about what to ship. They could see the grand vision as a functioning product, then easily decide which pieces were essential for launch. Now when it hits engineering, it's properly scoped. When it hits design, they spend time on the components that matter most for launch instead of guessing at screens that might not ship. It moved an entire quarter of discussion — business case debates, stakeholder alignment, scope negotiation — into a working thing people could react to. In a typical flow, we'd only be starting those conversations this quarter.
Make it their idea
You need a skeptic to engage with AI but asking directly would trigger resistance.
Set up a problem in their domain on your machine. Ask for their help casually — 'Hey, I'm trying to make these pages look better, you're the design expert, what would you change?' Let them start talking naturally. Then turn on the AI transcription so it captures their input. Show them the output. Watch the lightbulb turn on.
The moment someone sees their own expertise producing better output through AI, the framing shifts from 'this replaces me' to 'this amplifies me.' You can't lecture someone into that realization. They have to experience it. And it works best when they think they were just helping you.
Need the designer to engage with prototyping? Don't schedule a training session. Set up a prototype on your screen, ask 'can you help me with something — I'm trying to make this layout work and I'm stuck.' They start giving feedback, you show them how the AI implements it in real-time. They walk away thinking about what else they could do with this.
Show it, then share it
You need the team's expertise flowing into the context system, but you can't just ask for it. You need to earn it.
You can get surprisingly far on your own. Claude Code can grep the entire codebase to identify patterns. Slack and email (via MCP) show how things actually work. You can reverse-engineer a decent baseline of any department's patterns from what already exists.
But the most valuable knowledge isn't in files. It's in how people think — the designer's philosophy, the engineer's vision for where the architecture is heading, the marketer's intuition about what converts. You have two layers: the baseline you can build yourself, and the depth that requires people to opt in.
Pick a cross-functional project that proves the system works. The best first proof is something simple: generate a blog post that uses the engineering standards to build the page, the design principles to style it, and the brand voice to write it — all from context files you've assembled yourself. When the output looks right, reads right, and is built in the company's actual codebase using their actual patterns — that's your benchmark. You just proved, between you and the CEO, that the context system works.
Now show it. Not in an all-hands — in a one-on-one. "Look at this. It used your design patterns. It used your tone of voice. It was built in our codebase following our engineering standards. I assembled what I could from existing work. But imagine how much better this would be with your actual input on where things are heading."
The ethos is always the same: show results so good and so quickly that they cannot be ignored. That's the approach to every situation in this playbook. Not asking for permission. Not scheduling a training session. Showing something real and letting the quality of the output create the pull.
Once people see the output, the conversation shifts. They stop thinking "this person is trying to automate my job" and start thinking "this system is good, but it's missing something I know." That's when they opt in — not because you asked, but because they want the output to be better. It becomes a barter: each role contributes their expertise, and in return gets a system that multiplies their own output. Everyone creates more value. Nobody loses anything.
If someone isn't engaging with the context system, that's rarely a people problem. It's almost always a positioning problem. The AI Transformation Office didn't put them in a position to see the upside. They didn't get their own win. They didn't see what they could build with the system that they couldn't build without it. The job is to figure out which play creates that moment for each person — and how the CEO wants to balance speed of adoption with the culture they're building.
The shape of the context file depends on the role, and the depth depends entirely on where the company is heading. What you can reverse-engineer from existing artifacts is the starting point. What the talented people contribute is what takes it from functional to genuinely intelligent.
Designers. What only they can give you: The philosophy behind every design decision — the paradigm that guides which patterns get used and which get retired. That's what turns a context file from a style guide into a design intelligence system.
What they get back: The ability to prototype in working code. Their expertise becomes code without them writing a line.
Engineers. What only they can give you: Where the architecture is going. The vision that doesn't exist in the current code.
What they get back: Better alignment across the team on every commit. Back-end engineers moving to full-stack more confidently. Internal UI that actually matches the design system.
Marketers. What only they can give you: The intuition. Why certain campaigns work and others don't in ways the data doesn't fully explain. The judgment calls that make the difference between competent output and output that actually converts.
What they get back: Content generation that's already on-brand. The ability to produce at a volume that would have required a full team — while they focus on strategic decisions.
Most people's first instinct is: "Great, now our engineers can build nicer-looking software because the designer gave them a brand file." That's true, but it's not the highest-leverage move.
Claude Code is better at writing code than it is at visual design right now. Engineers gain incrementally from context files — better alignment, better internal tooling, smoother cross-stack work. Those gains are real but predictable.
The highest leverage is the other direction. This is the offensive line analogy again. Developers are like offensive linemen in a company — they move everything forward. But you don't win championships by only investing in the biggest kids. The people with the most to gain are the non-technical people who deeply understand the business — product managers, designers, ops leads. They're the athletes. They're fast, flexible, and they understand the game. They know the customer. They know the market. They know what should exist. What they've never had is the ability to build it themselves. Context files plus Claude Code gives them that ability. You don't need them to write syntax or memorize frameworks — you need to put them in a position where they can do things they never thought possible. The person closest to the customer builds the thing the customer needs.
A SaaS company has a large front-end application and a separate API. One side of their marketplace has been asking for a mobile app for months. Engineering says it's a Q3 project at the earliest — they'd need to hire a mobile developer.
But the product manager who owns that side of the marketplace understands every user flow, every edge case, every business rule. With engineering context files that document the API patterns and auth system, plus design context files that capture the brand and interaction standards, that PM can prototype a working mobile app in a sandbox. Not a mockup — a real app that talks to the real API, follows the real design system, handles the real edge cases.
That's decoupling growth from headcount again. The intelligence was in the PM's head. The engineering standards were in the context files. The code was written by Claude Code. No mobile developer hired. No engineering roadmap disrupted. A new product surface shipped by the person who understood it best.
This is why the engineering context files need to be tight when non-engineers are building. If the goal is just rapid prototyping for engineering to take over, a loose context file works fine. But if the goal is for product managers and designers to build production-quality software — and it should be, because they're the ones with the domain knowledge — then the engineering standards need to be precise enough that anything built in a sandbox naturally aligns with the existing codebase. The prototype doesn't need to be rewritten. It's already built right.
The typical path: product managers and designers become the first "product engineers." Not because they learned to code — because the context system made coding a byproduct of understanding the business. That's the knowledge-sharing flywheel. Everyone contributes their expertise, everyone gets more capability back. The system gets smarter with every contribution, and every person in the company becomes more dangerous.
Prototype, don't present
You've spotted an opportunity — a new product arm, a campaign type nobody's explored, a feature that could open a new market. This is especially powerful in your first weeks when you're looking at the business with fresh eyes and seeing things the team has been too busy to notice.
Don't bring it up in a meeting. Don't write a proposal. Validate it yourself first — with Claude Code and the codebase, you can test whether the existing data supports it in hours. If it doesn't, find out what's missing and whether a third-party API can fill the gap. Get a test account, wrap the API, validate the data. Then build it as an actual button in the actual app — connected to real data, styled with the real design system, functioning inside the real product. When you demo it to the CEO, there are no questions about feasibility. They can see it. It's already built.
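The wrap-and-validate step looks roughly like this. Everything specific here is an assumption (the endpoint path, the auth header, the field names); the pattern is what matters: wrap the third-party API thinly, then sanity-check the data before building any UI on top of it.

```python
import json
from urllib.request import Request, urlopen

class SignalProvider:
    """Thin wrapper around a hypothetical third-party data API.

    The endpoint, auth scheme, and response shape are illustrative only;
    a real provider's docs define all three.
    """
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def fetch_signals(self, company: str) -> list:
        req = Request(
            f"{self.base_url}/signals?company={company}",
            headers={"Authorization": f"Bearer {self.api_key}"},
        )
        with urlopen(req) as resp:
            return json.load(resp)

def validate_signals(signals: list) -> bool:
    """Cheap sanity check before any product work: every record must
    carry the fields the prospective feature needs."""
    return all({"company", "signal", "date"} <= s.keys() for s in signals)
```

If `validate_signals` fails on the test account's data, you've learned the idea doesn't survive contact with reality in an afternoon, not in a quarter.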
The typical path for a new idea at a company: someone raises it in a meeting, it gets discussed, someone writes a brief, a meeting gets scheduled with a potential vendor, another meeting to review the data, a PRD gets drafted, it goes on the roadmap for next quarter. Six weeks of process before anyone sees anything real. The prototype path: you validate it yourself in parallel with your other work, and the CEO sees a working feature inside the product. The conversation skips straight to cost versus return. And when it does hit the roadmap, it's a small body of work to wire up in production — the prototype already proved the concept and most of the front-end work is done.
In my first week at a SaaS company, I noticed a campaign type we weren't pursuing. I checked whether our existing data could support it — it couldn't. Found a third-party data provider, got a startup test account, used Claude Code to wrap their API and validate we could generate the signals we needed. Then I built it as an actual feature inside the platform — a real button, real data flowing through it, styled to match. When I demoed it to the CEO, his response wasn't 'let's schedule a meeting to discuss' — it was 'can you put this in our AI product?' The whole thing happened in parallel with everything else I was doing. It didn't slow anything down. That's decoupling growth from headcount — the company got a validated new product direction without pulling a single person off the core roadmap.
The sandbox moment
Someone important (CEO, team lead, key contributor) is ready to try but you need to contain the risk.
Set them up with Claude Code in a sandboxed environment — a dedicated directory with hard boundaries. They can experiment freely but can't touch anything outside their space. Every piece of work that leaves the sandbox goes through a handoff document that a developer reviews.
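The "hard boundaries" can be enforced in code, not just convention. A minimal sketch (the function name is mine) of the check a handoff script might run before anything leaves the sandbox:

```python
from pathlib import Path

def inside_sandbox(path: str, sandbox_root: str) -> bool:
    """Return True only if `path` resolves to somewhere inside the sandbox.

    resolve() collapses `..` segments and follows symlinks, so neither
    trick lets a file escape the dedicated directory unnoticed.
    """
    root = Path(sandbox_root).resolve()
    target = Path(path).resolve()
    return root == target or root in target.parents
```

Anything that fails this check never reaches the handoff document, which keeps the review burden on developers small and predictable.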
This is the aha moment. The first time someone builds something useful with AI — something that would have taken them weeks or never happened at all — they become a permanent advocate. You need this to happen in a controlled space so the excitement doesn't create chaos. The sandbox is what lets you say 'yes' to everyone without risking the codebase.
A coaching client set up with a sandboxed environment literally told me: 'once I got going, I could just talk to Claude Code about the mentorship I needed.' Self-sufficient within a week. That's the trajectory you want for every person you onboard.
The first ceremony
You have a handful of people who want in. They've seen the demos, they've had their aha moment, and they're ready to try it themselves. This is the moment you formalize it.
Before the meeting, prepare an onboarding guide — a clear, tested document that walks someone from zero to running: GitHub account, VS Code installed, Claude Code extension, cloning the repo, running the app locally if they'll be prototyping, Homebrew and npm if your stack needs it. You only have to get this working on one person's machine — then Claude Code can help you write the spec for everyone else. Test it clean. Make sure someone can pull the code, run a command, and see something working. That's your prerequisite.
Then you run the ceremony. Get everyone in a room — or on a call — with their machines set up. Walk through the basics together. Then run the exercise: "Marketing lead, I want you to change the brand voice file. Change it now." Everyone watches the screens. "Designer, change this design token. Go." Pull the changes. See what happened. Give people real, powerful things they can do immediately — things that make them feel like they can do their job dramatically better right now. Not a lecture. Not a demo they watch. An exercise they do.
The best version of this has real problems queued up — ideally pulled from the actual roadmap. Things that have been sitting there because nobody had bandwidth. Let people take a crack at them in their sandboxes during the ceremony. When someone solves a real problem in their first session, they're not just bought in — they're hooked.
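A small preflight script saves the ceremony from live PATH debugging. The tool list below matches the stack named above (git, Node, npm) and is an assumption; swap in your own:

```python
import shutil

# Assumed toolchain; edit to match your onboarding guide.
REQUIRED = ["git", "node", "npm"]

def preflight(required=REQUIRED) -> list:
    """Return the commands missing from PATH, so the ceremony starts
    from known-good machines instead of debugging installs live."""
    return [cmd for cmd in required if shutil.which(cmd) is None]
```

Have everyone run it the day before; an empty list means they're ready, and anything else tells the ATO exactly what to fix in the first five minutes.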
Individual aha moments are powerful but fragile. The ceremony turns scattered interest into shared momentum. Everyone sees everyone else doing it. The skeptic watches the designer change a component. The designer watches the marketing lead update brand voice and sees it ripple through generated content. It stops being 'that AI thing John is doing' and becomes 'how we work now.' The energy in the room when people realize they can actually do this — that's what you can't manufacture one-on-one.
The onboarding will hit snags — git authentication, environment variables, someone's machine missing a dependency. That's fine. That's why you do it together. The AI Transformation Office is there to unblock in real time. The goal isn't perfection — it's getting everyone past the setup friction in one session so they never have to face it alone. After the ceremony, they have a working environment and the muscle memory of having done something real. Everything after that is iteration.
Find your champions early
First two weeks. Before you try to go company-wide.
Identify 2–3 people across different departments who are naturally curious. Give them the sandbox, the onboarding guide, the context files. Let them run. Their enthusiasm will do more for adoption than any all-hands presentation you could give.
Top-down mandates create compliance, not adoption. You want pull, not push. The champion in customer success who automates their dashboard review becomes the best advertisement for the marketing team to try it. Organic spread beats forced rollout every time.
The growth marketing lead saw the SEO content engine running — articles being generated on-brand, published automatically, ranking within weeks. He didn't ask 'how does it work?' He asked 'can you train me on Claude Code?' He realized that if the SEO side could be automated, he could redirect his time entirely to ad strategy — the high-leverage work he'd been wanting to focus on. Within a week he was using it himself. I didn't have to sell adoption. The output sold it for me.
Speak their language, not yours
You're meeting with a specific department for the first time about AI.
Map AI capabilities to their specific pain points using their terminology. Don't talk about 'context files' and 'MCP integrations' — talk about 'making your best campaign patterns reusable' or 'automating the dashboard reviews that eat 5 hours a day.' Every department has different fears and different incentives. Learn them before you walk in.
Engineers fear the PM encroaching on their territory. Designers fear their taste being commoditized. Marketers fear losing creative control. Ops fears being made redundant. Customer success fears losing the human touch. Each fear requires a different play — and none of them sound like "I'm afraid of AI." They sound like "you need to involve [person] because that's their job."
When Customer Success was spending 5 hours a day manually scoring brand health, I didn't say 'let's build an AI automation.' I said 'I read the database, I think I understand your scoring criteria — can I show you something?' Built the algorithm, validated it matched their manual output, then handed engineering a working prototype. The team felt supported, not replaced.
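The "validated it matched their manual output" step generalizes to any scoring handoff. A hedged sketch of that comparison, with the score scale and tolerance as assumptions:

```python
def score_mismatches(auto_scores: dict, manual_scores: dict, tolerance: int = 1) -> list:
    """Compare automated scores against the team's manual scores record
    by record. Hand the prototype to engineering only once this list is
    empty, or every entry on it has been explained."""
    mismatches = []
    for key, manual in manual_scores.items():
        auto = auto_scores.get(key)
        if auto is None or abs(auto - manual) > tolerance:
            mismatches.append((key, manual, auto))
    return mismatches
```

Walking through the mismatch list together is also the conversation that surfaces the team's unwritten scoring criteria, which is exactly the expertise you wanted in the context system anyway.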
Compound intelligence
Gosher's Redo phase: rebuild operations. At enterprise scale, she describes decoupling growth from headcount across entire business units. At your scale, this is where individual plays start feeding each other — and AI stops being a tool and becomes an operational capability.
Every piece you build should increase the intelligence of the overall system. The RAG system makes the SEO engine smarter because it has access to real company knowledge. The media pipeline makes the content engine better because it can pull real photos. The CRM feeds the knowledge base which improves the AI's answers. You get two-for-ones constantly.
The progression is always the same: system of record → system of intelligence → system of action. (Insight Partners calls a similar progression "Augment → Automate → Anticipate" — AI as thought partner, then AI running processes in the background, then AI constantly optimizing and learning from its own performance. Same idea, different vocabulary.)
CRM, project management, documents, media files. Where data lives today — probably scattered across 8 different SaaS tools.
RAG systems, knowledge bases, context files. Data becomes queryable, searchable, and AI-accessible. The CEO can ask questions and get real answers with role-based access.
Content engines, automation pipelines, reporting systems. The intelligence layer feeds automated outputs — articles written in the brand voice using real company photos, dashboards updated automatically, reports generated from live data.
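To make the three layers concrete, here's a toy sketch of the progression; every record, field, and function name is hypothetical:

```python
# System of record: where the data already lives (in reality, a CRM,
# documents, media files, scattered SaaS tools).
RECORD = [
    {"id": 1, "type": "deal", "text": "Acme renewal closed"},
    {"id": 2, "type": "doc", "text": "Brand voice: plain, direct, confident"},
]

def query(term: str) -> list:
    """System of intelligence: the record becomes queryable.
    A real version would be a RAG system with role-based access."""
    return [r for r in RECORD if term.lower() in r["text"].lower()]

def weekly_report() -> str:
    """System of action: the intelligence layer feeds an automated
    output; here, a report generated from live data."""
    deals = [r["text"] for r in RECORD if r["type"] == "deal"]
    return "Deals this week:\n" + "\n".join(f"- {d}" for d in deals)
```

The compounding shows up even at toy scale: once `query` exists, every action-layer system gets it for free, and every record added to the base layer makes all of them smarter.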
For a 15-person solar company, this replaced eight separate SaaS tools — about $10K/month — with a unified system running on usage-based infrastructure at roughly $300/month. That's not optimization. That's a category change.
The old advice was "focus on one thing." That was pre-AI survival advice. Post-AI, the math flips. You can build a CRM, a RAG system, an SEO engine, and a media pipeline — four products that would have required four dev teams. Maximum value decoupled from maximum overhead. Each piece makes the others more valuable.
What not to do
Everything here is something I either did wrong or watched someone else do wrong. The pattern is almost always the same: moving too fast on the technical side without reading the room on the human side.
Don't start with vision and mission.
It's tempting to go top-down — "let's define the company vision in a context file first." But when you're brought in to do this work, things are already changing. The vision and mission are actively being revised. Starting there is starting with the most unstable input. Start with the org chart and the DRI (directly responsible individual) database — those are stable and immediately useful.
Don't ask people to document their expertise.
The moment you ask, you've made it about replacement. Even if that's not the intent, that's what they hear. Reverse-engineer it from their existing work — read every deck, article, and asset they've produced and extract the patterns. Or create situations where doing great work naturally produces the documentation as a side effect.
Don't hand out AI access without the sandbox.
A non-technical person building in isolation — disconnected from the codebase, creating work that can't be integrated — is worse than no AI at all. The sandbox should exist before anyone touches the tool. Guardrails first, keys second.
Don't lead with a diagnosis.
When you have more AI experience than the team, you see problems they don't. But "here's what's wrong" lands very differently than "here's what it could look like" with a working prototype. You'll have credibility to diagnose later. Earn it with demos first.
Don't combine the Transformation Office with people management.
This one is structural, not tactical. The person running AI transformation cannot be the person who grows the team. People see both roles — and the trust required for transformation evaporates. Dedicated function or nothing.
Don't invest in tools. Invest in skills.
The tooling will change three times this quarter. The context files — brand voice, campaign patterns, design systems, workflow documentation — those are permanent. Build the foundation, swap the furniture. Skills are the asset. Tools are the rental.
Don't do an all-hands rollout.
You can't communicate your way to adoption with a memo or a presentation. People need to experience a win with the tool before they'll believe it's a multiplier and not a replacement. Find 2–3 champions, let them spread the word organically. Pull beats push every time.
How you know it's working
The intake process is working. People are pulling, not being pushed.
They've internalized the system. The skill is theirs now, not yours.
Curiosity replacing defensiveness. The champion network is working.
The Eisenhower win landed. The aha moment is spreading.
Compound intelligence is working. The systems are feeding each other.
The cost-of-building collapse has clicked. The team is thinking in new product arms.
This playbook is a living document. It was built from engagements across a one-person operation, a 15-person SMB, and a 50-person SaaS company. The patterns scale — the difference at a bigger company isn't complexity, it's surface area. More teams, more expertise to capture, more systems to connect. The playbook is the same.
This framework builds on Hilary Gosher's work at Insight Partners: How Cloud-Native CEOs Can Pivot to Move at the Speed of AI.
Questions or want to deploy this at your company? john@hashbuilds.com