AI Adoption Sprint Plans — Every Inc.

Date: 2026-04-01
Inputs: maturity-ladder-2026-04-01-1440.md, VALUES.md


Sprint 1: Internal — “Level 2 to 3 Sprint”

Premise

Eight Every team members sit at Level 2 (Adoptive): they delegate execution to AI by default and have reusable personal workflows, but none has yet built a tool, skill, or workflow that another person on the team actually uses. That single threshold — someone else adopted what you built — is what separates Level 2 from Level 3 (Transformative). This sprint exists to collapse the distance between “works for me” and “works for us.”

The maturity ladder identifies a structural risk: the Level 2 Plateau. Level 2 feels productive enough that the motivation to share, generalize, and document diminishes. The sprint applies a concentrated burst of social pressure, time-boxing, and buddy mentorship to push past that plateau.


Objectives

| # | Objective | Measurable Outcome |
|---|---|---|
| 1 | Each participant produces one reusable artifact | Artifact exists in a form another person can install, fork, or follow (repo, CLAUDE.md skill, documented workflow, prompt chain, CLI tool, or template) |
| 2 | Each artifact is adopted by at least one other person | Adopter demonstrates live usage during demo day — not a hypothetical, an actual run |
| 3 | Cross-pollination between roles | At least 3 of the 8 artifacts are adopted by someone in a different function (e.g., an engineering tool adopted by a non-engineer) |
| 4 | Every’s values embodied | Sprint feels like play, ships v1 by end of Day 2, and everything built is real (builder credibility) |
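To make objective 1 concrete: a v1 artifact can be very small. The sketch below is one hedged illustration of what “a form another person can install, fork, or follow” could look like: a single-file prompt wrapper a colleague can run on their own input. The file name, prompt, and model string are placeholders invented for illustration, not a tool anyone at Every has built; it assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment.

```python
"""show_notes.py — illustrative v1 artifact: turn a transcript into draft show notes.

A sketch only. The prompt, model name, and file layout are assumptions,
not a description of any artifact actually built during the sprint.
"""
import argparse

import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY in the environment

PROMPT = (
    "Draft podcast show notes from this transcript: a two-sentence summary, "
    "five bullet highlights, and three pull quotes.\n\n{transcript}"
)


def main() -> None:
    parser = argparse.ArgumentParser(description="Draft show notes from a transcript.")
    parser.add_argument("transcript", help="path to a plain-text transcript file")
    args = parser.parse_args()

    with open(args.transcript, encoding="utf-8") as f:
        transcript = f.read()

    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever model the team has access to
        max_tokens=1500,
        messages=[{"role": "user", "content": PROMPT.format(transcript=transcript)}],
    )
    print(reply.content[0].text)


if __name__ == "__main__":
    main()
```

The point is the size: a colleague can install one dependency, set a key, and run it against their own file within minutes, which is exactly what the Day 2 adoption test checks.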

Participant Selection

Level 2 Participants (8 people):

| Participant | Role | Current Level | Primary Barrier (from maturity ladder) | Sprint Target Artifact |
|---|---|---|---|---|
| Brandon Gell | CTO | 2, trending 3 | Authority distribution — tools must be adopted voluntarily by GMs | Shared operational tool or infrastructure extension adopted by at least 1 GM |
| Andrey Galko | Engineering Lead | 2 | Self-enhancing bias — simplicity value may resist custom tooling | Shared engineering tool embodying simplicity (deployment, testing, or code review extension) |
| Nityesh Agarwal | Engineer, Cora | 2 | Self-enhancing bias — pre-research discipline is a Level 2 pattern | Pre-research template or Claude Code workflow extension usable by other engineers |
| Lucas Crespo | Creative Director | 2 | Identity threat (moderate) — design taste feels threatened by tooling | Design request intake or cross-product context-switching tool for the 3-person design team |
| Austin Tedesco | Head of Growth | 2 | Self-enhancing bias — AI at task level, not system level | Growth experimentation system or tool extending Montaigne for team use |
| Anukshi Mittal | Product Marketing | 2 | Self-enhancing bias — marketing AI plateaus at “faster drafts” | Marketing tool or framework adoptable by team or consulting clients |
| Rachel Braun | Podcast Producer | 2 | Opacity — podcast execution is the most manual workflow at Every | Podcast production pipeline tool (show notes, guest prep, or distribution automation) |
| Brooker Belcourt | Finance Lead | 2 | Opacity — finance precision requirements make AI delegation feel risky | Auditable finance analysis workflow with built-in verification steps |

Buddy Pairing

Each Level 2 participant is paired with a Level 3 person. Pairings are designed for maximum relevance: shared domain knowledge, complementary skills, or direct mentorship alignment.

| Level 2 Participant | Level 3 Buddy | Pairing Rationale |
|---|---|---|
| Brandon Gell (CTO) | Kieran Klaassen (GM, Cora) | Kieran invented compound engineering — the methodology Brandon’s infrastructure supports. Kieran can help Brandon build a tool GMs will voluntarily adopt because Kieran is a GM. |
| Andrey Galko (Eng Lead) | Danny Aziz (GM, Spiral) | Danny built Droid CLI from scratch and abandoned Cursor when it stopped working. He embodies “build what you need, drop what you don’t.” Andrey’s simplicity value and Danny’s radical tool evaluation are natural complements. |
| Nityesh Agarwal (Eng, Cora) | Naveen Naidu (GM, Monologue) | Naveen manages 143K lines solo with a Linear-centric workflow. He can show Nityesh how disciplined specification scales into system-level tooling. Both are engineers; the domain transfer is direct. |
| Lucas Crespo (Creative Director) | Yash Poojary (GM, Sparkle) | Yash built AgentWatch — a monitoring tool that crossed the “others use it” threshold. He can mentor Lucas through the identity-threat barrier: the tool protects design time rather than replacing taste. Yash’s learnings-after-each-push discipline also models how to document for others. |
| Austin Tedesco (Head of Growth) | Dan Shipper (CEO) | Dan built Proof and R2-C2. He can help Austin evolve Montaigne from a personal agent to a system-level growth tool. CEO pairing also signals organizational importance of growth’s Level 3 transition. |
| Anukshi Mittal (Product Marketing) | Katie Parrott (AI Editorial Lead) | Katie’s “AI-native writing is sculpture” framework is the exact model for marketing AI: shape AI output with taste rather than generating from scratch. Katie can help Anukshi move Iris from personal assistant to shared marketing tool. |
| Rachel Braun (Podcast Producer) | Natalia Quintero (Head of Consulting) | Natalia built Claudie (saves 14 hrs/week) for a role with high coordination overhead — exactly Rachel’s situation. Natalia can help Rachel identify the coordination vs. execution split and build the pipeline tool that automates the coordination portion. |
| Brooker Belcourt (Finance Lead) | Anthony Scarpulla (Social Media) | Anthony built a custom Claude Code + X API integration — a purpose-built tool from scratch. Brooker’s barrier is opacity (hard to articulate finance expertise for AI). Anthony can mentor the “just build the integration” approach: start with one specific API connection, not the whole workflow. |

Schedule

Duration: 3 days (Tuesday through Thursday)

Rationale for 3 days over 2: The maturity ladder identifies opacity and identity-threat barriers for three of the eight participants. These barriers require more incubation time than pure self-enhancing bias. Day 1 is barrier-breaking and scoping. Day 2 is building. Day 3 is polishing, adoption testing, and demo.

Day 1 — “Unblock” (Tuesday)

| Time | Activity | Purpose | Values Alignment |
|---|---|---|---|
| 9:00-9:30 | Kickoff: “The Level 3 Line” — Dan frames the sprint. Shows the maturity ladder. Makes the threshold concrete: “By Thursday, someone else uses what you built.” | Set the bar. Make it real, not abstract. | Builder credibility (we measure ourselves honestly) |
| 9:30-10:30 | Buddy Breakout 1: “What’s Your Barrier?” — Each pair has a 1-on-1 conversation. The Level 3 buddy shares what they had to overcome to cross the threshold. The Level 2 person names their specific blocker out loud. | Surface resistance explicitly. The maturity ladder identified each barrier; now the person owns it. | Play as strategy (be sincere, not serious) |
| 10:30-11:00 | Break | | |
| 11:00-12:00 | Lightning Demos — Each Level 3 buddy demos their Level 3 artifact in 5 minutes. Not a polished presentation — a live walkthrough of the actual tool/workflow. Questions encouraged. | Show, don’t tell. Make Level 3 tangible and achievable. | Builder credibility (show the real thing) |
| 12:00-1:00 | Lunch | | Play as strategy (eat together, talk freely) |
| 1:00-2:30 | Buddy Breakout 2: “Scope Your v1” — Each pair defines a single artifact with a 1-sentence description, a target adopter (specific person), and a Day 2 ship deadline. The buddy’s job is to ruthlessly narrow scope. | Prevent over-scoping. The biggest Level 2-to-3 failure mode is building something too ambitious to finish. The artifact must be shippable in one day. | Ship and iterate (v1 in one day, not a perfect tool) |
| 2:30-3:00 | Scope Shareout — Each person shares their 1-sentence artifact description and target adopter. Group gives thumbs-up or flags scope concerns. | Public commitment. Social accountability. | Generalist advantage (cross-functional feedback) |
| 3:00-5:00 | Build Session 1 — Start building. Buddy is available for questions but does not pair-program. The builder must own the artifact. | Begin the work. Having 2 hours on Day 1 prevents the “I’ll start tomorrow” trap. | Ship and iterate |

Day 2 — “Ship” (Wednesday)

| Time | Activity | Purpose | Values Alignment |
|---|---|---|---|
| 9:00-9:15 | Standup — Each person: “What I built yesterday. What I’m building today. Where I’m stuck.” 1 minute per person. | Surface blockers early. | Ship and iterate |
| 9:15-12:00 | Build Session 2 — Heads down. Buddy available on Slack/in-person for questions. One hard rule: v1 must be functionally complete by noon. Not polished, not documented, but working. | The noon deadline forces shipping behavior. If someone is still designing at 11:00, the buddy intervenes with scope cuts. | Ship and iterate (ship v1 by noon) |
| 12:00-1:00 | Working Lunch + v1 Check — Each person shows their v1 to their buddy over lunch. Buddy confirms: “This works” or “Cut X to make it work.” | Hard checkpoint. No one leaves lunch without a working artifact. | Taste over process (buddy uses judgment, not a checklist) |
| 1:00-3:00 | Adoption Testing — Each builder hands their artifact to their target adopter (the specific person identified on Day 1). The adopter attempts to use it without the builder present for the first 15 minutes. Then they reconnect and the builder fixes whatever broke. | The “stranger test” for tools. If the adopter can’t figure it out alone, the artifact needs work. This is where documentation, naming, and interface quality get real feedback. | Builder credibility (it must actually work for someone else) |
| 3:00-5:00 | Build Session 3: Iteration — Fix what broke during adoption testing. Add minimal documentation. Prepare for demo day. | Ship and iterate — respond to real user feedback. | Ship and iterate |

Day 3 — “Demo and Adopt” (Thursday)

| Time | Activity | Purpose | Values Alignment |
|---|---|---|---|
| 9:00-10:00 | Final Build Session — Last-touch fixes, documentation polish, demo prep. | | Ship and iterate |
| 10:00-10:30 | Break + Setup | | |
| 10:30-12:30 | Demo Day — Each Level 2 participant demos their artifact in a 15-minute slot. Format: (1) What it does — 1 sentence. (2) Live demo (about 7 minutes) — build/run it in front of everyone. (3) Adopter testimonial — the person who used it says what it did for them. (4) Q&A — 3 minutes. | The public demonstration creates social proof and makes Level 3 status visible. The adopter testimonial is the evidence that the threshold was crossed. | Play as strategy (demos are show-and-tell, not presentations) |
| 12:30-1:30 | Lunch + Voting — The full Every team (not just sprint participants) votes on: “Most Useful” (which artifact would you adopt?), “Most Surprising” (which artifact came from an unexpected direction?), “Most Fun” (play as strategy award). | Gamification creates energy and makes adoption desirable rather than obligatory. | Play as strategy |
| 1:30-2:30 | Retro: “What Crossed the Line” — Facilitated discussion. What made Level 3 different from Level 2? What was harder than expected? What was easier? What would you tell a Level 2 person at a client company? | Capture practitioner knowledge for Sprint 2 (consulting template). Every’s consulting credibility depends on having experienced the transition, not just theorized about it. | Builder credibility (we’ll use these stories with clients) |
| 2:30-3:00 | Level Assignment Update — Dan and Brandon confirm which participants crossed the Level 3 threshold based on demo evidence. Updated maturity ladder published internally. | Make the transition official and visible. | Taste over process (judgment call, not checkbox) |

Measurement Framework

Primary Success Metric

Reusable artifact produced and adopted by at least 1 other person.

“Adopted” means: the adopter has used the artifact for a real task (not a demo scenario) and can describe what it changed about their workflow.

Measurement Protocol

| Metric | How Measured | When Measured | Target |
|---|---|---|---|
| Artifacts completed | Count of working artifacts demoed on Day 3 | Demo Day | 8/8 (100%) |
| Artifacts adopted (Day 3) | Count of artifacts with confirmed adopter testimonial at demo | Demo Day | 8/8 (100%) |
| Artifacts adopted (Day 10) | Count of artifacts still in active use 1 week post-sprint | Day 10 follow-up check | 6/8 (75%) |
| Artifacts adopted (Day 30) | Count of artifacts still in active use 1 month post-sprint | Day 30 follow-up check | 5/8 (62.5%) |
| Cross-functional adoption | Count of artifacts adopted by someone in a different function | Demo Day | 3/8 (37.5%) |
| Level 3 confirmations | Count of participants who crossed the Level 3 threshold | Day 3 retro | 6/8 (75%) |
| Consulting stories generated | Count of participant transition stories usable in consulting | Day 3 retro | 4+ |
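The follow-up checks are light enough to tally by hand, but keeping them in one small script makes the Day 10 and Day 30 comparisons repeatable. A minimal sketch, assuming the check-in results are transcribed from the Slack polls into the structure below; the field names and example rows are hypothetical, not an existing Every system.

```python
"""Tally sprint adoption checks against the measurement-protocol targets (illustrative)."""
from dataclasses import dataclass


@dataclass
class ArtifactCheck:
    artifact: str
    builder: str
    adopter: str
    cross_functional: bool   # adopter works in a different function than the builder
    in_use_day10: bool       # still in active use at the Day 10 check
    in_use_day30: bool       # still in active use at the Day 30 check


# Hypothetical rows transcribed from the follow-up polls; one entry per artifact.
checks = [
    ArtifactCheck("pre-research template", "builder-a", "adopter-b", False, True, True),
    ArtifactCheck("design intake tool", "builder-c", "adopter-d", True, True, False),
]


def rate(flags: list[bool]) -> str:
    """Format hits as 'n/total (percent)'."""
    return f"{sum(flags)}/{len(flags)} ({sum(flags) / len(flags):.0%})"


print("Day 10 adoption:  ", rate([c.in_use_day10 for c in checks]), "(target 6/8, 75%)")
print("Day 30 adoption:  ", rate([c.in_use_day30 for c in checks]), "(target 5/8, 62.5%)")
print("Cross-functional: ", rate([c.cross_functional for c in checks]), "(target 3/8, 37.5%)")
```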

Activity-Based Measurement (Leading Indicators)

These are measured during the sprint, not after. They predict whether the primary metric will be hit.

| Indicator | When Checked | Intervention if Missing |
|---|---|---|
| Artifact scoped and 1-sentence description written | Day 1, 3:00 PM | Buddy escalates to Dan for scope help |
| Building started (not still designing/researching) | Day 1, 5:00 PM | Buddy forces a “just start” session — build the smallest possible version |
| v1 functionally complete | Day 2, 12:00 PM (hard deadline) | Buddy applies scope cuts. If artifact is >50% done, cut features. If <50% done, pivot to a simpler artifact entirely |
| Adopter has attempted to use artifact | Day 2, 3:00 PM | Builder hands artifact to adopter immediately. No more building until adoption test happens |
| Adoption issues identified and fixes started | Day 2, 5:00 PM | Builder prioritizes adoption-blocking issues over nice-to-haves |

Leadership Strategy

Dan Shipper (CEO) — Role During Sprint

  1. Day 1 Kickoff: Frame the sprint as a builder credibility exercise, not a management initiative. “We tell consulting clients to build AI tools. Eight of us haven’t crossed that line yet. Let’s fix that this week.” The framing must be honest and non-judgmental — Level 2 is already exceptional relative to most companies.

  2. During Build Sessions: Available but not hovering. Dan is Austin’s buddy, so he focuses there. He should not do a walk-around checking on everyone — that turns play into surveillance.

  3. Demo Day: Attend and participate enthusiastically. Ask genuine questions. Vote in the awards. Dan’s visible excitement about what people built is the highest-impact cultural signal available.

  4. Post-Sprint: Update the maturity ladder publicly. Reference sprint artifacts in future articles and consulting conversations. This completes the builder credibility loop: “Our CTO built X during our internal sprint” is a consulting proof point.

Brandon Gell (CTO) — Role During Sprint

Brandon is both a participant and a technical leader. This dual role is deliberate: having the CTO participate (not just sponsor) demonstrates that Level 3 is a stretch for everyone, not just junior team members. Brandon should be visibly working on his own artifact, asking his buddy (Kieran) for help, and struggling publicly. This normalizes the difficulty.

Buddy Responsibilities

Each Level 3 buddy has four responsibilities:

  1. Scope enforcer: Their primary job is preventing over-scoping. The most common Level 2-to-3 failure is building something too ambitious. The buddy’s mantra: “What’s the smallest version that someone else would actually use?”

  2. Barrier coach: Each participant has a specific barrier (identified in the maturity ladder). The buddy names the barrier and helps navigate it. For identity-threat barriers (Lucas), the reframe is: “The tool protects your time for taste work.” For self-enhancing bias (Andrey, Nityesh, Austin, Anukshi), the reframe is: “The tool changes the system, not just the speed.” For opacity barriers (Rachel, Brooker), the reframe is: “Start with one specific step, not the whole workflow.”

  3. Available, not hovering: Buddies respond to questions within 30 minutes but do not pair-program or co-build. The artifact must be the participant’s own work — otherwise it doesn’t feel like a real Level 3 achievement.

  4. Adoption broker: If the participant’s target adopter isn’t available or isn’t the right fit, the buddy helps find an alternative adopter from the team.


Follow-Up Protocol

| When | Action | Owner |
|---|---|---|
| Day 3, end of sprint | Maturity ladder updated with confirmed Level 3 transitions | Dan + Brandon |
| Day 7 (Monday) | Each participant posts a 3-sentence “what I built, who uses it, what I learned” in #general Slack | Each participant |
| Day 10 | Adoption check: is each artifact still in use? Quick Slack poll to adopters | Brandon |
| Day 14 | Sprint retro notes compiled into consulting template (feeds Sprint 2) | Natalia (Head of Consulting) |
| Day 30 | Second adoption check. Artifacts still in use are added to the Monthly Workflow Show-and-Tell rotation | Dan |
| Day 30 | Participants who did not cross the Level 3 threshold get a 1-on-1 with their buddy to redesign the approach | Respective buddy |

Risk Mitigation

| Risk | Likelihood | Mitigation |
|---|---|---|
| Participant builds something nobody actually wants to use | MEDIUM | Day 1 scoping requires naming a specific target adopter. Day 2 adoption testing catches this early. |
| Identity-threat or opacity participants (Lucas, Brooker) disengage | LOW-MEDIUM | Buddy pairing addresses this directly. Sprint framing emphasizes “tools that protect your time for judgment work,” not “automate your expertise.” |
| Sprint feels like management surveillance | LOW | Dan participates as a buddy, not a reviewer. No one “grades” anyone. The demo is show-and-tell, not a performance evaluation. Awards are playful, not ranked. |
| Artifacts are one-time demos, not real tools | MEDIUM | Day 10 and Day 30 follow-up checks. The measurement framework tracks sustained adoption, not just demo-day performance. |
| Level 3 buddies are too busy to engage | LOW | Sprint is only 3 days. Buddy time commitment is ~4 hours total across 3 days. Buddies are not building — they are coaching. |


Sprint 2: Consulting Client Template — “AI Adoption Sprint”

Premise

Every Inc. has run 8 AI-focused camps with 5,395 participants. This sprint template extends that camp format into a structured 2-day adoption experience designed to move client teams from Level 1-2 to Level 2-3 on the maturity ladder. The differentiator from generic AI training: Every practitioners lead by showing their own tools first (builder credibility), then participants build for their actual jobs (not toy examples).

Sprint 1 (internal) feeds Sprint 2 directly: the transition stories, barrier-navigation techniques, and buddy-pairing methods tested internally become the consulting methodology delivered to clients.


Objectives

| # | Objective | Measurable Outcome |
|---|---|---|
| 1 | Each participant builds one AI workflow for their actual job | Working workflow demonstrated live on Day 2 |
| 2 | Workflow adopted into daily practice within 1 week | Participant confirms active use in Day 7 follow-up |
| 3 | Client leadership sees tangible ROI | At least 2 workflows produce measurable time savings identified during the sprint |
| 4 | Every demonstrates builder credibility | Every consultants show their own tools before asking participants to build |

Target Client Profile

| Dimension | Specification |
|---|---|
| Company size | 20-200 people (small enough for sprint-level impact, large enough to have workflow diversity) |
| Current AI maturity | Majority of participants at Level 1 (Capable) or Level 2 (Adoptive). Some may be at Level 0 (pre-work required). |
| Industry | Finance or tech (Every’s consulting focus areas, where builder credibility is strongest) |
| Sprint cohort size | 8-16 participants per sprint. Larger groups run parallel cohorts with separate Every consultants. |
| Participant selection | Not volunteers — leadership selects people whose workflows would benefit most. Must include at least 1 senior leader as participant (not observer). |
| Pre-qualification | Each participant completes a 15-minute pre-sprint workflow inventory identifying their highest-frequency AI-delegable task |

Participant Selection Protocol

The client selects participants. Every provides the selection criteria and conducts a pre-sprint assessment.

Selection Criteria (provided to client):

  1. Workflow clarity: The participant must be able to describe at least one repeating workflow in their job (weekly report, data analysis, content creation, client communication, etc.). People in purely ad-hoc or creative roles are harder to sprint with — they benefit more from extended coaching.

  2. Tool access: The participant must have access to at least one AI tool (ChatGPT, Claude, Copilot, etc.) and have used it at least once. Pure Level 0 participants need a pre-sprint onboarding session (see Pre-Work below).

  3. Seniority mix: The cohort should include a mix of individual contributors and at least one manager/leader. The leader’s participation — as a builder, not an observer — signals organizational commitment and prevents the “management made us do this” dynamic.

  4. Cross-functional representation: Include people from at least 2 different functions. Cross-functional buddy pairing and demo day are more valuable when workflows are diverse.

Pre-Sprint Assessment (conducted by Every):

An Every consultant conducts a 20-minute 1-on-1 with each participant before the sprint:

  • Current AI tool usage (frequency, tools, tasks)
  • Highest-frequency repeating workflow
  • Attitude toward AI (enthusiasm, skepticism, fear — all valid)
  • Specific barrier type (self-enhancing bias, identity threat, opacity, authority threat)
  • Maturity level assignment (Level 0, 1, or 2)

This assessment maps directly to the maturity ladder’s barrier taxonomy and enables targeted buddy pairing.
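Because the assessment feeds buddy pairing directly, it helps to capture every 1-on-1 in the same structured shape. A minimal sketch of what that record could look like, using the barrier taxonomy above; the field names, example values, and pairing hints are illustrative assumptions, not an existing Every tool.

```python
"""Illustrative pre-sprint assessment record; field names and values are assumptions."""
from dataclasses import dataclass, field
from enum import Enum


class Barrier(Enum):
    SELF_ENHANCING_BIAS = "self-enhancing bias"
    IDENTITY_THREAT = "identity threat"
    OPACITY = "opacity"
    AUTHORITY_THREAT = "authority threat"


@dataclass
class Assessment:
    participant: str
    role: str
    maturity_level: int                       # 0, 1, or 2 at intake
    barrier: Barrier
    top_workflow: str                         # highest-frequency repeating workflow
    ai_tools_used: list[str] = field(default_factory=list)


def buddy_profile(a: Assessment) -> str:
    """Map a barrier to the consultant profile described in the pairing-logic table below."""
    return {
        Barrier.SELF_ENHANCING_BIAS: "consultant whose personal tool became a shared one",
        Barrier.IDENTITY_THREAT: "consultant who navigated the same expertise tension",
        Barrier.OPACITY: "consultant strong in specification writing (stranger test)",
        Barrier.AUTHORITY_THREAT: "senior consultant, paired senior-to-senior",
    }[a.barrier]


# Hypothetical example record.
example = Assessment("participant-x", "financial analyst", 1,
                     Barrier.OPACITY, "weekly variance report", ["ChatGPT"])
print(buddy_profile(example))
```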


Buddy Pairing

Each client participant is paired with an Every consultant. The consultant-to-participant ratio is 1:2 (each Every consultant buddies with 2 participants).

Pairing Logic:

| Participant Barrier | Every Consultant Match | Rationale |
|---|---|---|
| Self-enhancing bias (“AI makes me faster but I haven’t changed my workflow”) | Every consultant who built a personal tool that evolved into a shared one (e.g., Natalia/Claudie story) | Show the transition from “faster at my job” to “changed how the job works” |
| Identity threat (“AI threatens my expertise”) | Every consultant who navigated the same tension (e.g., Lucas/design, Katie/editorial) | Peer credibility from someone who felt the same threat and resolved it |
| Opacity (“I can’t articulate my expertise for AI”) | Every consultant experienced in specification writing (use the specification-writer skill’s stranger test framework) | Provide structured methods for encoding tacit knowledge |
| Authority threat (“AI reduces my influence”) | Senior Every consultant (Dan, Natalia, Brandon) | Senior-to-senior pairing; address the political dimension directly |
| Level 0 (no AI usage) | Most patient Every consultant with strong onboarding skills | This person needs fundamentals, not advanced methodology |

Schedule

Duration: 2 days (Every’s standard camp format)

Rationale for 2 days over 3: Client teams have less flexibility than internal teams. Two full days is the maximum most organizations will commit for an offsite-style sprint. The pre-sprint assessment compensates for the shorter duration by scoping each participant’s artifact before Day 1.

Pre-Work (1-2 Weeks Before Sprint)

| Activity | Owner | Purpose |
|---|---|---|
| Participant selection using Every’s criteria | Client leadership + Every consultant | Ensure right people are in the room |
| Pre-sprint 1-on-1 assessment (20 min each) | Every consultant | Maturity assessment, barrier identification, workflow inventory |
| Workflow documentation template | Each participant | Participant writes a 1-page description of their highest-frequency workflow: steps, tools, time spent, pain points |
| AI tool access verification | Client IT + Every consultant | Confirm every participant has a working AI tool (Claude, ChatGPT, etc.) on their laptop, logged in, with appropriate permissions |
| Level 0 onboarding session (if needed) | Every consultant | 1-hour session for Level 0 participants: basic AI tool usage, prompt writing fundamentals, Every’s “AI as a tool” framing |

Day 1 — “See It, Then Scope It”

| Time | Activity | Purpose | Every’s Role |
|---|---|---|---|
| 9:00-9:30 | Kickoff: “Builder Credibility” — Every consultant opens with a live demo of their own AI workflow. Not slides. Not theory. A real tool doing real work. “This is how we actually work at Every. Today, you build yours.” | Establish credibility and set the tone. The client sees that Every practices what it preaches. | Dan or lead Every consultant presents. Shows Proof, compound engineering, or another internal tool in live use. |
| 9:30-10:15 | “Show Your Workflow” Round — 3-4 Every consultants each show a different personal AI workflow in 10 minutes. Emphasis on: what it does, how long it took to build, what it replaced, what still requires human judgment. | Give participants multiple models of what “AI workflow” means. Prevent the assumption that “AI workflow” means “chatbot.” | Every consultants demo real artifacts: Claudie (project management), Droid (CLI automation), AgentWatch (monitoring), compound engineering (software development). |
| 10:15-10:30 | Break | | |
| 10:30-11:30 | Buddy Breakout 1: “Your Workflow, Decoded” — Each participant meets their Every buddy. They walk through the workflow documentation template the participant prepared. The buddy helps identify the highest-impact AI intervention point — not the whole workflow, just one step. | Scope ruthlessly. The single biggest risk is participants trying to automate their entire job in 2 days. The buddy finds the one thing that will make the biggest difference. | Every consultant applies the specification-writer skill’s stranger test: “Could someone with zero context evaluate this workflow’s output?” |
| 11:30-12:00 | Artifact Definition — Each participant writes a 1-sentence artifact description: “I am building [X] that will [Y] for [specific task] so that [measurable outcome].” Buddy approves. | Public commitment to a concrete deliverable. | Every consultant ensures scope is achievable in ~4 hours of build time. |
| 12:00-1:00 | Lunch | | Every consultants eat with participants. Informal knowledge transfer. |
| 1:00-1:30 | “How to Build” Workshop — 30-minute practical session: prompt engineering for workflows, chaining prompts, using AI tools for structured output, saving and reusing prompts. Taught at the level of the audience. (A minimal chained-prompt sketch follows this table.) | Bridge the “I want to build this” to “I know how to start” gap. Many Level 1 participants know how to use AI but not how to design a reusable workflow. | Every consultant teaches from their own building experience. Live-codes or live-prompts, not slides. |
| 1:30-4:30 | Build Session 1 — Three hours of focused building. Buddy available for questions. One hard rule: by 4:30, there must be something that runs. It doesn’t have to be good. It has to work once. | Force a working prototype. The biggest failure mode is participants spending all Day 1 “planning” and all Day 2 panicking. | Every consultant checks in at 2:30 (halfway). If a participant doesn’t have a working prototype started, the consultant helps them simplify aggressively. |
| 4:30-5:00 | Day 1 Closeout: “Show Your Prototype” — Each participant shows their work-in-progress to 1-2 other participants (not a formal demo, just “look at this”). Buddy confirms prototype exists. | Social accountability. Participants who show something feel committed. Participants who see others’ work feel motivated. | Every consultants facilitate small-group showings. No one presents to the whole room — it’s too early and too vulnerable. |
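The 1:00 “How to Build” session is the most hands-on block of Day 1. Its core move, chaining prompts so the output of one step feeds the next and saving the chain so it can be rerun, fits in a few lines. Below is a hedged sketch under assumed conditions: it uses the Anthropic Python SDK with a placeholder model name, and the two prompts and the weekly-update use case are invented for illustration, not Every’s actual workshop material.

```python
"""Two-step prompt chain: extract structured facts, then draft from them (illustrative)."""
import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY in the environment

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder; substitute whichever model the client has access to


def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text


def weekly_update(raw_notes: str) -> str:
    # Step 1: pull structured facts out of messy notes.
    facts = ask(
        "List every decision, owner, and deadline in these notes as bullet points:\n\n" + raw_notes
    )
    # Step 2: draft the reusable output from the structured intermediate, not the raw notes.
    return ask(
        "Write a 5-sentence weekly status update from these bullets, most important decision first:\n\n"
        + facts
    )


if __name__ == "__main__":
    with open("notes.txt", encoding="utf-8") as f:  # hypothetical input file
        print(weekly_update(f.read()))
```

Saving a chain like this in a shared repo, with the prompts under version control, is what turns “I prompted well once” into a workflow a colleague can rerun.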

Day 2 — “Build It, Ship It, Demo It”

| Time | Activity | Purpose | Every’s Role |
|---|---|---|---|
| 9:00-9:15 | Standup — Each person: 30 seconds. “What works. What doesn’t. What I need.” | Surface blockers. | Every consultant takes notes on who is stuck and where. |
| 9:15-12:00 | Build Session 2 — Finish the artifact. Hard deadline: working artifact by noon. | Same forcing function as Sprint 1. Noon is the ship deadline. | Every consultant applies scope cuts for anyone not on track by 10:30. |
| 12:00-12:30 | Adoption Testing — Each participant hands their artifact to one other participant (cross-buddy pairing: you test someone else’s workflow, they test yours). 15 minutes of solo use, then 15 minutes of feedback. | The “stranger test” for workflows. If someone else can’t use it, it’s not reusable yet. | Every consultants facilitate the swap and observe — they take notes on what confuses the adopter, which feeds the afternoon iteration. |
| 12:30-1:30 | Working Lunch + Iteration — Fix what broke during adoption testing. | Respond to real feedback. | Every consultants help prioritize fixes. |
| 1:30-3:00 | Demo Prep + Final Build — Prepare a 5-minute demo. Format: (1) “Before AI” — describe the old workflow in 1 sentence. (2) “After AI” — live demo of the new workflow. (3) “What changed” — time saved, quality improved, or capability added. (4) “What’s still human” — what judgment remains yours. | The “what’s still human” element is critical. It addresses identity threat directly: the demo explicitly honors the human judgment that AI does not replace. | Every consultants coach demo format. The “what’s still human” section is non-negotiable — it reframes AI as augmentation, not replacement. |
| 3:00-4:30 | Demo Day — All participants demo to the full group including client leadership. 5 minutes per person + 2 minutes Q&A. | Public demonstration of capability. Leadership sees concrete ROI. Participants see the breadth of what their peers built. | Every lead consultant MCs. Keeps energy high, asks good questions, connects dots between demos. |
| 4:30-5:00 | Closeout: “Your Next 7 Days” — Each participant writes a 1-sentence commitment: “In the next 7 days, I will use [artifact] for [specific task] at least [N] times.” Buddy signs off. Every consultant collects commitments. | Convert sprint energy into sustained behavior. The written commitment with a specific number creates accountability. | Every consultant explains the Day 7 follow-up: “We’re going to check in. Not to judge — to help. If it’s not working, we’ll help you fix it.” |

Measurement Framework

Primary Success Metric

Working AI workflow demonstrated at demo day, adopted into daily practice within 1 week.

“Adopted” means: the participant uses the workflow for real work (not a demo) at least twice in the 7 days after the sprint.

Sprint Metrics (Day 2)

| Metric | How Measured | Target |
|---|---|---|
| Artifacts completed | Count of working demos on Day 2 | 100% of participants |
| Adoption test passed | Count of artifacts successfully used by another participant during Day 2 adoption testing | 80% of participants |
| Time savings identified | Count of artifacts with a quantified time-saving estimate during demo | 75% of participants |
| Leadership engagement | Client leadership asks questions during demo day, expresses intent to continue | Qualitative — captured in post-sprint report |
| Recommendation score (NPS question) | Post-sprint survey: “How likely are you to recommend this sprint to a colleague?” | 8+ average response (on the 0-10 scale) |

Follow-Up Metrics

| Metric | When Measured | How Measured | Target |
|---|---|---|---|
| Day 7 adoption | 1 week post-sprint | Every consultant contacts each participant: “Have you used your workflow this week? How many times?” | 80% of participants used workflow at least twice |
| Day 7 iteration | 1 week post-sprint | “Have you modified or improved your workflow since the sprint?” | 40% of participants have iterated |
| Day 30 adoption | 1 month post-sprint | Client leadership reports or participant survey | 60% of participants still using workflow |
| Day 30 propagation | 1 month post-sprint | “Has anyone else on your team started using your workflow or built their own?” | 25% of participants’ workflows adopted by a colleague |
| Maturity level change | 1 month post-sprint | Every consultant re-assesses maturity level for each participant | 60% of participants moved up at least 1 level |

Leadership Strategy

For Client Leadership (Before Sprint)

Every consultant prepares client leadership with three messages:

  1. “You participate, you don’t observe.” At least one senior leader must go through the sprint as a builder. This is non-negotiable for engagement acceptance. When the VP builds a workflow alongside the analyst, it signals that AI adoption is a company priority, not a training checkbox.

  2. “Expect rough edges.” The artifacts built in 2 days will not be polished. They are v1 prototypes. The value is not the artifact itself but the capability the person developed by building it. Leadership should celebrate the building, not critique the output.

  3. “Your role at demo day is to ask genuine questions.” Not evaluative questions (“Is this good enough?”) but curious questions (“How did you figure out that prompt?” “What surprised you?”). The demo day tone should match Every’s “be sincere, not serious” culture.

For Every Consultants (During Sprint)

  1. Show your own tools first, always. Every demo, every teaching moment, every “here’s how to do that” starts with “here’s how I actually do it at Every.” This is the builder credibility value in action and the single biggest differentiator from generic AI training.

  2. Name the barrier. Every pre-sprint assessment identifies each participant’s barrier type. The consultant addresses it directly: “I notice you’re concerned about [X]. Here’s how I navigated that same concern.” Do not ignore resistance — it is diagnostic information.

  3. Scope is your weapon. The number one sprint failure mode is over-scoping. Your job is to find the smallest useful artifact that crosses the threshold from “I use AI” to “I built an AI workflow.” If a participant wants to automate their entire reporting pipeline, help them automate one report’s data-pull step first.

  4. “What’s still human” is mandatory. Every demo must include what remains human judgment. This is not a feel-good addition — it is a precise reframe that addresses identity threat at the structural level. Participants who can articulate what remains human are more likely to sustain adoption because they are not threatened by it.


Follow-Up Protocol

| When | Action | Owner | Deliverable |
|---|---|---|---|
| Day 2, end of sprint | Post-sprint report drafted | Every lead consultant | 2-page summary: participant count, artifacts built, key themes, leadership observations, photos/screenshots |
| Day 3 | Post-sprint survey sent | Every ops | NPS + qualitative feedback (3 questions: what worked, what didn’t, what would you change) |
| Day 7 | Adoption check-in | Every consultant (each contacts their 2 buddied participants) | Adoption data: usage frequency, modifications, blockers |
| Day 7 | Blocker resolution session (if needed) | Every consultant (30-minute call with any participant whose workflow isn’t sticking) | Revised workflow or scope adjustment |
| Day 14 | Summary report to client leadership | Every lead consultant | Adoption data, ROI estimates, recommendation for next cohort or deeper engagement |
| Day 30 | Final adoption assessment | Every consultant | Maturity level re-assessment, propagation data, case study draft (with client permission) |
| Day 30 | Case study | Every content team | Publishable case study for Every’s content and consulting pipeline (with client approval) — closes the builder credibility loop |

Pricing and Packaging Guidance

This sprint template is designed to fit Every’s existing camp format and consulting model:

  • Standard Sprint: 2 days, up to 12 participants, 6 Every consultants (1:2 buddy ratio, per the pairing model above), includes pre-sprint assessment and 30-day follow-up. Priced as a premium consulting engagement.
  • Extended Sprint: 3 days (adds a Day 3 iteration day for clients with more Level 0-1 participants), up to 16 participants, 8 Every consultants.
  • Sprint Series: 3 sprints over 3 months, each with a different cohort. Includes cross-cohort demo day at the end. Designed for organizations rolling out AI adoption department by department.

Every’s 8-camp, 5,395-participant track record is the proof point. This sprint is a more intensive, higher-touch version of the camp format — smaller cohort, personalized buddy pairing, sustained follow-up.


Appendix: Sprint 1 to Sprint 2 Knowledge Transfer

Sprint 1 (internal) generates the following assets that directly feed Sprint 2 (consulting):

| Sprint 1 Output | Sprint 2 Usage |
|---|---|
| 8 participant transition stories (barrier navigated, artifact built, adoption achieved) | Every consultants share these stories during Day 1 “Show Your Workflow” round and buddy breakouts |
| Buddy pairing effectiveness data (which pairings worked best, why) | Refines the buddy matching logic for client sprints |
| Barrier-specific coaching techniques (what reframes worked for identity threat, opacity, self-enhancing bias) | Every consultants use tested reframes, not theoretical ones |
| Scope-cutting patterns (how v1 artifacts were narrowed to shippable scope) | Every consultants apply proven scope-cutting heuristics during Day 1 scoping |
| Demo day format (what worked: adopter testimonials, live demos, awards) | Client demo day uses the same format, adapted for client culture |
| Day 10 and Day 30 adoption data from Sprint 1 | Establishes baseline expectations for client adoption rates; Every can say “In our own sprint, X% sustained adoption at 30 days” |
| Retro insights (“What crossed the line”) | Directly informs the “How to Build” workshop content on Day 1 of Sprint 2 |

This knowledge transfer loop is the structural embodiment of builder credibility: Every runs the sprint on itself first, learns what works, then brings that tested methodology to clients.


Generated by adoption-sprint-designer skill, ai-first-org-design-kit. Input data: maturity-ladder-2026-04-01-1440.md, VALUES.md, sprint brief. Next recommended action: Run Sprint 1 internally, then use retro outputs to finalize Sprint 2 template for first consulting client.