Role-Value Map — Every Inc (Every Media, Inc.)

Date: 2026-04-01
Skill: role-value-mapper
Depends on: coordination-audit (2026-04-01), org-genome-builder


Model: Three-Variable Decomposition

Every role is decomposed into four time-allocation modes. The first three (Specification, Coordination Design, Execution Oversight) are the variables of the Three-Variable Model; System Evolution is a fourth mode that captures process improvement and compounding:

| Mode | Target (AI-First Org) | Description |
|---|---|---|
| Specification | 40-50% | Defining intent, success criteria, quality standards, judgment boundaries |
| Coordination Design | 15-20% | Designing meetings, approvals, handoffs, alignment rituals |
| Execution Oversight | 10-15% | Monitoring AI/agent output, intervening on exceptions |
| System Evolution | 15-20% | Improving processes, encoding new patterns, compounding knowledge |

Every’s current aggregate (from the coordination audit): Specification 40% / Coordination 27% / Execution 33%. The role map below targets the AI-first allocation per role; per-activity tables show the three core variables (Spec/Coord/Exec), and each role’s aggregate target breaks out System Evolution separately.
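
The arithmetic behind a role's aggregate can be sketched as a time-weighted average of its per-activity split. A minimal Python sketch; the time weights are illustrative assumptions, since the role tables below list only each activity's Spec/Coord/Exec percentages:

```python
# Minimal sketch: a role's aggregate allocation as a time-weighted average of
# its per-activity Spec/Coord/Exec split. Time weights per activity are
# hypothetical assumptions; the role tables in this document list only the
# per-activity percentages.

def aggregate(activities):
    """activities: list of (time_weight, spec_pct, coord_pct, exec_pct)."""
    total = sum(w for w, *_ in activities)
    agg = [0.0, 0.0, 0.0]
    for w, *split in activities:
        for i, pct in enumerate(split):
            agg[i] += (w / total) * pct
    return {"spec": round(agg[0], 1), "coord": round(agg[1], 1), "exec": round(agg[2], 1)}

# Two equally weighted activities: (80/15/5) and (40/45/15)
aggregate([(1, 80, 15, 5), (1, 40, 45, 15)])
# -> {"spec": 60.0, "coord": 30.0, "exec": 10.0}
```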


Structural Context: The Two-Slice Team

Every operates as a “Two-Slice Team” — a ~20-person organization that delivers the output of a much larger company by treating AI agents as the execution layer. The organizational model has two “slices”:

  1. Specification Slice (Humans): Every human defines what should exist, what quality looks like, and what judgment calls require human taste.
  2. Execution Slice (AI + Agents): Claude Code, compound engineering agents, Claudie, Descript, custom API integrations, and 14-agent code review pipelines produce the actual artifacts.

Roles are designed around specification responsibility — what each person uniquely defines that others (human or AI) cannot. Job titles are secondary to value flows.


Role Decompositions


1. CEO / Allocator-in-Chief — Dan Shipper

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Company direction & strategy | 80% | 15% | 5% | AI drafts scenario analyses, market maps; Dan sets direction |
| Chain of Thought column writing | 60% | 5% | 35% | AI assists drafting, research synthesis; Dan provides thesis and voice |
| Proof (product) building | 50% | 10% | 40% | Compound engineering — 99% AI-written code; Dan specifies features, reviews output |
| Investor relations & fundraising | 40% | 45% | 15% | AI drafts updates and decks; Dan owns relationships and narrative |
| AI & I podcast (host) | 50% | 20% | 30% | AI handles research prep, show notes; Dan drives conversation direction |
| Hiring & team design | 70% | 20% | 10% | AI screens, summarizes; Dan makes taste-based hiring calls |
| Cross-function resource allocation | 65% | 25% | 10% | AI surfaces dashboards and signals; Dan decides allocation tradeoffs |

Aggregate target: Specification 60% / Coordination 18% / Execution 15% / System Evolution 7%

Specification responsibility: Company narrative (“what is Every and where is it going”), allocation of human attention across media/software/consulting, hiring taste bar, and the meta-question of what AI-first organizational design looks like. Dan’s weekly column IS specification — it forces him to articulate the thesis the company operates from.

Coordination responsibility: Runs weekly all-hands/demo day (cultural ritual — never encode). Final approval on major hires, product launches, and consulting engagements. Designs the coordination architecture itself — decides what meetings exist and which get eliminated.

Execution oversight: Monitors Proof product metrics, reviews his own column quality (self-editing with AI assistance), spot-checks consulting deliverables.

Unique judgment: Narrative taste — the ability to identify “what’s actually happening with AI” vs. hype. The allocator’s eye: knowing when to add a product, kill a product, hire a person, or let AI handle it. The integration insight that comes from being simultaneously a writer, builder, and CEO. No one else at Every holds all three perspectives.

AI amplification: R2-C2 (Dan’s named AI agent). Compound engineering for Proof. AI-assisted writing workflow for Chain of Thought. The CEO role is the highest-impact specification role because every specification Dan produces cascades through the entire organization.


2. CTO — Brandon Gell

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Technical direction & architecture | 75% | 15% | 10% | AI generates architecture options; Brandon selects and sets standards |
| Studio oversight (4 GMs) | 40% | 35% | 25% | AI monitors build health, test coverage; Brandon reviews architectural decisions |
| Operations systematization | 55% | 25% | 20% | AI implements automation; Brandon defines what to systematize |
| Compound engineering methodology | 70% | 10% | 20% | Brandon evolves the methodology; AI executes and documents patterns |
| Infrastructure & DevOps | 30% | 20% | 50% | AI handles routine infra; Brandon handles novel scaling problems |
| Cross-GM knowledge transfer design | 60% | 30% | 10% | AI aggregates learnings; Brandon designs sharing mechanisms |

Aggregate target: Specification 55% / Coordination 20% / Execution 15% / System Evolution 10%

Specification responsibility: Technical quality bar across all four products. The compound engineering methodology — Plan/Work/Review/Compound — was co-architected by Dan (philosophy and article series) and systematized into the open-source plugin by Kieran (definitive guide, implementation). Brandon maintains and evolves it as operational infrastructure. Defines what “good architecture” looks like for a solo-GM product. Sets the 14-agent code review configuration and standards.

Coordination responsibility: Primary technical coordinator across GMs who otherwise operate autonomously. Designs the coordination architecture for engineering: what’s shared (infra, design system, code review agents) vs. what’s GM-autonomous (product decisions, feature prioritization). Manages design team allocation with Lucas.

Execution oversight: Reviews architectural decisions across products. Monitors the compound engineering pipeline’s health. Intervenes when a GM’s technical choices create cross-product risk.

Unique judgment: Systems taste — knowing when a technical approach is “too clever” or “not clever enough” for a 1-GM product. The former-founder perspective: understanding the full lifecycle of a technical decision from architecture to maintenance to scaling. The “be sincere, not serious” ethos applied to engineering culture.

AI amplification: Milo (Brandon’s named AI agent). Oversees the compound engineering plugin (12K+ GitHub stars) that IS the execution layer for all products. Brandon’s specification quality directly determines the quality of AI-generated code across the entire product portfolio.


3. Editor in Chief / Quality Architect — Kate Lee

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Editorial taste standard-setting | 90% | 5% | 5% | Kate defines the standard; it cannot be delegated |
| Three rigor tests application | 70% | 15% | 15% | AI pre-screens for obvious failures; Kate makes final judgment |
| Writer development & feedback | 50% | 30% | 20% | AI drafts feedback templates; Kate provides taste-specific guidance |
| Article final review | 60% | 20% | 20% | AI flags structural issues; Kate judges voice, originality, insight |
| Editorial strategy & calendar | 65% | 25% | 10% | AI drafts calendar proposals; Kate decides what Every should say and when |
| Brand voice governance | 85% | 10% | 5% | Kate is the voice; AI can check consistency but not define it |

Aggregate target: Specification 70% / Coordination 15% / Execution 5% / System Evolution 10%

Specification responsibility: The highest specification-concentration role at Every. Kate defines what “good” means for Every’s editorial output — the three rigor tests are her encoded specification:

  1. Does the piece have a genuine thesis? (Not a recap, not a listicle — a real argument.)
  2. Does the writer’s voice come through? (Not AI-generic, not corporate, authentically theirs.)
  3. Would this make someone smarter about how AI changes their work? (The Every mission test.)

These tests are the quality gate for 272 articles/year across 40 writers.
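
The split between what AI can pre-screen and what stays with Kate can be made concrete. A hypothetical sketch: AI flags only obvious failures of the three tests, and anything that passes the pre-screen still requires Kate's judgment. The field names and heuristics are assumptions, not Every's actual pipeline:

```python
# Sketch of an AI pre-screen for the three rigor tests. The tests themselves
# are Kate's human judgment; a pre-screen can only catch obvious failures
# before a piece reaches her. All heuristics below are hypothetical.

OBVIOUS_FAILURES = {
    "thesis": lambda draft: draft["format"] in {"recap", "listicle"},
    "voice": lambda draft: draft["ai_generic_score"] > 0.8,  # from AI-tells detection
    "mission": lambda draft: not draft["about_ai_and_work"],
}

def prescreen(draft):
    """Return the rigor tests a draft obviously fails; empty set -> human review."""
    return {name for name, fails in OBVIOUS_FAILURES.items() if fails(draft)}

draft = {"format": "listicle", "ai_generic_score": 0.3, "about_ai_and_work": True}
prescreen(draft)  # -> {"thesis"}
```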

Coordination responsibility: Participates in editorial standups (pipeline management). Coordinates with Dan on editorial strategy alignment with company direction. Manages the writer-editor relationship that IS Every’s editorial culture.

Execution oversight: Final review of articles before publication. Spot-checks for AI tells that Katie Parrott’s encoded criteria might miss. Monitors overall editorial quality trends.

Unique judgment: Editorial taste at the Stripe Press level. The ability to read a piece and know whether it “sounds like Every” — a judgment that encompasses voice, intellectual rigor, originality, and practical value simultaneously. This is the role that most purely embodies Value #1 (Taste Over Process). Kate’s taste cannot be encoded into a checklist; the three rigor tests are necessary but not sufficient approximations.

AI amplification: AI handles pre-screening, structural analysis, fact-checking, and AI-tells detection. This frees Kate to spend nearly all her time on the irreducible judgment calls — is this piece genuinely good? Kate is the throughput bottleneck identified in the audit; AI amplification is about expanding her specification bandwidth, not replacing her judgment.


4. Managing Editor — Eleanor Warnock

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Pipeline coordination & triage | 25% | 55% | 20% | AI automates Kanban tracking, deadline nudges, bottleneck alerts |
| First-pass editorial review | 55% | 15% | 30% | AI pre-screens; Eleanor applies first-pass quality judgment |
| Writer development | 50% | 25% | 25% | AI tracks writer patterns; Eleanor provides developmental feedback |
| Editorial scheduling & logistics | 15% | 60% | 25% | AI handles scheduling automation; Eleanor manages exceptions |
| Kate’s bandwidth management | 20% | 60% | 20% | Eleanor gates what reaches Kate — protecting the bottleneck |

Aggregate target: Specification 35% / Coordination 40% / Execution 10% / System Evolution 15%

Specification responsibility: Defines the triage criteria — what gets fast-tracked to Kate, what needs revision, what gets killed. Specifies the editorial pipeline’s operational rules. Develops writers by specifying what “good enough for first pass” looks like, calibrated to Kate’s standards.
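
As an illustration of what encoded triage criteria could look like (the rules and thresholds below are hypothetical assumptions, not Eleanor's actual criteria):

```python
# Hypothetical sketch of triage as explicit rules: fast-track to Kate, send
# back for revision, or kill. Thresholds are illustrative assumptions only.

def triage(draft):
    """draft: {'rigor_failures': int, 'revision_round': int} -> decision."""
    if draft["rigor_failures"] >= 2:
        return "kill"            # multiple rigor-test failures: not salvageable
    if draft["rigor_failures"] == 1 or draft["revision_round"] == 0:
        return "revise"          # fixable, or hasn't been through a pass yet
    return "fast-track"          # clean and already revised: worth Kate's time

triage({"rigor_failures": 0, "revision_round": 1})  # -> "fast-track"
```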

Coordination responsibility: The primary coordination role in editorial. Eleanor is the pipeline’s operating system — she designs the flow of work from pitch to publication. This is the role with the highest encodable coordination (audit identified ~25 hrs/month of pipeline management as encodable). Target: encode logistics, preserve the writer-development conversations.

Execution oversight: First-pass review of all incoming drafts. Monitors pipeline health metrics (throughput, time-to-publish, revision cycles). Catches problems before they reach Kate.

Unique judgment: Pipeline taste — knowing which pieces are ready for Kate’s time and which need more work. Writer development judgment — understanding what feedback will help a specific writer grow vs. what will frustrate them. The human relationship layer of editorial management that scheduling software cannot replicate.

AI amplification: Highest coordination-encoding opportunity in editorial. AI should handle: Kanban automation, deadline tracking, bottleneck surfacing, writer status pings, and scheduling. Eleanor’s freed time should shift toward specification (writer development, triage criteria refinement) and system evolution (improving the pipeline itself).


5. Staff Writer / AI Editorial Lead — Katie Parrott

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Writing articles | 45% | 10% | 45% | AI drafts, researches, structures; Katie provides thesis, voice, judgment |
| AI tells detection criteria | 85% | 10% | 5% | Katie defines what “AI slop” looks like; criteria are partially encoded |
| Editorial quality specification | 65% | 15% | 20% | Katie specifies quality patterns; AI checks against them |
| Writer training on AI-assisted writing | 50% | 30% | 20% | Katie defines best practices; trains others to maintain voice with AI |
| Style guide maintenance | 70% | 15% | 15% | Katie evolves the criteria; AI applies them at scale |

Aggregate target: Specification 60% / Coordination 15% / Execution 15% / System Evolution 10%

Specification responsibility: Specification Authority for a critical quality dimension: the boundary between “AI-assisted writing that sounds human” and “AI slop that damages credibility.” Katie’s AI tells detection criteria are a living specification — they evolve as AI output improves and as new patterns of AI-generic writing emerge. This directly protects Value #3 (Builder Credibility): Every cannot preach AI adoption while publishing content that reads as AI-generated.

Coordination responsibility: Coordinates with Kate on evolving quality standards. Trains other writers on AI-assisted writing workflows that preserve voice. Participates in editorial standups.

Execution oversight: Applies AI tells detection to incoming articles. Reviews her own AI-assisted drafts against her own criteria (dogfooding the specification).

Unique judgment: The meta-judgment of AI writing quality — knowing what “sounds AI” before readers do. This requires constant recalibration as models improve. Katie’s named agent Margot assists her writing, and the gap between Margot’s output and Katie’s final version IS the specification being refined in real-time.

AI amplification: Margot (Katie’s named AI agent) for drafting and research. The AI tells detection criteria are themselves partially encoded and applied by AI — Katie specifies the frontier, AI enforces the established patterns. This is a pure example of the specification-execution split.
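
A minimal sketch of the encoded half of that split: stabilized AI-tell patterns enforced mechanically, while Katie hand-specifies the frontier. The phrase list is a hypothetical illustration, not Katie's actual criteria:

```python
# Sketch of the "established patterns" half of the AI-tells split. Patterns
# that have stabilized get encoded and enforced automatically; new tells are
# specified by a human first. The phrase list below is hypothetical.

import re

ENCODED_TELLS = [
    r"\bdelve into\b",
    r"\bin today's fast-paced world\b",
    r"\bit's important to note that\b",
]

def flag_tells(text):
    """Return the encoded AI-tell patterns found in a draft (case-insensitive)."""
    return [p for p in ENCODED_TELLS if re.search(p, text, re.IGNORECASE)]

flag_tells("Let's delve into the roadmap.")  # -> [r"\bdelve into\b"]
```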


6. Product GM (Template Role) — Kieran/Cora, Naveen/Monologue, Yash/Sparkle, Danny/Spiral

Note on Proof: Proof (agent-native document editor) is maintained directly by Dan Shipper as a CEO-level project rather than through the GM model; it has no dedicated GM. It follows the same compound engineering workflow, with Dan acting as its GM alongside his CEO responsibilities.

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Product strategy & roadmap | 80% | 10% | 10% | AI generates market analysis; GM sets product direction |
| PRD / plan creation | 75% | 15% | 10% | GM writes the plan; AI helps structure and check completeness |
| Feature development (compound engineering) | 15% | 5% | 80% | 99% AI-written code; GM specifies, reviews, compounds |
| 14-agent code review | 50% | 10% | 40% | Agents review; GM makes final merge decisions |
| User research & support | 40% | 25% | 35% | AI aggregates feedback; GM interprets and prioritizes |
| Marketing copy & growth | 55% | 20% | 25% | AI drafts copy; GM sets positioning and voice |
| Metrics & analytics | 30% | 15% | 55% | AI generates dashboards; GM interprets and acts |
| Design direction (with Lucas’s team) | 60% | 30% | 10% | GM specifies design intent; design team + AI execute |
| Compounding / documentation | 60% | 10% | 30% | AI drafts docs; GM curates what compounds |

Aggregate target: Specification 50% / Coordination 15% / Execution 20% / System Evolution 15%

Specification responsibility: The GM is the full-stack specification authority for their product. They define: what the product does, who it’s for, what quality looks like, what gets built next, what the marketing says, and what success metrics matter. This is the purest expression of Value #4 (Generalist Advantage) — one person specifying across every domain of a product.

The Plan step in compound engineering (40% of cycle time) IS specification. The Review step (40% of cycle time) IS specification evaluation. Only 20% of the GM’s compound engineering cycle is execution or coordination.

Coordination responsibility: Minimal by design. The GM autonomy model (identified as a cultural red flag to protect in the audit) means GMs coordinate primarily with: the design team (request intake), Brandon (architectural decisions), and Dan (product strategy alignment). Cross-GM coordination is intentionally low — each product is an independent venture.

Execution oversight: Reviews all AI-generated code via the 14-agent pipeline. Monitors product metrics. Handles customer support escalations. The GM sees every line of code that ships — not to write it, but to judge it.
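
One way to picture this human-in-the-loop merge gate: review agents surface findings, the pipeline can auto-block on hard failures, but the merge decision itself stays with the GM. Agent names and severity levels below are illustrative assumptions, not the actual 14-agent configuration:

```python
# Hypothetical sketch of the GM's merge gate over agent code review.
# Agents return findings; "block" findings halt the merge automatically,
# "warn" findings are surfaced for judgment, and the final call is human.

def review_gate(agent_findings, gm_approves):
    """agent_findings: list of (agent, severity), severity in {"ok","warn","block"}."""
    hard_blocks = [agent for agent, sev in agent_findings if sev == "block"]
    if hard_blocks:
        return f"blocked by {', '.join(hard_blocks)}"
    # Warnings are surfaced, not enforced; the GM judges the tradeoff.
    return "merged" if gm_approves else "held for revision"

review_gate([("security", "ok"), ("style", "warn")], gm_approves=True)  # -> "merged"
```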

Unique judgment: Product taste specific to their domain. Each GM has developed product intuition through building and iterating:

  • Kieran (Cora): Plan-first methodology — deep specification before any code
  • Naveen (Monologue): Linear-centric workflow — tight feedback loops with user research
  • Yash (Sparkle): Parallel Claude+Codex — maximum execution throughput with dual-agent approach
  • Danny (Spiral): Droid CLI — custom tooling optimized for Spiral’s specific patterns

These workflow variations are features, not bugs. Each GM optimizes the compound engineering methodology for their product’s unique needs.

AI amplification: The GM role is the ultimate demonstration of AI amplification. One person produces the output of a traditional 5-10 person product team. The compound engineering plugin (12K+ GitHub stars) is the amplification infrastructure. The GM’s amplification comes entirely from specification quality — better specs produce disproportionately better AI output.

GM-Specific Variants:

| GM | Product | Specification Focus | Workflow Style |
|---|---|---|---|
| Kieran | Cora | Deep upfront planning, thorough PRDs | Plan-first, specification-heavy |
| Naveen | Monologue | User research integration, rapid iteration | Linear-centric, feedback-loop-heavy |
| Yash | Sparkle | Parallel execution, throughput optimization | Dual-agent (Claude + Codex) |
| Danny | Spiral | Custom tooling, workflow innovation | Droid CLI, tool-builder approach |

7. Engineering Lead — Andrey Galko

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Technical direction across products | 65% | 20% | 15% | AI surfaces cross-product patterns; Andrey sets technical standards |
| Code review architecture | 70% | 15% | 15% | Andrey designs the 14-agent review pipeline; agents execute |
| Infrastructure & shared services | 35% | 20% | 45% | AI handles routine infra; Andrey handles novel problems |
| GM technical support | 30% | 40% | 30% | Andrey supports GMs with cross-cutting technical decisions |
| Security & reliability | 55% | 20% | 25% | AI monitors and alerts; Andrey sets security standards and responds to incidents |

Aggregate target: Specification 50% / Coordination 22% / Execution 18% / System Evolution 10%

Specification responsibility: Defines technical standards that apply across all four products — shared infrastructure specifications, security requirements, performance benchmarks. Works with Brandon on evolving the compound engineering methodology’s technical layer. Specifies the 14-agent code review pipeline’s review criteria.

Coordination responsibility: The technical bridge between autonomous GMs. When one GM’s architectural decision affects shared infrastructure, Andrey coordinates the resolution. Manages technical dependencies across products without constraining GM autonomy.

Execution oversight: Monitors infrastructure health across all products. Reviews cross-cutting technical decisions. Responds to production incidents that affect multiple products.

Unique judgment: Deep technical judgment applied across multiple product contexts simultaneously. The ability to see when a GM’s locally-optimal technical decision creates globally-suboptimal outcomes. Security and reliability judgment — knowing when “ship and iterate” needs to defer to “get this right first.”

AI amplification: AI handles monitoring, alerting, routine infrastructure maintenance, and code review execution. Andrey’s specification of review criteria and infrastructure standards is what makes the AI execution layer reliable.


8. Head of Consulting / Practice Architect — Natalia Quintero

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Practice architecture & methodology | 75% | 15% | 10% | Natalia designs the consulting methodology; AI drafts frameworks |
| Client relationship management | 30% | 50% | 20% | Claudie handles status reporting; Natalia owns strategy conversations |
| Engagement scoping & pricing | 65% | 25% | 10% | AI analyzes comparable engagements; Natalia sets scope and price |
| Delivery oversight | 40% | 35% | 25% | AI tracks milestones; Natalia intervenes on quality and relationship issues |
| Team training & development | 55% | 25% | 20% | AI generates training materials; Natalia specifies what “good consulting” looks like |
| Claudie development & evolution | 70% | 10% | 20% | Natalia specifies Claudie’s behaviors; AI builds features |
| Pipeline development | 35% | 40% | 25% | AI qualifies leads; Natalia makes engagement decisions |

Aggregate target: Specification 50% / Coordination 30% / Execution 10% / System Evolution 10%

Specification responsibility: Defines what Every’s consulting practice IS — the methodology, the quality bar, the engagement model. Natalia’s core specification is: “We’re builders who consult, not consultants who sometimes build” (directly from MISSION.md). She specifies the line between engagements Every should take (where practitioner experience gives a real edge) and engagements Every should decline (where they’d be “advisors” not practitioners). Built Claudie, the AI PM agent that saves 14 hrs/week — this is builder credibility made tangible.

Coordination responsibility: Highest coordination-allocation role at Every (audit found consulting at 35% coordination). Client-facing work inherently requires more coordination — calls, alignment, status updates. Target: encode logistics (extend Claudie to all client status reporting — audit Quick Win #2), preserve strategy and relationship conversations.

Execution oversight: Monitors all active consulting engagements. Reviews deliverable quality before client delivery. Ensures consulting work reflects Every’s builder credibility standard.

Unique judgment: Client relationship taste — knowing when a client needs a strategy conversation vs. a tactical session vs. reassurance. Engagement scoping judgment — the boundary between “we can help” and “this isn’t our expertise.” The practitioner’s credibility that makes clients trust recommendations — Natalia doesn’t just advise, she shows them Claudie.

AI amplification: Claudie (Natalia’s AI PM agent) is the model for AI amplification in consulting. Already saves 14 hrs/week on onboarding and weekly updates. The audit recommends extending Claudie to all consulting client status reporting. Natalia’s role is shifting from coordination-heavy to specification-heavy as Claudie absorbs more logistics.
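
What extending Claudie to all client status reporting could look like, as a hypothetical sketch (field names and the escalation rule are assumptions, not Claudie's actual behavior):

```python
# Hypothetical sketch of Claudie-style status reporting for every client:
# generate a weekly update from tracked milestones, and escalate to a human
# only when something needs relationship or quality judgment.

def weekly_update(client, milestones):
    """milestones: list of {'status': 'done'|'in_progress'|'blocked'}."""
    done = [m for m in milestones if m["status"] == "done"]
    blocked = [m for m in milestones if m["status"] == "blocked"]
    report = (f"{client}: {len(done)}/{len(milestones)} milestones complete; "
              f"{len(blocked)} blocked")
    needs_human = bool(blocked)  # blocked work is a judgment call, not logistics
    return report, needs_human

report, escalate = weekly_update("Acme", [
    {"status": "done"}, {"status": "in_progress"}, {"status": "blocked"},
])
# report -> "Acme: 1/3 milestones complete; 1 blocked"; escalate -> True
```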


9. Finance Vertical Lead — Brooker Belcourt

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Finance vertical strategy | 70% | 20% | 10% | AI analyzes market trends; Brooker sets vertical direction |
| Client engagement (finance) | 35% | 40% | 25% | AI drafts materials; Brooker brings domain credibility |
| Workflow building (finance-specific) | 60% | 15% | 25% | AI generates workflow templates; Brooker specifies finance-domain requirements |
| Finance AI use case identification | 75% | 15% | 10% | AI surfaces possibilities; Brooker judges which are real vs. hype |
| Client training delivery | 40% | 30% | 30% | AI generates training materials; Brooker delivers with domain authority |

Aggregate target: Specification 55% / Coordination 25% / Execution 10% / System Evolution 10%

Specification responsibility: Specifies how AI adoption works in finance specifically — a domain with unique regulatory, compliance, and risk management requirements. Brooker’s Goldman and Citadel experience means he specifies from first-hand knowledge of what finance teams actually do, not from outside observation. He defines the finance-specific consulting methodology: which AI use cases are real for hedge funds, banks, and asset managers, and which are premature.

Coordination responsibility: Manages finance client relationships jointly with Natalia. Coordinates between Every’s general consulting methodology and finance-specific requirements. Participates in client pipeline reviews.

Execution oversight: Reviews finance-specific deliverables for domain accuracy. Ensures consulting recommendations are credible to finance professionals (ex-Goldman/Citadel standard).

Unique judgment: Finance domain taste — the ability to know what will work in a hedge fund’s actual workflow vs. what sounds good in a demo. Compliance sensitivity — understanding where AI can and cannot be applied in regulated finance environments. The credibility that comes from having been on the buy side, not just consulting to it.

AI amplification: AI generates finance-specific workflow templates, training materials, and use case analyses. Brooker’s domain specification makes these outputs credible. Without his finance expertise specifying the constraints, AI would generate generic consulting materials that fail Value #3 (Builder Credibility).


10. Creative Director — Lucas Crespo

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Visual taste & brand standard-setting | 85% | 10% | 5% | Lucas defines the aesthetic; AI cannot replace this judgment |
| Design team management (3-person) | 30% | 45% | 25% | AI handles scheduling; Lucas manages taste calibration and mentorship |
| Product design direction | 60% | 25% | 15% | Lucas specifies design intent; team + AI execute |
| Cross-product design consistency | 65% | 25% | 10% | AI checks consistency; Lucas defines what “consistent” means for Every |
| Request intake & prioritization | 20% | 55% | 25% | AI should automate intake (audit Quick Win #1); Lucas prioritizes |
| Figma MCP handoff management | 25% | 45% | 30% | Figma MCP reduces handoff friction; Lucas ensures design intent survives translation |
| Marketing & consulting design | 50% | 30% | 20% | AI generates variants; Lucas selects and refines |

Aggregate target: Specification 50% / Coordination 30% / Execution 10% / System Evolution 10%

Specification responsibility: Defines Every’s visual language across all surfaces — products, marketing, articles, consulting decks, podcast assets. Lucas is the visual equivalent of Kate Lee: where Kate defines what Every sounds like, Lucas defines what Every looks like. His specification governs a 3-person design team that rotates across 4 products, consulting, and marketing as an internal agency.

Coordination responsibility: Highest coordination overhead in the product org (audit found design rotation at 40% coordination). The 3-person team context-switches across 4+ products constantly. Request intake and handoff are coordination-heavy. The audit’s #1 Quick Win is formalizing design request intake in Linear — this directly reduces Lucas’s coordination burden.

Execution oversight: Reviews all design output from his team before handoff to GMs. Monitors how designs survive implementation (Figma MCP helps but doesn’t eliminate this). Ensures visual quality across Every’s entire surface area.

Unique judgment: Visual taste at a level that unifies a media brand, four software products, a consulting practice, and a podcast under one coherent aesthetic. The ability to context-switch across radically different design problems (editorial illustrations vs. SaaS UI vs. consulting decks) while maintaining brand coherence. Named his AI agent Alfredo — even the agent naming reflects design personality.

AI amplification: AI generates design variants, checks consistency, and assists with production work. Figma MCP integration reduces handoff friction between design and GMs. The audit’s recommendation to formalize intake in Linear would shift ~40 hrs/month from coordination to specification and system evolution. Lucas’s freed time should go toward evolving Every’s design system, not managing logistics.
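
A sketch of what the formalized intake could capture; the field names and priority rule are hypothetical assumptions, not the audit's actual specification:

```python
# Hypothetical sketch of formalized design request intake (audit Quick Win #1).
# The point: a structured record replaces ad-hoc pings, so routing is
# automatic and Lucas only spends time on prioritization judgment.

from dataclasses import dataclass

@dataclass
class DesignRequest:
    product: str        # e.g. "Cora", "Spiral", "consulting", "marketing"
    kind: str           # "ui", "illustration", "deck", ...
    deadline_days: int
    blocks_launch: bool

def auto_priority(req: DesignRequest) -> str:
    if req.blocks_launch or req.deadline_days <= 2:
        return "urgent"     # surfaces to Lucas immediately
    return "queued"         # batched for the weekly prioritization pass

auto_priority(DesignRequest("Cora", "ui", 10, True))  # -> "urgent"
```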


11. Head of Growth — Austin Tedesco

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Growth strategy & subscriber architecture | 70% | 15% | 15% | AI models scenarios; Austin sets strategy |
| Growth infrastructure & tooling | 40% | 15% | 45% | AI handles implementation; Austin specifies what to measure and optimize |
| Subscriber journey design | 65% | 20% | 15% | AI generates journey variants; Austin selects based on data + taste |
| Cross-channel attribution & analytics | 35% | 20% | 45% | AI runs analytics; Austin interprets and acts |
| Experimentation framework | 60% | 15% | 25% | AI runs tests; Austin designs experiment architecture |
| Content distribution strategy | 55% | 25% | 20% | AI executes distribution; Austin defines channel strategy |

Aggregate target: Specification 55% / Coordination 18% / Execution 17% / System Evolution 10%

Specification responsibility: Defines the growth architecture — how 100K+ subscribers discover, engage with, and deepen their relationship with Every. Austin’s ex-Substack/ESPN experience means he specifies growth from the platform perspective, not just the content perspective. He defines: what to measure, what experiments to run, what subscriber segments to target, and what “healthy growth” looks like (taste over scale — Every prefers 100K deeply engaged over 10M casual).

Coordination responsibility: Coordinates with editorial (content distribution), product GMs (product-led growth), marketing (campaigns), and social (distribution channels). Growth touches every function, making coordination design a significant part of the role.

Execution oversight: Monitors growth metrics, experiment results, and channel performance. AI handles the execution of experiments and analytics; Austin interprets results.

Unique judgment: Growth taste — knowing the difference between growth that builds a durable audience and growth that inflates vanity metrics. The ability to balance “ship and iterate” (Value #2) with “taste over process” (Value #1) in the growth context: test aggressively on internal infrastructure, but never compromise subscriber experience for a metric bump. Named his AI agent Montaigne — growth with intellectual curiosity.

AI amplification: AI handles analytics execution, experiment operations, distribution automation, and subscriber journey implementation. Austin’s specification of what to measure and what “good growth” looks like is the irreducible human layer. AI amplifies throughput of experiments; Austin amplifies quality of growth strategy.


12. Product Marketing Lead — Anukshi Mittal

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Event strategy & execution | 50% | 30% | 20% | AI handles logistics; Anukshi defines event experience and positioning |
| Product launch copy & positioning | 65% | 15% | 20% | AI drafts copy variants; Anukshi selects based on product narrative |
| Campaign design | 60% | 20% | 20% | AI generates campaign frameworks; Anukshi specifies messaging |
| Cross-product marketing coordination | 25% | 50% | 25% | AI tracks timelines; Anukshi aligns launches across products |
| Brand storytelling | 70% | 10% | 20% | AI assists research and drafting; Anukshi shapes the narrative |

Aggregate target: Specification 55% / Coordination 25% / Execution 12% / System Evolution 8%

Specification responsibility: Defines how Every’s products are positioned and communicated to the market. Specifies event experiences, launch narratives, and campaign messaging. The bridge between Dan’s company narrative and how it manifests in specific product marketing.

Coordination responsibility: Coordinates launch timing across products and editorial calendar. Aligns marketing activities with product roadmaps (GMs), editorial themes (Kate/Eleanor), and growth strategy (Austin). Events are coordination-heavy by nature.

Execution oversight: Reviews all marketing copy for brand consistency. Monitors campaign performance. Ensures event execution matches specification.

Unique judgment: Marketing narrative taste — the ability to translate Every’s “practitioners, not observers” identity into compelling product positioning. Event design judgment — creating experiences that feel like Every (playful, substantive, not corporate). Named her AI agent Iris — the marketing perspective requires seeing the full spectrum.

AI amplification: Iris (Anukshi’s named AI agent). AI generates copy variants, campaign frameworks, and event logistics plans. Anukshi’s specification of “what story are we telling” is the irreducible layer. AI produces more variants faster; Anukshi’s taste selects the right one.


13. Podcast Producer — Rachel Braun

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|---|---|---|---|---|
| Topic/guest selection & research | 60% | 25% | 15% | AI researches guests, generates topic briefs; Rachel curates |
| Episode production planning | 40% | 35% | 25% | AI generates run-of-show templates; Rachel adapts per episode |
| Recording session management | 15% | 35% | 50% | Real-time production — minimal AI delegation during recording |
| Post-production (Descript) | 20% | 10% | 70% | Descript + AI handle editing; Rachel directs cuts and pacing |
| Distribution & promotion | 25% | 30% | 45% | AI automates posting; Rachel manages channel-specific adaptation |
| Show notes & companion content | 30% | 15% | 55% | AI generates drafts from transcripts; Rachel edits for accuracy |

Aggregate target: Specification 30% / Coordination 25% / Execution 30% / System Evolution 15%
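These aggregates can be rolled up mechanically from the per-activity rows. A minimal sketch of one plausible roll-up, assuming equal weighting across activities and a flat 15% System Evolution share carved out of raw execution time; both assumptions are illustrative, not Every’s documented method, and the published aggregates land slightly differently (suggesting the real activity weights are not uniform):

```python
# Illustrative roll-up of per-activity mode percentages into a role
# aggregate. Equal activity weights and the flat 15% System Evolution
# carve-out are assumptions for this sketch, not Every's actual model.
rachel_activities = {
    "Topic/guest selection & research": (60, 25, 15),
    "Episode production planning":      (40, 35, 25),
    "Recording session management":     (15, 35, 50),
    "Post-production (Descript)":       (20, 10, 70),
    "Distribution & promotion":         (25, 30, 45),
    "Show notes & companion content":   (30, 15, 55),
}

def aggregate(activities, system_evolution=15):
    """Average each (spec, coord, exec) column, then carve the
    System Evolution share out of raw execution time."""
    n = len(activities)
    spec, coord, exec_ = (sum(row[i] for row in activities.values()) / n
                          for i in range(3))
    return {
        "Specification": round(spec),
        "Coordination": round(coord),
        "Execution Oversight": round(exec_ - system_evolution),
        "System Evolution": system_evolution,
    }

print(aggregate(rachel_activities))
# {'Specification': 32, 'Coordination': 25, 'Execution Oversight': 28, 'System Evolution': 15}
```

Applied to Anthony’s rows below, the same formula reproduces his stated 40 / 18 / 27 / 15 split almost exactly, so the uniform-weight assumption is closer for some roles than others.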

Specification responsibility: Defines the AI & I podcast’s production quality standard — pacing, narrative arc, guest preparation, audio quality. Specifies which topics and guests align with Every’s editorial mission (41 episodes/year requires consistent curation). Co-specifies with Dan (as host) the show’s editorial direction.

Coordination responsibility: Guest scheduling and logistics are the primary coordination cost (audit finding). Coordinates with Dan’s schedule, guest availability, and editorial calendar for tie-in content. Distribution logistics across podcast platforms.

Execution oversight: This is the most execution-heavy role at Every (audit found podcast at 45% execution). Post-production, distribution, and show note creation are execution-intensive. AI (Descript, StreamYard) already handles significant execution, but human oversight of audio quality and timing remains necessary.

Unique judgment: Production taste — knowing when an episode flows well vs. when it needs restructuring. Guest-host chemistry judgment — preparing Dan with the right context to have genuine conversations, not scripted interviews. The audio equivalent of editorial taste: knowing what sounds like Every (curious, substantive, playful) vs. what sounds generic.

AI amplification: Descript for post-production, StreamYard for recording, AI for show notes and research. The audit’s Quick Win #5 (podcast logistics encoding) targets Rachel’s coordination overhead. The most impactful AI amplification would be in post-production (reducing the 70% execution there) and distribution automation.


14. Social Media Manager — Anthony Scarpulla

Three-Variable Decomposition:

| Activity | Spec | Coord | Exec | AI Delegation |
|----------|------|-------|------|---------------|
| Social strategy & voice | 60% | 15% | 25% | AI generates strategy options; Anthony selects and adapts |
| Content creation & scheduling | 35% | 15% | 50% | Custom Claude+X API integration handles posting; Anthony specifies content |
| Community engagement | 40% | 20% | 40% | AI drafts responses; Anthony adds personality and judgment |
| Analytics & optimization | 30% | 15% | 55% | AI runs analytics; Anthony interprets and adjusts strategy |
| Custom API integration development | 50% | 10% | 40% | Anthony built the Claude+X integration — builder credibility in action |
| Cross-content distribution | 25% | 35% | 40% | AI distributes; Anthony coordinates timing with editorial and products |

Aggregate target: Specification 40% / Coordination 18% / Execution 27% / System Evolution 15%

Specification responsibility: Defines how Every’s voice manifests on social platforms — a translation layer between the editorial voice (Kate’s domain) and platform-specific norms. Specifies the Claude+X API integration’s behavior: what the AI posts, how it responds, what tone it uses. This is a direct specification of an AI agent’s behavior in a public-facing context.

Coordination responsibility: Coordinates social distribution timing with editorial calendar (when articles publish), product launches (when to amplify), and podcast episodes (cross-promotion). Acts as distribution channel for all content functions.

Execution oversight: Monitors social engagement metrics. Reviews AI-generated social content before posting (or configures the Claude+X API to post within specified parameters). Responds to community interactions that require human judgment.

Unique judgment: Platform voice taste — knowing what works on X/Twitter vs. other platforms while maintaining Every’s brand identity. Community management judgment — when to engage, when to ignore, what tone to strike with different audiences. The builder’s advantage: Anthony built the Claude+X API integration himself, giving him direct control over the AI execution layer.

AI amplification: The custom Claude+X API integration IS the AI amplification. Anthony built the tool, specifies its behavior, and iterates on its output. This is the purest expression of “social media manager as specification authority for an AI agent” — the agent posts, Anthony specifies what and how.
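Concretely, “post within specified parameters” can be encoded as a pre-posting gate: drafts that satisfy the specification post automatically, everything else escalates to a human. A minimal sketch in that spirit; every name here (the `PostSpec` fields, the banned-phrase list, the tone check) is a hypothetical illustration, since Every’s actual Claude+X integration is not public:

```python
from dataclasses import dataclass

@dataclass
class PostSpec:
    """Hypothetical posting specification (illustrative only)."""
    max_length: int = 280
    allowed_tones: tuple = ("curious", "substantive", "playful")
    banned_phrases: tuple = ("delve", "game-changer")

def passes_spec(draft: str, spec: PostSpec, detected_tone: str) -> bool:
    """Gate an AI-generated draft against the human-authored spec.
    In practice, detected_tone would itself come from a classifier call."""
    if len(draft) > spec.max_length:
        return False
    if detected_tone not in spec.allowed_tones:
        return False
    if any(p in draft.lower() for p in spec.banned_phrases):
        return False
    return True

spec = PostSpec()
print(passes_spec("New Chain of Thought column is live.", spec, "curious"))   # True
print(passes_spec("This game-changer will delve into AI.", spec, "curious"))  # False
```

The design point is the division of labor: the human writes and iterates on `PostSpec`; the agent generates and posts inside it.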


Cross-Role Specification Authority Map

This table shows who holds specification authority for each key domain. Specification authority means: this person defines what “good” looks like, and others (human and AI) execute against that standard.

| Domain | Primary Spec Authority | Secondary | AI Enforcement |
|--------|------------------------|-----------|----------------|
| Company direction & narrative | Dan Shipper | | AI drafts; Dan decides |
| Technical architecture | Brandon Gell | Andrey Galko | 14-agent code review |
| Editorial quality (taste) | Kate Lee | Katie Parrott | AI tells detection |
| Editorial pipeline (flow) | Eleanor Warnock | Kate Lee | Kanban automation (target) |
| AI writing quality | Katie Parrott | Kate Lee | Partially encoded criteria |
| Product: Cora | Kieran | Lucas (design) | Compound engineering |
| Product: Monologue | Naveen | Lucas (design) | Compound engineering |
| Product: Sparkle | Yash | Lucas (design) | Compound engineering |
| Product: Spiral | Danny | Lucas (design) | Compound engineering |
| Visual brand & design | Lucas Crespo | Daniel Rodrigues | Figma MCP, design system |
| Consulting methodology | Natalia Quintero | | Claudie |
| Finance vertical | Brooker Belcourt | Natalia Quintero | Finance-specific templates |
| Growth architecture | Austin Tedesco | Dan Shipper | Analytics automation |
| Product marketing | Anukshi Mittal | Dan Shipper | Campaign frameworks |
| Podcast production | Rachel Braun | Dan Shipper | Descript, StreamYard |
| Social distribution | Anthony Scarpulla | Austin Tedesco | Claude+X API |
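A map like this is itself an encoding candidate: a routing table that an agent (or a new teammate) can query to find who signs off on a given domain. A minimal sketch transcribed from the rows above; the helper name, the escalation default, and the dict structure are illustrative choices, not an existing Every tool:

```python
# Specification-authority routing table, transcribed from the map above.
# Structure: domain -> (primary, secondary, AI enforcement layer).
SPEC_AUTHORITY = {
    "Editorial quality (taste)": ("Kate Lee", "Katie Parrott", "AI tells detection"),
    "Technical architecture": ("Brandon Gell", "Andrey Galko", "14-agent code review"),
    "Product marketing": ("Anukshi Mittal", "Dan Shipper", "Campaign frameworks"),
    "Podcast production": ("Rachel Braun", "Dan Shipper", "Descript, StreamYard"),
    "Social distribution": ("Anthony Scarpulla", "Austin Tedesco", "Claude+X API"),
    # remaining rows omitted for brevity
}

def who_signs_off(domain: str) -> str:
    """Return the primary spec authority for a domain, escalating to a
    default (illustratively, the CEO) when a domain is unmapped."""
    primary, _secondary, _enforcement = SPEC_AUTHORITY.get(
        domain, ("Dan Shipper", None, None))
    return primary

print(who_signs_off("Podcast production"))  # Rachel Braun
```

Encoded this way, the routing decision becomes something an agent can make without a human coordination step, which is exactly the coordination-to-system-evolution shift the audit targets.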

Aggregate Organizational Time Allocation

Current State (from audit)

| Mode | Allocation |
|------|------------|
| Specification | 40% |
| Coordination | 27% |
| Execution | 33% |

Target State (after encoding candidates implemented)

| Mode | Allocation |
|------|------------|
| Specification | 45% |
| Coordination Design | 18% |
| Execution Oversight | 13% |
| System Evolution | 24% |

Shift Required

| Shift | From | To | Hours/month freed |
|-------|------|----|-------------------|
| Design request intake | Coordination (40 hrs) | System Evolution | ~32 hrs |
| Consulting status reporting | Coordination (30 hrs) | Specification | ~22 hrs |
| Editorial pipeline management | Coordination (25 hrs) | Specification + System Evolution | ~17 hrs |
| Cross-GM knowledge transfer | Coordination (20 hrs) | System Evolution | ~12 hrs |
| Podcast logistics | Coordination (15 hrs) | System Evolution | ~12 hrs |
| Total freed | | | ~95 hrs/month |
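The freed-hours total can be sanity-checked mechanically against the per-row figures. A quick sketch, with per-row hours taken directly from the table above:

```python
# Sanity-check the shift table: sum the freed hours and compare them
# against the current coordination hours they come out of.
# Structure: shift -> (current coordination hrs/month, hrs freed).
shifts = {
    "Design request intake":         (40, 32),
    "Consulting status reporting":   (30, 22),
    "Editorial pipeline management": (25, 17),
    "Cross-GM knowledge transfer":   (20, 12),
    "Podcast logistics":             (15, 12),
}

current = sum(c for c, _ in shifts.values())  # 130 hrs/month of coordination
freed = sum(f for _, f in shifts.values())    # 95 hrs/month freed
print(f"{freed}/{current} hrs freed ({freed / current:.0%})")
# 95/130 hrs freed (73%)
```

So the encoding candidates reclaim roughly three-quarters of the coordination hours they touch; the remainder stays as human exception handling.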


Hiring Criteria (Derived from Role-Value Map)

Every’s hiring criteria derive directly from the specification-authority model: for each role, screen for the irreducible human layer — the judgment that cannot be delegated to AI.

What to hire for:

  1. Specification taste — Can this person define what “good” looks like in their domain? Not execute it — define it.
  2. AI fluency — Can this person direct AI agents to execute against their specification? Not code — specify.
  3. Generalist range — Can this person operate across multiple domains? (Value #4)
  4. Builder credibility — Have they built things themselves? Do they practice what they’d preach? (Value #3)
  5. Judgment over process — Do they exercise taste, or do they need a checklist to function? (Value #1)

What NOT to hire for:

  1. Execution speed — AI handles execution. Hiring for fast hands is hiring for a deprecated skill.
  2. Process compliance — Every values taste over process. People who need clear processes to function will struggle.
  3. Specialist depth alone — Deep expertise is valuable only when combined with breadth. Pure specialists underutilize the GM model.
  4. Management experience (traditional) — Managing humans in a traditional org is a different skill than specifying for AI agents. The skills overlap but are not identical.

Transition Pathways

For roles shifting from execution-heavy to specification-heavy:

  1. Current state: Person does the work (writes the code, writes the copy, designs the mockup)
  2. Transition: Person does the work WITH AI (AI drafts, person edits — learning to specify by seeing what specifications produce)
  3. Target: Person specifies the work, AI executes, person reviews (the GM model applied to every role)

For roles shifting from coordination-heavy to specification-heavy:

  1. Current state: Person manages flow (schedules, tracks, follows up, aligns)
  2. Transition: Person encodes flow into systems (builds the Kanban automation, designs the intake form, configures the agent)
  3. Target: Person designs coordination systems, AI operates them, person handles exceptions (Eleanor’s trajectory)

For roles already specification-heavy:

  1. Current state: Person defines quality standards (Kate, Katie, Dan)
  2. Evolution: Person encodes more specification into AI-enforceable criteria while pushing the frontier of what “good” means
  3. Target: Person holds the irreducible taste frontier, AI enforces everything below the frontier (Kate’s three rigor tests, Katie’s AI tells criteria — the frontier moves but human judgment stays at the edge)

Cultural Guardrails for Role Design

Derived from audit cultural red flags and VALUES.md:

  1. Never encode GM autonomy away. The Two-Slice Team model works because GMs own their products completely. Any role redesign that constrains GM autonomy contradicts the model.

  2. Protect taste conversations. When encoding coordination out of roles (design intake, pipeline management), always preserve the conversations where taste is developed and transmitted. Encode logistics, not judgment.

  3. Builder credibility is non-negotiable. Every role must maintain a direct connection to building. If a role becomes pure management/coordination with no building component, it has drifted from Every’s identity.

  4. Play is structural, not decorative. Named AI agents (R2-C2, Iris, Montaigne, Margot, Alfredo, Milo) are not cute — they’re how Every maintains personality in an increasingly AI-mediated workflow. Role design should preserve space for play.

  5. Specification quality compounds. Every hour invested in better specifications produces returns across every subsequent AI execution cycle. Roles should be designed to maximize specification quality, not specification quantity.


Recommended Next Skills

  1. specification-writer — Take the specification responsibilities identified above and write formal, Stranger-Test-passing specifications for the highest-impact ones (editorial quality gates, compound engineering plan templates, consulting methodology).

  2. quality-gate-designer — Convert Kate Lee’s three rigor tests and Katie Parrott’s AI tells criteria into formal quality gates with pass/fail criteria.

  3. agent-builder — Generate agent configurations for the highest-impact AI amplification opportunities identified above (Claudie expansion, editorial pipeline automation, design intake agent).