GTM: Getting the Most out of Eko GTD
Advanced patterns, best practices, and domain extensions for power users.
Real-World Use Cases
Software Feature Development
The Eko MVP itself uses Eko GTD for its challenge system. Example from the codebase:
```
docs/challenges/
├── TODO.md                  # Tracking Wave 1: Shell & Organization
├── 01-app-shell-sidebar.md  # COMPLETED (A+ grade, 14/14 PASS)
├── 02-my-library.md         # IN PROGRESS (12 challenges)
├── 03-add-url.md            # PENDING (depends on 02)
└── ...
```
Why it works: Each feature becomes a self-contained challenge with explicit acceptance criteria. The development team can see exactly what's done, what's in progress, and what's blocked.
Marketing Campaign Launches
Break a campaign into testable deliverables:
- Challenge 1: Audience segment definition (targeting criteria documented)
- Challenge 2: Creative assets (designs approved, copy finalized)
- Challenge 3: Channel setup (ads configured, tracking pixels live)
- Challenge 4: Launch checklist (all systems go)
- Quality tier: Brand consistency, legal compliance, conversion tracking
Content Production Pipelines
Editorial workflows map naturally to challenges:
- Challenge 1: Research and outline
- Challenge 2: First draft
- Challenge 3: Editorial review
- Challenge 4: SEO optimization
- Challenge 5: Visual assets
- Quality tier: Voice consistency, readability score, fact-check verification
Cross-Team Project Coordination
When multiple teams contribute, use wave dependencies:
```
Wave 1: Backend (API team)
- 01-database-schema.md
- 02-api-endpoints.md

Wave 2: Frontend (UI team, depends on Wave 1)
- 03-component-library.md
- 04-page-implementation.md

Wave 3: QA (depends on Wave 2)
- 05-integration-tests.md
- 06-user-acceptance.md
```
Research Synthesis Projects
Structure research into verifiable outputs:
- Challenge 1: Source identification (N sources catalogued)
- Challenge 2: Data extraction (key findings documented)
- Challenge 3: Synthesis (themes identified, conflicts resolved)
- Challenge 4: Deliverable (report/presentation complete)
- Quality tier: Citation accuracy, methodology transparency
Best Practices
Scoping & Planning
Assessing Complexity Tier
| Signal | Suggested Tier |
|---|---|
| Single feature, clear requirements | S (5-6 challenges) |
| Feature with integrations | M (6-8 challenges) |
| Multi-component feature | L (8-10 challenges) |
| System-wide changes, new architecture | XL (10-14 challenges) |
Tip: When in doubt, start with M. You can always split into multiple projects if scope grows.
When to Split Into Multiple Projects
Split when:
- Challenges have no dependencies between groups
- Different teams own different parts
- Timelines diverge significantly
- Complexity exceeds XL (14+ challenges)
Dependency Mapping Strategy
- List all challenges first (don't worry about order)
- For each challenge, ask: "What must exist before I can do this?"
- Group challenges with shared dependencies into waves
- Validate: Can Wave N start before Wave N-1 completes? If yes, merge waves
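The grouping step above can be sketched in code. This is a minimal illustration, not part of Eko GTD itself: it assumes each challenge lists its prerequisites in a plain dict, and it assumes the dependency graph is acyclic. Each challenge lands one wave beyond its deepest prerequisite, which guarantees Wave N cannot start before Wave N-1 completes.

```python
# Sketch: group challenges into waves from a {challenge: prerequisites} dict.
# The dict shape and challenge names are illustrative, not an Eko GTD API.

def assign_waves(deps):
    """Map each challenge to its wave number (1-based)."""
    waves = {}

    def wave_of(name):
        if name not in waves:
            prereqs = deps.get(name, set())
            # A challenge's wave is one past the deepest prerequisite's wave.
            waves[name] = 1 + max((wave_of(d) for d in prereqs), default=0)
        return waves[name]

    for challenge in deps:
        wave_of(challenge)
    return waves

deps = {
    "01-database-schema": set(),
    "02-api-endpoints": {"01-database-schema"},
    "03-component-library": {"02-api-endpoints"},
    "04-page-implementation": {"02-api-endpoints"},
}
print(assign_waves(deps))
# {'01-database-schema': 1, '02-api-endpoints': 2,
#  '03-component-library': 3, '04-page-implementation': 3}
```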
Challenge Design
Writing Effective Acceptance Criteria
Bad: "User can log in"

Good:

```
Acceptance Criteria:
- [ ] Login form accepts email and password
- [ ] Invalid credentials show error message
- [ ] Successful login redirects to dashboard
- [ ] Session persists across page refresh
- [ ] Logout clears session and redirects to login
```
Each criterion should be:
- Observable — Can be verified by looking/testing
- Binary — Either true or false, no "mostly done"
- Independent — Checking one doesn't require checking another
Balancing Granularity
| Too Broad | Just Right | Too Narrow |
|---|---|---|
| "Implement auth" | "Login form with validation" | "Add email input field" |
| "Build UI" | "Dashboard page with metrics" | "Style button hover state" |
| "Set up database" | "User table with RLS policies" | "Add index on email column" |
Rule of thumb: A challenge should take 1-4 hours of focused work. Longer suggests splitting; shorter suggests merging.
Quality Tier Challenge Patterns
Always include domain-appropriate quality challenges:
Software:
- Accessibility (WCAG 2.1 AA)
- Motion (CLS=0, 60fps, prefers-reduced-motion)
- Testing (unit, integration, Storybook)
- Tokens (all styling uses design tokens)
Marketing:
- Brand (visual identity alignment)
- Targeting (audience segment precision)
- Metrics (conversion tracking defined)
- Compliance (legal, platform policies)
Progress Tracking
Using <- CURRENT Effectively
The <- CURRENT marker shows exactly where work stopped:
```
## Wave 1
- [x] 01-database-schema.md (A+)
- [ ] 02-api-endpoints.md <- CURRENT
- [ ] 03-frontend-ui.md (pending 02)
```
Best practices:
- Move the marker immediately when starting new work
- Only one challenge should be CURRENT at a time
- Use `(pending NN)` for blocked items
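A quick script can enforce the one-CURRENT rule. The sketch below is illustrative rather than an Eko GTD API: the function name `find_current` is an assumption, and it matches the literal marker text `<- CURRENT` shown in the example above.

```python
# Sketch: locate the CURRENT challenge in a TODO.md and flag violations of
# the one-CURRENT-at-a-time rule. Function name is hypothetical.

def find_current(todo_text):
    """Return the single line marked CURRENT, or raise if the rule is broken."""
    current = [
        line.strip()
        for line in todo_text.splitlines()
        if "<- CURRENT" in line
    ]
    if len(current) != 1:
        raise ValueError(f"expected exactly one CURRENT marker, found {len(current)}")
    return current[0]

todo = """\
## Wave 1
- [x] 01-database-schema.md (A+)
- [ ] 02-api-endpoints.md <- CURRENT
- [ ] 03-frontend-ui.md (pending 02)
"""
print(find_current(todo))
# - [ ] 02-api-endpoints.md <- CURRENT
```

Running this in CI (or a pre-commit hook) catches a forgotten marker move before it causes confusion about where work stopped.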
Managing Blockers
When blocked, don't just wait:
```
## Blocked
- [ ] 04-payment-integration.md
  - Blocked: Waiting on Stripe API keys from finance team
  - Escalated: 2025-01-10
  - Owner: @finance-lead
```
Always capture:
- What's blocked
- Why it's blocked
- When it was escalated
- Who owns the resolution
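The four fields map naturally to a small record type. This `Blocker` class is a hypothetical sketch, not part of Eko GTD; its `to_markdown` method emits the same layout as the example above.

```python
from dataclasses import dataclass

# Illustrative record for the four blocker fields; names are assumptions.
@dataclass
class Blocker:
    challenge: str   # what's blocked
    reason: str      # why it's blocked
    escalated: str   # when it was escalated (ISO date)
    owner: str       # who owns the resolution

    def to_markdown(self):
        return (
            f"- [ ] {self.challenge}\n"
            f"  - Blocked: {self.reason}\n"
            f"  - Escalated: {self.escalated}\n"
            f"  - Owner: {self.owner}"
        )

b = Blocker(
    "04-payment-integration.md",
    "Waiting on Stripe API keys from finance team",
    "2025-01-10",
    "@finance-lead",
)
print(b.to_markdown())
```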
Phase Gate Discipline
Never skip phase gates. They exist to prevent:
- Documenting features that don't work yet
- Shipping before quality checks pass
- Losing track of incomplete work
If you're tempted to skip a gate, you probably have a challenge that should be marked FAIL.
Team Coordination
Agent Ownership Patterns
Every challenge has exactly one owner:
| Rule | Description |
|---|---|
| Single owner | One agent per challenge |
| Domain-based | Owner matches primary domain |
| Delegation OK | Can delegate work, but owner validates |
| PASS authority | Only owner can mark PASS |
Cross-Domain Sync Strategies
When work spans domains (e.g., Software + Marketing for a launch):
- Create separate TODO.md files for each domain
- Add cross-references in "Sync Notes" section
- Use shared milestones as wave boundaries
- Regular sync points (e.g., "Marketing Wave 2 starts when Software Wave 1 completes")
Adding Custom Domains
The eight built-in domains cover common use cases, but you can extend Eko GTD for specialized work.
Domain Template
```
Domain: Eko GTD:<DomainName>
Focus: <primary work type>
Quality Standards:
- <standard 1: measurable criterion>
- <standard 2: measurable criterion>
- <standard 3: measurable criterion>
- <standard 4: measurable criterion>

Invocation: Use Eko GTD:<DomainName> to <task>
```
Example Custom Domains
Operations
```
Domain: Eko GTD:Operations
Focus: Workflow automation, SOP creation, process optimization
Quality Standards:
- Automation: Manual steps < 20% of total workflow
- Documentation: Runbook exists with troubleshooting guide
- Monitoring: Alerts configured for failure modes
- Rollback: Recovery procedure tested
```
Design
```
Domain: Eko GTD:Design
Focus: UI/UX flows, design systems, visual assets
Quality Standards:
- Consistency: Components follow design system
- Responsive: Works on mobile, tablet, desktop
- Accessibility: Color contrast, focus states, alt text
- Handoff: Dev specs complete with measurements
```
Legal
```
Domain: Eko GTD:Legal
Focus: Contract review, compliance audits, policy creation
Quality Standards:
- Completeness: All required clauses present
- Compliance: Meets regulatory requirements
- Clarity: Plain language score > 60
- Review: Approved by legal counsel
```
Data
```
Domain: Eko GTD:Data
Focus: Analytics pipelines, data quality, reporting
Quality Standards:
- Accuracy: Data matches source of truth
- Freshness: Update latency within SLA
- Documentation: Schema and transformations documented
- Validation: Data quality checks in pipeline
```
Extending Quality Standards
When defining A+ criteria for custom domains:
- Make them measurable — "Good documentation" → "README with setup, usage, and examples sections"
- Tie to outcomes — What does success look like for users of this work?
- Keep it to four — More than four becomes noise; fewer than four misses coverage
- Make them independent — Each standard should be evaluable separately
Anti-Patterns to Avoid
Over-Scoping
- Problem: Using XL complexity for a task that should be M.
- Symptoms: Many challenges marked PASS from the start; wave dependencies that don't matter.
- Fix: Start with the smaller tier; expand only if needed.
Vague Acceptance Criteria
- Problem: "Works correctly" or "User-friendly interface."
- Symptoms: Disagreement about whether a challenge has passed; scope creep within challenges.
- Fix: Every criterion should be a checkbox that someone unfamiliar with the work could verify.
Skipping Phase Gates
- Problem: Moving to Phase 2 with FAIL challenges "we'll fix later."
- Symptoms: Documentation doesn't match implementation; quality debt accumulates.
- Fix: Enforce gates. If you must move on, explicitly mark challenges as deferred, with a reason.
Ignoring Quality Tier
- Problem: Completing all functional challenges but skipping quality challenges.
- Symptoms: An A grade instead of A+; technical/design debt; accessibility issues.
- Fix: Quality challenges are not optional. Budget time for them explicitly.
Challenge Drift
- Problem: Original challenge requirements change mid-implementation.
- Symptoms: Acceptance criteria getting rewritten; challenges that never PASS.
- Fix: When requirements change, create a new challenge. Don't modify in-flight challenges.
Integration Tips
Combining with Existing Workflows
Eko GTD complements rather than replaces:
| Existing | Integration Point |
|---|---|
| Jira/Linear | One Eko challenge = one ticket |
| Git branches | Branch per wave or per challenge |
| CI/CD | Phase 1 gate = all tests pass |
| Design reviews | Quality tier challenge = design approval |
Git Integration Patterns
```
# Branch naming
feature/eko-gtd/<project>/<challenge>
feature/eko-gtd/auth/01-database-schema

# Commit messages
[eko-gtd] auth/01: Add user table with RLS policies
[eko-gtd] auth/02: PASS - Login endpoint complete

# PR titles
[Eko GTD] Auth Wave 1: Challenges 01-03 PASS
```
Documentation Sync Strategy
Keep docs in sync with implementation:
- Phase 1: Challenge docs are the source of truth
- Phase 2: Update user-facing docs from challenge acceptance criteria
- Phase 3: Archive challenge docs, user docs are authoritative
Troubleshooting
Stuck in a Phase
Symptoms: Same challenge CURRENT for multiple sessions, no progress.
Diagnose:
- Is the challenge too big? → Split it
- Is it blocked? → Document blocker, escalate
- Is scope unclear? → Revisit acceptance criteria
- Is it actually PASS? → Re-evaluate honestly
Challenge Drift
Symptoms: Requirements changed, challenge feels different than when created.
Resolution:
- Create new challenge with updated requirements
- Mark original as superseded (not FAIL)
- Update wave dependencies if needed
Dependency Conflicts
Symptoms: Circular dependencies, challenges that can never start.
Resolution:
- Draw dependency graph visually
- Find the cycle
- Usually one dependency is optional → remove it
- Or split a challenge to break the cycle
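Finding the cycle can be automated with a depth-first search. The sketch below is illustrative, assuming the same hypothetical `{challenge: prerequisites}` dict shape as before; `find_cycle` returns one cycle as a list of names, or `None` if the graph is acyclic.

```python
# Sketch: detect a dependency cycle among challenges via DFS.
# deps maps each challenge to the set of challenges it depends on.

def find_cycle(deps):
    """Return one dependency cycle as a list of names, or None if acyclic."""
    visiting, done = set(), set()
    stack = []

    def visit(name):
        visiting.add(name)
        stack.append(name)
        for dep in deps.get(name, ()):
            if dep in visiting:
                # Cycle found: slice from dep's first occurrence, close the loop.
                return stack[stack.index(dep):] + [dep]
            if dep not in done:
                cycle = visit(dep)
                if cycle:
                    return cycle
        visiting.discard(name)
        done.add(name)
        stack.pop()
        return None

    for name in list(deps):
        if name not in done:
            cycle = visit(name)
            if cycle:
                return cycle
    return None

deps = {
    "03-frontend-ui": {"02-api-endpoints"},
    "02-api-endpoints": {"01-database-schema"},
    "01-database-schema": {"03-frontend-ui"},  # accidental circular dependency
}
print(find_cycle(deps))
# ['03-frontend-ui', '02-api-endpoints', '01-database-schema', '03-frontend-ui']
```

Once the cycle is printed, apply the resolution above: drop the optional edge or split one of the challenges on the loop.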
Quality Debt Accumulation
Symptoms: Functional challenges PASS, quality tier challenges keep failing.
Resolution:
- Don't proceed to Phase 2
- Allocate explicit time for quality work
- Consider: Is the quality standard realistic? Adjust if needed (but document why)
Quick Reference Card
```
┌─────────────────────────────────────────────────────────┐
│ INVOKE   Use Eko GTD:<Domain> to <task>                 │
│ RESUME   Continue Eko GTD @path/to/TODO.md              │
├─────────────────────────────────────────────────────────┤
│ TIERS    S (5-6) | M (6-8) | L (8-10) | XL (10-14)      │
├─────────────────────────────────────────────────────────┤
│ PHASES   1:Challenges → 2:Documentation → 3:Launch      │
├─────────────────────────────────────────────────────────┤
│ DOMAINS  Software | Marketing | Content | Video         │
│          Ads | Research | Business | <Custom>           │
├─────────────────────────────────────────────────────────┤
│ GRADES   A+ (all PASS) | A (functional) | B (most)      │
└─────────────────────────────────────────────────────────┘
```
Master the method, and the method masters complexity.