# Config Prompts

Prompts for configuration management: TypeScript config data files, environment variables, model routing, and feature flags.
## Prompts
| # | Prompt | Purpose |
|---|---|---|
| 1 | Update Model Registry | Add a new AI model or change tier assignments |
| 2 | Update Taxonomy Rules | Per-topic extraction guidance, vocabulary, and voice |
| 3 | Update Challenge Voice | Voice constitution, format, style, and difficulty rules |
| 4 | Update Seed Controls | Seeding pipeline mode, volume, and quality thresholds |
| 5 | Add Banned Patterns | Add CQ violation regex patterns |
## FAQ

### TypeScript Config Data

**How does the configuration system work?**
- Configuration data lives in TypeScript data files within their respective packages (e.g., `packages/ai/src/config/`, `packages/config/src/`).
- Each data file exports typed constants validated with Zod schemas at import time.
- To modify configuration, edit the TypeScript file directly -- no sync step is needed.
- Configuration is imported by TypeScript modules at runtime with full type safety.
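The pattern above can be sketched as follows. This is a hypothetical data file, not an actual export from the repo: the real files use Zod schemas, but this sketch inlines a minimal validator so it runs without dependencies, and the `SeedControls`/`SEED_CONTROLS` names are illustrative.

```typescript
// Hypothetical config data file illustrating "typed constants validated
// at import time". The real files validate with Zod; validateSeedControls
// stands in for a schema's .parse() here.

interface SeedControls {
  mode: "trickle" | "batch";
  dailyVolume: number;
  qualityThreshold: number; // expected to be in [0, 1]
}

function validateSeedControls(data: SeedControls): SeedControls {
  // Throwing here fails fast the moment the module is imported,
  // which is the behavior the Zod .parse() call provides.
  if (!Number.isInteger(data.dailyVolume) || data.dailyVolume <= 0) {
    throw new Error("dailyVolume must be a positive integer");
  }
  if (data.qualityThreshold < 0 || data.qualityThreshold > 1) {
    throw new Error("qualityThreshold must be in [0, 1]");
  }
  return data;
}

// The exported constant is validated as a side effect of the import.
export const SEED_CONTROLS: SeedControls = validateSeedControls({
  mode: "trickle",
  dailyVolume: 200,
  qualityThreshold: 0.6,
});
```

Because validation runs at import time, a bad edit to a config data file fails the first process that loads it rather than surfacing later at the call site.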
**What are the main config data files and where do they live?**

- `packages/ai/src/config/categories.ts` -- Topic categories and seeding prompts (44 CategorySpec definitions across 33 root categories).
- `packages/config/src/model-registry-data.ts` -- AI model registry and tier routing (30 models across 7 providers).
- `packages/ai/src/config/challenge-voice.ts`, `challenge-style-rules.ts`, `challenge-style-voices.ts`, `challenge-difficulty.ts`, `challenge-banned-patterns.ts` -- Voice, style, difficulty, and quality rules.
- `packages/ai/src/config/taxonomy-rules-data.ts`, `taxonomy-voices-data.ts` -- Per-topic guidance, vocabulary, and voice.
### Environment Variables

**Where are environment variables defined and how do I add a new one?**

- Central config: `packages/config/src/index.ts` -- all env vars are loaded and typed here.
- All env vars are loaded from `.env.local` at the monorepo root.
- Example file: `.env.example` (must be kept in sync with actual usage).
- To add a new var: define it in `packages/config/src/index.ts`, add it to `.env.example`, and document it in APP-CONTROL.md.
**What do the env:check scripts validate?**

- `bun run env:check-example` -- Ensures `.env.example` has entries for all used env vars.
- `bun run env:check-local` -- Validates `.env.local` against `.env.example` (warns on missing vars).
- `bun run env:check-typos` -- Checks for common env file typos (e.g., `SUPABASE_ULR` instead of `SUPABASE_URL`).
- `env:check-example` runs in CI to enforce completeness.
**Which env vars control AI budget and cost limits?**

- `ANTHROPIC_DAILY_SPEND_CAP_USD` -- Daily budget cap (default $5).
- `GOOGLE_DAILY_SPEND_CAP_USD` -- Google API daily budget cap (default $3).
- `OPUS_ESCALATION_ENABLED` -- Allow routing to Opus (default false).
- `OPUS_MAX_DAILY_CALLS` -- Hard cap on Opus invocations (default 20).
- `NOTABILITY_THRESHOLD` -- Minimum score to retain a fact (default 0.6).
- `FACT_EXTRACTION_BATCH_SIZE` -- Facts processed per AI call (default 10).
- All defined in `packages/config/src/index.ts`.
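A minimal sketch of how vars like these might be loaded with their documented defaults. The helper names (`numFromEnv`, `boolFromEnv`) are assumptions, not the actual helpers in `packages/config/src/index.ts`:

```typescript
// Illustrative env loading with typed defaults; helper names are made up.

function numFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return fallback;
  const parsed = Number(raw);
  if (Number.isNaN(parsed)) {
    throw new Error(`${name} must be numeric, got "${raw}"`);
  }
  return parsed;
}

function boolFromEnv(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  return raw === undefined ? fallback : raw === "true";
}

// Defaults match the values documented above.
export const ANTHROPIC_DAILY_SPEND_CAP_USD = numFromEnv("ANTHROPIC_DAILY_SPEND_CAP_USD", 5);
export const OPUS_ESCALATION_ENABLED = boolFromEnv("OPUS_ESCALATION_ENABLED", false);
export const OPUS_MAX_DAILY_CALLS = numFromEnv("OPUS_MAX_DAILY_CALLS", 20);
```

Parsing and defaulting in one central module is what gives the rest of the codebase typed access to env vars without repeated `process.env` lookups.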
### Model Routing

**How does the three-tier model routing system work?**

- Three tiers: `default` (92% of calls, cost-efficient), `mid` (5%, balanced), `high` (1%, top-tier reasoning).
- Tier-to-model mapping: `packages/config/src/model-registry-data.ts`.
- Runtime: `packages/ai/src/model-router.ts` reads config from the `ai_model_tier_config` DB table with a 60-second cache.
- Each model has a ModelAdapter in `packages/ai/src/models/adapters/` for per-model prompt optimizations.
- Current default mapping (fallback when the DB is unavailable): all tiers → `gemini-3-flash-preview`.
- Thinking budgets: models that support extended thinking (e.g., Gemini 3 Flash) have configurable thinking token limits per adapter.
- Registry: `packages/config/src/model-registry.ts`.
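The cache-plus-fallback behavior described above can be sketched like this. This is not the actual router code: the DB read is faked (it returns `null`, simulating an unavailable DB) so the sketch is self-contained, and names other than the tier names and the fallback model are assumptions.

```typescript
// Sketch of tier routing with a 60-second config cache and a code fallback.

type Tier = "default" | "mid" | "high";

// Code fallback: all tiers currently map to gemini-3-flash-preview.
const DEFAULT_TIER_CONFIG: Record<Tier, string> = {
  default: "gemini-3-flash-preview",
  mid: "gemini-3-flash-preview",
  high: "gemini-3-flash-preview",
};

const CACHE_TTL_MS = 60_000; // 60-second refresh window
let cache: { config: Record<Tier, string>; fetchedAt: number } | null = null;

// Stand-in for the ai_model_tier_config query; null simulates a DB outage.
async function fetchTierConfigFromDb(): Promise<Record<Tier, string> | null> {
  return null;
}

export async function modelForTier(tier: Tier): Promise<string> {
  const now = Date.now();
  if (!cache || now - cache.fetchedAt > CACHE_TTL_MS) {
    const fromDb = await fetchTierConfigFromDb();
    // Fall back to the code mapping when the DB is unavailable.
    cache = { config: fromDb ?? DEFAULT_TIER_CONFIG, fetchedAt: now };
  }
  return cache.config[tier];
}
```

The 60-second TTL is why DB-side tier changes propagate without a restart: the next lookup after the cache expires picks up the new mapping.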
**How do I add a new AI model to the registry?**

- Add a new entry to `packages/config/src/model-registry-data.ts` with: `model_id`, `provider`, `status`, `input_price`, `output_price`, `deprecation_note`.
- Add the model identifier to the `ai_model` DB enum via migration: `ALTER TYPE ai_model ADD VALUE IF NOT EXISTS '<model-id>'`. The enum must include the model before it can be set in `ai_model_tier_config`.
- Optionally create a ModelAdapter in `packages/ai/src/models/adapters/`.
- Adapters added Feb 2026: `gemini-3-flash-preview`, `gpt-5-nano`, `grok-4-1-fast-non-reasoning`, `MiniMax-M2.5`.
- Test: `bun scripts/seed/llm-fact-quality-testing.ts --all --models <model_id> --limit 50`.
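A registry entry built from the fields listed above might look like the following. The `ModelEntry` interface, the field types, and the price values are all assumptions for illustration, not the actual schema or pricing in `model-registry-data.ts`:

```typescript
// Hypothetical shape of a model registry entry; only the field names
// come from the doc above, everything else is assumed.

interface ModelEntry {
  model_id: string;
  provider: string;
  status: "active" | "deprecated";
  input_price: number;  // USD per million input tokens (unit assumed)
  output_price: number; // USD per million output tokens (unit assumed)
  deprecation_note: string | null;
}

const newModel: ModelEntry = {
  model_id: "gemini-3-flash-preview",
  provider: "google",
  status: "active",
  input_price: 0.1,  // placeholder, not real pricing
  output_price: 0.4, // placeholder, not real pricing
  deprecation_note: null,
};
```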
**How do I change which model is used for a specific tier?**

- DB override (production): Update the `ai_model_tier_config` table directly -- this is the authoritative source at runtime. Changes propagate within 60 seconds (DB cache refresh); no restart needed.
- Code fallback: Edit `DEFAULT_TIER_CONFIG_DATA` in `packages/config/src/model-registry-data.ts` -- this is the fallback used when the DB is unavailable.
- The model must exist in both `MODEL_REGISTRY_DATA` (code) and the `ai_model` DB enum before it can be assigned to a tier.
**What is Opus escalation and how is it controlled?**

- Allows routing to Opus (the highest-tier model) for the top 1% of complex tasks.
- Controlled by `OPUS_ESCALATION_ENABLED` (default false) and `OPUS_MAX_DAILY_CALLS` (default 20).
- When enabled, the model router may select Opus for high-tier requests.
- Config: `packages/config/src/index.ts`.
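The two controls above combine into a simple gate. This is a sketch of the implied logic, not the router's actual code; the function and counter names are illustrative:

```typescript
// Opus escalation gate: both the feature switch and the daily hard cap
// must permit the call. Names below are made up for illustration.

const OPUS_ESCALATION_ENABLED = false; // documented env default
const OPUS_MAX_DAILY_CALLS = 20;       // documented env default

let opusCallsToday = 0; // would reset daily in a real implementation

export function canEscalateToOpus(): boolean {
  return OPUS_ESCALATION_ENABLED && opusCallsToday < OPUS_MAX_DAILY_CALLS;
}

export function recordOpusCall(): void {
  opusCallsToday += 1;
}
```

With both defaults in place, escalation is off entirely; flipping `OPUS_ESCALATION_ENABLED` on still caps spend via the hard call limit.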
### Feature Flags

**Where are feature flags stored and how do I toggle one?**

- Database table accessed via `packages/db/src/drizzle/feature-flags.ts`.
- Admin UI: `admin.eko.day/feature-flags` -- toggle switches with immediate effect.
- Runtime checks throughout the codebase via feature flag queries.
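A runtime flag check might look like the sketch below. The real flags live in a DB table accessed through Drizzle; here an in-memory map stands in so the sketch runs standalone, and the flag names are made up:

```typescript
// Illustrative feature-flag lookup; the store and flag names are fake.

const flagStore = new Map<string, boolean>([
  ["new-onboarding", true],
  ["beta-leaderboard", false],
]);

// Unknown flags default to off, so a missing row never enables a feature.
export function isFeatureEnabled(name: string): boolean {
  return flagStore.get(name) ?? false;
}
```

Defaulting unknown flags to `false` is the usual safe choice: deleting a flag row disables the feature rather than leaving its state undefined.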
**What are the key operational feature flags?**

- `EVERGREEN_ENABLED` -- Master switch for evergreen fact generation (env var, not a DB flag).
- Feature flags in the DB control user-facing features (specific flags depend on the current implementation).
- The admin dashboard provides a toggle UI with immediate effect.
- Note: some "flags" are env vars (like `EVERGREEN_ENABLED`); others are DB-stored runtime flags.
## See Also

- APP-CONTROL.md -- Environment controls section
- Key sources: `packages/config/src/index.ts`, `packages/ai/src/model-router.ts`