Now - Week 1
- Platform evaluation complete
- API waitlist submitted
- Migration assessment done
- Architecture designed

Week 2
- GitHub repo setup
- Config schema standardized
- Webhook trigger tested
- Receiver rewritten

Week 3
- Speaker naming prompt
- Parallel run begins
- Config loader built
- Docs written

Week 4
- Legacy system cutover
- Config self-maintenance
- Email scan begins
- Email agent starts

Week 5+
- Email agent live
- Stabilization period
- Teaching documentation
- API access (if granted)
The foundation everything else builds on. Configs move from local files to a versioned, programmatically accessible repository. Schema gets standardized with an Agent Instructions section that makes each config self-describing for every consumer.
advisor-os/
├── configs/
│   ├── CONFIG_INDEX.md
│   ├── clients/      Client A, Client B, Client C, Client D
│   ├── ventures/     Portfolio companies
│   ├── themes/       Content platforms, programs
│   ├── partners/     Joint ventures
│   └── personal/     Non-client projects
├── agents/
│   ├── transcript/   processor.py, receiver.py
│   ├── email/        email_agent.py
│   └── shared/       config_loader.py (new, used by all agents)
├── prompts/          system prompts per agent + config_update_prompt
├── drafts/           proposed config updates land here for review
└── docs/             ARCHITECTURE.md, PIPELINE_REFERENCE.md, CONFIG_SCHEMA.md
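For illustration, a client config under the standardized schema might look like the sketch below. Every field name here is an assumption; the authoritative definitions live in CONFIG_SCHEMA.md.

```markdown
# Client A

## Engagement
- Type: advisory retainer
- Status: active

## Contacts
- Jane Doe (CEO), jane@clienta.example

## Agent Instructions

### Transcript Processor
- Summarize action items by owner; flag scope changes.

### Email Agent
- Tone: formal, concise. Drafts only; never send.
```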
| Task | Type | Time |
| --- | --- | --- |
| Create private GitHub repo with categorized folder structure | Build | 15 min |
| Migrate existing configs into categorized folders | Build | 30 min |
| Add Agent Instructions section to each config for transcript and email consumers | Config | 60 min |
| Write CONFIG_SCHEMA.md with field definitions, agent instructions format, examples | Build, Teaching | 45 min |
| Build shared config_loader.py that fetches all configs from GitHub at runtime | Build | 30 min |
| Add GitHub token to env and test config loader against live repo | Validate | 20 min |
Teaching context
GitHub as config source of truth is the pattern that separates hobby automation from durable systems. Version history means every change is reversible. The pull request review step is intentional friction: the same principle as manual import to Claude Projects, applied to config maintenance.
The receiver gets rewritten for the new transcription platform's trigger format. Everything downstream (config loading, Claude API routing, output naming, file actions, manual import) transfers unchanged. Run both systems in parallel for two weeks before cutting over.
CURRENT
Transcription Platform A -> webhook -> ngrok -> Flask receiver -> _inbox/ -> Folder Action -> processor.py -> Claude API -> output
FUTURE
Transcription Platform B -> Zapier -> ngrok -> Flask receiver (rewritten) -> _inbox/ -> Folder Action -> processor.py -> Claude API -> output
(or direct API if waitlist approved)
UNCHANGED: ngrok, Folder Action, processor.py, Claude API, output folder, manual import
| Task | Type | Time |
| --- | --- | --- |
| Set up webhook trigger on a test meeting and inspect raw payload JSON | Validate | 60 min |
| Confirm whether payload delivers full transcript or requires secondary API call | Validate | within test above |
| Confirm whether UI-assigned speaker names flow through webhook payload | Validate | within test above |
| Rewrite Flask receiver for new trigger format and transcript extraction | Build | 2-4 hours |
| Build speaker naming prompt that fires before processing if generic labels detected | Build, Design | 1-2 hours |
| Update processor.py to use shared config_loader.py (GitHub source) | Build | 30 min |
| Run parallel period: both platforms active on all meetings for 2 weeks | Validate | 2 weeks |
| Cut over: disable legacy webhook, new platform becomes sole source | Build | 15 min |
| If API access granted: replace Zapier with direct webhook integration | Waiting | 2-3 hours if approved |
Teaching context
The parallel run period is non-negotiable on any pipeline migration. Run both systems simultaneously, compare outputs, validate edge cases before cutting over. The cost of overlap is minimal. The cost of a failed cutover on a live client engagement is not.
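Comparing the two pipelines during the parallel run can be as simple as diffing which meetings each one produced output for. A sketch, with folder names and the `.md` output extension assumed:

```python
"""Sketch of a parallel-run comparison check. Output folder names and
the .md extension are assumptions about the pipeline's output format."""
from pathlib import Path


def output_stems(folder: str) -> set[str]:
    """Collect output filenames (without extension) from one pipeline."""
    return {p.stem for p in Path(folder).glob("*.md")}


def compare_runs(legacy: set[str], new: set[str]) -> dict[str, set[str]]:
    """Flag meetings that only one pipeline produced output for."""
    return {
        "missing_from_new": legacy - new,
        "missing_from_legacy": new - legacy,
    }
```

Run it daily during the overlap; an empty report on both sides for two weeks is the signal that cutover is safe.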
Configs stay current because meetings and emails are the update trigger, not a maintenance cadence you remember to run. Two mechanisms: a post-transcript Claude call that proposes updates, and a weekly email scan that catches new contacts before they appear in meetings.
After each meeting is processed, a second Claude call reviews the transcript against all configs. Proposes updates for new contacts, role changes, scope shifts, ended engagements. Writes a draft to GitHub. Fires a desktop notification.
Runs every Sunday evening. Scans 7 days of sent and received email. Extracts contacts appearing 2+ times with no config entry. Drafts skeleton config proposals. Weekly summary notification Monday morning.
All proposed updates land in drafts/proposed_config_updates/ as markdown files. You review, accept what is right, discard what is not, and merge. Merge updates the live config. Every agent gets current config on next run.
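The weekly scan's core filter, contacts seen twice or more in the lookback window with no existing config entry, can be sketched like this (input shapes are assumptions; the real scan would pull senders via the Gmail integration):

```python
"""Sketch of the weekly email-scan filter: surface contacts appearing
2+ times in the lookback window with no config entry. Input shapes
are assumptions about what the Gmail scan would produce."""
from collections import Counter


def contacts_needing_configs(
    senders: list[str],
    known_contacts: set[str],
    threshold: int = 2,
) -> list[str]:
    """Return addresses seen `threshold`+ times that match no config."""
    counts = Counter(addr.lower() for addr in senders)
    return sorted(
        addr for addr, n in counts.items()
        if n >= threshold and addr not in known_contacts
    )
```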
| Task | Type | Time |
| --- | --- | --- |
| Write config_update_prompt.md: the signal/noise separation prompt for proposed updates | Design, Teaching | 60 min |
| Add propose_config_updates() function to processor.py: second Claude call post-summary | Build | 45 min |
| Build draft writer: formats proposed updates as GitHub-ready markdown | Build | 30 min |
| Configure weekly email scan: Sunday 8pm, 7-day lookback, 2+ contact threshold | Build | 60 min |
| Test full loop: new contact in meeting -> draft proposed -> review in GitHub -> merge -> loader pickup | Validate | 30 min |
The email agent reads the same configs as the transcript processor, using each config's Email Agent section for contact matching, tone, flagging rules, and draft style. One config system, multiple consumers. Adding a new client config automatically makes that client known to every agent.
Gmail MCP via Claude. Matches sender to config. Applies client-specific tone, flagging rules, and sensitivity guidelines from the same config the transcript processor uses.
Produces drafts via Gmail MCP. Draft style, subject line conventions, and sign-off preferences all come from the config's Email Agent instructions. Drafts only. Never sends.
Email agent flags when a thread surfaces new context: new stakeholder CC'd, scope language, contract references. Proposes config updates via the same draft mechanism as the transcript processor.
| Task | Type | Time |
| --- | --- | --- |
| Add Email Agent instructions section to all existing configs | Config | 60 min |
| Write email_system_prompt.md: contact matching, tone application, draft conventions | Design, Teaching | 60 min |
| Build email_agent.py: uses shared config_loader, Gmail MCP, produces drafts | Build | 3-4 hours |
| Test on 3 client email threads before going live: validate tone and sensitivity handling | Validate | 60 min |
Three living documents that make the system accessible to any Claude instance, to you six months from now, and to any practitioner building their own version. Written during the build, not after.
| Task | Type | Time |
| --- | --- | --- |
| Write ARCHITECTURE.md: system map, how components connect, design decisions and why | Build, Teaching | 60 min |
| Write PIPELINE_REFERENCE.md: file paths, env vars, trigger chain, how to restart, how to add a config | Build | 45 min |
| Write CONFIG_SCHEMA.md: every field, agent instructions format, examples, new config template | Build, Teaching | 45 min |
| Update practice case study to reflect completed migration and new architecture | Build | 60 min |
The meta-lesson
This system is the answer to "how do I get AI to know my business?" The config is the context layer. GitHub is the memory that persists across every tool. The agents are the consumers. Any practitioner running a complex practice can build a version of this: the complexity scales down to their situation. The principles do not change: single source of truth, event-driven maintenance, intentional human review, documented so it survives your memory.