Module 2 Debrief
Week of March 9–13, 2026
This week all three cohorts completed Module 2. What happened in these sessions was different from Module 1 — less orientation, more discovery. You came in with real experiments, taught each other, and left with a clearer sense of what this technology can actually do in your hands.
These notes synthesize what emerged across all three groups.
The Fifty First Dates Problem
Every conversation with an AI model starts from zero. No memory of last week. No memory of five minutes ago in a different window. Unless you build the system that changes that, you are meeting a stranger every single time.
This is not a flaw to work around. It is the core behavior to understand. Claude Projects exist specifically to solve it — a persistent context layer that travels with you across conversations, across days, across whatever you are building. Instructions you set once. Files that inform every exchange. A memory that updates as your work develops.
The homework this week was to build one. Not a perfect one. A real one, organized around a problem that actually matters to you.
What You Brought to the Room
The most useful parts of these sessions came from you, not from the curriculum. Across all three cohorts, participants arrived with real experiments — things that worked, things that did not, and a few things that surprised everyone in the room.
Adversarial prompting. One participant drafted a business development proposal with AI, built out decision-maker personas, then turned the same tool against its own work: “Now be a skeptic.” The result was a list of objections to the proposal he would not have surfaced on his own. AI as both builder and challenger — that is a high-value workflow.
The assumption disclosure instruction. Another participant shared a standing rule he adds to every substantive conversation: ask Claude to disclose all assumptions it is currently working with before proceeding. It sounds simple. It surfaces things that would otherwise stay invisible and get baked silently into the output.
The markdown handoff. When a long conversation starts to degrade — when you see Claude compacting, or responses getting shallower — one participant invented a technique before we taught it: ask Claude to generate a markdown summary of what you have built together, then use that file to seed a fresh conversation. You carry forward the substance without the weight. Several of you hit this situation during the week and will recognize why it matters.
AI as thought partner, not search engine. The most common observation across all three cohorts was a version of the same thing: the difference is not in the answer, it is in the conversation that follows. Perplexity did not just return results — it asked follow-up questions. Claude did not just draft content — it pushed back on assumptions. Several of you used the phrase “thought partner” without being prompted. That is the frame we are building toward.
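The markdown handoff described above is really a two-step workflow: ask for a summary at the end of one conversation, then paste it at the top of the next. A minimal sketch of those two messages, where the prompt wording and function names are purely illustrative, not anything the tool requires:

```python
# Sketch of the markdown-handoff workflow. Prompt wording and names
# here are illustrative assumptions, not an official feature or API.

HANDOFF_REQUEST = (
    "Please write a markdown summary of everything we have built in this "
    "conversation: decisions made, open questions, and any drafts or files "
    "we produced. I will use it to seed a fresh conversation."
)

def seed_message(summary_md: str, next_task: str) -> str:
    """Build the opening message of a new conversation from a handoff summary."""
    return (
        "Context from a previous conversation:\n\n"
        f"{summary_md}\n\n"
        f"Picking up from there: {next_task}"
    )

# Example: the summary comes back from the old conversation, then seeds the new one.
summary = "## Proposal draft\n- Three decision-maker personas\n- Open: pricing objections"
opening = seed_message(summary, "draft responses to the pricing objections.")
```

The point of the structure is that the new conversation starts with the substance of the old one, but none of its accumulated length.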
A Note on Token Budgets
Several of you hit a wall this week — a conversation that stopped working, an error that seemed to come from nowhere, a response that felt like the tool had forgotten the last hour of work.
This is almost always a token budget issue, not a server problem. Every conversation has a context limit: a fixed budget of tokens, the small chunks of text the model reads and writes. When Claude tells you it is “compacting” a conversation, it is already approaching that limit and compressing what it references. When a conversation dies unexpectedly, that budget is likely exhausted.
The rule of thumb: time-box your conversations. When you reach a natural stopping point — or when you see that compacting message — create a markdown summary and hand it off to a new conversation or drop it into your Project files. Do not try to squeeze more out of a conversation that is showing signs of strain. The work you have done is not lost as long as you capture it.
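One rough way to make the budget concrete: English text averages around four characters per token, so you can estimate how much of a context window a conversation is consuming. Both the four-characters figure and the window size below are rough heuristics for illustration, not exact numbers for any particular model:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token of English text (heuristic)."""
    return max(1, len(text) // 4)

def budget_remaining(conversation: list[str], window: int = 200_000) -> float:
    """Fraction of an assumed context window still free (window size is illustrative)."""
    used = sum(estimate_tokens(turn) for turn in conversation)
    return max(0.0, 1 - used / window)

# If most of the assumed window is gone, it is time for the markdown handoff.
convo = ["Draft a proposal for ..." * 50, "Here is a draft ..." * 400]
if budget_remaining(convo) < 0.2:
    print("Time to hand off: summarize and start fresh.")
```

The estimate does not need to be precise; it only needs to tell you when a conversation is closer to its end than its beginning.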
You will run into this again. When you do, the markdown handoff is your recovery move.
