Week of March 23–27, 2026

Session Debrief

What happened across all three cohorts when Module 4 ran. The synthesis draws from every session; there is more here than any one group saw.


The Central Idea

Conversations degrade. This is not a bug — it is the architecture. As a conversation grows, the model develops a bias toward recent context and loses track of what came first. When you introduce contradictions, it stops resolving them and starts agreeing with everything. When the context window fills, outputs get shorter, more generic, and less useful.

The instinct when this happens is to push harder. That is the wrong move.

The right move is to stop, ask Claude to generate a structured markdown summary of the conversation, and start fresh in a new window with that summary as the seed. What feels like abandoning work is actually distilling it. Every participant who tried this described the same experience: the fresh conversation feels like working with a different model. In every meaningful sense, you are. You have removed the clutter and handed it the substance.
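
If you work against the API rather than the chat interface, the same handoff can be scripted. Here is a minimal sketch, assuming the official anthropic Python SDK; the model name and the summary prompt wording are placeholders, not prescriptions:

```python
# Minimal sketch of the markdown-handoff pattern via the Anthropic API.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
# The model name and prompt wording are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # substitute the model you actually use

def distill(history: list[dict]) -> str:
    """Ask Claude for a structured markdown summary of a long conversation."""
    resp = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        messages=history + [{
            "role": "user",
            "content": "Generate a structured markdown summary of this "
                       "conversation: goals, decisions made, open questions, "
                       "and constraints. I will use it to seed a fresh session.",
        }],
    )
    return resp.content[0].text

def fresh_start(summary: str, next_question: str) -> str:
    """Open a new conversation seeded with the distilled summary."""
    resp = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"Context distilled from a prior session:\n\n{summary}"
                       f"\n\n{next_question}",
        }],
    )
    return resp.content[0].text
```

The new window never sees the clutter; it sees only the substance.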

"I could literally go do this right now because of the work."

Participant, Cohort B — after building a real-time bid engine requirements document in one session

What Participants Brought to the Room

The adversarial sequencing homework produced some of the most substantive participant work the program has seen.

Murder board on a market entry

One participant used a red team framing to pressure-test a potential move into an emerging insurance segment. The output surfaced two risks he was underweighting: how quickly a new entrant can get burned out of the gate without sufficient data to recover, and how reinsurers will price an unfamiliar risk class they cannot yet model accurately. Speed and organization were part of the value, but the 10% he would not have gotten to on his own was the part that mattered.

Seven passes

One participant ran seven iterations on a blog post, having Claude critique itself as a complete skeptic after each pass — calling out marketing copy weaknesses until the content had genuinely absorbed the criticism rather than just acknowledged it. He had not yet read the final version by the time the session ended. That is the right discipline: run the process, then evaluate the output, rather than accepting the first thing that sounds finished.
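
The discipline is mechanical enough to script. Below is a minimal sketch of the critique-and-revise loop against the Anthropic API; the skeptic framing and the seven passes come from the anecdote, while the model name, personas, and topic are placeholders:

```python
# Sketch of an iterative self-critique loop: draft, critique as a skeptic,
# revise, repeat. Model name, personas, and topic are illustrative.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder

def ask(system: str, prompt: str) -> str:
    resp = client.messages.create(
        model=MODEL, max_tokens=2000, system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

draft = ask("You are a marketing writer.", "Draft a blog post about <topic>.")
for _ in range(7):  # seven passes, per the anecdote
    critique = ask(
        "You are a complete skeptic of marketing copy.",
        f"Call out every weakness in this post:\n\n{draft}",
    )
    draft = ask(
        "You are a marketing writer.",
        "Revise the post so it genuinely absorbs this criticism rather than "
        f"just acknowledging it.\n\nPost:\n{draft}\n\nCritique:\n{critique}",
    )
# Only now read and evaluate `draft`: run the process, then judge the output.
```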

Domain expertise as context multiplier

One participant built a complete real-time bid engine requirements document — with pacing controls, identity verification attributes, and objections from financial, IT, vendor, and marketing perspectives — one week after struggling with a basic business plan. The unlock was bringing her full domain expertise into the setup frame. Claude's output quality is bounded by the context it receives. When you bring deep professional knowledge to the opening of a conversation, the model works at the level of that knowledge, not below it.

The tandem critique

One participant built a presentation in Claude, then took it to Perplexity with two adversarial roles: first as a non-technical marketing person, then as an engineer looking for anything that felt untrue or exaggerated. The two passes caught different things. Using models in tandem as critics with different lenses is more rigorous than asking one model to check its own work.
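
The pattern is easy to reproduce. The sketch below runs both adversarial roles as system prompts against the Anthropic API; the participant actually took the deck to a second model entirely, which avoids self-review bias even more fully. Model name and role wording are placeholders:

```python
# Two adversarial review passes over the same presentation, each with a
# different lens. One API with two personas is used here for brevity.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder

LENSES = [
    "You are a non-technical marketing person. Flag anything confusing, "
    "jargon-heavy, or unpersuasive.",
    "You are a skeptical engineer. Flag any claim that feels untrue or "
    "exaggerated.",
]

def tandem_critique(presentation: str) -> list[str]:
    """Run each critic lens over the deck; the passes catch different things."""
    reports = []
    for lens in LENSES:
        resp = client.messages.create(
            model=MODEL, max_tokens=1500, system=lens,
            messages=[{"role": "user", "content": presentation}],
        )
        reports.append(resp.content[0].text)
    return reports
```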

Naming it in real time

One participant built automated presentations with Claude, hit the moment where Claude started saying "you're right" to everything, pushed back to demand both sides of the argument, got good output — and then recognized that Claude's final recommendation was the model wrapping up a conversation it was done with. He named it as it happened. That recognition is the literacy this program is designed to build.


Techniques Worth Keeping

A number of practices surfaced across the three sessions. The full collection, credited to the participants who brought them into the room, follows below as the Participant Technique Field Guide.

Participant Technique Field Guide
Spring 2026 Cohorts

Three-layer setup

Before starting any substantive conversation, declare three things: what you know, what you know you don't know, and where your unknown unknowns are likely to sit. That third layer, handing the model your unknown unknowns as a named category, separates a good setup from a great one.
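
A minimal template for the declaration, with placeholders rather than real content; the three-part structure is the technique, the wording is illustrative:

```python
# Illustrative three-layer setup message. Every angle-bracketed field is a
# placeholder; the known / known-unknown / unknown-unknown structure is the point.
THREE_LAYER_SETUP = """Before we begin, three layers of context:

WHAT I KNOW: <your domain facts, experience, and constraints>
WHAT I KNOW I DON'T KNOW: <the named gaps you want filled>
UNKNOWN UNKNOWNS: I am new to <area>, so assume there are failure modes
I cannot name. Surface them proactively rather than waiting to be asked.

My first question: <your actual question>"""
```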

Source-citation instruction

Ask Claude to bold any numerical claim and annotate it with its source at the point of generation, not in a references section at the end. Verification gets built into the output rather than chased afterward.
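
One way to phrase it as a standing instruction; the wording below is illustrative, not canonical:

```python
# Illustrative source-citation instruction, suitable for a system prompt or
# the top of a conversation. A sketch of the technique, not a fixed prompt.
CITE_NUMBERS = (
    "For any numerical claim you generate, bold the number and note its "
    "source in parentheses immediately after it, at the point of generation. "
    "Do not defer sources to a references section at the end. If you cannot "
    "source a number, say so inline."
)
```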

Setup frame in practice

One participant ran a live test on a real RFP, deliberately specifying audience, reviewer, goal, pricing parameters, and constraints. The second output ran 17 pages and needed only minimal tweaks. The gap between a bare request and a full setup frame is the difference between a draft you rework three times and one you ship.

When to stop

The most honest signal that a conversation has run its course: you start cussing at it. That emotional signal is more reliable than any technical indicator. Create the markdown. Start over.

The tailor problem

On why you cannot fix a degraded conversation by telling it to forget what happened: it is like re-altering a suit that has already been tailored too many times. At a certain point the fabric is too far gone, and you have to throw it out and start over. The answer is fresh material with the right cuts already in it.


A Note on Where This Is Headed

Several participants this week crossed a threshold worth naming. One connected Claude to his email and Notion and described the experience as falling out of his chair. Another's team connected Claude directly to a product analytics platform, eliminating the copy-paste step entirely. A third processed 117,000 emails through a ten-minute workflow instruction.

These are not edge cases. They are where the program is headed. The manual disciplines you are building now — setup frames, markdown handoffs, adversarial sequences, fresh starts — are the human behaviors that make automated systems trustworthy. The platform will keep getting better at memory, search, and direct application connections. The judgment about what to build, what to trust, and what to verify will always be yours.


Homework for Module 5
Assignment 1

Practice a full setup frame on a new conversation. Not a warm-up prompt — a real one, with role, context, objectives, audience, and constraints defined before your first question.
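
If a template helps, here is one illustrative shape for the frame; every bracketed field is a placeholder to fill before your first real question:

```python
# Illustrative full setup frame for Assignment 1. All bracketed fields are
# placeholders; the point is that each is defined before the first question.
SETUP_FRAME = """ROLE: You are <the expert persona Claude should adopt>.
CONTEXT: <the situation, the organization, what has already happened>.
OBJECTIVES: <what a successful output looks like and what it is for>.
AUDIENCE: <who will read the output and what they care about>.
CONSTRAINTS: <length, tone, format, pricing parameters, hard exclusions>.

First question: <your actual question>"""
```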

Assignment 2

Deliberately burn a conversation. Get it to the point where it is degraded and frustrating. Then create a markdown and start fresh. Pay attention to what it feels like on the other side. That experience is the lesson. Reading about it is not.

Your pre-read for Module 5 is the AI Fluency Index from Anthropic's research team and the conversation architecture reference. Module 5 content is already available at jayfontanini.com/accelerator/curriculum/module-5/.