You have been using AI for five weeks. This session is about what that experience has actually taught you, and what it means to work alongside systems you cannot fully verify.
The pre-read this week was the AI Model Landscape field guide. Before going anywhere new, find out what landed. Ask the room what surprised them — one thing they didn't expect to read.
Listen for three threads: surprise at the speed of capability convergence, surprise at the economics (DeepSeek's $5.6 million training cost), and surprise at the strategic positioning differences between platforms. Any of those opens something worth following.
If the group is quiet, offer the key insight from the guide directly: no single platform dominates every benchmark. The old question of "which AI should I use?" has been replaced by "which AI for this task?"
"What surprised you most in the field guide? One thing you didn't expect to read."
Open floor. Who tried something in the last two weeks that they want to show the group? The Conversation Architecture patterns from Module 4 are the anchor, but bring anything that worked or didn't.
If a participant demonstrates something with clear value, name the technique before moving on. The Participant Technique Field Guide captures these. If something new surfaces today, it belongs there.
Hold the floor for this. Show-and-tell is consistently the highest-value format in the program. Don't rush through it to get to instructor content.
"Walk me through a conversation that went sideways. Not what you did right. What broke, and what you tried to fix it."
This is the hard question at the center of Module 5. Not "is AI accurate?" but "what does trust actually require when the system is probabilistic by design?"
The field guide documented the verification imperative: Claude's low hallucination rate, Perplexity's citation-first design, cross-model validation as a professional practice. But that's the tactical version. Let this discussion go deeper.
One thread worth following if it surfaces:
If the group is advanced, introduce the Dario Amodei consciousness question. His March 2026 essay on AI moral status is not a curiosity — it is a signal about where the people building these systems think this is going. You don't need to take a position. But participants should know the question is being asked seriously.
"You've been using AI for five weeks. On a scale of one to ten, how much do you trust what it gives you? And what would move that number in either direction?"
Participants bring a claim, a decision, or a piece of analysis they generated with AI over the last two weeks — something where accuracy actually matters. Run the same question through Claude and one other platform, then come back in five minutes with one thing that surprised you about the comparison.
The goal is not to find the "right" answer in a different platform. It is to develop the habit of treating AI output as a first draft, not a final authority.
Cut this at 45 minutes total. Do not let live practice run into the project seed block.
Modules 7 and 8 are collaborative builds. Each cohort will identify a shared challenge, design an AI-assisted approach to it, and present at the combined Module 9 capstone. Problem selection happens at Module 6. But the thinking starts now.
Two constraints worth naming: the project must be doable in two sessions, and it must be something participants can speak to publicly at the capstone without sharing proprietary data. Abstract the problem. Do not share what you cannot share.
"What problem in your industry, your organization, or your function would you want your cohort to take on together? Something that actually matters, not something safe."
Two assignments. Details on the homework page. Assignment 1 is cross-model validation on a real piece of work. Assignment 2 is a two- to three-sentence problem statement for the group-project conversation at Module 6.
One note before close: the Vinnie Garth field trip is Friday, April 10. Cross-cohort. Vinnie has built a ChatGPT-native operating system for his executive work. He will show it to all three cohorts together. Participants now have the model ecosystem vocabulary to understand what he built and why he built it the way he did. That context is the point of sequencing it after Module 5.
Next session we move from understanding the model landscape to building workflows within it. Multi-model patterns in practice. And the group project problem selection begins — come with your statement ready.