AI Executive Accelerator
Session Guide — Week 5

Trust, Complexity, and the Verification Imperative

You have been using AI for five weeks. This session is about what that experience has actually taught you, and what it means to work alongside systems you cannot fully verify.

Session Agenda
10 min

Landscape Review: What the Field Guide Showed You

The pre-read this week was the AI Model Landscape field guide. Before going anywhere new, find out what landed. Ask the room what surprised them — one thing they didn't expect to read.

Listen for three threads: surprise at the speed of capability convergence, surprise at the economics (DeepSeek's $5.6 million training cost), and surprise at the strategic positioning differences between platforms. Any of those opens something worth following.

If the group is quiet, offer the key insight from the guide directly: no single platform dominates every benchmark. The old question of "which AI should I use?" has been replaced by "which AI for this task?"

Opening Question

"What surprised you most in the field guide? One thing you didn't expect to read."

15 min

Show and Tell: What You Built Since Module 4

Open floor. Who tried something in the last two weeks that they want to show the group? The Conversation Architecture patterns from Module 4 are the anchor, but bring anything that worked or didn't.

If a participant demonstrates something with clear value, name the technique before moving on. The Participant Technique Field Guide captures these. If something new surfaces today, it belongs there.

Hold the floor for this. Show-and-tell is consistently the highest-value format in the program. Don't rush through it to get to instructor content.

If the Room Is Quiet

"Walk me through a conversation that went sideways. Not what you did right. What broke, and what you tried to fix it."

13 min

Trust Discussion: What Does It Mean to Trust a System You Cannot Fully Verify?

This is the hard question at the center of Module 5. Not "is AI accurate?" but "what does trust actually require when the system is probabilistic by design?"

The field guide documented the verification imperative: Claude's low hallucination rate, Perplexity's citation-first design, cross-model validation as a professional practice. But that's the tactical version. Let this discussion go deeper.

Two threads worth following if they surface:

  • The verification burden is real and unevenly distributed. High-stakes compliance work requires a different standard than drafting an internal memo. The question is whether participants have calibrated that difference consciously or by accident.
  • AI systems do not know what they do not know. Confidence in the output has no relationship to accuracy. The absence of hedging is not evidence of correctness.

If the group is advanced, introduce the Dario Amodei consciousness question. His March 2026 essay on AI moral status is not a curiosity — it is a signal about where the people building these systems think this is going. You don't need to take a position. But participants should know the question is being asked seriously.

The Central Question

"You've been using AI for five weeks. On a scale of one to ten, how much do you trust what it gives you? And what would move that number in either direction?"

12 min

Live Practice: Multi-Model Validation on Real Work

Participants bring a claim, a decision, or a piece of analysis they generated with AI over the last two weeks. Something where accuracy actually matters. Run the same question through Claude and one other platform. Come back in five minutes with one thing that surprised you about the comparison.

  • Claude for analytical depth, contract-style reasoning, or compliance-sensitive language
  • Perplexity for anything that requires current information or cited sources
  • ChatGPT for a second opinion on structure or framing when you suspect your prompt is the problem

The goal is not to find the "right" answer in a different platform. It is to develop the habit of treating AI output as a first draft, not a final authority.
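For facilitators who want to make the comparison concrete, the habit can even be sketched in code. The helper below is a hypothetical illustration, not part of the program materials: it compares two model responses sentence by sentence using a crude word-overlap score, and flags claims that appear in only one response. Those unmatched claims are exactly the ones to verify by hand first.

```python
import re


def unmatched_claims(response_a: str, response_b: str, threshold: float = 0.5) -> list[str]:
    """Flag sentences from response_a with no close counterpart in response_b.

    Overlap is a word-level Jaccard score; any sentence scoring below the
    threshold is treated as a claim the second model did not echo, and
    therefore the first thing to check against a primary source.
    """
    def sentences(text: str) -> list[str]:
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def words(sentence: str) -> set[str]:
        return set(re.findall(r"[a-z0-9']+", sentence.lower()))

    b_word_sets = [words(s) for s in sentences(response_b)]
    flagged = []
    for sent in sentences(response_a):
        w = words(sent)
        best = max((len(w & b) / len(w | b) for b in b_word_sets if w | b), default=0.0)
        if best < threshold:
            flagged.append(sent)
    return flagged


# Example with invented responses: the revenue figure appears in only one
# answer, so it is flagged for verification.
a = "The merger closed in 2021. Revenue grew 40 percent afterward."
b = "The merger closed in 2021. Integration took about a year."
print(unmatched_claims(a, b))  # → ['Revenue grew 40 percent afterward.']
```

Word overlap is deliberately naive; the point is the workflow, not the metric. Agreement between models is not proof of accuracy, but disagreement is a reliable signal of where to spend your verification effort.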

Hard Time Check

Cut this at 45 minutes total. Do not let live practice run into the project seed block.

7 min

Project Seed: The Group Project and Why It Starts Now

Modules 7 and 8 are collaborative builds. Each cohort will identify a shared challenge, design an AI-assisted approach to it, and present at the combined Module 9 capstone. Problem selection happens at Module 6. But the thinking starts now.

Two constraints worth naming: the project must be doable in two sessions, and it must be something participants can speak to publicly at the capstone without sharing proprietary data. Abstract the problem. Do not share what you cannot share.

The Question to Sit With

"What problem in your industry, your organization, or your function would you want your cohort to take on together? Something that actually matters, not something safe."

3 min

Homework and Close

Two assignments. Details are on the homework page. Assignment 1 is cross-model validation on a real piece of work. Assignment 2 is a two- to three-sentence problem statement for the group project conversation at Module 6.

One note before close: the Vinnie Garth field trip is Friday, April 10. Cross-cohort. Vinnie has built a ChatGPT-native operating system for his executive work. He will show it to all three cohorts together. Participants now have the model ecosystem vocabulary to understand what he built and why he built it the way he did. That context is the point of sequencing it after Module 5.

Module 6 Tease

Next session we move from understanding the model landscape to building workflows within it. Multi-model patterns in practice. And the group project problem selection begins — come with your statement ready.

Participants Should Leave With