AI Executive Accelerator
Trust and Verification - Week 5

Trust and the Model Landscape

When the output looks finished, that is precisely when you should push hardest. This module pairs the verification imperative with an honest look at the full model landscape beyond Claude.

Duration: 60 min
Format: Demonstration + Live Comparison

Session Overview

Review how the Conversation Architecture patterns from Module 4 have changed your work
Confront the verification problem: why polished AI outputs reduce critical evaluation, and what to do about it
Explore the model landscape honestly: Claude, ChatGPT, Gemini, Perplexity, and open-source options - similarities, differences, and philosophies
Run the same prompt through multiple models and evaluate the differences yourself
Build a personal verification workflow for the kind of work that matters most in your role
Session Materials
Session Guide
Six-block agenda with facilitation notes and discussion anchors.
Session Materials
The Model Landscape
An honest comparison of the major AI models: what each does well, where each falls short, and the philosophical differences that shape how they behave.
Reference
Building With AI: The 4Ds in Practice
A practical framework for working with AI: Define, Draft, Develop, and Deliver. Each stage builds on the last to turn rough intent into polished output.
Teaching Artifact
The AI Fluency Index
Anthropic's research on what separates effective AI users from everyone else. The finding about polished outputs and critical evaluation is the setup for this session.
Pre-Read
Homework
Two assignments due before Module 6. Includes pre-reads.
Session Materials
Module 5 Session Debrief
Cross-cohort synthesis notes including the Thor Matthiasson guest session on multi-model deliberation.
Post-Session Notes
Before This Session

Complete the Module 4 homework: write a Setup Frame before a substantive conversation, and perform a Fresh Start on a degraded conversation using the Summary Checkpoint. Bring both. Then read the two pre-reads: the AI Fluency Index (focus on the findings section and the practical takeaways) and the AI Model Landscape Field Guide (arrive with a working vocabulary for how the platforms differ and when each one earns its place).