Jay Fontanini
Module 3

Module 3 — Going Deeper

AI as a Thinking Partner Without Letting It Think For You

A practical framework for using AI to surface assumptions, test arguments, and reach better decisions — without outsourcing your judgment.

Adapted from Dr. Robert N. Winter — February 2026

The Core Problem

AI is designed to be helpful. That is also its most significant weakness.

Left to its defaults, AI validates. It fills gaps in your reasoning with plausible-sounding content. It produces polished output that looks right even when the thinking underneath is soft. This is not a flaw to complain about — it is the architecture.

The question is not whether AI will agree with you. By default, it will. The question is whether you are using it in a way that makes that tendency work for you rather than against you.

"If the model makes the task feel instantly simple, you should assume you are skipping a cognitive step that normally protects your judgment."

— Dr. Robert N. Winter

Cognitive ease is not a feature. It is a warning signal. Treat it that way.


The Framework

Humans first. AI in the middle. Humans last.

Journalist Zach Seward described effective AI workflows as "humans first and humans last, with a little bit of powerful, generative AI in the middle to make the difference." That framing translates into three operational constraints.

The Winter Method — Three Constraints

01

Frame the question

Decide what the problem is, what counts as evidence, and what a good answer looks like. Never delegate this. Framing the question is the executive act, and it happens before any AI is involved.

Human

02

Create friction

Use AI to surface hidden assumptions, generate counterarguments, force competing framings, and challenge the conclusions you have already reached. If it makes the task feel easy, push harder.

AI

03

Verify and record

Confirm the reasoning, make the decision, and document it — including what you considered and what you rejected. AI improves your audit trail. It does not own your judgment.

Human

Warning

Ease is a red flag. If AI makes your problem feel instantly resolved, you are likely skipping a step that protects your judgment. Slow down and ask harder questions.


The Practical Workflow

Six steps for executive AI-assisted thinking

The following routine is designed for situations where the point is not to generate content, but to reach a defensible decision without degrading judgment. It is deliberately repetitive — that is what makes it work.

1

Start with your own ugly first paragraph

Write 150 to 250 words answering: What is the problem? Why now? What decision is required? Do not use AI for this. Your first paragraph is a diagnostic — it reveals what you understand and what you do not. Then ask the model to identify ambiguous terms, list hidden assumptions, and generate three rival framings of the same problem.

The discipline

This turns AI into a conceptual editor rather than a ghostwriter. You own the framing. AI stress-tests it.

2

Force an argument, then force a counterargument

Ask AI to make the strongest case for Option A using only your stated assumptions, then the strongest counterargument. Repeat for each option you are considering. Reject generic objections like "execution risk." Require falsifiable claims — what would you actually observe if this objection were correct?

Example prompts

  • Now be a skeptic. What are the three strongest objections to this proposal?
  • What would someone who disagrees with this conclusion point to as evidence?
  • Identify the assumptions in my reasoning that are most likely to be wrong.
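For readers who script their prompting, the discipline above can be made mechanical: feed the model your own stated assumptions so it argues against a specific case rather than in general. This is an illustrative sketch only — the function name, option, and assumptions are hypothetical examples, not from the article.

```python
# Sketch: assemble a counterargument prompt from explicitly stated
# assumptions, and demand falsifiable objections rather than generic risk.

def counterargument_prompt(option: str, assumptions: list[str]) -> str:
    """Return a prompt that forces falsifiable objections to one option."""
    numbered = "\n".join(f"{i}. {a}" for i, a in enumerate(assumptions, 1))
    return (
        f"Using ONLY the assumptions below, make the strongest "
        f"counterargument to {option}. Reject generic objections like "
        f"'execution risk'. For each objection, state what we would "
        f"actually observe if it were correct.\n"
        f"Assumptions:\n{numbered}"
    )

prompt = counterargument_prompt(
    "Option A: build in-house",
    ["Headcount stays flat through Q3",
     "Vendor lock-in costs exceed build costs"],
)
```

The point of writing the prompt this way is that the model must attack your assumptions, not a strawman of its own construction.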
3

Run an OODA loop on the decision

Observe, Orient, Decide, Act. The most common executive failure mode is jumping from data to decision while skipping orientation — the sensemaking step. Use AI to help you separate facts from conjectures, label confidence levels, and identify the assumptions driving each option. If AI cannot do this clearly, you have learned something about your own evidence base.
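The orientation step can be sketched as a simple labeling exercise: tag every piece of evidence as fact or conjecture with a confidence level before deciding. The structure and example claims below are hypothetical illustrations, not part of the method as published.

```python
# Sketch: make orientation explicit by labeling each claim before the
# decision step, so conjectures cannot masquerade as facts.

from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    kind: str        # "fact" or "conjecture"
    confidence: str  # "high", "medium", or "low"

evidence = [
    Evidence("Churn rose 4% last quarter", "fact", "high"),
    Evidence("Churn is driven by the price change", "conjecture", "low"),
]

# Decisions should rest on facts; conjectures flag where to gather data.
conjectures = [e.claim for e in evidence if e.kind == "conjecture"]
```

Even done on paper rather than in code, the exercise exposes how much of the case for each option rests on labeled conjecture.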

4

Use AI for red teaming, not reassurance

Before finalizing any significant decision, ask AI to take on the perspectives of those who will challenge it. Then convert the output into a pre-mortem paragraph in your own words — what is the most likely cause of failure, and what early indicators would tell you it is emerging?

  • If a skeptical regulator read this, what would they challenge?
  • If we are wrong about this, what is the most likely reason?
  • What early indicators would show that this decision is failing?
5

Write the decision record, then make the model attack it

A one-page decision record covers: the decision in one sentence, options considered and why they were rejected, key assumptions ranked by importance, evidence used and its quality, risk controls and early warning indicators, and a review date. Then ask AI to find holes — missing assumptions, false dichotomies, unsupported claims. Fix them yourself. The goal is not polished prose. It is hardened reasoning.
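The decision record can also be treated as a structure, so no field on the list can be silently skipped. This is an assumed rendering of the checklist above; the field names mirror the list, and every value in the example is hypothetical.

```python
# Sketch: the one-page decision record as a typed structure. Instantiating
# it forces every section of the record to be filled in.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str                     # the decision in one sentence
    options_rejected: dict[str, str]  # option -> why it was rejected
    key_assumptions: list[str]        # ranked by importance
    evidence: dict[str, str]          # evidence -> quality assessment
    risk_controls: list[str]          # incl. early warning indicators
    review_date: str

record = DecisionRecord(
    decision="Migrate billing to the new platform by Q3.",
    options_rejected={"Stay on legacy": "maintenance cost grows 20%/yr"},
    key_assumptions=["Vendor SLA holds at 99.9%"],
    evidence={"Last 12 months of incident data": "high quality"},
    risk_controls=["Parallel-run invoices for one billing cycle"],
    review_date="2026-09-01",
)
```

Whatever form the record takes, the review date is the field that makes the document a commitment rather than a memo.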

6

Protect the skill: schedule tool-free thinking

Thinking is a perishable skill. Deliberate nonuse is as important as deliberate use. A simple maintenance cadence: one meeting per week where the first 15 minutes are tool-free problem framing; one document per month written without AI for the first draft; one quarterly review where you compare your AI-assisted forecasts to actual outcomes.

The point

Choose what you offload. Keep practicing what you cannot afford to lose.


To Recap

Seven principles worth internalizing

Thinking is a perishable executive skill. If you stop practicing articulation and trade-off discipline, it decays.

Language is infrastructure. Executive judgment is tightly coupled to linguistic clarity and internal dialogue.

AI creates cognitive ease — sometimes at the expense of reasoning. Treat ease as a risk signal, not a feature.

Automation bias is real. Verification and decision records are not optional for high-stakes decisions.

The right workflow is humans first and humans last. Let AI work in the middle — structuring, challenging, red-teaming.

Use AI to increase productive friction. Ask for assumptions, counterarguments, and falsifiable tests before you accept a narrative.

"Write the first paragraph yourself, and let the model earn its keep by disagreeing with your assertions."

— Dr. Robert N. Winter

Read the full article and the companion piece, The Chat Trap: When AI Makes Your Thinking Softer, at robert.winter.ink. Both are free with a basic account.