Fontanini Advisory Services
AI Executive Accelerator
March 24, 2026

Building With AI: The 4Ds in Practice

How the AI Landscape field guide was built, from initial prompt to published artifact, using the Delegation, Description, Discernment, and Diligence framework.

This page walks through the actual conversation between Jay and Claude that produced the AI Model Landscape reference. It is not a theoretical exercise. Every prompt, every scoping question, and every editorial decision shown here happened in real time during a single working session.

The structure follows the 4Ds framework from the Anthropic Academy AI Fluency course: Delegation, Description, Discernment, and Diligence. Each D maps to a specific phase of the conversation, and the teaching moments are called out along the way.


The 4Ds of AI Fluency

The Anthropic Academy AI Fluency course identifies four dimensions that distinguish effective AI collaboration from basic prompting. These are not sequential steps but overlapping practices that run through every productive AI working session.

Delegation
Choosing the right task for AI, and the right task for you

Effective delegation means assigning AI the work it does well (research, synthesis, structured writing) while retaining the work that requires your judgment (editorial decisions, quality standards, deployment, and cross-checking). The division should be explicit, not assumed.

Description
Communicating what you need with enough clarity to get it

Description covers the what, the who, the where, the how, and the why. A well-described task produces a useful first draft without ambiguity, while leaving room for the AI to exercise judgment on structure and emphasis. Under-describing leads to generic output. Over-describing leads to rigid output.

Discernment
Knowing when to ask questions before investing effort

Discernment means pausing before execution to surface the decisions that would materially change the output. It applies both to the AI (asking clarifying questions rather than guessing) and to the human (making deliberate choices rather than accepting defaults). Discernment prevents wasted effort.

Diligence
Verifying, evaluating, and taking responsibility for the output

Diligence is the practice of checking AI output against reality. It means verifying claims, testing for accuracy, documenting limitations, and maintaining editorial responsibility. The AI Fluency Index research shows that polished AI artifacts reduce users' critical evaluation: diligence is the counterweight.

Framework source: Swanson, Bent, Ludwig, Dakan, and Feller. Anthropic Education Report: The AI Fluency Index. February 2026. Course: Anthropic Academy AI Fluency.


Part 1: The initial prompt

Jay opened the session with a single prompt that combined delegation and description. Notice what it covers: the deliverable, the audience, the four research dimensions, the editorial relationship, the deployment pipeline, and the cross-check plan. Notice also what it leaves open: which platforms, what depth, and structural decisions.

Jay

For my AI Executive Accelerator Curriculum, one of the artifacts I need to build for Module 5 is a detailed view of the main models. I want to give participants an overview:

  • The Companies - their background, funding and investment philosophies, mission, vision, and goals and any other notable or unique aspects
  • Their models - notable capabilities, limitations, and essential facts that new users might overlook that could derail their experience
  • Their platforms - the user interface, features, context and memory strengths and limitations, pace of development and change, and nuances, especially focused on unique aspects that new users should explore and gain awareness of
  • Their users - who gravitates to them, what use cases are best suited to them, and what people think of them, both enthusiasts and detractors

The goal of this research is to develop a visual artifact that we will deploy with the curriculum of the AI Executive Accelerator published on jayfontanini.com/accelerator.

I will lean on you for research, organization and in particular, surfacing the particular uniqueness of each model while also representing what is common among models so users know what they can count on across models, so that UX or other preferences can guide their decision to use models. One thing we'll want to do is frame up the importance of developing competency and comfort within a few different ecosystems rather than being constrained unnecessarily to just one.

When you are designing, focus on clarity and accessibility of the information, using highlights and callouts to draw attention to unique features. It would be good to include some interactivity.

I'll serve as editor, giving you feedback on content and structure to maximize the value of the artifact.

When I'm satisfied with the content and presentation, we'll shift to deployment: you'll create the HTML artifact and Claude Code prompt which I will deploy. Then I'll work with Claude Code to execute our vision. I'll iterate with you and ultimately, I'll share it with Perplexity to evaluate our work.

Teaching moment: Delegation

Jay explicitly assigned AI the research and synthesis work, retained the editorial role and deployment decisions, and planned a cross-check with Perplexity as a separate quality gate. The division of labor was stated up front, not left ambiguous. This is what effective delegation looks like: clear about what the AI does, clear about what you keep.

Jay

Note: My goals were to 1) Choose a project, 2) Establish vision and goals, and 3) Break down tasks and conduct delegation analysis. I may be jumping ahead because the next step is description. I think I've done that pretty well.

Finally, the meta moment: we'll capture this conversation itself as a teaching artifact.

Teaching moment: Description

The prompt covered the what (six platforms, four dimensions), the who (insurance executives new to AI), the where (jayfontanini.com/accelerator), the how (interactive HTML artifact via Claude Code), and the why (building informed AI fluency, not just tool familiarity). It was detailed enough to produce a first draft without ambiguity, while leaving room for Claude to exercise judgment on structure and emphasis. Jay even recognized the meta-moment: the process itself was the second artifact.


Part 2: Scoping questions

Rather than immediately diving into research, Claude asked scoping questions to ensure the output would match Jay's vision. Each question addressed a decision that would materially change the final artifact. This is discernment in action: pausing before effort to surface the choices that matter.

Claude

Before beginning the research, Claude presented two sets of scoping questions covering five decisions:

Question set 1: Core scope

Which models and platforms should we cover?
Options: Core 4 only (Claude, ChatGPT, Gemini, Perplexity), Core 4 + Grok and Mistral, Core 4 + Grok/Mistral/Llama, Custom list
Jay's answer: Core 4 and Llama plus one more obscure standout that has standout features or capabilities

Which pricing tier should we orient around?
Options: Free tiers as baseline, Paid/Pro tiers as baseline, Cover both with clear distinctions
Jay's answer: Cover both with clear distinctions

Content altitude for insurance executives?
Options: Confident decision-maker (practical, accessible), Semi-technical (some benchmarks and specs), Both layers with progressive disclosure
Jay's answer: Both layers with progressive disclosure

Question set 2: Final refinements

Which "obscure standout" for the sixth slot?
Options: Cohere (enterprise-focused, RAG strength), Mistral (European, open-weight, multilingual), DeepSeek (China's emerging contender), Surprise me with your best pick
Jay's answer: Surprise me with your best pick

Should we capture pace of change and trajectory, or keep it as a point-in-time snapshot?
Options: Current state only (March 2026 snapshot), Include brief trajectory and momentum indicators, Include trajectory plus upcoming announced features
Jay's answer: Include brief trajectory and momentum indicators

Teaching moment: Discernment

Notice Jay's discernment in his answers. He chose "both layers with progressive disclosure" over either extreme, a decision that shaped the entire visual architecture of the final artifact. He requested a surprise standout rather than defaulting to the obvious choice, which led to DeepSeek and the most consequential strategic insight in the field guide. And he included trajectory indicators rather than limiting to a static snapshot, making the artifact more durable. Each answer was a deliberate choice, not a default acceptance.


Part 3: Confirmed scope

With Jay's answers locked in, the scope was confirmed before any research began. This checkpoint prevented wasted effort by ensuring both parties were aligned on the deliverable.

Platforms

Claude (Anthropic), ChatGPT (OpenAI), Gemini (Google), Perplexity, Llama (Meta), plus one surprise standout selected based on research

Dimensions

Company background, models, platform and UX, user ecosystem. Four dimensions per platform.

Framing

Both free and paid tiers with clear distinctions. Accessible first layer with progressive disclosure into semi-technical detail.

Trajectory

Include momentum indicators. Not a static snapshot: capture how the landscape shifted over the past twelve months.


Part 4: Research and production

Claude executed a comprehensive deep research process covering over 175 sources across the six platforms. Here is how the work was structured.

01. Research phase

All six platforms across four dimensions each, plus cross-platform commonalities, trajectory indicators, and the case for multi-platform fluency. Sources included company press releases, financial databases (Sacra, Crunchbase, Tracxn), technology publications (TechCrunch, The Verge, Ars Technica), industry analysis, pricing comparison sites, and official platform documentation.

02. The sixth platform decision

After evaluating Cohere (strong enterprise RAG but narrow appeal), Mistral (solid European option but less differentiated), and DeepSeek, the research pointed to DeepSeek as the standout choice. The reasoning: DeepSeek teaches insurance executives something no other platform addresses, namely that frontier AI can be built for $5.6 million by a 200-person team, fundamentally challenging assumptions about AI economics.

03. Structure decisions

Each platform received consistent four-dimension coverage. A "table-stakes" section captured what is now universal. A trajectory section covered the major shifts from March 2025 to March 2026. Insurance-specific use cases were woven into each platform section rather than segregated. The closing section made the explicit case for multi-platform fluency.

04. Writing approach

Prose-forward with minimal bullet points. Specific numbers (funding, valuations, user counts, benchmark scores, pricing) cited throughout. "Gotchas" and misconceptions highlighted per platform. Honest treatment of limitations and controversies alongside strengths. No promotional tone for any platform, including Claude.

Teaching moment: No promotional bias

The field guide covers Claude (made by Anthropic, the company that makes the AI used to write it) with the same critical lens as every other platform: limitations documented, criticisms included, gotchas flagged. This is what intellectual honesty looks like in AI-assisted work. If the tool you are using to produce research cannot be honest about its own maker's product, the research is compromised.


Part 5: How the 4Ds showed up in this conversation

Each D from the Anthropic Academy framework maps to a specific phase of the work. Here is the explicit mapping.

1. Delegation
What Jay assigned to AI, and what he kept

Jay chose the right task for AI collaboration: comprehensive multi-source research, synthesis across dimensions, and structured writing. He retained the editorial role, the deployment pipeline design, and the final quality judgment, including a planned cross-check with Perplexity. The division of labor was explicit from the first message.

2. Description
What the initial prompt covered

The what (six platforms, four dimensions), the who (insurance executives new to AI), the where (jayfontanini.com/accelerator), the how (interactive HTML artifact deployed via Claude Code), and the why (building informed AI fluency, not just tool familiarity). Detailed enough for a first draft without ambiguity. Open enough for Claude to exercise judgment on structure.

3. Discernment
The scoping questions and the answers

Claude asked questions that would materially change the output before investing research effort. Jay exercised discernment in his answers: choosing progressive disclosure over a single layer, requesting a surprise standout rather than defaulting, and including trajectory indicators. These choices shaped a more useful artifact than any default would have produced.

4. Diligence
How the output was verified

The research covered 175+ sources. Each claim is grounded in current data (March 2026). Pricing, model versions, user counts, and benchmark scores were verified across multiple sources. Limitations, controversies, and risks were documented alongside capabilities. Jay planned a cross-check with Perplexity as an independent verification step. The result is a document that can withstand scrutiny from participants who will test its claims against their own experience.


Part 6: What comes next

This conversation produced the research and content layer. The remaining pipeline shows how a single AI working session feeds into a multi-step production process.

01. Editorial review

Jay reviews field guide content for emphasis, accuracy, and gaps

02. Interactive artifact

Claude designs HTML version with progressive disclosure and comparison tools

03. Claude Code prompt

Deployment prompt produced for Claude Code to build and deploy

04. Deploy and iterate

Jay works with Claude Code to execute, then iterates based on testing

05. Cross-platform evaluation

Jay shares the finished artifact with Perplexity for independent quality assessment

06. Meta-artifact

This conversation transcript serves as a teaching example of the 4Ds in practice

The process of building with AI is as instructive as the output itself. The scoping, the questions, the decisions, the iteration: that is the skill being developed.

Cross-platform evaluation: what Perplexity found

Jay shared the completed AI Model Landscape artifact with Perplexity for independent review. Perplexity confirmed the structural design, multi-platform thesis, and insurance-specific mapping were strong. It flagged six wording-level improvements, all of which were applied:

  • Benchmark claims timestamped to March 2026 so they signal awareness of shelf life.
  • Superlative language tightened for the Claude and Gemini rows to focus on distinctive strengths rather than dominance claims.
  • Scale figures clarified with their measurement basis (weekly vs. monthly active users, cumulative downloads) to prevent apples-to-oranges comparison.
  • Llama and DeepSeek recommendation card updated with constraints: in-house MLOps requirement, GPU infrastructure, and compliance review for Chinese-trained models.
  • Perplexity enterprise usage statistic reframed from "validates" to "Perplexity reports" to avoid over-claiming causality from a single dataset.
  • Insurance recommendations section given a durability disclaimer: revisit the mapping annually as models and regulations evolve.

No structural changes were required. This is what the Diligence step looks like in practice: an independent second opinion that catches what the original collaboration missed.


Continue exploring

The AI Model Landscape
Anthropic Academy AI Fluency
Back to Module 5

In creating this AI Landscape Overview, I collaborated with Claude while completing the exercises in Anthropic Academy's AI Fluency Course, and the 4 Ds in particular, to assist with research, summarization, and visual creation. I affirm that all AI-generated and co-created content underwent thorough review and evaluation. The final output accurately reflects my understanding, expertise, and intended meaning. While AI assistance was instrumental in the process, I maintain full responsibility for the content, its accuracy, and its presentation. This disclosure is made in the spirit of transparency and to acknowledge the role of AI in the creation process.