Jay Fontanini
MODULE 2  ·  CONCEPTUAL FOUNDATION

The AI Landscape

A map for decision-makers. What AI is, where it is going, who is building it, and how to think about your role in it.

Session use: 15 min walkthrough
Self-study: 20–30 min with links
Level: Conceptual, not technical

How AI is organized

Not all AI is the same. Understanding the distinctions helps you ask better questions about risk, oversight, and where your judgment is required.

Artificial Intelligence

Traditional - Narrow AI
Systems built for one specific task: fraud detection, spam filters, recommendation engines.

What we use today - Foundation Models
Trained broadly on vast data. Flexible, context-aware, capable across many tasks.

Where we are now - Generative AI
Produces outputs for you to evaluate. You stay in the loop.
Claude · GPT-4o · Gemini · Midjourney · DALL-E · Copilot · Cursor
You evaluate every output.

Where it is going - Agentic AI
Takes actions on your behalf. Oversight requirements change significantly.
Research agents · workflow automation · multi-agent systems
You define the guardrails.

Generative AI: what it is and how it works

Generative AI does not retrieve information. It generates it - predicting what words, images, or code should follow based on patterns learned from training data. That distinction matters for how you use it.

It predicts, not recalls

When you ask Claude a question, it is not looking up an answer. It is generating the most plausible continuation of your prompt based on patterns in its training. This is why it can be wrong with complete confidence.
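For readers who want a concrete picture, the sketch below is a deliberately cartoonish illustration of that idea: generation as repeated sampling of a plausible next word. A real model predicts over tens of thousands of tokens using learned weights, not a hand-written table; the words and probabilities here are invented for the example.

```python
import random

# Toy sketch only: the table and its probabilities are invented.
# A real language model learns these distributions from training data.
NEXT_WORD_PROBS = {
    ("capital", "of"): {"France": 0.6, "Spain": 0.3, "Mars": 0.1},
}

def next_word(context, rng=random.Random(0)):
    """Sample a plausible next word given the preceding two words."""
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    return rng.choices(words, weights=[probs[w] for w in words])[0]

# "Plausible" is not the same as "true": nothing stops this process
# from emitting "Mars" with complete fluency.
print(next_word(("capital", "of")))
```

The point of the toy: the output is always a sample from "what tends to follow", never a lookup of "what is correct", which is why a fluent, confident answer still needs your verification.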

Context is everything

The quality of what generative AI produces is directly tied to the quality of what you give it. Vague prompts produce vague outputs. Specific context - your role, your goal, your constraints - produces useful work.

You are always the editor

Generative AI produces drafts, analysis, and options. Your job shifts from creation to evaluation. That is not a reduction in your role - it is a reorientation of where your judgment is applied. In practice, semi-automated workflows like auto-drafted emails or suggested replies already blur the line between generative and agentic - even when marketed as simple AI features.

The shift to agentic AI

Generative AI is a conversation. Agentic AI is a delegation. The difference determines where your oversight is required - and what happens if you skip it.

Where we are

Generative AI

AI produces. You decide what to do with it.

  • You ask, it responds
  • Every output is a draft
  • You choose whether to act
  • Errors are visible before they cause harm
  • Accountability stays with you throughout

Where it is going

Agentic AI

AI plans and takes action. You define the boundaries.

  • You define a goal, it executes steps
  • Actions happen in the real world
  • Errors may compound before review
  • Oversight requires upfront design
  • Accountability structures need explicit thought

Why this matters for executives

In traditional industries, accountability does not transfer to the tool. When an agentic AI takes an action on behalf of your organization - sending a communication, making a decision, executing a workflow - the question of who is responsible has not changed. The executive who deployed it is still accountable. Understanding that distinction before you delegate is not caution for its own sake. It is good management.

The model landscape

Four organizations are producing the models that matter for executive work right now. They have meaningfully different philosophies, privacy postures, and strengths. Knowing the field helps you make informed choices about what to use and when.

ChatGPT
OpenAI  ·  Microsoft-backed
Known for: Broadest consumer awareness. Most established plugin ecosystem. GPT-4o is highly capable.
Privacy note: Trains on conversations by default unless you opt out. Enterprise version has stronger protections.
Best for: General tasks, image generation (DALL-E), broad integrations.

Claude
Anthropic  ·  Safety-focused
Known for: Long context windows, strong reasoning, careful and precise writing. Constitutional AI training approach.
Privacy note: Does not train on your conversations by default. Other major vendors - including OpenAI and Google - also offer enterprise configurations that disable training on user data, so this is not unique to Claude, but it is the default here rather than an opt-in.
Best for: Extended analysis, long documents, nuanced writing, sensitive business contexts.

Gemini
Google DeepMind  ·  Alphabet
Known for: Deep integration with Google Workspace. Strong multimodal capabilities. Massive data advantage.
Privacy note: Data handling tied to Google account settings. Review carefully if using for sensitive work.
Best for: Google Docs, Gmail, Search integration. Research with real-time web access.

Llama
Meta  ·  Open source
Known for: Open-source weights that can run locally or on private infrastructure. Highly customizable.
Privacy note: Privacy depends entirely on how it is deployed. Self-hosted Llama keeps data in your environment, but many Llama-based services are still SaaS products that move data off-premise like any other cloud model.
Best for: Organizations with data residency requirements or technical teams building custom tools.

Where Claude sits - and why it matters

Anthropic was founded in 2021 by former OpenAI researchers who wanted to build AI differently. The company's defining bet: safety and capability are not in tension. You can build a more reliable tool by being more deliberate about how it is trained.

Constitutional AI

Most AI models are trained to avoid harmful outputs by showing them examples of what not to do. Anthropic's approach - Constitutional AI - trains Claude using a written set of principles. Claude learns to evaluate its own outputs against those principles before responding. The result is behavior that is more consistent, more predictable, and less prone to being manipulated into producing things it should not. For executives working on sensitive business problems, that consistency is practically useful, not just philosophically interesting.

Privacy in practice

Claude Pro (what we use)

Does not train on your conversations

By default, Anthropic does not use your conversations to train future models. You can share business context, work through sensitive problems, and think out loud without that data being harvested.

ChatGPT free / default

Trains on conversations unless you opt out

OpenAI's consumer product uses conversations for training by default. You must navigate settings to disable this. The Enterprise version operates differently - check your organization's subscription.

A note on this overview

This is a landscape orientation, not a technical reference. The distinctions drawn here - particularly between generative and agentic AI - are simplified to be useful for decision-making, not precise for engineering purposes. The model landscape is accurate as of early 2026 and will change. The external links point to primary sources where you can find more current and more precise information. When in doubt about any claim here, follow the link. If a resource link is not loading, try opening it in an incognito window - browser extensions can sometimes trigger security blocks on institutional sites.