Going Deeper - Optional Reading

The Two Roads of Artificial Intelligence

How the dream of thinking machines diverged into two competing visions, and why both roads matter now.

Unified Origins
Symbolic / Rule-Based
Neural / Statistical
Hybrid Approaches
The Philosophical Foundations 1936 - 1950
1936
The Turing Machine
Alan Turing
Defines a theoretical universal computing machine. Proves what is computable in principle. Theory precedes hardware.
1943
First Artificial Neuron Model
McCulloch and Pitts
Mathematical model of neurons as logic gates. Published before general-purpose computers exist. Seeds both paths.
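The 1943 model can be sketched in a few lines of modern code: a binary unit fires when its weighted input sum reaches a threshold, and with the right weights one unit computes a logic gate. This is an illustration in today's notation, not the paper's own formalism.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, a threshold.
# Output is 1 if the weighted input sum reaches the threshold, else 0.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must fire to reach the threshold
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

# OR gate: either input alone suffices
def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

# NOT gate: an inhibitory (negative) weight flips the input
def NOT(a):
    return mp_neuron([a], [-1], threshold=0)
```

Because such units compose into arbitrary Boolean circuits, the paper could claim that networks of idealized neurons are as expressive as logic itself, which is why it seeds both the symbolic and the neural path.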
1950
"Can Machines Think?"
Alan Turing
Publishes "Computing Machinery and Intelligence." Proposes the Turing Test. Frames intelligence as describable and therefore computable.
The Birth of AI 1951 - 1956
1951
SNARC: First Neural Network Machine
Marvin Minsky and Dean Edmonds
40 artificial neurons (vacuum tubes) learn to navigate a maze. Hardware neural network before the field has a name.
1955
Logic Theorist
Newell and Simon
First program to mimic human reasoning. Proves mathematical theorems using symbolic logic. "The first AI program."
Summer 1956
Dartmouth Conference
McCarthy, Minsky, Shannon, Rochester
"Artificial Intelligence" coined. Both symbolic and neural approaches present. Attendees predict human-level AI within 20 years. The optimism, and the divergence, begins.
"Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Dartmouth Proposal, 1956. The assumption that would split the field.
The Great Divergence 1956 - 1990s
Symbolic AI
"Good Old-Fashioned AI" (GOFAI)
  • 1956-70s: Logic Theorist, General Problem Solver
  • 1966: ELIZA, rule-based conversation
  • 1970s: Expert Systems boom (MYCIN, DENDRAL)
  • 1980s: Knowledge-based systems, Prolog
  • Philosophy: Intelligence = Logic + Rules + Knowledge
  • Belief: We can describe intelligence precisely
  • Funding: Dominated until late 1980s
  • Collapse: "AI Winter." Could not scale.
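The symbolic paradigm is easy to see in miniature: expert systems like MYCIN encoded knowledge as if-then rules and chained them until nothing new could be derived. Below is a minimal forward-chaining sketch with invented toy rules, not MYCIN's actual knowledge base.

```python
# Toy forward-chaining rule engine in the expert-system style.
# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires, conclusion becomes a fact
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash", "not_vaccinated"}, rules)
```

The appeal and the limit are both visible here: the reasoning is transparent and auditable, but every rule must be written by hand, which is what failed to scale.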
Neural / Connectionist AI
Pattern Recognition and Learning
  • 1958: Perceptron (Frank Rosenblatt)
  • 1969: Minsky and Papert's "Perceptrons" book kills funding
  • 1970s: "Neural winter," marginalized
  • 1986: Backpropagation (Rumelhart, Hinton, Williams) revives the field
  • Philosophy: Intelligence = Learned Patterns
  • Belief: We can approximate intelligence statistically
  • Funding: Minimal until 2000s
  • Revival: Deep learning revolution (2012+)
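The connectionist paradigm in miniature is Rosenblatt's perceptron learning rule: nudge the weights toward every misclassified example. The sketch below trains it on OR, which is linearly separable; the rule's failure on non-separable functions like XOR is exactly the limitation the "Perceptrons" book made precise.

```python
# Rosenblatt's perceptron learning rule (1958), sketched on the OR function.

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)   # -1, 0, or +1
            w[0] += lr * err * x[0]           # nudge weights toward the example
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# OR is linearly separable, so a single perceptron can learn it; XOR is not.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
```

Nothing here is hand-written knowledge: the behaviour is entirely a product of the data, which is the philosophical bet of the whole neural road.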
The Neural Triumph 2012 - Present
2012
AlexNet Wins ImageNet
Krizhevsky, Sutskever, Hinton
Deep neural network crushes the competition. Hand-engineered feature pipelines abandoned in computer vision. The neural renaissance begins.
2013
Word2Vec
Tomas Mikolov, Google
Words become vectors. Concepts become regions in space. King - Man + Woman = Queen. Statistical semantics works.
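The famous analogy is nothing more than vector arithmetic plus a nearest-neighbour search. The tiny hand-made 2-D vectors below are purely illustrative; real word2vec embeddings are learned from large corpora and have hundreds of dimensions.

```python
# king - man + woman, then find the nearest remaining word by cosine similarity.
import math

vecs = {
    "king":   [0.9, 0.8],   # toy axes: roughly "royalty" and "maleness"
    "queen":  [0.9, 0.1],
    "man":    [0.1, 0.9],
    "woman":  [0.1, 0.2],
    "throne": [0.8, 0.5],   # distractor
    "apple":  [0.05, 0.6],  # distractor
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The vector offset: king - man + woman
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]

# Nearest word to the result, excluding the query terms themselves:
best = max((word for word in vecs if word not in {"king", "man", "woman"}),
           key=lambda word: cosine(vecs[word], target))
```

That directions in the space carry meaning (a "gender" offset, a "royalty" offset) is the sense in which concepts become regions, and relations become arithmetic.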
2017
"Attention Is All You Need"
Vaswani et al., Google
Transformer architecture. Self-attention mechanism. Foundation for GPT, BERT, Claude, and everything that follows.
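The core of the paper, scaled dot-product self-attention, fits in a screenful of plain Python: each position's output is a weighted average of every position's value vector, with weights given by query-key similarity. This is a sketch of the mechanism for toy 2-D vectors, omitting the learned projection matrices and multi-head structure of the full architecture.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V, in plain Python.
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query with every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)        # attention weights sum to 1
        # output = weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Self-attention: queries, keys, and values all come from the same sequence.
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(Q, K, V)
```

Because every position attends to every other in one step, the mechanism parallelizes in a way recurrent networks could not, which is much of why it became the foundation for everything that followed.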
2020-2024
Large Language Models
OpenAI, Anthropic, Google, Meta
GPT-3, GPT-4, Claude, Gemini. Statistical pattern matching at unprecedented scale. No rules. No grammar. Just prediction.
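"Just prediction" can be made concrete with a radically miniaturized ancestor: a bigram model that counts which token follows which and predicts the most frequent successor. A modern LLM replaces the count table with a transformer over a vast corpus, but the training objective, predict the next token, is the same in spirit.

```python
# A toy next-token predictor: count successors, predict the most common one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1           # tally what follows each token

def predict_next(token):
    # most frequently observed successor of this token
    return successors[token].most_common(1)[0][0]
```

No grammar rules and no hand-written knowledge appear anywhere: whatever regularities the predictions show were absorbed from the statistics of the text.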
Looking Ahead
The Emerging Convergence?
Early signs of hybrid approaches combining neural pattern recognition with symbolic reasoning and rule verification. The road not taken may yet merge with the road we are on.
2024
AlphaProof
Neural + Symbolic theorem proving (DeepMind)
2024
AlphaGeometry
LLM + Deduction engine (DeepMind)
Research
Neuro-Symbolic AI
MIT, CMU, IBM active programs
Future?
Patterns + Rules
The architecture that should exist
A Note on Narrative

This timeline is simplified for teaching purposes. The real history had more cross-pollination between traditions than a clean "two roads" narrative suggests. The closing thesis—that we may need both patterns and rules—is one plausible direction, not settled consensus. For technical depth, ask Claude or Perplexity to go deeper on any era.