Every blog has a first post. Most of them are throat-clearing — a promise of what's to come, a bit of biography, a vision statement. We're going to skip all of that and start with a question instead. Because that's what this place is for.

SquareCircle exists at the boundary of logic. It's named after an impossible thing — something that cannot exist by definition, but that you could probably sketch a version of if you felt like it. That tension — between what is formally impossible and what you can still think, represent, or will into being — is exactly the kind of territory we plan to explore here.

The first stop in that territory is one of the most important questions in AI right now. Not "will AI take my job" or "is AGI coming." Something deeper. Something that cuts to the foundation of what intelligence even is.

The Ladder of Causation

The computer scientist and philosopher Judea Pearl proposed that all reasoning about the world can be organized into three levels, a hierarchy he calls the Ladder of Causation. Each rung represents a fundamentally different kind of thinking, and each is more powerful than the one below it.

Rung 1: Seeing (Association). "When it rains, the ground is wet." Observing patterns and correlations. ✓ AI does this extremely well.

Rung 2: Doing (Intervention). "If I turn on the sprinkler, will the ground get wet?" Understanding what happens when you actively change something. ~ AI struggles here.

Rung 3: Imagining (Counterfactuals). "If it hadn't rained yesterday, would the ground still be wet?" Reasoning about what didn't happen. ✗ AI largely cannot do this reliably.
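
To make the three rungs concrete, here is a minimal sketch in Python. It assumes a toy structural causal model of our own invention, in which rain and a sprinkler can each wet the ground and the sprinkler only runs on dry days; the variable names, probabilities, and helper functions are illustrative, not anything taken from Pearl's own work.

import random

def sample_world(rng, do_sprinkler=None):
    # One draw from the toy model. do_sprinkler=None lets the world run on its
    # own (Rung 1 data); True/False forces the sprinkler on or off (Rung 2).
    rain = rng.random() < 0.3
    sprinkler = (rng.random() < 0.5 and not rain) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

rng = random.Random(0)
N = 100_000

# Rung 1 (Seeing): in passively observed data, a running sprinkler is evidence
# that it did not rain, because in this toy world it only runs on dry days.
obs = [sample_world(rng) for _ in range(N)]
p_rain_seen_sprinkler = sum(r for r, s, w in obs if s) / max(1, sum(s for r, s, w in obs))

# Rung 2 (Doing): forcing the sprinkler on tells you nothing about rain,
# because the intervention cuts the arrow from weather to sprinkler.
done = [sample_world(rng, do_sprinkler=True) for _ in range(N)]
p_rain_do_sprinkler = sum(r for r, s, w in done) / N

# Rung 3 (Imagining): it rained and the ground is wet. Had it not rained, would
# the ground still be wet? Keep the latent "would the sprinkler have run on a
# dry day" coin fixed, and change only the rain.
def still_wet_without_rain(rng):
    sprinkler_disposition = rng.random() < 0.5   # latent noise shared across both worlds
    return sprinkler_disposition                 # no rain, so wet only if the sprinkler runs

p_counterfactual_wet = sum(still_wet_without_rain(rng) for _ in range(N)) / N

print(f"Seeing:    P(rain | sprinkler on)     ~ {p_rain_seen_sprinkler:.2f}")  # ~0.00
print(f"Doing:     P(rain | do(sprinkler on)) ~ {p_rain_do_sprinkler:.2f}")    # ~0.30
print(f"Imagining: P(wet had it not rained)   ~ {p_counterfactual_wet:.2f}")   # ~0.50

The point of the sketch is the asymmetry: seeing the sprinkler on is evidence against rain, while switching it on yourself is not, and the counterfactual answer needs the model's latent noise, not just its data.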

Current AI systems — including the most advanced large language models — are extraordinarily good at Rung 1. They are pattern-matching engines of unprecedented power. They have ingested more human knowledge than any individual could read in a thousand lifetimes, and they can find correlations, surface connections, and generate fluent responses at a scale that genuinely looks like understanding.

But looking like understanding and actually understanding may be two very different things.

"Correlation tells you the world as it is. Causation lets you reason about the world as it could be."

— A useful distinction worth sitting with

The gap between Rung 1 and Rung 3 is not one that more data or more compute obviously closes. It may require something architecturally different: a system that doesn't just map patterns but builds genuine models of how the world works, reasons about interventions, and imagines alternate histories.

Whether that's achievable, how close we are, and what it would even mean for a machine to truly climb all three rungs — that's what we're here to explore.

But first — a question for you

Before we go further, we want to hear from you. Not because we don't have opinions — we have plenty — but because the whole point of this place is dialogue. The ladder of causation is a framework. Frameworks are only useful when you stress-test them against real thinking.

// Opening Question

If an AI can perfectly predict what you will do next — does it understand you?

Think carefully before you answer. Perfect prediction needs only Rung 1: pattern recognition at scale. No model of your motivations, your history, your fears, or your reasons is required. A surveillance system with enough data might predict your behavior better than your closest friend. Does that make it more intelligent? Does it understand you any better? Or is it something else entirely? And if so, what do we call it?

Leave your answer in the comments below. There's no right answer — but there are more and less interesting ones. We're looking for the kind of thinking that doesn't resolve neatly. The kind that sits with you after you close the tab.

Welcome to SquareCircle. Where logic meets its edge.

David L. Goodis