Agency as a Model

The question "does that system have agency?" is often treated as a factual question about the nature of the system. It's more useful to treat it as a question about which model is most predictive.

Agency is a stance — the intentional stance, as Dennett calls it. When you treat a system as if it has beliefs, desires, and goals, you gain predictive power over its behavior. The question is whether applying that model produces better predictions than the alternatives (physical description, functional description, random process).

When the agency model helps

You apply the agency model productively when:

The system's behavior is sensitive to its goals, not just its current state. A thermostat responds to temperature. A person responds to what they want. The thermostat's behavior is fully described by its physical state; the person's behavior is better predicted by modeling their goals.

The system updates its behavior based on outcomes. Systems with agency learn — they modify their behavior when they get feedback. The agency model predicts this updating directly; a purely physical model can capture it only by tracking the learning mechanism itself, which is rarely tractable.

The state space is too large to enumerate. When a system has billions of possible states, the physical model becomes computationally intractable. The agency model compresses this: you don't need to track every neuron; you need to know what the person wants.
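The contrast can be made concrete with a toy sketch (hypothetical, not from the text): an agent walks toward a goal on a number line while the environment occasionally shoves it off course. A "physical" model that extrapolates the agent's last observed move mispredicts every behavior change; a goal-based model tracks the agent through them.

```python
# Toy sketch comparing two predictive models of a simple goal-directed
# agent. All names and numbers here are illustrative assumptions.

def agent_step(position, goal):
    """The real system: moves one unit toward its goal each step."""
    if position < goal:
        return 1
    if position > goal:
        return -1
    return 0

def physical_model(last_move):
    """State-based prediction: assume the agent repeats its last move."""
    return last_move

def agency_model(position, goal):
    """Intentional-stance prediction: assume the agent pursues its goal.
    It is perfect here only because the toy agent is perfectly
    goal-directed; the point is what the model needs to track."""
    if position < goal:
        return 1
    if position > goal:
        return -1
    return 0

def score(start, goal, pushes):
    """Count correct next-move predictions for each model while the
    environment perturbs the agent by pushes[i] before step i."""
    pos, last = start, 0
    hits = {"physical": 0, "agency": 0}
    for push in pushes:
        pos += push  # the environment shoves the agent
        move = agent_step(pos, goal)
        if physical_model(last) == move:
            hits["physical"] += 1
        if agency_model(pos, goal) == move:
            hits["agency"] += 1
        pos, last = pos + move, move
    return hits

# Two big perturbations force the agent to reverse course; the momentum
# model mispredicts each reversal, the goal model does not.
print(score(start=0, goal=5, pushes=[0, 0, 0, 0, 0, 8, 0, 0, -8, 0, 0]))
# → {'physical': 8, 'agency': 11}
```

The compression point shows up in what each model consumes: the agency model needs only two numbers, position and goal, no matter how complicated the mechanism producing the moves is.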

The category error to avoid

The mistake is not in applying the agency model — it's in confusing the model with the territory. Saying "the system has genuine agency" as if agency is a real property independent of the model is a category error. It leads to confused debates about whether AIs "really" have goals, whether corporations "really" have interests, whether evolution "really" intends things.

These debates are not about the nature of these systems. They're about which model is most useful for which purposes. The answer varies by context. For predicting an AI system's behavior in distribution, a functional model is usually enough. For predicting its behavior out of distribution, the agency model may be more useful — because it captures what the system was optimized to pursue, not just what it does in familiar settings.

Agency is a tool. The question is always: useful for what?