As part of a larger, more political point in his column, George Will explains how systems can be structured to “nudge” people toward a particular pattern of behavior without mandating anything: George F. Will: Nudge Against the Fudge
Such is the power of inertia in human behavior, and the tendency of individuals to emulate others’ behavior, that there can be huge social consequences from the clever framing of the choices that nudgeable people—almost all of us—make. Choice architects understand that every choice is made in a context, and that contexts are not “neutral”—they inevitably encourage certain outcomes. Organizing the context can promote outcomes beneficial to choosers and, cumulatively, to society.
He’s describing the thesis of the book “Nudge: Improving Decisions About Health, Wealth, and Happiness,” by Richard Thaler and Cass Sunstein, two people who just happen to also be advising Obama.
Will’s examples include automatic-yet-optional enrollment in an employer’s 401(k), and organ-donor checkboxes on driver’s-license forms that default to “yes.”
But beyond the implications for government (which I think are fascinating, but don’t have time to get into right now), this is an excellent way of articulating something I’ve been trying to explain for quite a while about digital environments. Even there, you can ‘nudge’ people’s decisions, both explicit and tacit, through the way you shape the focus of an interface, the default choices, and the recommended paths, while still giving them plenty of freedom.
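To make the idea concrete, here is a minimal sketch of a default-driven nudge in code. The `enroll` function and its fields are hypothetical, invented purely for illustration; the point is that the “nudge” lives entirely in the default argument, while the chooser retains full freedom to override it.

```python
# Hypothetical example: 401(k) enrollment where the default is opt-in.
# The nudge is the default value, not a mandate; the caller can still
# opt out explicitly.

def enroll(employee: str, contribute_to_401k: bool = True) -> dict:
    """Register an employee; 401(k) participation defaults to 'yes'."""
    return {"employee": employee, "401k": contribute_to_401k}

# Inertia means most people accept the default...
print(enroll("alice"))  # {'employee': 'alice', '401k': True}

# ...but the choice is never taken away.
print(enroll("bob", contribute_to_401k=False))  # {'employee': 'bob', '401k': False}
```

The same pattern appears in interface design: a pre-checked box, a highlighted recommended plan, or a suggested path all steer the median user without constraining anyone.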
To the more libertarian or paranoid folks, this might sound horribly Big Brother. But that objection only holds if you have a real choice between a system and no system at all. The assumption is that, as with government, anarchy isn’t an option: you have to build *something*. And once you acknowledge that you have to build it, you have to make these decisions anyway. Why not make them with a coherent, whole understanding of the healthiest, most beneficial outcomes?
The question then becomes, what is “beneficial” and to whom? That’ll be driven by a given organization’s goals and values. But the technique is neutral — and should be considered in the design of any system.