Language is Infrastructure at IA Summit 2014

I presented this talk at the IA Summit in San Diego back in the spring. I’m adding it to inkblurt so it’ll have a home here, but I already wrote about it over at TUG a few months ago.

It’s all about how language makes stuff in the world that we need to treat like serious parts of our environment — material for design — and how there’s no such thing as “just semantics.”


The World is the Screen

Throughout 2013 and part of 2014, I gave various versions of a talk entitled “The World is the Screen”. (The subtitle varied.)

The general contention of the talk: as planners and makers of digital things and places that are increasingly woven into the fabric of the world around us, we have to expand our focus to understanding the whole environment that people inhabit, not just specific devices and interfaces.

As part of that mission, we need to bring a more rigorous perspective to understanding our materials. Potters and masons and painters, as they mature in their work, come to understand their materials better and more deeply than they would expect the users of their creations to understand them. I argue that our primary material is information … but we don’t have a good, shared concept of what we mean when we say “information.”

Rather than trying to define information in just one way, I picked three major ways in which information affects our world, and the characteristics behind each of those modes. Ultimately, I’m trying to create some foundations for maturing how we understand our work, and how it is more about environments than objects (though objects are certainly critical in the context of the whole).

Anyway … the last version of the talk I gave was at ConveyUX in Seattle. It’s a shorter version, but I think it’s the clearest and most concise one. So I’m embedding it below. [Other, prior (and longer) versions are also on Speakerdeck – one from IA Summit 2013, and one from Blend Conference 2013. I also posted about it at The Understanding Group.]

Context Design Talk for World IA Day Ann Arbor

The 2013 World IA Day was a huge success. In only its 2nd year of existence, it drew big crowds in 20+ locations (15 official). Congratulations to everyone involved in organizing the day, and to the intrepid board members of the IA Institute who decided to risk transforming the more US-based IDEA conference into this terrific, global, community-driven event.

I was fortunate to be asked to speak at the event in Ann Arbor, MI, where I talked about how information shapes context — the topic I’ve been writing a book about for a while now. I’ll probably continue having new permutations of this talk for quite some time, but here’s a snapshot at least, describing some central ideas I’m fleshing out in the book. I’m calling this “beta 2” — since it has somewhat different and/or updated content vs the one I did for CHI Atlanta back in the fall of 2012.

Video and Slides-with-notes embedded below. Enjoy!



The Composition of Context: a workshop proposal

Andrea Resmini and co-organizers of the upcoming workshop on Architectures of Meaning (part of the Pervasive Computing conference at Newcastle University in the UK) asked me to participate this year. I’m not able to be there in person, unfortunately, but plan to join remotely. What follows is the “paper” I’m presenting. It’s not a fully fledged academic piece of writing — more like a practitioner-theorist missive.

I’m sharing it here because others may be curious, and it’s also the best summary I’ve done to date of the ideas in the book I’m writing on IA and designing context.

This is a straight dump from MS Word (with a few tweaks). Caveat emptor.


Information Architecture and the Composition of Context

Andrew Hinton

Final Draft for Architectures of Meaning Workshop

June 18, 2012



We lack fully articulated models for context, yet information architecture is especially significant in how context is created, changed or communicated in digitally mediated information environments. This thesis proposes some principles, models and foundational theories as the beginnings of a framework for context, and offers composition as a rubric for tying these ideas together in IA practice.

The thesis follows this line of reasoning:

Context is constructed.

There’s a deep and wide intellectual history around the topic of context. Suffice it to say that there are many layers and threads in the ongoing conversation among experts on the subject. Even though all those threads don’t agree on every point, they add up to some generally accepted ideas, such as:

  • Context is both internal and external. Our minds and bodies determine and influence how we perceive reality, and that internal experience is affected by external objects and interactions. Both affect one another to the point where the distinction between “inner” and “outer” is almost entirely academic.
  • Context has both stable and fluid characteristics. Certainly there are some elements of our lives that are stable enough to be considered “persistent.” But our interactions with (and understanding of) those elements still can make them mean something very different to us from moment to moment. Context exists along an undulating spectrum between those poles.
  • Context is social. Our experience of context emerges from a cognitive history as social beings, with mental models, languages, customs — really pretty much everything — originating from our interactions with others of our kind.

Context is not so simple as “object A is in surrounding circumstance X” — the roles are interchangeable and interdependent. This is why context is so hard to get our hands around as a topic.

(In particular, I’m leaning on the work of Paul Dourish, Bonnie Nardi, Jean Lave, Marcia Bates and Lucy Suchman.)

Context is about understanding.

This phenomenological and post-modern frame for context necessarily complicates the topic, but glossing over these complexities would keep us from a real comprehension of how context works.

Still, it can be helpful to have a simple model to use as a compass in this Escher-like landscape. Hence, the following:

Context is conventionally defined as the interplay between several elements:

  • Situation: the circumstances that comprise the setting (place, time, surroundings, actions, etc.). The concept of “place” figures very heavily here.
  • Subject (Event/Person/Statement/Idea): the thing that is in the situation, and that is the subject of the attempted understanding.
  • Understanding: an apprehension of the true nature of the subject, through awareness and/or comprehension of the surrounding situation.
  • Agent: the individual who is trying to understand the subject and situation (this element is implied in most definitions, rather than called out explicitly).

Context, then, is principally about understanding. There is no need for discussion of context unless someone (agent) is trying to understand a subject in a given situation. That is, context does not exist out in the world as a thing in itself. It emerges from the act of seeking to understand.

This also forms a useful, simple model for talking about context and parsing the elements in a given scenario. However, it gets more complicated due to the ideas, mentioned above, about how context is constructed. Just a few of the wrinkles that come to light:

  • There can be multiple subjects, even if we understand them by focusing on (or foregrounding) one at a time.
  • The subject is also always part of the situation, and any of the circumstances could easily be one or more subjects.
  • In fact, in order to understand the situation, it has to be focused on as a subject in its own right.
  • All of these elements affect one another.
  • Importantly, the subject may be the agent. And there can be multiple agents, where another observer-agent may be able to understand the situation better than the subject-agent, because the subject-agent “can’t see the forest for the trees.” In design for a “user” this is an especially important point, because the user is both agent and subject — a person trying to understand and even control his or her own context.

As you can see, what looks like a simple grammar of what makes context can actually expose a lot of complexity. But this simple model of elements helps us at least start to have a framework for picking apart scenarios to figure out who is perceiving what, which elements are affecting others, and where understanding is and isn’t happening.
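To make this grammar concrete, here’s a minimal sketch in Python (my own illustration; the names `Element` and `Scenario` are hypothetical, not from the paper) of how the elements might be parsed out of a scenario, including the wrinkle that the subject may also be the agent:

```python
from dataclasses import dataclass


@dataclass
class Element:
    """Any person, place, object or circumstance in a scenario."""
    name: str


@dataclass
class Scenario:
    situation: list   # circumstances that comprise the setting
    subjects: list    # there can be multiple subjects
    agents: list      # the individual(s) trying to understand

    def foregrounded(self, element):
        """Any circumstance can become a subject by being focused on."""
        return element in self.subjects

    def subject_is_agent(self):
        """The subject may be the agent, e.g. a 'user' trying to
        understand and control his or her own context."""
        return any(s in self.agents for s in self.subjects)


# Parsing a scenario: a shopper (both agent and subject) in a store.
shopper = Element("shopper")
store = Element("store")
scenario = Scenario(
    situation=[store, shopper],  # the subject is also part of the situation
    subjects=[shopper],
    agents=[shopper],
)
print(scenario.subject_is_agent())  # True
```

Notice that the “wrinkles” show up as overlapping lists: the same element can appear in the situation, the subjects and the agents at once, which is exactly what makes context hard to pin down.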

In order to unravel this massive tapestry, we have to grab a thread; a good one to grab is what we mean by “understanding.”

And that means we have to understand cognition, which is the engine we use for understanding much of anything.


Embodied Responsiveness

I’ve been thinking a lot lately about responsiveness in design, and how we can build systems that work well in so many different contexts, on so many different devices, for so many different scenarios. So many of our map-like ways of predicting and designing for complexity are starting to stretch at the seams. I have to think we are soon reaching a point where our maps simply will not scale.

Then there are the secret-sauce, “smart” solutions that promise they can take care of the problem. It seems to happen on at least every other project: one or more stakeholders are convinced that the way to make their site/app/system truly responsive to user needs is to employ some kind of high-tech, cutting-edge technology.

This can range from Clippy-like “helpers” that magically know what the user needs, to “conversation engines” that try to model a literal conversational interaction with users (like Jellyvision), to established technologies like the “collaborative filtering” technique pioneered by the likes of Amazon.

Most of the time, these sorts of solutions hold out more promise than they can fulfill. They aren’t bad ideas — even Clippy had merit as a concept. But to my mind, more often than not, these fancy approaches to the problem are a bit like building a 747 to take people across a river — when all that’s needed is a good old-fashioned bridge. That is, most of the time the software in question isn’t doing the basics. Build a bridge first, then let’s talk about the airliner.

Of course, there are genuine design challenges that do seem to still need that super-duper genius-system approach. But I still think there are more “primitive” methods that can do most of the work by combining simple mechanisms and structures that can actually handle a great deal of complexity.

We have a cognitive bias that makes us think that anything that seems to respond to a situation in a “smart” way must be “thinking” its way through the solution. But it turns out, that’s not how nature solves complex problems — it’s not even really how our bodies and brains work.

I think the best kind of responsiveness would follow the model we see in nature — a sort of “embodied” responsiveness.

I’ve been learning a lot about this through research for the book on designing context I’m working on now. There’s a lot to say about this … a lot … but I need to spend my time writing the book rather than a blog post, so I’ll try to explain by pointing to a couple of examples that may help illustrate what I mean.

Consider two robots.

One is Honda’s famous Asimo. It’s a humanoid robot that is intricately programmed to handle situations … for which it is programmed. It senses the world, models the world in its brain, and then tells the body what to do. This is, by the way, pretty much how we’ve long assumed people get around in the world: the brain builds a representation of the world around us and tells our body to do X or Y. What this means in practice, however, is that Asimo has a hard time getting around in the wild. Modeling the world and telling the limbs what to do based on that theoretical model is a lot of brain work, so Asimo can handle only a limited number of situations. In fact, it falls down a lot (as in this video) if the terrain isn’t predictable and regular, or if some tiny error throws it off. Even when Asimo’s software is capable of handling an irregularity, it often can’t process the anomaly fast enough to make the body react in time. And this is in spite of the fact that Asimo has one of the most advanced “brains” ever put into a robot.

Another robot, nicknamed Big Dog, comes from a company called Boston Dynamics. This robot is not pre-programmed to calculate its every move. Instead, its body is engineered to respond in smart, contextually relevant ways to the terrain. Big Dog’s brain is actually very small and primitive, but the architecture of its body is such that its very structure handles irregularity with ease, as seen in this video where, about 30 seconds in, someone tries to kick it over and it rights itself.

The reason Big Dog can handle unpredictable situations is that its intelligence is embodied. It isn’t performing computations in a brain — the body is structured in such a way that it “figures out” the situation through the very nature of its joints, angles and articulation. The brain is just along for the ride, providing a simple network for the body to talk to itself. As it turns out, this is much more like how humans actually get around — our bodies handle a lot more of our “smartness” than we realize.
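As a toy illustration of the difference (my own, not from the robotics work itself), consider a damped spring: it has no controller and no model of the world, yet it absorbs a kick and settles back to equilibrium purely through its structure, much the way Big Dog’s compliant legs do:

```python
# A damped torsional spring "rights itself" after a disturbance with
# no brain at all: the restoring and damping forces are built into
# the structure, so the response to a kick is automatic.

def settle(angle, velocity, stiffness=20.0, damping=4.0,
           dt=0.01, steps=500):
    """Simulate a damped spring for `steps` timesteps after a kick
    that leaves it at `angle` (radians) with some `velocity`."""
    for _ in range(steps):
        accel = -stiffness * angle - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
    return angle


# A hard kick knocks it 0.5 radians off balance...
final = settle(angle=0.5, velocity=0.0)
print(abs(final) < 0.01)  # ...and it rights itself: True
```

The point isn’t the physics; it’s that nothing in the loop “decides” anything. The response is a property of the structure, which is the kind of responsiveness the rest of this post is reaching for.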

I won’t go into much more description here. (And if you want to know more, check this excellent blog post on the topic of the robots, which links/leads to more great writing on embodied/extended cognition & related topics.)

The point I’m getting at is that there’s something to be learned here in terms of how we design information environments. Rather than trying to pre-program and map out every possible scenario, we need systems that respond intelligently by the very nature of their architectures.

A long time ago, I did a presentation where I blurted out that eventually we will have to rely on compasses more than maps. I’m now starting to get a better idea of what I meant: simple rules and simple structures that combine into a “nonlinear dynamical system.” The system should perceive the user’s actions and behaviors and, rather than trying to model what the user needs in some theoretical, brain-like way, the system’s body (for lack of a better way to put it) should be engineered so that its mechanisms bend, bounce and react in such a way that the user feels as if the system is being pretty smart anyway.
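A rough sketch of what that could look like in software (purely my own illustration; the rules and state keys are made up, not from any real project): instead of one central model that tries to enumerate every scenario, a handful of simple, independent rules each react to whatever state they observe, and the apparently “smart” behavior emerges from their combination:

```python
# "Compass, not map": no central model of what the user wants, just
# small rules that each bend and react to the observed state.

def rule_scroll(state):
    # Deep scrolling with no clicks: surface an in-page outline.
    if state.get("scroll_depth", 0) > 0.8 and state.get("clicks", 0) == 0:
        return "show_outline"

def rule_repeat_search(state):
    # Same query twice in a row: the results probably missed.
    if state.get("repeat_search"):
        return "broaden_results"

def rule_idle(state):
    # Long idle on an open form: offer to save a draft.
    if state.get("idle_seconds", 0) > 120 and state.get("form_open"):
        return "offer_save_draft"

RULES = [rule_scroll, rule_repeat_search, rule_idle]

def respond(state):
    """Return whatever actions the matching rules produce; the
    combination, not any single rule, is what feels 'smart'."""
    return [action for rule in RULES if (action := rule(state))]

print(respond({"scroll_depth": 0.9, "clicks": 0,
               "idle_seconds": 200, "form_open": True}))
# -> ['show_outline', 'offer_save_draft']
```

Each rule is trivial on its own, which is exactly the point: the system never builds a theoretical model of the user, yet the combined reactions can feel responsive across situations nobody enumerated in advance.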

At some point I’d like to have some good examples for this, but the ones I’m working on most diligently at the moment are NDA-bound. When I have time I’ll see if I can “anonymize” some work well enough to share. In the meantime, keep an eye on those robots.