Language is Infrastructure at IA Summit 2014

I presented this talk at the IA Summit in San Diego this year, back in the spring. I’m adding it to inkblurt so it’ll have a home here, but I already wrote about it over at TUG a few months ago.

It’s all about how language makes stuff in the world that we need to treat like serious parts of our environment — material for design — and how there’s no such thing as “just semantics.”


The World is the Screen

Throughout 2013 and part of 2014, I gave various versions of a talk entitled “The World is the Screen”. (The subtitle varied.)

The general contention of the talk: as planners and makers of digital things and places that are increasingly woven into the fabric of the world around us, we have to expand our focus to understanding the whole environment that people inhabit, not just specific devices and interfaces.

As part of that mission, we need to bring a more rigorous perspective to understanding our materials. Potters and masons and painters, as they mature in their work, come to understand their materials better and more deeply than they would expect the users of their creations to understand them. I argue that our primary material is information … but we don’t have a good, shared concept of what we mean when we say “information.”

Rather than trying to define information in just one way, I picked three major ways in which information affects our world, and the characteristics behind each of those modes. Ultimately, I’m trying to create some foundations for maturing how we understand our work, and how it is more about environments than objects (though objects are certainly critical in the context of the whole).

Anyway … the last version of the talk I gave was at ConveyUX in Seattle. It’s a shorter version, but I think it’s the clearest and most concise one. So I’m embedding it below. [Other, prior (and longer) versions are also on Speakerdeck – one from IA Summit 2013, and one from Blend Conference 2013. I also posted about it at The Understanding Group.]

Context Design Talk for World IA Day Ann Arbor

The 2013 World IA Day was a huge success. In only its 2nd year of existence, it drew big crowds in 20+ locations (15 official). Congratulations to everyone involved in organizing the day, and to the intrepid board members of the IA Institute who decided to risk transforming the more US-based IDEA conference into this terrific, global, community-driven event.

I was fortunate to be asked to speak at the event in Ann Arbor, MI, where I talked about how information shapes context — the topic I’ve been writing a book about for a while now. I’ll probably continue having new permutations of this talk for quite some time, but here’s a snapshot at least, describing some central ideas I’m fleshing out in the book. I’m calling this “beta 2,” since it has somewhat different and updated content compared with the version I did for CHI Atlanta back in the fall of 2012.

Video and Slides-with-notes embedded below. Enjoy!

The Composition of Context: a workshop proposal

Andrea Resmini and co-organizers of the upcoming workshop on Architectures of Meaning (part of the Pervasive Computing conference at Newcastle University in the UK) asked me to participate this year. I’m not able to be there in person, unfortunately, but plan to join remotely. What follows is the “paper” I’m presenting. It’s not a fully fledged academic piece of writing — more like a practitioner-theorist missive.

I’m sharing it here because others may be curious, and it’s also the best summary I’ve done to date of the ideas in the book I’m writing on IA and designing context.

This is a straight dump from MS Word (with a few tweaks). Caveat emptor.

 

Information Architecture and the Composition of Context

Andrew Hinton

Final Draft for Architectures of Meaning Workshop

June 18, 2012

 

Introduction

We lack fully articulated models for context, yet information architecture is especially significant in how context is created, changed, or communicated in digital information environments. This thesis proposes some principles, models, and foundational theories as the beginnings of a framework for context, and offers composition as a rubric for tying these ideas together in IA practice.

The thesis follows this line of reasoning:

Context is constructed.

There’s a deep and wide intellectual history around the topic of context. Suffice it to say that there are many layers and threads in the ongoing conversation among experts on the subject. Even though all those threads don’t agree on every point, they add up to some generally accepted ideas, such as:

  • Context is both internal and external. Our minds and bodies determine and influence how we perceive reality, and that internal experience is affected by external objects and interactions. Both affect one another to the point where the distinction between “inner” and “outer” is almost entirely academic.
  • Context has both stable and fluid characteristics. Certainly there are some elements of our lives that are stable enough to be considered “persistent.” But our interactions with (and understanding of) those elements still can make them mean something very different to us from moment to moment. Context exists along an undulating spectrum between those poles.
  • Context is social. Our experience of context emerges from a cognitive history as social beings, with mental models, languages, customs — really pretty much everything — originating from our interactions with others of our kind.

Context is not so simple as “object A is in surrounding circumstance X” — the roles are interchangeable and interdependent. This is why context is so hard to get our hands around as a topic.

(In particular, I’m leaning on the work of Paul Dourish, Bonnie Nardi, Jean Lave, Marcia Bates and Lucy Suchman.)

Context is about understanding.

This phenomenological & post-modern frame necessarily complicates the topic, but failing to acknowledge these complexities would keep us from a real comprehension of how context works.

Still, it can be helpful to have a simple model to use as a compass in this Escher-like landscape.  Hence, the following:

Context is conventionally defined as the interplay between several elements:

  • Situation: the circumstances that comprise the setting (place, time, surroundings, actions, etc.). The concept of “place” figures very heavily here.
  • Subject (Event/Person/Statement/Idea): the thing that is in the situation, and that is the subject of the attempted understanding.
  • Understanding: an apprehension of the true nature of the subject, through awareness and/or comprehension of the surrounding situation.
  • Agent: the individual who is trying to understand the subject and situation (this element is implied in most definitions, rather than called out explicitly).

Context, then, is principally about understanding. There is no need for discussion of context unless someone (agent) is trying to understand a subject in a given situation. That is, context does not exist out in the world as a thing in itself. It emerges from the act of seeking to understand.

This also forms a useful, simple model for talking about context and parsing the elements in a given scenario. However, it gets more complicated due to the ideas, mentioned above, about how context is constructed. Just a few of the wrinkles that come to light:

  • There can be multiple subjects, even if we understand them by focusing on (or foregrounding) one at a time.
  • The subject is also always part of the situation, and any of the circumstances could easily be one or more subjects.
  • In fact, to understand the situation, we have to focus on it as a subject in its own right.
  • All of these elements affect one another.
  • Importantly, the subject may be the agent. And there can be multiple agents, where another observer-agent may be able to understand the situation better than the subject-agent, because the subject-agent “can’t see the forest for the trees.” In design for a “user” this is an especially important point, because the user is both agent and subject — a person trying to understand and even control his or her own context.
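For what it’s worth, the four elements above can be sketched as a rough data structure. The field names mirror the list; the airport scenario and everything else here is just my own illustration, not part of the original model:

```python
from dataclasses import dataclass

@dataclass
class Context:
    situation: dict          # circumstances: place, time, surroundings, actions
    subjects: list           # one or more foregrounded things to be understood
    agent: str               # who is trying to understand
    understanding: str = ""  # what the agent apprehends (often empty at first)

scenario = Context(
    situation={"place": "airport", "time": "boarding"},
    subjects=["gate-change announcement"],
    agent="traveler",
)

# The last wrinkle above: the agent can also be a subject. A traveler
# trying to grasp her own situation appears on both sides of the model.
scenario.subjects.append(scenario.agent)
print(scenario.subjects)  # -> ['gate-change announcement', 'traveler']
```

Even this crude sketch makes the interdependence visible: the same entity can show up as agent, subject, or part of the situation, depending on where the focus lands.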

As you can see, what looks like a simple grammar of what makes context can actually expose a lot of complexity. But this simple model of elements helps us at least start to have a framework for picking apart scenarios to figure out who is perceiving what, which elements are affecting others, and where understanding is and isn’t happening.

In order to unravel this massive tapestry, we have to grab a thread; a good one to grab is what we mean by “understanding.”

And that means we have to understand cognition, which is the engine we use for understanding much of anything.


Embodied Responsiveness

I’ve been thinking a lot lately about responsiveness in design, and how we can build systems that work well in so many different contexts, on so many different devices, for so many different scenarios. So many of our map-like ways of predicting and designing for complexity are starting to strain at the seams. I have to think we will soon reach a point where our maps simply will not scale.

Then there are the secret-sauce, “smart” solutions that promise they can take care of the problem. It seems to happen on at least every other project: one or more stakeholders are convinced that the way to make their site/app/system truly responsive to user needs is to employ some kind of high-tech, cutting-edge technology.

These can range from Clippy-like “helpers” that magically know what the user needs, to “conversation engines” that try to model a literal conversational interaction with users (like Jellyvision), to established techniques like the “collaborative filtering” pioneered by companies like Amazon.
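As a toy illustration of the last of those, here’s a minimal item-based collaborative filter. The ratings data and names are entirely made up, and real systems (Amazon’s included) are vastly more elaborate:

```python
from math import sqrt

# Hypothetical ratings: user -> {item: rating}
ratings = {
    "ann": {"book": 5, "lamp": 3, "mug": 4},
    "bob": {"book": 4, "lamp": 2, "mug": 5},
    "cai": {"book": 1, "lamp": 5},
}

def similarity(item_a, item_b):
    """Cosine similarity over users who rated both items."""
    common = [u for u in ratings if item_a in ratings[u] and item_b in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][item_a] * ratings[u][item_b] for u in common)
    norm_a = sqrt(sum(ratings[u][item_a] ** 2 for u in common))
    norm_b = sqrt(sum(ratings[u][item_b] ** 2 for u in common))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Rank unrated items by similarity to the items the user has rated."""
    seen = ratings[user]
    unseen = {i for r in ratings.values() for i in r} - set(seen)
    return sorted(unseen,
                  key=lambda i: sum(similarity(i, s) * seen[s] for s in seen),
                  reverse=True)

print(recommend("cai"))  # -> ['mug']
```

The point is just that a handful of dot products can produce behavior that feels “smart” without any deep modeling of the user, which is part of why these techniques get oversold.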

Most of the time, these sorts of solutions hold out more promise than they can fulfill. They aren’t bad ideas — even Clippy had merit as a concept. But to my mind, more often than not, these fancy approaches to the problem are a bit like building a 747 to take people across a river — when all that’s needed is a good old-fashioned bridge. That is, most of the time the software in question isn’t doing the basics. Build a bridge first, then let’s talk about the airliner.

Of course, there are genuine design challenges that still seem to need that super-duper genius-system approach. But I still think more “primitive” methods can do most of the work, combining simple mechanisms and structures that can actually handle a great deal of complexity.

We have a cognitive bias that makes us think that anything that seems to respond to a situation in a “smart” way must be “thinking” its way through the solution. But it turns out, that’s not how nature solves complex problems — it’s not even really how our bodies and brains work.

I think the best kind of responsiveness would follow the model we see in nature — a sort of “embodied” responsiveness.

I’ve been learning a lot about this through research for the book on designing context I’m working on now. There’s a lot to say about this … a lot … but I need to spend my time writing the book rather than a blog post, so I’ll try to explain by pointing to a couple of examples that may help illustrate what I mean.

Consider two robots.

One is Honda’s famous Asimo. It’s a humanoid robot that is intricately programmed to handle situations … for which it is programmed. It senses the world, models the world in its brain and then tells the body what to do. This is, by the way, pretty much how we’ve assumed people get around in the world: the brain models a representation of the world around us and tells our body to do X or Y. What this means in practice, however, is that Asimo has a hard time getting around in the wild. Modeling the world and telling the limbs what to do based on that theoretical model is a lot of brain work, so Asimo has some major limitations in the number of situations it can handle.  In fact, it falls down a lot (as in this video) if the terrain isn’t predictable and regular, or if there’s some tiny error that throws it off. Even when Asimo’s software is capable of handling an irregularity, it often can’t process the anomaly fast enough to make the body react in time. This, in spite of the fact that Asimo has one of the most advanced “brains” ever put into a robot.

Another robot, nicknamed Big Dog, is made by a company called Boston Dynamics. This robot is not pre-programmed to calculate its every move. Instead, its body is engineered to respond in smart, contextually relevant ways to the terrain. Big Dog’s brain is actually very small and primitive, but the architecture of its body is such that its very structure handles irregularity with ease, as seen in this video where, about 30 seconds in, someone tries to kick it over and it rights itself.

The reason why Big Dog can handle unpredictable situations is that its intelligence is embodied. It isn’t performing computations in a brain — the body is structured in such a way that it “figures out” the situation by the very nature of its joints, angles and articulation. The brain is just along for the ride, and providing a simple network for the body to talk to itself. As it turns out, this is actually much more like how humans get around — our bodies handle a lot more of our ‘smartness’ than we realize.

I won’t go into much more description here. (And if you want to know more, check this excellent blog post on the topic of the robots, which links/leads to more great writing on embodied/extended cognition & related topics.)

The point I’m getting at is that there’s something to be learned here in terms of how we design information environments. Rather than trying to pre-program and map out every possible scenario, we need systems that respond intelligently by the very nature of their architectures.

A long time ago, I did a presentation where I blurted out that eventually we will have to rely on compasses more than maps. I’m now starting to get a better idea of what I meant: simple rules and simple structures that combine to form a “nonlinear dynamical system.” The system should perceive the user’s actions and behaviors and, rather than trying to model what the user needs in some theoretical, brain-like way, the system’s body (for lack of a better way to put it) should be engineered so that its mechanisms bend, bounce and react in such a way that the user feels as if the system is being pretty smart anyway.
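To make that a bit more concrete, here’s a toy sketch of “embodied” response in the Big Dog spirit: a damped spring that rights itself after a kick with no model of the disturbance at all. The numbers are arbitrary; the structure (a restoring force plus damping) does all the work:

```python
def settle(angle, velocity, stiffness=4.0, damping=1.2, dt=0.01, steps=2000):
    """Passively return a tilted body toward upright (angle 0).

    No planning, no model of the kick: the restoring force and the
    damping term are the 'architecture' that absorbs the disturbance.
    """
    for _ in range(steps):
        accel = -stiffness * angle - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
    return angle

print(abs(settle(angle=0.5, velocity=0.0)))  # a 0.5-radian "kick" decays to ~0
```

Nothing in that loop “knows” it was kicked; recovery falls out of the system’s structure. That, in miniature, is the kind of responsiveness I mean.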

At some point I’d like to have some good examples for this, but the ones I’m working on most diligently at the moment are NDA-bound. When I have time I’ll see if I can “anonymize” some work well enough to share. In the meantime, keep an eye on those robots.


Notes on IA from 2002

Tonight, I ran across some files from 2002 (10 yrs ago), some of which were documents from the founding of the IA Institute. At some point I need to figure out what to do with all that.

But among these files was a text clipping that looks as if it was probably part of a response I was composing for a mailing list or something. And it struck me that I’ve been obsessing over the same topics for at least 10 years. Which is … comforting … but also disconcerting. I suppose I’m glad I’m finally writing a book on some of these issues, because now maybe I can exorcise them and move on.

Here’s the text clipping.

I agree it’s not specific to the medium. If you can call the Internet a medium. I really think it’s about creating spaces from electrons rather than whole atoms.

If putting two bricks together is architecture (Mies), then putting two words together is writing. The point is that you’re doing architecture or writing, but not necessarily well. Both acts have to be done with a rationale, with intention and skill. And their ultimate success as designs depends upon how well they are used and/or understood.

But what about putting two ideas together, when the ideas manifest themselves not as words alone, but as conceptual spaces that are experienced physically, with clicking fingers and darting eyeballs? No walking necessary, just some control that’s quick enough to follow each connecting thought.

What really separates IA from writing? I could say that putting About and Careers together is “writing” … It’s a phrase “about careers.” But if I put About and Careers together in the global navigation of a website, with perhaps a single line between them to separate them, there’s another meaning implied altogether.

Yet those labels are just the signs representing larger concepts, that bring with them their own baggage and associations, and that get even weirder when we put them together (they tend to exert force on one another, like gravity, in their juxtaposition). The decision to name them as they are, to place the entryways (signs/labels) to these areas in a globally accessible area of the interface, to group them together, and how the resulting “rooms” of this house unfold within those concepts — that’s information architecture.

We use many tools for the structuring of this information within these conceptual rooms, and these can include controlled vocabularies, thesauri, etc. There is a whole, deep, ancient and respected science behind these tools alone. But just as physics and engineering do not make up the whole of physical Architecture, these tools do not make up the whole of Information Architecture.

Why did we not have to think about this stuff very much before the Web? Because no electron-based shared realities were quite so universally accessed before. Yes, we had HCI and LIS. Yes, we had interaction design and information design. We had application design and workflow and ethnographic discovery methods and business logic and networked information.

But the Web brings with it the serendipitous combination of language, pictures, and connections between one idea and another based on nothing but thought. Previous information systems were tied primarily to their directory structures. But marrying hypertext (older than the web) to an easy, open markup language (HTML) and nearly universal, instantaneous access from around the world (unlike hypertext applications and documents, such as we made with HyperCard) created an entirely new entity that we still haven’t gotten our heads around quite yet.

We’re still drawing on cave walls, but the drawings become new caves that connect to other caves. All we have to do is write the sign, the word, the picture, whatever, on the wall, and we’ve brought another place into being.

I wonder if Information Architecture can be seen as Architecture without having to worry so much about time and space? Traditional architecture sans protons and nuclei?

What if Jerusalem were an information space rather than a physical one? I wonder if many faiths could then somehow live there together in peace, with some clever profile-based dynamic interface control? (One user sees a temple, another sees a mosque?)

I wonder if Information Architecture is more about anthills and cowpaths than semantic hierarchies?

I wonder if MUSHes, MOOs and Multiplayer Quake already took Information Architecture as far as it’ll ever go, and we’re just trying to get business-driven IA to catch up?

 

Reading this now is actually disturbing to me. Not unlike if I were Jack Torrance’s wife looking at his manuscript in The Shining … but then realizing I was Jack. Or something.

So. Exorcism. Gotta keep writing.

 

The Contexts We Make

I’ve been presenting on this topic for quite a while. It’s officially an obsession. And I’m happy to say there’s actually a lot of attention being paid to context lately, and that is a good thing. But it’s mainly from the perspective of designing for existing contexts in the world, and accommodating or responding appropriately to them.

For example, the ubicomp community has been researching this issue for many years — if computing is no longer tied to a few discrete devices and is essentially happening everywhere, in all sorts of parts of our environment, how can we make sure it responds in relevant, even considerate ways to its users?

Likewise, the mobile community has been abuzz about the context of particular devices, and how to design code and UI that shapes the experience based on the device’s form factor, and how to balance the strengths of native apps vs web apps.

And the Content Strategy practitioner community has been adroitly handling the challenges of writing for the existing audience, situational, and media contexts that content may be published or syndicated into.

All of these are worthy subjects for our attention, and very complex challenges for us to figure out. I’m on board with any and all of these efforts.

But I genuinely think there’s a related, but different issue that is still a blind spot: we don’t only have to worry about designing for existing contexts, we also have to understand that we are often designing context itself.

In essence, we’ve created a new dimension, an information dimension that we walk around in simultaneously with the one where we evolved as a species; and this dimension can significantly change the meaning of our actions and interactions, with the change of a software rule, a link name or a label. There are no longer clear boundaries between “here” and “there” and reality is increasingly getting bent into disorienting shapes by this pervasive layer of language & soft-machinery.

My thinking on this central point has evolved over the last four to five years, since I first started presenting on the topic publicly. I’ve since been including a discussion of context design in almost every talk or article I’ve written.

I’m posting below my 10-minute “punchy idea” version developed for the WebVisions conference (iterations of this were given in Portland, Atlanta & New York City).

I’m also working on a book manuscript on the topic, but more on that later as it takes more shape (and as the publisher details are ironed out).

I’m really looking forward to delving into the topic with the attention and breadth it needs for the book project (with trepidation & anxiety, but mostly the positive kind ;-).

Of course, any and all suggestions, thoughts, conversations or critiques are welcome.

PS: as I was finishing up this post, John Seely Brown (whom I consider a patron saint) tweeted this bit: “context is something we constantly underplay… with today’s tools we can now create context almost as easily as content.” Synchronicity? More likely just a result of his writing soaking into my subconscious over the last 12-13 years. But quite validating to read, regardless 🙂

I’m pasting the SlideShare-extracted notes below for reference.