Summer 2015 Update — Talks, Posts, and other things.

I’ve been pretty busy since my last blog post in December, when Understanding Context launched. Some really great work with clients, lots of travel, and a number of appearances at events have kept me happily occupied. Some highlights:

Talks and Things

O’Reilly: Webcast for Understanding Context, presented on June 10. Luckily, with a quick registration, you can watch the whole thing for free!

IA Summit: where I co-facilitated a workshop on Practical Conceptual Modeling with my TUG colleagues Kaarin Hoff and Joe Elmendorf. (See the excellent post Kaarin created at TUG summarizing choice bits of the workshop.)

SXSW Workshop: I taught an invited workshop at SXSW with my colleague Dan Klyn on “Information Architecture Essentials,” which was wildly successful and well-reviewed. We’re happy to say we’ll be teaching versions of this workshop again this year, at IA Summit Italy and WebVisions Chicago!

UX Lisbon: where I taught a workshop on analyzing and modeling context for user experiences (which I also taught in abbreviated form at IA Summit, and which I’ll be reprising at UX Week later this summer).

UX Podcast: While in Lisbon, I had the pleasure of doing a podcast interview jointly with Abby Covert, hosted by the nice folks at UX Podcast.

Upcoming Appearances

As mentioned above, there are some upcoming happenings — I encourage you to sign up for any that aren’t already sold out!

Understanding Context — Some thoughts on writing the book.

After several years of proposing, writing, revising, and production, Understanding Context is finally a real book. For obvious reasons, I’ve not been especially prolific here at Inkblurt, since every spare moment was mostly used to get the book done.

And it’s still not really done … like the old saying goes, a work of writing is never finished, only abandoned. As I say in the Preface (now online), the book is definitely an act of understanding that is in progress. It’s an invitation to readers to come along on the journey and keep it moving in their own ways, from their own perspectives.

Context Book: A Shape Emerging

I’ve been writing a book on designing context for about a year now. It’s been possibly the most challenging thing I’ve ever done.

I’m starting to see the end of the draft. It’s just beyond my carpal-tunnel-throbbing clutches. Of course, there are still many weeks of revision, review, and the rest to go.

When I proposed the book to O’Reilly Media, I included an outline, as required. But I knew better than to post that outline anywhere, since I figured it would likely change as I wrote. It turns out, I was more right than I knew. So many of the hunches that nudged me into doing this work turned out to be a lot more complicated, but mostly in a good way.

One major discovery for me was how important the science around “embodied cognition” would be to sorting all this out; also, how little I actually knew about the subject. Now, I find myself fully won over by what some call the “Radical Embodied Cognition” school of thought. An overview of the main ideas can be found in a post at the Psych Science Notes blog, written by a couple of wonderful folks in the UK, from whom I’ve learned a great deal. (They also tweet via @PsychScientists.)

At this point, I think the book has a fairly stable structure that’s emerged through writing it. There are 5 chapters; I have about a third of the 4th chapter, plus the 5th, still to go. (These shouldn’t take me nearly as long as the earlier stuff, for which I had to do a lot more research and learning.)

Partly to help explain this structure to myself, I came up with a diagram that shows how the points covered early on are revisited and built upon, layer by layer.

[Diagram: the book’s structure, with early points revisited and built upon layer by layer]

Admittedly, the topics listed here don’t sound like a typical O’Reilly book; some might look at it and say “this is too theoretical, it’s not practical enough for me.” But, as I mention in the (still in draft) Preface, “there’s nothing more practical than understanding the properties of the materials you work with, and the principles behind how people live with the things you make.”

There will be “practical examples” of course, though perhaps not every 2-3 pages as in many UX-related books. (Nothing wrong with that, of course; it’s just not as appropriate for this subject matter.)

However — I’m still in the thick of writing, so who knows what could change? Now back to the manuscript. *typetypetypetype*


Embodied Responsiveness

I’ve been thinking a lot lately about responsiveness in design, and how we can build systems that work well in so many different contexts, on so many different devices, for so many different scenarios. So many of our map-like ways of predicting and designing for complexity are starting to strain at the seams. I have to think we’re fast approaching a point where our maps simply will not scale.

Then there are the secret-sauce, “smart” solutions that promise they can take care of the problem. It seems to happen on at least every other project: one or more stakeholders are convinced that the way to make their site/app/system truly responsive to user needs is to employ some kind of high-tech, cutting-edge technology.

These can take the form of Clippy-like “helpers” that magically know what the user needs, “conversation engines” that try to model a literal conversational interaction with users (like Jellyvision), or established technologies like the “collaborative filtering” technique pioneered by the likes of Amazon.
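Since “collaborative filtering” gets invoked so often, here’s a minimal sketch of the item-based flavor of the idea (“people who liked X also tended to like Y”). To be clear, this is a toy illustration with made-up ratings data, not Amazon’s actual implementation:

```python
from math import sqrt

# Toy ratings: user -> {item: rating}. Entirely made-up data.
ratings = {
    "ann": {"book_a": 5, "book_b": 3, "book_c": 4},
    "ben": {"book_a": 4, "book_b": 4},
    "cai": {"book_b": 2, "book_c": 5},
}

def cosine_similarity(item_x, item_y):
    """Similarity between two items, based on users who rated both."""
    common = [u for u in ratings if item_x in ratings[u] and item_y in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][item_x] * ratings[u][item_y] for u in common)
    norm_x = sqrt(sum(ratings[u][item_x] ** 2 for u in common))
    norm_y = sqrt(sum(ratings[u][item_y] ** 2 for u in common))
    return dot / (norm_x * norm_y)

def recommend(user):
    """Score unrated items by similarity to the items this user rated."""
    items = {i for r in ratings.values() for i in r}
    seen = ratings[user]
    scores = {
        candidate: sum(cosine_similarity(candidate, liked) * rating
                       for liked, rating in seen.items())
        for candidate in items if candidate not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ben"))  # ['book_c']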

Most of the time, these sorts of solutions hold out more promise than they can fulfill. They aren’t bad ideas — even Clippy had merit as a concept. But to my mind, more often than not, these fancy approaches to the problem are a bit like building a 747 to take people across a river — when all that’s needed is a good old-fashioned bridge. That is, most of the time the software in question isn’t doing the basics. Build a bridge first, then let’s talk about the airliner.

Of course, there are genuine design challenges that do still seem to need that super-duper genius-system approach. But I still think more “primitive” methods can do most of the work: simple mechanisms and structures that, in combination, handle a great deal of complexity.

We have a cognitive bias that makes us think that anything that seems to respond to a situation in a “smart” way must be “thinking” its way through the solution. But it turns out, that’s not how nature solves complex problems — it’s not even really how our bodies and brains work.

I think the best kind of responsiveness would follow the model we see in nature — a sort of “embodied” responsiveness.

I’ve been learning a lot about this through research for the book on designing context I’m working on now. There’s a lot to say about this … a lot … but I need to spend my time writing the book rather than a blog post, so I’ll try to explain by pointing to a couple of examples that may help illustrate what I mean.

Consider two robots.

One is Honda’s famous Asimo. It’s a humanoid robot that is intricately programmed to handle situations … for which it is programmed. It senses the world, models the world in its brain, and then tells the body what to do. This is, by the way, pretty much how we’ve assumed people get around in the world: the brain builds a representation of the world around us and tells our body to do X or Y. What this means in practice, however, is that Asimo has a hard time getting around in the wild. Modeling the world and telling the limbs what to do based on that theoretical model is a lot of brain work, so Asimo is severely limited in the number of situations it can handle. In fact, it falls down a lot (as in this video) if the terrain isn’t predictable and regular, or if some tiny error throws it off. Even when Asimo’s software is capable of handling an irregularity, it often can’t process the anomaly fast enough to make the body react in time. This, in spite of the fact that Asimo has one of the most advanced “brains” ever put into a robot.

Another robot, nicknamed Big Dog, comes from a company called Boston Dynamics. This robot is not pre-programmed to calculate its every move. Instead, its body is engineered to respond in smart, contextually relevant ways to the terrain. Big Dog’s brain is actually very small and primitive, but the architecture of its body is such that its very structure handles irregularity with ease, as seen in this video where, about 30 seconds in, someone tries to kick it over and it rights itself.

The reason Big Dog can handle unpredictable situations is that its intelligence is embodied. It isn’t performing computations in a brain; the body is structured in such a way that it “figures out” the situation by the very nature of its joints, angles, and articulation. The brain is just along for the ride, providing a simple network for the body to talk to itself. As it turns out, this is actually much more like how humans get around: our bodies handle a lot more of our ‘smartness’ than we realize.
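To make that contrast concrete, here’s a toy sketch of structural (rather than computed) stability. It’s my own illustration and assumes nothing about either robot’s real control code: a one-dimensional “leg” modeled as a passive spring-damper. There’s no planner and no world model, yet it absorbs a kick and settles back to rest purely because of how it’s built:

```python
# Toy illustration: a passive spring-damper "leg" with no planner or
# world model. The structure itself absorbs a disturbance and settles
# back to its rest position. (All parameters are made up.)

stiffness = 20.0   # spring constant: restoring force per unit offset
damping = 4.0      # damping coefficient: resists velocity
mass = 1.0
dt = 0.01          # simulation timestep, in seconds

position, velocity = 0.0, 0.0

for step in range(300):
    if step == 50:
        velocity += 2.0  # the "kick": a sudden disturbance

    # No central computation about the disturbance; the force law
    # (the "body") simply reacts to whatever state it finds itself in.
    force = -stiffness * position - damping * velocity
    velocity += (force / mass) * dt
    position += velocity * dt

    if step % 50 == 0:
        print(f"t={step * dt:.2f}s position={position:+.3f}")

# Position spikes right after the kick, then settles back toward 0.
```

The point isn’t the physics; it’s that the “intelligence” lives in the structure’s parameters, not in any model of the disturbance.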

I won’t go into much more description here. (And if you want to know more, check out this excellent blog post on the topic of the robots, which leads to more great writing on embodied/extended cognition and related topics.)

The point I’m getting at is that there’s something to be learned here in terms of how we design information environments. Rather than trying to pre-program and map out every possible scenario, we need systems that respond intelligently by the very nature of their architectures.

A long time ago, I did a presentation where I blurted out that eventually we will have to rely on compasses more than maps. I’m now starting to get a better idea of what I meant: simple rules and simple structures that combine into a “nonlinear dynamical system.” The system should perceive the user’s actions and behaviors and, rather than trying to model what the user needs in some theoretical, brain-like way, the system’s body (for lack of a better way to put it) should be engineered so that its mechanisms bend, bounce, and react in such a way that the user feels as if the system is being pretty smart anyway.
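To give a deliberately trivial, hypothetical sketch of what this could look like in an information environment (not from any real project): imagine a navigation menu that reorders itself using two dumb local rules, “boost an item’s weight when it’s used” and “let every weight decay over time.” There’s no user model anywhere, yet the menu appears to adapt:

```python
# Hypothetical sketch: a menu that adapts through two simple local rules
# (boost on use, decay over time). No model of the user, just structure
# reacting to behavior.

DECAY = 0.9    # each tick, every weight shrinks a little
BOOST = 1.0    # each use, the chosen item gains weight

weights = {"home": 0.0, "reports": 0.0, "settings": 0.0, "help": 0.0}

def tick():
    """Time passes: unused items gradually sink back down."""
    for item in weights:
        weights[item] *= DECAY

def use(item):
    """The user acts: the chosen item rises."""
    weights[item] += BOOST

def menu_order():
    return sorted(weights, key=weights.get, reverse=True)

# Simulate a user who keeps visiting "reports".
for _ in range(5):
    use("reports")
    tick()
use("help")

print(menu_order())  # "reports" floats to the top, no user model required
```

The individual rules are almost embarrassingly simple; the responsiveness emerges from how they interact over time.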

At some point I’d like to have some good examples for this, but the ones I’m working on most diligently at the moment are NDA-bound. When I have time I’ll see if I can “anonymize” some work well enough to share. In the meantime, keep an eye on those robots.