Designing the Engagement – About our Workshop for IA Summit

I’m happy to announce I’m collaborating with my Macquarium colleague, Patrick Quattlebaum, and Happy Cog Philadelphia’s inimitable Kevin Hoffman on presenting an all-day pre-conference workshop for this year’s Information Architecture Summit, in Denver, CO. See more about it (and register to attend!) on the IA Summit site.

One of the things I’ve been fascinated with lately is how important it is to have an explicit understanding of the organizational and personal context not only of your users, but of the corporate environment you’re working in, whether it’s your client’s or your own employer’s. When planning an engagement, understanding the motivations, power structures, systemic incentives and the rest of the mechanisms that make an organization run is immeasurably helpful in knowing how to plan and execute that engagement.

It turns out we have excellent tools at our disposal for understanding the client: UX design methods like contextual inquiry, interviews, collaborative analysis interpretation, personas/scenarios, and the like. All of these methods are just as useful for understanding the context of the engagement as they are for understanding the context of the user base.

Additionally, there are general rules of thumb that tend to be true in most organizations, such as how process starts out as a tool, but calcifies into unnecessary constraint, or how middle management tends to work in a reactive mode, afraid to clarify or question the often-vague direction of their superiors. Not to mention tips on how to introduce UX practice into traditional company hierarchies and workflows.

It’s also fascinating to me how understanding individuals is so interdependent with understanding the organization itself, and vice versa. The ongoing explosion of new knowledge in social psychology and neuroscience is giving us a lot of insight into what really motivates people, how and why they make their decisions, and the rest. These are among the topics Patrick & I will be covering during our portion of the workshop.

As the glue between the individual, the organization and the work, there are meetings. So half the workshop, led by Kevin Hoffman, will focus specifically on designing the meeting experience. It’s in meetings, after all, where all parties have to come to terms with their context in the organizational dynamics — so Kevin’s techniques for increasing not just the efficiency of meetings but the human and interpersonal growth that can happen in them will be invaluable. Kevin’s been honing this material for a while now, to rave reviews, and it will be a treat.

I’m really looking forward to the workshop, partly because, as in the past, I’m sure to learn as much from the attendees as they learn from the workshop presenters, if not more.

Strategy and Innovation: Strange Bedfellows

This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy” which, of course, is a different animal. But I don’t hear people questioning much exactly what we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that they just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand about how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t made of the same cloth.

[Slide: strategy and innovation]

1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody seems able to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side project that wasn’t part of the top-down-defined workload to begin with. That’s because the string of events leading to an innovation is never truly a rational, logical or linear process. In fact, we don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness after it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.
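To push the programming metaphor a bit further, here’s a minimal sketch in Python of strategy as pre-programmed branching. The circumstances and responses are invented for illustration; the point is simply that every anticipated condition gets a planned branch, and anything unanticipated falls through.

```python
# Toy sketch: strategy as a pre-programmed playbook. Each anticipated
# circumstance maps to a planned response. Names are invented.

PLAYBOOK = {
    "competitor_cuts_prices": "match_prices",
    "demand_spikes": "scale_up_production",
    "supplier_fails": "switch_to_backup_supplier",
}

def respond(circumstance: str) -> str:
    """Return the planned response, if the strategy anticipated this case."""
    if circumstance in PLAYBOOK:
        return PLAYBOOK[circumstance]
    else:
        # The weak spot of pure strategy: unanticipated circumstances
        # have no branch. A "full-fledged" strategy tries to make this
        # case unreachable; reality rarely cooperates.
        return "no_plan_for_this"

print(respond("demand_spikes"))           # scale_up_production
print(respond("new_technology_emerges"))  # no_plan_for_this
```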

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one that works on creating the right conditions for innovation to occur. That means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but when discussing innovation it always comes up. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and lets only the strongest ideas rise to the top. (See the BusinessWeek article describing Apple’s “10 to 3 to 1” approach.)
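As a toy illustration of that winnowing dynamic, here’s a rough sketch in Python. This is not Apple’s actual process (the article gives only the 10-to-3-to-1 ratios); the names and the random scoring are invented stand-ins for critique and testing.

```python
import random

# Toy model of a "10 to 3 to 1" selection funnel: generate many design
# ideas, score them (randomly here, standing in for critique, testing,
# and taste), and keep only the strongest at each round.

def winnow(ideas, keep):
    """Keep the `keep` highest-scoring ideas."""
    scored = [(random.random(), idea) for idea in ideas]
    scored.sort(reverse=True)
    return [idea for _, idea in scored[:keep]]

ideas = [f"design_{i}" for i in range(10)]  # 10 fleshed-out mockups...
shortlist = winnow(ideas, keep=3)           # ...narrowed to 3...
winner = winnow(shortlist, keep=1)          # ...narrowed to 1
print(winner)
```

The inefficiency is the point: nine of the ten ideas exist only to lose, and there’s no way to know in advance which one will survive.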

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

Innovation: tinkering, failing & imagining

I like this column by Nassim Nicholas Taleb. I haven’t read his book (The Black Swan), but now I think I might.

I’m more and more convinced that this ineffable activity called “innovation” is merely the story we tell after the fact, to help ourselves feel like we understand what happened to bring that innovation about. But, much like the faces we think we see in the chaos of clouds, these explanations are merely comfortable fictions that allow us to feel we’re in control of the outcome. When, in fact, success so often comes from trying and failing, even playing, until the law of averages and random inspiration collide to create something new. The trick is making sure the conditions are ideal for people to fail over and over, until imagination stumbles upon insight.

You Can’t Predict Who Will Change The World – Forbes.com

It is high time to recognize that we humans are far better at doing than understanding, and better at tinkering than inventing. But we don’t know it. We truly live under the illusion of order, believing that planning and forecasting are possible. We are scared of the random, yet we live from its fruits. We are so scared of the random that we create disciplines that try to make sense of the past–but we ultimately fail to understand it, just as we fail to see the future. … We need more tinkering: uninhibited, aggressive, proud tinkering. We need to make our own luck. We can be scared and worried about the future, or we can look at it as a collection of happy surprises that lie outside the path of our imagination.

He rails against the wrong-headed approach of factory-style standardization for learning and doing. He doesn’t name them outright, but I suspect No Child Left Behind and Six Sigma are among his targets.

Caveat: the column does tend to oversimplify a few things, such as describing whole cultures as non-inventive instruction-following drones, but that may just be part of the polemic. There’s more good stuff than ill, though.

How the Web Melts Distinctions

I finally got a chance to listen to Bruce Sterling’s rant for SXSW 2007 via podcast as I was driving between PA and NC last week.

There were a lot of great things in it. A number of people have taken great notes and posted them (here’s one example). It’s worth a listen either way — as are all of his talks. I like how Bruce is at a point where he’s allowed to just spin whatever comes to mind for an hour to a group of people. Not because all of it is gold — but because the dross is just as interesting as the gold, and just as necessary.

A lot of this year’s talk was on several books he’s reading, one of which is Yochai Benkler’s The Wealth of Networks. It’s fascinating stuff — and makes me want to actually read this thing. (It’s available online for free — as are some excellent summaries of it, and a giant wiki he set up.)

In the midst of many great lines, one of the things Sterling said that stuck with me was this (likely a paraphrase):

“The distinctions just go away if you’re given powerful-enough compositing tools.”

He was talking about commons-based peer production — things like mashups and remixes, fan art, etc., and how the distinctions between various media (photography, painting, particular instruments, sculpture, etc.) blur when you can just cram things together so easily. He said that it used to be you’d work in one medium or genre or another, but now “Digital tools are melting media down into a slum gully.”

First, I think he’s being a little too harsh here. There have always been amateurs who create stuff for and with their peers, and who all think it’s great in a way that has more to do with their own bubble of mutual appreciation than any “universal” measure of “greatness.” It just wasn’t available for everyone across the globe to see online. I’ve been in enough neighborhood writers’ circles and seen enough neighborhood art club “gallery shows” to know this. I’m sure he has too. This is stuff that gives a lot of people a great deal of satisfaction and joy (and drama, but what doesn’t?). It’s hard to fault it — it’s not like it’s going to take over the world somehow.

I think his pique has more to do with how the “Wired Culture” at large (the SXSW-attending aficionados and pundits) seems to be enamored with it, lauding it as some kind of great democratizing force for creative freedom. But that’s just hype — so all you really have to do is say “we’ll get over it” and move on.

Second, though, is the larger implication: a blurring between long-standing assumptions and cultural norms in communities of creative and design practice. Until recently, media have changed so slowly in human history that we could take for granted the distinctions between photography, design, architecture, painting, writing, and even things like information science, human factors and programming.

But if you think of the Web as the most powerful “compositing tool” ever invented, it starts to be more clear why so many professions / practices / disciplines are struggling to maintain a sense of identity — of distinction between themselves and everyone else. It’s even happening in corporations, where Marketing, Technical Writing, Programming and these wacky start-up User-Experience Design people are all having to figure each other out. The Web is indeed a digital tool that is “melting” things down, but not just media.

Community architectures for good or ill

Austin Govella puts a question to me in his post at Thinking and Making: “Does Comcast have the DNA to compete in a 2.0 world?”

Context of the post: Austin is wondering about this story from WSJ, “Cable Giant Comcast Tries to Channel Web TV” — specifically Jeremy Allaire’s comments doubting Comcast’s ability to compete in a “Web 2.0” environment.

At the end of his post, Austin says:

And the more important question, for every organization, how do you best change your DNA to adapt to new ages? Is it as simple as adjusting your organization’s architecture to enable more participation from good DNA? What happens if your internal conversations propagate bad DNA?
This is my question for Andrew: how do you architect community spaces to engender good DNA and fight infections of bad DNA?

My answer: I don’t know. I think this is something everybody is trying to figure out at once. It’s why Clay Shirky is obsessing over it. It’s why Tim O’Reilly and others are talking about Codes of Conduct.

So, when it comes to specifics, I don’t know that we have a lot of templates that we can say work most of the time… it’s so dependent on the kind of community, culture, etc.

However, in general, I think moderation tools that allow the organism to tend to itself are the best way to go. By that I mean “karma” functions that allow users to rate, comment, and police one another to a degree.

That, plus giving users the opportunity to create rich profiles that they come to identify with. Any geeks out there like me know what it’s like to create a quickie D&D character just to play with for the day — you can do whatever you want with it and it doesn’t matter. But one that you’ve invested time in, and developed over many sessions of gaming, is much more important to you. I think people invest themselves in their online ‘avatars’ (if you consider, for example, a MySpace profile to be an avatar — I do), and they’re generally careful about them, if they can be tied to the identity in a real way (i.e. it isn’t just an anonymous ‘alt’).

In short, a few simple rules can create the right structure for healthy complexity.
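To make “a few simple rules” concrete, here’s a minimal sketch in Python of the kind of karma mechanism I mean. The names and threshold are invented for illustration, not any particular site’s system: users vote on one another’s posts, reputation accrues to authors, and low-scoring content sinks out of view without a central moderator stepping in.

```python
# A minimal sketch of a self-tending karma mechanism (invented names
# and threshold, not any particular site's system): users rate one
# another's posts, reputation accrues to the author, and low-scoring
# content collapses by default rather than being removed by an admin.

from collections import defaultdict

HIDE_THRESHOLD = -3  # posts at or below this score are hidden by default

post_scores = defaultdict(int)  # post_id -> net votes
user_karma = defaultdict(int)   # user_id -> accumulated reputation
post_author = {}                # post_id -> user_id

def publish(post_id, author_id):
    post_author[post_id] = author_id

def vote(post_id, delta):
    """delta is +1 (up) or -1 (down); the author's karma moves with it."""
    post_scores[post_id] += delta
    user_karma[post_author[post_id]] += delta

def is_visible(post_id):
    # The community, not an administrator, decides what sinks from view.
    return post_scores[post_id] > HIDE_THRESHOLD

publish("p1", "alice")
vote("p1", +1)
print(is_visible("p1"), user_karma["alice"])  # True 1
```

Tying karma to a persistent identity is what makes this work: the investment in the profile is what users are careful not to squander.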

As for Comcast, I suspect the company is generally perceived to be a lumbering, last-century-media leviathan. So it’s easy for people like Allaire to make these assumptions. I might have made similar assumptions myself, if I didn’t personally know some of the talented people who work at Comcast now!

What Allaire doesn’t come right out and say (maybe he doesn’t understand it?) is that the Web 2.0 video space isn’t so much about delivering video as about providing the social platform for people to engage one another around the content. Like Cory Doctorow said (and yes, I’m quoting it for like the 100th time), content isn’t king, “conversation is king; content is just something to talk about.”

Having the content isn’t good enough. Having the pipes and the captive audience isn’t good enough either. From what I’ve seen of Ziddio and the like, Comcast is aware of this.

But it’s weird that the story in WSJ only mentions the social web as a kind of afterthought: “Competitors also are adding social networking and other features to their sites to distinguish them from traditional television.” As if social networking is just an added feature, like cup holders in cars. Obviously, WSJ isn’t quite clued in to where the generative power of Web 2.0 really lives. Maybe it’s because they’re stuck in an old-media mindset? Talk about DNA!