collective intelligence and the illusion of control
28 March, 2007
OK, the title of this talk was a bit on the vague side, but it did turn out to be very inspiring. The bottom line was how to keep complex systems usable.
Charles Armstrong took the stage for the first part. His point was about making ‘sociomimetic’ systems easier to use: ‘sociomimetic’ means mirroring social behavioural patterns in electronic information systems. What it boils down to: the underlying system becomes complex and no longer intuitively understandable, but people still want to use it. How do you give users some guidance for their intuition about how the system works?
Basically there are three factors to do this:
- grokability – make it easy for users to understand what something is. Even a ‘caveman’ should be able to understand what an axe could do.
- predictability – once the tool is understood, is it predictable in its function? Understanding what a computer can do still doesn’t make it predictable how to accomplish certain tasks, which makes for a very steep learning curve.
- relevancy – having understood a tool and being able to predict how it works, is it actually useful?
But how could these factors be addressed in complex systems?
- To improve grokability, something should be as simple as possible. Armstrong gave the example of two London Underground maps. The newer maps (as we know them today) don’t really reflect real-world geography, but they make it very easy to understand how the system is laid out and to form a mental model.
- As an example of how to improve predictability, the Eurofighter Typhoon aircraft was given. Without automatic adjustments this type of plane would be almost impossible to fly because of its ‘aerodynamic instability in the subsonic region’. Beneath the standard aircraft controls sits a very advanced system that keeps the plane predictable. Again, this complexity is not exposed to the pilot, resulting in a predictable experience.
- Although politics in a democracy is very complex, politicians seem to do a good job of conveying the relevancy of what they do. How do they do it? By simplifying their messages to the bare minimum. They persuade voters with ‘in-your-face usefulness’ like better education and lower taxes.
Mike Stenhouse took over the talk and showed a lot of examples of systems that were inherently complex but easy to use. He started with Photoshop filters: very powerful, but not intuitive. In the 1990s, however, the KPT (Kai’s Power Tools) filters broke from the norm and made filters very intuitive. Other examples: the hidden complexity of 3D modelling in Bryce, tag clouds on last.fm that factor in the authority of the tagger (without anyone noticing this complexity), Google search (inherently complex, but just one text box to query), Flickr’s interestingness, and so on. Bottom line: it IS possible to address the above factors in complex systems design.
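The last.fm example — tag weights that quietly factor in the authority of the tagger — could be sketched roughly like this. This is a minimal illustration, not last.fm’s actual algorithm; the authority scores, default value, and sample data are all made up:

```python
from collections import defaultdict

def weighted_tag_cloud(taggings, authority):
    """Aggregate tag weights, scaling each tagging by the tagger's
    authority score (hypothetical values between 0.0 and 1.0)."""
    weights = defaultdict(float)
    for user, tag in taggings:
        # Unknown users get a low default authority (arbitrary choice).
        weights[tag] += authority.get(user, 0.1)
    return dict(weights)

taggings = [("alice", "jazz"), ("bob", "jazz"), ("carol", "rock")]
authority = {"alice": 0.9, "bob": 0.2, "carol": 0.5}
print(weighted_tag_cloud(taggings, authority))
```

The point of the talk applies here: the user only ever sees the resulting tag sizes, never the authority arithmetic behind them.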
Some hints and tips were shared at the end. It’s good to use metaphors: in their product (from Trampoline Systems) they use a radar metaphor, where a slider changes the radar’s range, resulting in less or more email about certain topics. Also, an expert mode is not always a good idea, because it might intimidate the average user. A better approach is to slowly offer more functionality to people who seem to be experts.
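That last tip — gradually revealing functionality to apparent experts, often called progressive disclosure — could be sketched like this. The feature names, expertise signals, and thresholds are all invented for illustration:

```python
# Minimal sketch of progressive disclosure. The idea: show everyone the
# basics, and unlock advanced features only once a user's behaviour
# suggests expertise. All thresholds here are arbitrary assumptions.
BASIC_FEATURES = ["search", "save"]
ADVANCED_FEATURES = ["batch-edit", "regex-search", "macros"]

def visible_features(actions_performed, shortcuts_used):
    """Return the feature set a user should see, based on two
    hypothetical expertise signals."""
    features = list(BASIC_FEATURES)
    if actions_performed > 50 and shortcuts_used > 5:
        # Likely an expert: reveal the advanced tools as well.
        features += ADVANCED_FEATURES
    return features

print(visible_features(10, 0))    # a newcomer sees only the basics
print(visible_features(200, 12))  # a power user sees everything
```

The design choice matches the talk: no intimidating ‘expert mode’ switch, just a quiet expansion of the interface as the user grows into it.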
Of course, this presentation sparked some questions from the audience. Someone asked whether a tool would be weakened by making it too simple. Another audience member answered with the example of the ‘choke’ in old cars: nobody misses it.