making applications fun

6 April, 2007

Raph Koster gave a presentation called ‘The Core of Fun’ at ETech 2007 (slides available). The main theme of the talk was structure: ‘things that work’ have a certain structure. The same goes for fun things: music is full of structure, and so are games, art, etc. Structure is often constrained by a grammar, and understanding that grammar can help in designing ‘things that work’.

In order to make applications fun, some ‘magic’ (which was, after all, the theme of ETech) ingredients are necessary:

  • core mechanic (how?) – The action that must be taken to reach an objective is of main importance. It is something repeatable, so it had better be good (e.g. on eBay you keep coming back to bid). Over time, however, skill should come into play, so the action should be something that can be learned (feedback is necessary) and mastered. In the end the action becomes something competitive, and ratings and metrics underline this competitiveness. (Examples: bidding on eBay is something you can get better at; making a connection on LinkedIn is different with a CEO than with a colleague; etc.)
  • preparation (when?) – The point in time at which an action is taken matters. It’s all about context. In games it matters what has happened before you take a certain action at a certain time (e.g. in strategy games). In other words, the context should keep changing based on the actions that are taken. This is not only about the system, but also about the user. The user will change during the lifetime of the system, so why shouldn’t his context?
  • territory (where?) – The location where something happens (another form of context) should also make a difference. This way the user is faced with a ‘fresh scenario’. For example, on Amazon it should matter where (search results, author page, etc.) a user decides to buy something; this way his experience can stay fresh and new every time.
  • range of challenges (what?) – Once an action becomes ‘fun’, it should be possible to perform it in many places, with different outcomes. Metaphorically, if the action is a ‘hammer’, there should be a lot of different ‘nails’ around to hit.
  • choice of abilities (with?) – Extending the previous metaphor, it is not only about a lot of different ‘nails’, it is also about a lot of different ‘hammers’. There should be different actions that can reach a certain goal, and users should be able to pick their action to reach that goal. Based on the action they took, they should also be ‘rewarded’ in different ways. This is related to the first point: keep learning.
  • variable feedback (for?) – In the end, the question is: why is the user taking the actions? Well, he has a goal to reach. If there is just one goal, the system will be pretty boring. Therefore, there should be different outcomes of the system. Just as in games: sometimes a game ends in a surprise, sometimes in an even bigger challenge. But in the end, what makes a game fun is that someone who has reached a goal becomes highly visible (think of pinball high scores in arcades). Taking it back to systems: users might be rewarded for the hard work they have put into learning the system through discounts, ‘cheats’, new tools, etc.
  • bad return on investment (few?) – Games are never fun when they are an endless sequence of very simple actions with high payoffs. Users can only stay interested if they are being challenged right at the edge of their abilities.
  • cost of failure (phooey?) – Finally, just as in the real world, something cannot be fun if there are no consequences. Extreme sports like skydiving support this idea: they are fun because there is also some danger involved.

More information about this ‘theory of fun’ can be found in Koster’s book ‘A Theory of Fun for Game Design’ or on the book’s website.


The way Jeff Jonas gave his talk at ETech 2007 was similar to the content he was presenting. His theme was ‘enterprise amnesia’. Jonas has a history in Las Vegas, where he worked on fraud detection for casinos. The main problem in these organisations is that the left hand does not know what the right hand is doing. For example, a casino might not know that a dealer and a player at the same table share the same street address, which could indicate a case of fraud. Tying different databases (e.g. the employee and the visitor database) together could solve some of these problems.

Of course, just tying the databases together does not solve the problem automagically. Data in different places can be slightly different. Therefore, some smart techniques (pdf link) need to be in place to connect data from one record to another. This technique is now offered by IBM (where Jonas is a chief scientist). Basically, all data that shares some elements is compared in order to connect records into one ‘entity’. A byproduct is that while the database keeps accumulating more and more records on individuals, the number of individuals grows more slowly than the database. In other words, the information overload becomes a virtue: more detailed information about each individual is known.
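As a toy illustration of this kind of entity resolution (record ids and fields are made up here, and the actual technique Jonas describes is far more sophisticated), records that share an identifying element such as an address can be merged into one ‘entity’ with a union-find structure:

```python
# Toy entity resolution: records that share a (field, value) pair,
# e.g. the same street address, are merged into one 'entity'.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def resolve_entities(records):
    """Group record ids whose values overlap on any field."""
    uf = UnionFind()
    seen = {}  # (field, value) -> first record id with that value
    for rec_id, rec in records.items():
        uf.find(rec_id)  # register the record
        for field, value in rec.items():
            key = (field, value)
            if key in seen:
                uf.union(seen[key], rec_id)  # shared element: same entity
            else:
                seen[key] = rec_id
    groups = {}
    for rec_id in records:
        groups.setdefault(uf.find(rec_id), []).append(rec_id)
    return list(groups.values())

# A dealer (employee db) and a player (visitor db) share an address:
records = {
    "emp:17":  {"name": "J. Doe",   "address": "12 Main St"},
    "vis:204": {"name": "John Doe", "address": "12 Main St"},
    "vis:999": {"name": "A. Smith", "address": "9 Elm Rd"},
}
print(resolve_entities(records))
```

Here the dealer from the employee database and the player from the visitor database end up in the same entity because they share a street address, exactly the kind of connection a casino would want to notice.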

Another interesting point Jonas made is to treat data and queries as the same thing. When someone queries a system, that query is itself information that enriches the system. A simple example: a user looks for information, doesn’t find it, but does find that someone else was looking for the same thing. Having stored the earlier query makes it possible to connect these two individuals. Treating new data that enters the database as a query also has serious benefits: the new data is used to ask the question “what does this change about what we already knew?” Jonas calls this ‘perpetual analytics‘.
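A minimal sketch of treating queries as data (all names here are hypothetical): every query is stored alongside the data, so a new query can be matched not only against records but also against earlier questions, connecting people who searched for the same thing.

```python
# Toy 'queries are data too' store: every query is saved, and each new
# query is matched against earlier ones, so two users searching for the
# same thing can be connected even when the data itself has no answer.

class QueryStore:
    def __init__(self):
        self.queries = []  # (user, term) pairs, kept as data

    def search(self, user, term, data):
        # Match the term against the data...
        hits = [item for item in data if term in item]
        # ...and against earlier queries.
        earlier = [u for (u, t) in self.queries if t == term and u != user]
        self.queries.append((user, term))  # the query itself is stored
        return hits, earlier

store = QueryStore()
data = ["missing person: Jane", "shelter list"]
print(store.search("alice", "cousin Bob", data))  # no hits, no earlier askers
print(store.search("carol", "cousin Bob", data))  # no hits, but alice asked too
```

The second searcher finds no record, but the system can now point her to the first person who asked the same question.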

These techniques can be used for good and for bad. An example where sound data storage about individuals might have helped was the aftermath of Hurricane Katrina.

OK, the title of this talk was a bit on the vague side, but it did turn out to be very inspiring. The bottom line was how to keep complex systems usable.

Charles Armstrong took the stage for the first part. His point was about making ‘sociomimetic’ systems easier to use; sociomimetic means mirroring social behavioural patterns in electronic information systems. What it boils down to: the underlying system becomes complex and not intuitively understandable, but there are still people wanting to use it. How do you give users some guidance for their intuition about how the system works?

Basically there are three factors to do this:

  1. grokability – make it easy for the users to understand what something is. Even a ‘caveman’ should be able to understand what an axe can do.
  2. predictability – once the tool is understood, is it predictable in its function? Even if you understand what a computer can do, it is still not predictable how to accomplish certain tasks with it. This makes for a very steep learning curve.
  3. relevancy – having understood the tool and being able to predict how it works, is it actually useful?

But how could these factors be addressed in complex systems?

  1. To get better grokability, something should be as simple as possible. Armstrong gave the example of two London Underground maps. The newer maps (as we know them today) don’t really convey the real-world geography, but they make it really easy to understand how the system is laid out and to form a mental model.
  2. As an example of how to improve predictability, the Eurofighter Typhoon aircraft was given. Without automatic adjustments this type of plane would be almost impossible to fly, because of its ‘aerodynamic instability in the subsonic region’. Beneath the standard aircraft controls sits a very advanced system that keeps the plane stable. This complexity is not exposed to the pilot, resulting in a predictable experience.
  3. Although politics in a democracy is very complex, politicians seem to do a good job of conveying the relevancy of what they do. How do they do it? By simplifying their messages to the bare minimum: they persuade voters with ‘in-your-face usefulness’ like better education and lower taxes.

Mike Stenhouse took over the talk and showed a lot of examples of systems that are inherently complex, but easy to use. He started with Photoshop filters: very powerful, but not intuitive. In the 1990s, however, KPT filters broke from the norm and made filters very intuitive. Other examples were the hidden complexity of 3D modelling in Bryce, tag clouds that include the authority of the tagger (without anyone noticing this complexity), Google search (inherently complex, but just one text box to query), interestingness, etc. Bottom line: it IS possible to address the above factors in complex systems design.

Some hints and tips were shared at the end. It’s good to use metaphors: in their product (from Trampoline Systems) they use a radar metaphor, where a slider changes the range of the radar, resulting in less or more email about certain topics. Also, an expert mode is not always a good idea, because it might intimidate the average user. A better way around this is to slowly offer more functionality to people who seem to be experts.
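That last idea can be sketched as a simple progressive-disclosure heuristic (the feature names and thresholds below are invented for illustration): advanced functionality stays hidden until a user’s activity suggests expertise.

```python
# Sketch of progressive disclosure: reveal advanced features only once
# usage suggests expertise, instead of a separate, intimidating
# 'expert mode'. Thresholds and feature names are illustrative.

BASIC_FEATURES = ["search", "browse"]
ADVANCED_FEATURES = ["bulk edit", "saved filters", "API access"]

def visible_features(actions_taken, distinct_features_used):
    features = list(BASIC_FEATURES)
    # Hypothetical heuristic: heavy *and* varied use marks an expert.
    if actions_taken > 100 and distinct_features_used >= len(BASIC_FEATURES):
        features += ADVANCED_FEATURES
    return features

print(visible_features(12, 1))    # novice: basics only
print(visible_features(250, 2))   # expert: advanced tools unlocked
```

A real system would of course track richer signals than two counters, but the shape is the same: the interface grows with the user rather than confronting everyone with full complexity up front.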

Of course, this presentation sparked some questions from the audience. Someone asked whether making the tool too simple would not weaken it. Another audience member answered with the example of the ‘choke’ in old cars: nobody misses it.
