hacking day at yahoo

6 April, 2007

Chad Dickerson presented his experiences with the Yahoo Hack Day at Etech 2007. This post can be fairly short. Yahoo regularly organizes internal hack days to get developers together to work on quick prototypes or demonstrations. There are just a few rules:

  • it’s about building stuff, not about powerpoint
  • the day goes for 24 hours: it starts at 12am on one day and ends at 12am the next day, with presentations to colleagues and management (no powerpoint!)
  • presentations last 90 seconds
  • there are no upfront reviews
  • and basically, that’s it!

At Yahoo this results in hundreds of prototypes. A nice side effect is that a lot of people get to know new colleagues, because they do not all work on prototypes with their direct colleagues. People also learn about each other’s knowledge and skills. This is a benefit in day-to-day work, as people know where to go with specific problems. And most importantly: it’s a lot of fun! (Yahoo has a website with a gallery of ‘mashups’ where typical examples of such hack events can be found.)


making applications fun

6 April, 2007

Raph Koster gave a presentation called ‘the core of fun’ at Etech 2007 (slides available). The main theme of the talk was structure: ‘things that work’ have a certain structure. So do fun things: music is full of structure, and so are games, art, etc. Structure can often be captured in a grammar, and understanding that grammar can help in designing ‘things that work’.

In order to make applications fun, some ‘magic’ ingredients (magic was, after all, the theme of Etech) are necessary:

  • core mechanic (how?) – The action that must be taken to reach an objective is of main importance. It is something repeatable, so it had better be good. (E.g. at ebay you keep coming back to bid.) Over time, however, skill should come into play. So the action should be something that can be learned (feedback is necessary) and mastered over time. In the end the action becomes something competitive, and ratings and metrics underline this competitiveness. (Examples: bidding on ebay is something you can get better at, making a connection on linked-in is different with a CEO than with a colleague, etc.)
  • preparation (when?) – The point in time at which an action is taken matters. It’s all about context. In games it matters what has happened before you take a certain action at a certain time (e.g. in strategy games). In other words, the context should keep changing based on the actions that are taken. This is not only about the system, but also about the user. The user will change during the lifetime of the system, so why shouldn’t his context?
  • territory (where?) – The location where something happens (another form of context) should also make a difference. This way the user is faced with a ‘fresh scenario’. For example on Amazon, it should matter where (search results, author page, etc.) a user decides to buy something; this way his experience can stay fresh and new every time.
  • range of challenges (what?) – Once an action becomes ‘fun’, it should be possible to perform it in many places, with different outcomes. Metaphorically, if the action is a ‘hammer’, there should be a lot of different ‘nails’ around to hit.
  • choice of abilities (with?) – Extending the previous metaphor, it’s not just about a lot of different ‘nails’, it is also about a lot of different ‘hammers’. There should be different actions that can reach a certain goal. Users should be able to pick their action to reach that goal. Based on the action they took, they should also be ‘rewarded’ in different ways. This is related to the first point: keep learning.
  • variable feedback (for?) – In the end, the question is: why is the user taking these actions? Well, he has a goal to reach. If there is just one goal, the system will be pretty boring. Therefore, there should be different outcomes of the system. Just like in games: sometimes a game ends in a surprise, sometimes in an even bigger challenge. But in the end, what makes a game fun is that someone who reaches a goal becomes highly visible (think pinball high scores in arcades). Taking it back to systems: users might be rewarded for the hard work they have put into learning the system through discounts, ‘cheats’, new tools, etc.
  • bad return on investment (few?) – Games are never fun when they are an endless sequence of very simple actions with high payoffs. Users only stay interested if they are being challenged just at the edge of their abilities.
  • cost of failure (phooey?) – Finally, just as in the real world, something cannot be fun if there are no consequences. Extreme sports like skydiving support this idea: they are fun because there is also some danger involved.

More information about this ‘theory of fun’ can be found in Koster’s book ‘A theory of fun for game design’ or on the book’s website.

myths of innovation

6 April, 2007

Scott Berkun gave a fun talk on innovation at Etech 2007, in line with his upcoming book (the myths of innovation). His presentation mainly featured a lot of good quotes about innovation, which were pretty interesting. The most interesting quote was from William McKnight of 3M about his ‘innovation ethos’ (1948):

As our business grows, it becomes increasingly necessary to delegate responsibility and to encourage men and women to exercise their initiative. This requires considerable tolerance. Those men and women, to whom we delegate authority and responsibility, if they are good people, are going to want to do their jobs in their own way.

Mistakes will be made. But if a person is essentially right, the mistakes he or she makes are not as serious in the long run as the mistakes management will make if it undertakes to tell those in authority exactly how they must do their jobs.

Management that is destructively critical when mistakes are made kills initiative. And it’s essential that we have many people with initiative if we are to continue to grow.

In the end his main point about how to reach innovation was a list of values:

  • delegate responsibilities
  • allow people to do their job in their own way
  • expect mistakes to be made
  • reward initiative

Two of the myths that were dealt with: innovation is not possible in a big organisation (counter: how is 500,000 people working on the moon landings not a big organisation?), and innovation only happens now (counter: everything we see around us has come from some earlier form of innovation).

The way Jeff Jonas gave his talk at etech07 was similar to the content he was presenting. His theme was ‘enterprise amnesia’. Jonas has a history in Las Vegas, where he worked on fraud detection for casinos. The main problem in these organisations is that the left hand does not know what the right hand is doing. For example, a casino might not know that a dealer and a player at the same table share the same street address, which could indicate a fraud case. Tying different databases (e.g. the employee and the visitor database) together could solve some of these problems.

Of course, just tying the databases together does not solve the problem automagically. Data in different places can be slightly different. Therefore, some smart techniques (pdf link) need to be in place to connect data from one record to another. This technique is now offered by IBM (where Jonas is a chief scientist). Basically, all data that shares some elements is compared and connected into one ‘entity’. A byproduct is that while the database keeps accumulating more and more records on individuals, the number of distinct individuals grows more slowly than the database. In other words: the information overload becomes a virtue, because more detailed information about each individual is known.
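
Just to make the idea concrete, here is a minimal sketch of that kind of matching (my own illustration in Python, not IBM’s actual technique): records from different databases that share a normalized identifying element, such as an address, get folded into the same ‘entity’.

```python
from collections import defaultdict

# Hypothetical records from two separate databases (employees and visitors).
records = [
    {"id": "emp-17", "name": "J. Smith",   "address": "12 Main St, Las Vegas"},
    {"id": "vis-42", "name": "Jane Smith", "address": "12 main st., las vegas"},
    {"id": "vis-99", "name": "Bob Jones",  "address": "7 Elm Ave, Reno"},
]

def normalize(address: str) -> str:
    """Crude normalization so slightly different spellings still match."""
    return "".join(ch for ch in address.lower() if ch.isalnum())

# Group records by the normalized element they share; each group is one 'entity'.
entities = defaultdict(list)
for rec in records:
    entities[normalize(rec["address"])].append(rec["id"])

for key, members in entities.items():
    if len(members) > 1:
        print("possible match:", members)  # e.g. the dealer and the player
```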

Another interesting point Jonas made is to treat data and queries as the same thing. When someone queries a system, that query is itself information that enriches the system. A simple example: a user is looking for information, doesn’t find it, but does find that someone else was looking for the same thing. Having stored the earlier query now makes it possible to connect these two individuals. Treating new data that enters the database as a query also has serious benefits: the new data is used to ask the question “what does this change about what we already knew?” Jonas calls this ‘perpetual analytics‘.
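
A rough sketch of the ‘queries are data’ idea (the structure and names are my own assumptions): every incoming item, whether a record or a question, is stored and matched against everything stored before it, so new data can answer an old question.

```python
store = []  # holds both data records and past queries, treated uniformly

def ingest(item):
    """Treat every new item as a question against everything we already know."""
    matches = [old for old in store if old["subject"] == item["subject"]]
    store.append(item)
    for old in matches:
        print(f"new {item['kind']} about {item['subject']!r} connects to an "
              f"earlier {old['kind']} from {old['who']}")

# A user asks for something that is not in the system yet...
ingest({"kind": "query", "who": "alice", "subject": "flight BA-117"})
# ...later, data about the same subject arrives and the two get connected.
ingest({"kind": "record", "who": "airline", "subject": "flight BA-117"})
```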

These techniques can be used for good and for bad. An example where sound data storage about individuals might have helped was the Katrina aftermath.

During the O’Reilly Radar Briefing at Etech 2007, there was a short but thought-provoking discussion about energy, with Alec Proudfoot, a co-chair of the Energy Innovation Conference 2007, Paul Kedrosky and Rich Miller. An interesting point came up around web2.0. Basically, web2.0 is all about centralization of computing: applications move from PCs to the web. The computing takes place at the other side of a network connection instead of on the desktop. This has a great influence on where electricity is consumed. It is already known that processors for datacenters cost more in energy consumption than their upfront purchase price.

The result is that data centers are moving to places where energy is more ‘abundant’, for example central Washington state. It’s like going back in time to the industrial revolution, when factories were built close to the necessary resources. Now the compute farms are moving to rural areas where green electricity (hydropower) is available. Another interesting piece of information: the current backlog for ordering backup diesel generators is about 14 months. Amazing.

It will be interesting to see what happens if services like google applications or the online photoshop application really take off.

update: Nicholas Carr also makes the same point today in his blog post ‘the real web2.0’.

At Etech 2007, Marc Hedlund and Brad Greenlee gave a technical talk about privacy techniques for web applications. They both work for Wesabe, an online community where people can manage their money. Users can upload their bank account information, which is aggregated across the community. From the collected information, good tips and recommendations can be made to help people reach their financial goals.

However, the talk was mostly about low-level techniques for better privacy. Six points were dealt with:

  1. critical data local – for a user it can be rather frightening to upload all his account information into Wesabe. Not just because of ‘shameful expenses’; there are simply some things about your spending habits you don’t want the rest of the world to see. The solution Wesabe takes is to offer a local download client with filters. This tool downloads information from your bank, filters it and then uploads it to Wesabe. It may not be about real privacy (a user cannot really verify that the information is actually filtered), but it solves the trust issue. There are some downsides to this approach: users have to cross a threshold (downloading a client), the burden is now placed on the security of the user’s computer, and there is a serious risk of trojans.
  2. privacy wall – this is a clever idea. Normally, tables in a database are connected through keys: each row in one table has an identifier, and the other table holds a reference to that identifier. In the case of Wesabe, there are tables that connect the user (with his id) to some piece of information (referencing that id). However, it would be better to keep this connection secret. This is easily done by storing a cryptographic hash of the reference to an id instead of the id itself. This way, without some sort of password the connection cannot be made (a minimal sketch of this idea follows after the list). Again, there are some problems with this approach: the biggest is when a user forgets his password. In that scenario, it takes some more effort to get all the information back. (Read Brad’s comment on this writeup.) For a more in-depth explanation of this idea, read Brad’s blog post on the subject.
  3. partitioning – in a way, this concept is related to the previous one. It is always possible that a system gets compromised by people with bad intentions. When this happens, the actual damage should be kept to a minimum. What Wesabe does is partition the databases in such a way that different kinds of data about the same user are stored in different places. For example, the membership and account databases can be kept apart, so that a security breach stays compartmentalised. This compartmentalisation is even better when the databases are stored on different systems (not only physically, but also different OSes, database systems, etc.).
  4. data fuzzing and log scrubbing – when building a web application with modern tools, a lot of debugging and logging is done automatically by the framework (for example in ruby on rails or django). This poses a serious threat, as these logs often contain sensitive information. Not just explicitly: timestamps and IP addresses might also be traced back to certain users or other information. When designing and building such a system, logs and debug information should be handled very carefully. Wesabe made a point of scrubbing the logs meticulously and has a retention policy for logs. Error messages, which are normally sent around to developers, are now stored on disk and only a link is sent; when the error is dealt with, the log is immediately deleted. However, it remains a challenge to fix all possible holes (for example, backups of logs also pose problems).
  5. voting algorithms – Wesabe relies on the community to build up knowledge about account information. For example, codes for bank account numbers are hard to read. When a user changes such a number into a sensible name, this might be interesting for other users as well. Again, this might be a privacy problem: not all users should see the name someone gives to an account number. This is fixed by a voting algorithm, much like the one Google uses to classify pictures: if a certain number of people classify a picture as being a cat, then it is probably a cat. This way, only common knowledge becomes public, without introducing privacy problems (a small sketch of this follows after the list as well).
  6. miscellaneous – furthermore, there were some best practices. Of course, one should always hash passwords in the database. Database IDs should be randomized instead of sequential (even though sequential IDs are often the default in database systems). Finally, the company or website should have a policy describing how privacy-sensitive information is dealt with.
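
To make point 2 a bit more concrete, here is a minimal sketch of the privacy wall idea (my own illustration; Wesabe’s actual scheme is explained in Brad’s post): the account table references the user only through a hash that mixes in a secret derived from the user’s password, so the join can only be recomputed while the user is logged in.

```python
import hashlib

def link_key(user_id: int, user_secret: str) -> str:
    """One-way key linking a user to a row; without the secret the join can't be made."""
    return hashlib.sha256(f"{user_secret}:{user_id}".encode()).hexdigest()

# users table: knows who the user is, but not which accounts are theirs
users = {1: {"email": "alice@example.com"}}

# accounts table: rows reference the user only through the hashed key
secret = "derived-from-alices-password"   # assumption: derived when the user logs in
accounts = {link_key(1, secret): {"balance": 1234.56}}

# While Alice is logged in we can recompute the key and find her account...
print(accounts[link_key(1, secret)])
# ...but someone who dumps both tables cannot tell which account belongs to user 1.
```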

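And a similarly small sketch of the voting idea from point 5 (the threshold and data layout are assumptions of my own): a name suggested by users for a cryptic code only becomes public once enough independent users have suggested the same thing.

```python
from collections import Counter

THRESHOLD = 5  # assumed minimum number of independent users agreeing

# names different users gave to the same cryptic code
suggestions = {
    "AMZN MKTP US*1X2Y3": ["Amazon", "Amazon", "amazon", "Amazon", "Amazon", "books"],
}

def public_name(code: str):
    """Return a name only if it is 'common knowledge', never a single user's label."""
    counts = Counter(name.lower() for name in suggestions.get(code, []))
    name, votes = counts.most_common(1)[0] if counts else (None, 0)
    return name if votes >= THRESHOLD else None

print(public_name("AMZN MKTP US*1X2Y3"))  # 'amazon' — seen often enough to publish
```
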
3D printing

5 April, 2007

Although I missed part of this session, it turned out to be very interesting. Forrest Higgs presented his work on a self-replicating 3D printer. His goal seems very bold, but actually feasible: build a 3D prototyping machine for less than $500. It should be able to replicate most of its own parts.

During the talk it turned out that he actually wants such a device to be in the hands of youngsters (his proverbial 12-year-old), so that they can make their own stuff.

Most of what he showed dealt with his own endeavour: the Tommelise, a spinoff of the RepRap project. Basically, the printer is a hot glue gun with an xyz mechanism to position itself to ‘print’ objects. That is the very short version. Of course it does not print glue, but extrudes different kinds of material from a continuous filament. The materials could be almost anything, like ceramics, plastics or even metals. So far it has printed some plastic parts, and experiments are under way to also print electronics.

Furthermore, this printer will be able to make other machines as well. This way a whole ‘production’ plant can be printed. First the plant can be scaled up by printing more 3D printers. Then machines can be printed that do specialised things, like printing cogs. If all parts for a certain product can be printed, a pick-and-place machine can be printed to integrate all the parts.

Very cool stuff indeed. In the future it could happen that we do not buy products in a store, but just go shopping for the best filaments to print our own things.

It has been called the invention that will bring down global capitalism, start a second industrial revolution and save the environment – and it might just put Santa out of a job too. (Guardian, November 25, 2006)

Still having Etech 2007 on my mind. One of the themes this year was clearly the future of manufacturing. One session about this theme took place in the O’Reilly Radar Briefing.

A lot of people are starting to hack ‘stuff’. Dale Dougherty is the editor and publisher of Make and was part of the O’Reilly Radar Briefing. I cannot find clear statistics on how many people actually read the magazine, but here are some demographics. A countertrend at the moment is that ‘working with your hands’ is becoming unpopular in the American school system.

Brian Warshawsky worked at Apple on the iPod, but is now actively working on the $100 laptop (now called the One Laptop per Child (OLPC) project). The company he works for, Potenco, has developed the portable power generator for the OLPC. This work has changed radically because of the way they do prototyping nowadays: it has become possible to draw up a product and get a working prototype within a week, either through 3D printing or through subcontracting in Eastern Asia (read: China).

John Hagel gave some insights into how R&D and manufacturing are changing in China. This is most visible in ‘creation nets‘ (pdf link) in the motorcycle industry in Chongqing (just a tiny Chinese city of about 32 million people). One organizer (in this case a motorcycle manufacturer) has many parties (hundreds or thousands) working on components for a new motorcycle. Each participant has to compete with many others in the network for his ideas (and parts) to get into the final product. In the end the product is highly modularized and, in a way, standardised (per release). The key results of this way of doing research and development are:

  • The participants in the network keep pushing the envelope. In order to get business in the network, each participant has to compete to get a deal with the integrator ‘on top’. This way, innovation occurs rapidly.
  • In order to become a successful player, the delivered components have to be highly reliable. The outcome of the ‘game’ is that the bike as a whole is very reliable. This is a must, because the bikes’ owners do not have the money (or time) to go back to the garage every week.
  • Together, the network learns rapidly. There cannot be many secrets in the network, so everyone benefits from the lessons other competitors learned.

Of course, this network approach to R&D and manufacturing is not totally new; it has happened in the west as well. However, the sheer scale of it makes a difference. This actually turns a weakness of China (dealing with IPR) into a strength: fixing some standard interfaces makes it possible to innovate within a large network.

Today just a few pictures from the Make: Fest at O’Reilly’s Emerging Technology conference 2007 (etech2007). The ‘fest’ was not that big, but there were some cool projects. Especially the 3D projects were awesome, all hacked together with fairly limited resources and ordinary parts. Great stuff.

Here are also some pictures from the hotels I’ve been at this week. The Hyatt for the conference and the Marriott as my home base.


OK, the title of this talk was a bit on the vague side, but it did turn out to be very inspiring. The bottom line was how to keep complex systems usable.

Charles Armstrong took the stage for the first part. His point was about making ‘sociomimetic’ systems easier to use. Sociomimetic stands for mirroring social behavioural patterns in electronic information systems. What it boils down to: the underlying system becomes complex and not intuitively understandable, but there are still people wanting to use it. How do you give the users some guidance for their intuition about how the system works?

Basically there are three factors to do this:

  1. grokability – make it easy for the users to understand what something is. Even a ‘caveman’ should be able to understand what an axe could do.
  2. predictability – once the tool is understood, is it predictable in its function? If you understand what a computer can do, it is still not predictable how to accomplish certain tasks. This makes for a very steep learning curve.
  3. relevancy – having understood, being able to predict how a tool works, is it actually useful?

But how could these factors be addressed in complex systems?

  1. To get better grokability, something should be as simple as possible. Armstrong gave the example of two London subway maps. The newer maps (as we know them today) don’t really convey the real world, but they are really good for understanding how the system is laid out, and they make it possible to form a mental model.
  2. As an example of how to improve predictability, the Eurofighter Typhoon aircraft was given. Without any automatic adjustments this type of plane would be almost impossible to fly, because of its ‘aerodynamic instability in the subsonic region‘. Beneath the standard aircraft controls sits a very advanced system that keeps the plane predictable. Again, this complexity is not exposed to the pilot, resulting in a predictable experience.
  3. Although politics in a democracy is very complex, politicians seem to do a good job of conveying the relevancy of what they do. How do they do it? By simplifying their messages to the bare minimum. They persuade voters with ‘in-your-face usefulness’ like better education and lower taxes.

Mike Stenhouse took over the talk and showed a lot of examples of systems that were inherently complex, but easy to use. He started with the example of the power of photoshop filters: very complex, not intuitive. However, in the 1990s there were KPT filters, a break from the norm, which made filters very intuitive. Other examples were: the hidden complexity of 3D modelling in Bryce, tag clouds in last.fm that include authority of the tagger (but nobody should notice this complexity), google search (inherently complex, but just one text box to query), flickr.com interestingness, etc, etc. Bottom line: it IS possible to address the above factors in complex systems design.

Some hints and tips were shared at the end. It’s good to use metaphors: in their product (from Trampoline Systems) they use the radar metaphor, where a slider changes the range of the radar, resulting in less or more email about certain topics. Also, an expert mode is not always a good idea, because it might intimidate the average user. A better way around this is to gradually offer more functionality to people who seem to be experts.

Of course, this presentation sparked some questions from the audience. Someone asked if it would not weaken the tool if it was too simple. Another audience member actually answered with the example of the ‘choke’ in old cars: nobody misses that.
