Bugs of collective intelligence: why aren't the best ideas selected?

One of my early posts was about idea killers and idea helpers. Idea killers are common phrases that kill creativity at its origin. Recently, Matt May wrote a piece called Mind of the Innovator: Taming the Traps of Traditional Thinking, in which he identified the ‘Seven Sins of Solutions,’ routine patterns of thinking that prevent people from being creative. He suggests that idea stifling is the worst and most destructive of these sins, and he illustrates the point with a nice experiment:

At the off-site, there were about 75 people of varying degrees of seniority, ranging from field supervisors to senior execs. I gave the assignment, one of those group priority exercises whereby you rank a list of items individually and then as a group and compare (sort of a “wisdom of crowds” exercise to show that “we” is smarter than “me”). This specific exercise required you to rank 25 items with which you’ve crashed on the moon according to how important they were to your survival. NASA had compiled the correct ranking, so there was a clear answer.

I did the exercise with a twist. At each table I put a ringer. I gave the lowest-ranking person the answer. It was their job to convince the command-and-control types that they knew the right answer.

During the group exercise, NOT A SINGLE CORRECT ANSWER GOT HEARD.

After debriefing the exercise in the regular way, I had each person to whom I had given the correct answer stand up. I announced that these individuals had offered the right answer, but their ideas had been stifled, mostly due to the stature and seniority of their source, or the lack thereof.

I wish I had a camera to catch the red-faced managers.

This is a good example of a repeated failure of collective intelligence. Matt suggests that in collective problem-solving workshops, groups discuss the right answer and yet commonly propose the wrong one as the chosen solution, “because members second-guess, stifle, dismiss and even distrust their own genius.”

Collective problem solving involves iterated innovation and selection of solutions. In his experiment, Matt decoupled the two by ensuring that the right solution was injected into the pool of solutions the group considered, and yet he repeatedly observed that the group rejected it. Apparently, the group's evaluation of ideas was strongly biased towards accepting the inferior ideas of senior members at the expense of other ideas, i.e. the senior status of an idea's source outweighed the idea's intrinsic merits. This is an example of subjective selection bias. Another common type of bias is temporal bias: for example, solutions proposed earlier may be preferred to solutions proposed later (or the other way around).

We can see these sources of bias at work in many “collective intelligence” web 2.0 platforms, where people are supposed to select the fittest among several versions of content based on its merit. In reality, the selection is heavily biased by factors that have little to do with the quality of the content. Yahoo Answers, for example, presents the answers for voting in the order they were received. The earliest answers end up at the top of the list and receive a disproportionately high number of votes regardless of their merit. Wikipedia exhibits the opposite kind of temporal bias, where the last edit always wins, at least until someone reverts it. The revert decision itself is heavily biased by the status of the person who made the edit (e.g. anonymous/registered/admin). The majority of web 2.0 sites make information about the content's author readily available. This results in selection bias towards content contributed by senior members of the community, just as in Matt's experiment. It is the mechanism that ensures any submission by Kevin Rose ends up on the front page of Digg, while the contribution of an ordinary Digg user is unlikely to get there regardless of its merit.

When I was designing the 3form website 10 years ago, my primary goal was to select the fittest solutions regardless of who submitted them, i.e. to reduce the decision error and the subjective/temporal biases that contribute to it. As a result, the 3form interface doesn't show the names of the authors of the submissions being evaluated (as in a blind peer review). The presentation order of solutions is randomized to reduce temporal bias, so every solution has an equal chance of being placed at the top of the list for evaluation.
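The idea can be captured in a few lines of code. The sketch below is not 3form's actual implementation, just a minimal illustration of the two debiasing steps: strip the author information before evaluators see a submission, and shuffle the presentation order independently for each evaluator. The Submission fields and function names are hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class Submission:
    solution_id: int     # hypothetical identifier
    author: str          # known to the system, hidden from evaluators
    submitted_at: float  # submission timestamp; ignored when ordering for review
    text: str

def prepare_for_review(submissions, seed=None):
    """Return an anonymized, randomly ordered view of the submissions.

    Hiding the author counters subjective (status) bias; shuffling the
    presentation order counters temporal (first/last submitted) bias.
    """
    rng = random.Random(seed)
    anonymized = [{"solution_id": s.solution_id, "text": s.text} for s in submissions]
    rng.shuffle(anonymized)  # every solution has an equal chance of any position
    return anonymized

# Usage: each evaluator gets an independently shuffled, author-free list.
pool = [
    Submission(1, "senior_exec", 1000.0, "Ration the oxygen first."),
    Submission(2, "field_supervisor", 1001.0, "Signal the orbiter with the mirror."),
]
for evaluator in ("alice", "bob"):
    print(evaluator, prepare_for_review(pool))
```

Averaged over many evaluators, the randomized order gives every solution the same expected exposure, so whatever positional bias individual evaluators have no longer systematically favors any particular submission.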

I found Matt May's manifesto thanks to Guy Kawasaki's post The Seven Sins of Solutions.
