Archive for the ‘creativity’ Category

The new Google Q&A service expands to China

Monday, August 20th, 2007

Google’s free knowledge market service was initially available only in Russia (see my short review in the post Google Answers is reborn in Russia). Now China has the service as well; Haochi Chen of Googlified has more details. I am going to post a detailed comparison of five knowledge markets soon, including those of Naver, Yahoo, and Google.

Bugs of collective intelligence: why aren’t the best ideas selected?

Monday, August 20th, 2007

One of my early posts was about idea killers and idea helpers. Idea killers are common phrases that kill creativity at its origin. Recently, Matt May wrote a piece called Mind of the Innovator: Taming the Traps of Traditional Thinking, in which he identified the ‘Seven Sins of Solutions,’ routine patterns of thinking that prevent people from being creative. He singles out idea stifling as the worst and most destructive sin, and illustrates the point with a nice experiment:

At the off-site, there were about 75 people of varying degrees of seniority, ranging from field supervisors to senior execs. I gave the assignment, one of those group priority exercises whereby you rank a list of items individually and then as a group and compare (sort of a “wisdom of crowds” exercise to show that “we” is smarter than “me”). This specific exercise required you to rank 25 items with which you’ve crashed on the moon in relation to how important they were to your survival. NASA had compiled the correct ranking, so there was a clear answer.

I did the exercise with a twist. At each table I put a ringer. I gave the lowest-ranking person the answer. It was their job to convince the command-control types they knew the right answer.

During the group exercise, NOT A SINGLE CORRECT ANSWER GOT HEARD.

After debriefing the exercise in the regular way, I had each person to whom I had given the correct answer stand up. I announced that these individuals had offered the right answer, but their ideas had been stifled, mostly due to their source and stature and seniority, or lack thereof.

I wish I had a camera to catch the red-faced managers.

This is a good example of a repeated failure of collective intelligence. Matt suggests that in collective problem-solving workshops, groups discuss the right answer and still commonly propose a wrong one as the chosen solution, “because members second-guess, stifle, dismiss and even distrust their own genius.”

Collective problem solving involves iterated innovation and selection of solutions. In his experiment, Matt decoupled the two by ensuring that the right solution was injected into the pool of solutions the group considered, and yet he repeatedly observed that the group rejected it. Apparently, the group evaluation of ideas was seriously biased towards accepting the inferior ideas of senior members at the expense of other ideas; that is, the senior status of the idea’s source outweighed the intrinsic merits of the idea. This is an example of subjective selection bias. Another common type is temporal bias: for example, solutions proposed earlier can be preferred to solutions proposed later (or the other way around).

We can see these sources of bias at work in many “collective intelligence” web 2.0 platforms, where people are supposed to select the fittest among several versions of content based on its merit. In reality, however, the selection is heavily biased by factors that have little to do with the quality of the content. Yahoo Answers, for example, presents answers for voting in the order they were received. The earliest answers end up at the top of the list and receive a disproportionately high number of votes regardless of their merit. Wikipedia exhibits the opposite kind of temporal bias, where the last edit always wins, at least until someone reverts it. The revert decision itself is heavily biased by the status of the person who made the edit (e.g. anonymous/registered/admin). The majority of web 2.0 sites make information about the content’s author readily available. This results in a selection bias towards content contributed by senior members of the community, just as in Matt’s experiment. This is the mechanism that ensures any submission by Kevin Rose ends up on the front page of Digg, while the contribution of an ordinary Digg user is unlikely to get there regardless of its merit.

When I was designing the 3form website 10 years ago, my primary goal was to select the fittest solutions regardless of who submitted them, i.e. to reduce the decision error and the subjective/temporal biases that contribute to it. As a result, the 3form interface does not show the names of the authors of submissions under evaluation (as in a blind peer review). The presentation order of solutions is randomized to reduce temporal bias, so every solution has an equal chance of being placed at the top of the list for evaluation.
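The idea behind this design can be sketched in a few lines of code. This is only an illustration, not 3form’s actual implementation, and the field names (`id`, `author`, `text`) are hypothetical:

```python
import random

def prepare_for_evaluation(submissions, seed=None):
    """Return anonymized copies of submissions in randomized order.

    Dropping the 'author' field reduces subjective (status) bias;
    shuffling the order reduces temporal (position) bias, so every
    submission has an equal chance of appearing first.
    """
    rng = random.Random(seed)
    anonymized = [{"id": s["id"], "text": s["text"]} for s in submissions]
    rng.shuffle(anonymized)
    return anonymized

submissions = [
    {"id": 1, "author": "admin", "text": "first idea"},
    {"id": 2, "author": "newbie", "text": "second idea"},
]
for s in prepare_for_evaluation(submissions, seed=42):
    print(s)  # no 'author' key; order is randomized
```

The `id` is kept internally so that votes can still be tallied per submission after evaluation, even though evaluators never see who wrote what.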

I found Matt May’s manifesto thanks to Guy Kawasaki’s post The Seven Sins of Solutions.

Scientific knowledge markets: the case of InnoCentive

Sunday, May 27th, 2007

Today I came across the HBS working paper “The Value of Openness in Scientific Problem Solving” by Karim Lakhani, Lars Jeppesen, Peter Lohse and Jill Panetta (a link to the 58-page PDF is here). The paper studies InnoCentive, a knowledge market similar to 3form that corporations use to tackle research problems their own R&D labs have failed to solve.

InnoCentive was founded by Eli Lilly & Company in 2001 and organizes its distributed problem-solving process much like 3form, except that it does not broadcast the solutions it receives, keeping them private to the corporation that posted the respective problem. As a result, the innovation process at InnoCentive, while distributed, is not open: solvers cannot modify or recombine solutions proposed earlier, or learn from them, as they do at 3form. The working paper shows, however, that sharing the problems by itself has many advantages over the traditional corporate practice of keeping them closed.

We show that disclosure of problem information to a large group of outside solvers is an effective means of solving scientific problems. The approach solved one-third of a sample of problems that large and well-known R & D-intensive firms had been unsuccessful in solving internally.

There are many observations in this paper that might be relevant to 3form as well and are likely to interest members of the 3form community.

Problem-solving success was found to be associated with the ability to attract specialized solvers with a range of diverse scientific interests. Furthermore, successful solvers solved problems at the boundary or outside of their fields of expertise, indicating a transfer of knowledge from one field to others.

Here are the results I found the most interesting:

  • the diversity of interests across solvers correlated positively with solvability; however, the diversity of interests per solver had a negative correlation
  • the further the problem was from a solver’s field of expertise, the more likely they were to solve it; there was a 10% increase in the probability of winning if the problem was completely outside their field of expertise
  • the number of submissions is not a significant factor in solvability
  • very few solvers are repeat winners

The authors of the HBS paper draw an analogy to local and global search to explain the effectiveness of problem broadcasting. They suggest that each solver performs a local search, which implies that broadcasting the problem to outsiders makes the search global (“broadcast search” in the authors’ terminology). Indeed, if solvers have no access to the solutions of other solvers (the case at InnoCentive), all they can do is a local search (hill climbing). From a computational perspective, the InnoCentive problem-solving process is analogous to hill climbing with random restarts: each new solver performs a local search and returns a locally optimal solution; finally, the best of those locally optimal solutions determines the winner.
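The analogy can be made concrete with a minimal sketch (the objective function and parameters here are toy illustrations, not anything from the paper): each “solver” hill-climbs from its own random starting point, and the best local optimum wins.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=200, rng=random):
    """Local search: accept a random neighbor only if it improves f."""
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def broadcast_search(f, n_solvers=20, lo=-10.0, hi=10.0, seed=0):
    """Random-restart hill climbing: each solver starts from its own
    random point (its own 'expertise') and climbs to a local optimum;
    the best of those local optima determines the winner."""
    rng = random.Random(seed)
    results = [hill_climb(f, rng.uniform(lo, hi), rng=rng)
               for _ in range(n_solvers)]
    return max(results, key=f)

# A multimodal objective: many local optima trap any single solver.
f = lambda x: math.sin(3 * x) - 0.1 * x * x
best = broadcast_search(f)
print(best, f(best))
```

A single hill climber gets stuck on whichever local peak is nearest to its start; the restarts are what make the search effectively global, which mirrors why broadcasting a problem beyond one lab helps.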

How to discover the best people?

Saturday, January 6th, 2007

The New York Times published the article Google Answer to Filling Jobs Is an Algorithm (also available here). The article describes the new algorithmic approaches to people selection adopted by Google.

It is starting to ask job applicants to fill out an elaborate online survey that explores their attitudes, behavior, personality and biographical details going back to high school.

The questions range from the age when applicants first got excited about computers to whether they have ever tutored or ever established a nonprofit organization.

The answers are fed into a series of formulas created by Google’s mathematicians that calculate a score — from zero to 100 — meant to predict how well a person will fit into its chaotic and competitive culture.

I never applied for a job at Google, but I had some experience with their selection methods last summer. Google takes a proactive approach to hiring; in particular, they actively contact new Ph.D.s and invite them to phone interviews. Google recruiters found my resume on the web and invited me to three phone interviews, each about 30 minutes long. There were sessions of multiple-choice questions and a problem-solving session in which I was asked to write an algorithmic solution on a piece of paper and dictate the result back to the interviewer. I found that recruiting was not a strong area of Google, and their approach was far from innovative. I was puzzled that a company like Google couldn’t create a simple web application to administer those multiple-choice questions, or outsource the whole thing to a company that does it better (e.g. Brainbench). Now I see that Google is beginning to entertain the same thoughts, and maybe something will change:

“As we get bigger, we find it harder and harder to find enough people,” said Laszlo Bock, Google’s vice president for people operations. “With traditional hiring methods, we were worried we will overlook some of the best candidates.”

Last month, Haochi Chen and Christian Binderskagnaes discovered Google Online Assessments, which might be a new Google tool to assess people’s skills: “The purpose of this website is still something of a secret, but it’s going to be great, whatever it is.”

We will see how great Google’s new algorithmic approach to skill assessment turns out to be. It will certainly be more efficient, saving employee time and phone bills. But can it also be more effective? I don’t know the answer to this question. Multiple-choice questions still have a fundamental limitation: they don’t allow participants to manifest their creativity, because they leave no space for a creative solution. They test only the ability to judge.

Another point is well made in this reddit review:

You are creating a society within a society where you weed out undesirables using a simple algorithm. The problem is … whether creativity and innovation can rise out of homogeneity, even the type of homogeneity that Google is practicing.

Human innovation has evolutionary dynamics: cycles of change and selection. My research suggests that innovation and creativity are manifestations of an underlying evolutionary process. Here, diversity is crucial, as it is one of the main prerequisites of evolution. This is also supported by experimental research suggesting that diverse teams of ordinary individuals outperform homogeneous teams of elite individuals (Hong & Page, 2004). So, from an evolutionary point of view, the loss of diversity is quite dangerous. Google shares this problem with many top universities.

From another point of view (my research on social synthesis), diversity is just one way to increase the chances of achieving the complementarity of resources needed for synergetic exchange. For example, if the backgrounds of two people are too similar, they will have few misunderstandings, but also little chance to benefit from mutual learning. On the other hand, if their interests are complementary, they have a great opportunity to learn from each other, provided they can overcome their misunderstandings.

See also my previous post, which suggests another approach to employee selection, one that makes it possible to identify creative solutions and people.


Lu Hong and Scott E. Page (2004) Groups of diverse problem solvers can outperform groups of high-ability problem solvers, Proceedings of the National Academy of Sciences, 101(46), 16385-16389 [link]

What is social search?

Thursday, December 7th, 2006

A panel on social search at SES Chicago tried yesterday to define social search more precisely. Chris Sherman suggested the following definition: social search consists of “wayfinding tools informed by human judgment.” Further discussion of this definition can be found here and here.

I am an evolutionary computation researcher, and I see a striking resemblance between this new definition and the definition of an interactive genetic algorithm. An interactive genetic algorithm (IGA) is defined as a genetic algorithm informed by human judgement. A genetic algorithm is itself a search procedure inspired by the Darwinian model of evolution, so the tight connection between social search as defined above and IGA is apparent. The similarities don’t end here, however. The situation with defining social search mirrors the earlier one with defining IGA. The definition of IGA was too narrow to encompass other kinds of interaction in addition to human judgement. One important part that was missing is the use of human creativity, whether in addition to human judgement or without it. The new class of algorithms was therefore called human-based genetic algorithms (HBGA), even though they are even more heavily interactive than IGAs. If the definition of IGA had not limited human input to judgement, there would have been no need to create a new term.
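To make the distinction concrete, here is a minimal sketch of an interactive genetic algorithm, with the human judgement step stubbed out by a callback. This is a toy illustration under my own assumptions, not any particular system’s implementation:

```python
import random

def interactive_ga(pop, judge, mutate, generations=10):
    """A genetic algorithm 'informed by human judgment': the fitness
    function `judge` stands in for a human ranking candidates.
    In an HBGA, humans would also supply `mutate` and recombination,
    contributing creativity in addition to judgement."""
    for _ in range(generations):
        # Selection: the human (here, `judge`) keeps the better half.
        pop.sort(key=judge, reverse=True)
        survivors = pop[: len(pop) // 2]
        # Variation: offspring are mutated copies of the survivors.
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=judge)

# Toy run: evolve a number toward 42; `judge` simulates human ranking.
judge = lambda x: -abs(x - 42)
mutate = lambda x: x + random.uniform(-5, 5)
pop = [random.uniform(0, 100) for _ in range(10)]
print(interactive_ga(pop, judge, mutate, generations=30))
```

In an IGA only `judge` is human; in an HBGA the variation operators (`mutate`, recombination) are human as well, which is exactly the creativity the narrow IGA definition left out.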

Similar things are now happening with social search: some of the examples discussed are not well captured by the proposed definition. Yahoo Answers, for example, already uses both human judgement and human creativity and is a typical HBGA. This is why I would like to propose a different definition of social search:

Social search is a search algorithm where some functions are outsourced to humans.

This definition positions social search in an abstraction hierarchy between human-based computation and human-based evolutionary computation. On the one hand, human-based computation (“algorithms outsourcing some functions to humans”) may be used for purposes other than search. On the other hand, there may be ways of doing social search other than those using evolutionary models.

Idea killers and idea helpers

Saturday, December 2nd, 2006

Scott Berkun has two great posts in his blog, on idea killers and idea helpers. Idea killers are statements that stop ideas in their tracks. Idea helpers “act as idea fertilizer, helping them to grow, find homes, make friends, and grow from ideas into solutions.” Here is my selection from each category:

Idea Killers

  • Corporate won’t support it.
  • We can never sell this idea to the client
  • Not an interesting problem
  • Don’t you have enough to do?
  • We tried that already
  • Don’t worry about that - we have smart people working on it
  • I am sure someone has already thought of that
  • If it worked they would have implemented it
  • Isn’t Google already working on something like that?
  • We don’t have time
  • Do it in your spare time; if it is successful, maybe we’ll use it
  • If we had more time, I’d say go for it.
  • Let’s take that offline
  • I’ll look into it…
  • see more

Idea helpers

  • Why not?
  • How can I help you?
  • Good, let’s make a prototype and see if it holds together.
  • What should we change to help make this happen?
  • Go for it!
  • see more

The number of items in each category reflects my recent experience. Idea killers were more abundant and came from a variety of sources. Idea helpers were quite rare, coming mostly from my relatives and friends. What is your experience?