Mapping the social web with PostRank

August 23rd, 2010

This weekend I had a chance to play with PostRank data to see what it reveals about user engagement patterns across the major social media sites. Since this might be interesting to people studying human-based computation, I decided to share my preliminary results here.

I used the PostRank metrics API to retrieve data for a set of URLs. For a single web page, it provides counts of individual user interactions from all the social sites PostRank monitors. The metrics update in real time as new user activity occurs and reflect the engagement the page has accumulated so far. If you haven’t used PostRank metrics before, the easiest way to start is their new Google Reader extension, which is pretty nice.

The data for an individual web page comes in the following form:

"de3d4d72ebac1e886232f4ab27bd7b46": {
"brightkite": 2.0,
"reddit_votes": 1880.0,
"delicious": 304.0,
"reddit_comments": 369.0,
"views": 9.0,
"identica": 5.0,
"gr_shared": 13.0,
"google": 40.0,
"fb_post": 5.0,
"diigo": 2.0,
"clicks": 62.0,
"blip": 1.0,
"digg": 103.0,
"buzz": 5.0,
"bookmarks": 4.0,
"twitter": 988.0,
"jaiku": 2.0,
"ff_comments": 2.0
}

Different social media sites implement different human-based computation techniques, so their activity metrics are, in general, not comparable to each other. We can compare the same metric across different web pages, but that doesn’t tell us much about the site/algorithm that computed the metric. One way to analyse this data is to look into pairwise correlations between the metrics across multiple sites. A pairwise correlation may be indicative of some interaction among the metrics. It can be overlap in the user base (e.g. a user posts the same pages to both diigo and delicious), common interests among users of different sites (users of each site share to their respective sites independently because of similar preferences), or some other factors.

I took a sample of 2169 URLs pulled from about 200 feeds in my Google Reader. Those feeds cover a pretty diverse set of topics, including science, engineering, entrepreneurship, business, management, psychology, legal, photography, music, humor, lifestyle, etc. I pulled the PostRank metrics for each of those URLs into a user engagement matrix. Each row of the matrix holds the metrics for one URL, and each column holds the values of a single engagement metric (e.g. number of posts on twitter) across all 2169 URLs. I computed the Pearson correlation between every pair of columns. This resulted in the matrix visualized below:
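The computation itself is a one-liner once the engagement matrix is built. A minimal sketch with a tiny made-up matrix (the real one had 2169 rows and one column per PostRank metric; the metric names and numbers below are illustrative only):

```python
import numpy as np

# Hypothetical engagement matrix: one row per URL, one column per metric.
# All values here are made up for illustration.
metric_names = ["twitter", "delicious", "digg", "reddit_votes"]
engagement = np.array([
    [988.0, 304.0, 103.0, 1880.0],
    [12.0,   40.0,   0.0,    3.0],
    [250.0,  80.0,  15.0,  120.0],
    [5.0,     2.0,   1.0,    0.0],
])

# np.corrcoef with rowvar=False computes the Pearson correlation
# between every pair of columns (i.e. between metrics).
corr = np.corrcoef(engagement, rowvar=False)
print(corr.shape)  # (4, 4)
```

The resulting matrix is symmetric with ones on the diagonal, which is exactly what the heatmap below visualizes.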

Social media correlation matrix

We can see that the Hacker News score and Hacker News comments correlate highly with each other (a correlation of 0.9 suggests that one is nearly proportional to the other). Very high correlations between different sites (the orange spots in the matrix), however, are less expected. A likely reason for them is the availability of tools that let users export their activity from one site into another. This might be responsible for the 0.8 correlation between magnolia and delicious and the 0.6 correlation between diigo and delicious. Such import/export ability is enabled by the sites’ APIs, so we can expect the sum of correlations in each row to be indicative of the quality and usage of a site’s APIs for data portability. Here are the top 10 social sites according to this metric; the top three are hardly surprising:

twitter 11.894999
fb_post 11.187575
ff_comments 10.898951
buzz 9.911882
identica 9.897181
ff_likes 9.319641
hn_comments 8.370196
blip 8.366225
diigo 8.334757
hn_score 8.180564
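The ranking above is just a row sum over the correlation matrix. A minimal sketch with a toy 3×3 matrix (names and values made up):

```python
import numpy as np

# Toy correlation matrix over three hypothetical metrics; in the post this
# would be the full matrix computed from the PostRank engagement data.
names = ["twitter", "delicious", "digg"]
corr = np.array([
    [1.0, 0.4, 0.6],
    [0.4, 1.0, 0.2],
    [0.6, 0.2, 1.0],
])

# Sum each row (total correlation of a metric with all the others, including
# itself) and sort descending to get the "data portability" ranking.
row_sums = corr.sum(axis=1)
ranking = sorted(zip(names, row_sums), key=lambda pair: -pair[1])
for name, total in ranking:
    print(name, round(float(total), 3))
```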

Finally, to better visualize the relationships among these sites/metrics, I used MDS (multi-dimensional scaling), a technique for mapping multi-dimensional points onto a plane so that the distances between them on the plane best approximate the distances in the original multi-dimensional space. I used 1 - correlation as the input distance for MDS. This way, sites showing similar user engagement patterns end up close to each other.
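The mapping can be sketched with classical MDS in a few lines of numpy, using 1 - correlation as the dissimilarity (the correlation matrix below is a toy with made-up values; a library implementation such as scikit-learn's MDS would work equally well):

```python
import numpy as np

# Toy correlation matrix for three hypothetical metrics.
corr = np.array([
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.2],
    [0.1, 0.2, 1.0],
])
# Dissimilarity: 1 - correlation, so similar metrics end up close together.
D = 1.0 - corr

# Classical MDS: double-center the squared distances, then take the top two
# eigenvectors scaled by the square roots of their eigenvalues as coordinates.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]        # indices of two largest eigenvalues
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
print(coords.shape)  # (3, 2)
```

Plotting `coords` with the metric names as labels reproduces the kind of map shown below: the two highly correlated metrics land near each other, the weakly correlated one far away.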

One use of this map could be finding alternative sites to explore that have a like-minded community of people. For example, if you are using delicious to share your bookmarks, you might consider exploring its nearest neighbors: diigo, tumblr, magnolia, and hatena.

Social media map

Unfortunately, not every social media site allows access to its user engagement data via activity streams. I hope more sites do so in the near future, so this map can become more complete. The landscape of social media sites is changing fast and many new sites appear. Some of these new sites might not be getting the attention they deserve, and this kind of data-driven social media mapping may help users find the sites that offer them the best experience.

Comparison of free knowledge markets

September 16th, 2007

A knowledge market is a distributed social mechanism that helps people identify and satisfy their demand for knowledge. It can facilitate locating existing knowledge resources, similarly to what search engines do (the name "social search" refers to this). It can also stimulate the creation of new knowledge resources to satisfy the demand (something search engines can’t do). The goal of this post is to compare several free knowledge markets, created by 3form, Naver, Yahoo, Google, and the leading Russian Q&A service, to identify their common elements and differences. All these sites organize the collaborative problem-solving activity of a large number of participants, providing means and incentives for participants to contribute their intelligent abilities to the distributed problem-solving process. MIT Tech Review published an attempt at a comparison of Q&A sites by Wade Roush. Unfortunately, that comparison was superficial and of low quality (see the readers’ comments). Here is my attempt at such a comparison. Its focus is on free knowledge markets, i.e. those that don’t charge fees for participation and allow participants to build on top of the knowledge resources contributed by others.


The Free Knowledge Exchange project was launched in the summer of 1998 in Russia, and its international version became available in February 1999. The project allows any participant to submit problems and brings those problems to the attention of other people (the 3form community) to collect hints and solutions. The credit assignment system of the website tracks the contribution of each individual participant to solving problems. It rewards the actions of the participants as well as the quality of the contributed content. In exchange for contributing to solving the problems of others, the website returns to the participant a proportional share of the collective attention directed towards solving the participant’s own problems. 3form uses the method known as human-based genetic algorithm (published in 2000).

Naver Knowledge iN is a Korean free knowledge market service opened in 2002 by NHN Corp. The site is based on the same idea as 3form, though it implements it somewhat differently. This service made Naver the biggest internet destination in Korea and was a major factor allowing Naver to beat Google and Yahoo in the Korean search market. It took a couple of years for Yahoo and Google to learn their Korean lesson. Yahoo launched Yahoo Answers, now the biggest free knowledge market worldwide, in December 2005. The biggest similar service in Russia is a Russian knowledge market inspired by Yahoo Answers. Google Q&A is the newest service, currently being tested in Russia and China by Google. Google’s service is likely to be inspired by prior work, though I am not aware of Google acknowledging this.

Prior to 3form, two ways of collective problem solving were available on the internet. On one hand, there were free knowledge sharing forums such as Usenet and IRC, where users could ask technical questions and get help from volunteer experts whose participation was neither tracked nor rewarded in any way. On the other hand, there were expert advice services designed around a fee-based Q&A model, where questions are answered for a fee by a limited number of paid experts. Experts Exchange (EE) made a step towards becoming a free service by allowing anyone to answer questions. It introduced “answer points” to identify experts among its volunteer answerers. The answer points were awarded based on user evaluation of the answers: the author of the problem could allocate the total amount of answer points among all the people who contributed useful ideas toward a solution. Despite being innovative on the expert side, the service remained fee-based on the user side (even though users were getting some credit in “question points” independent of their contribution, allowing them to ask a limited number of questions). In other words, while the question points had monetary value, answer points had no such value (in particular, they couldn’t be counted towards “question points”). Once the limited amount of question points was exhausted, users had to start buying question points to continue using the system.

Korean Naver played a key role in popularizing the concept of the knowledge market. Naver, however, was not the first knowledge market in Korea. DBDiC offered an analogous service as early as October 2000. DBDiC presumably developed its technique independently from 3form, but shares a similar architecture, including the general structure and credit assignment system. DBDiC’s technique differs from 3form’s in two key ways: (1) the identity of the author of a solution biases the evaluation of the solution, i.e. the high status of the author can lead to accepting an inferior answer as the best despite the presence of a better answer contributed by a person with lower status; (2) answers positioned earlier in the list are more likely to be read, chosen, and evaluated, i.e. a great answer later in the list can easily be overlooked. These differences resulted in subjective and position-specific biases in the solution evaluation system (see also my previous post, Bugs of collective intelligence: why the best ideas aren’t selected?). I could speculate that if the DBDiC designers had been more familiar with the 3form service, they could have avoided those undesirable biases, which later propagated into every knowledge market platform created since.

Free knowledge market implementations are now abundant. Wade Roush lists six recently created services. A ReadWriteWeb post lists 29 knowledge market services. Neither of these lists is complete, and it may not be feasible to create a complete list, as new similar services appear almost every day. However, most new services are similar to one of those reviewed here and are likely to be inspired by them.

Incentive systems

Knowledge markets differ from knowledge sharing websites by implementing knowledge evaluation and incentive systems that encourage participants to help each other. The incentive system of a typical free knowledge market is based on rewarding the actions of its participants as well as rewarding the quality of their contributions. The measure of quality is normally based on user evaluation. An alternative to this would be computational evaluation, e.g. one based on the frequency of occurrence of different answers collected independently (see Luis von Ahn’s work exploring this model in specialized applications like image labeling). In the case of a typical knowledge market, frequency counting is problematic due to the much larger search space.
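For narrow answer spaces, the frequency-based evaluation mentioned above is trivial to sketch: collect answers independently and accept the most frequent one as the consensus. The data below is made up for illustration (this is the agreement idea behind image-labeling games, not any service's actual code):

```python
from collections import Counter

# Hypothetical answers to the same question, collected independently
# from different users (e.g. labels for the same image).
answers = ["paris", "paris", "london", "paris", "rome", "london"]

# The modal answer is taken as the consensus evaluation.
counts = Counter(answers)
best, freq = counts.most_common(1)[0]
print(best, freq)  # paris 3
```

With open-ended questions the answer space is huge and near-duplicate answers rarely match verbatim, which is exactly why typical knowledge markets fall back on human evaluation.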

The action rewards encourage particular actions reinforcing participant behavior that is beneficial for the system as a whole, for example, it can be as simple as visiting the website. The following table summarizes action rewards offered by different knowledge markets:

Action                               3form     Naver    Yahoo          (Russian service)  Google
Join                                 1         100      100                               100
Visit                                3         1        1                                 5
Submit question (anonymous)          0         -20      N/A            N/A                N/A
Submit question (pseudonymous)       N/A       1        -5             -5                 -B
Submit expert question               N/A       -50*E    N/A            N/A                N/A
Submit answer                        0.01      2        2              f(K)               2
Select best answer to your question                     3                                 3
Evaluate answer                      1         1                       1 [S>=250]         1
Evaluate question                    1         1

In this table, S refers to the current score of the participant. For example, the Russian service will not reward new participants for evaluations until they reach a score of 250 points. This seems to be an effective way to guard the system from abuse. With this rule in place, it becomes hard to manipulate the values of answers by creating multiple accounts and submitting votes from them: it is no longer enough to create multiple accounts, one also has to earn 250 points of credit for each, which protects the system from bots better than any CAPTCHA would. The same service also rewards non-peer-reviewed answers differently, depending on the prior performance of their author. For this purpose its designers introduced the “Energy Conversion Coefficient,” which is simply the share of best answers among the total number of answers the person has contributed. This seems to be a good incentive to provide quality answers, and so far it is a unique feature of this service. B stands for bonus, a number from 1 to 100 set by the author of a question; the bonus can reflect the difficulty or importance of a problem for its author, and supposedly a high bonus motivates people to pay more attention to the question. E is the number of experts in Naver’s expert questions and can be 1 or 2 (expert questions are not available in the other services).
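To make these rules concrete, here is an illustrative sketch; the function names and exact formulas are my assumptions for illustration, not any service's actual code:

```python
# Illustrative sketch of the incentive rules described above.

def evaluation_reward(score: int, threshold: int = 250) -> int:
    """Reward 1 point for an evaluation only once the user's score
    passes the anti-abuse threshold (S >= 250 in the table)."""
    return 1 if score >= threshold else 0

def energy_conversion_coefficient(best_answers: int, total_answers: int) -> float:
    """Share of a user's answers that were selected as best."""
    return best_answers / total_answers if total_answers else 0.0

def answer_reward(base: float, best_answers: int, total_answers: int) -> float:
    """Scale the base answer reward by the user's track record
    (a plausible reading of the f(K) entry in the table)."""
    return base * energy_conversion_coefficient(best_answers, total_answers)

print(evaluation_reward(100))   # 0: below the threshold, no reward
print(evaluation_reward(300))   # 1: threshold passed
print(answer_reward(2, 5, 20))  # 0.5: base of 2 scaled by a 25% best-answer rate
```

The threshold makes sockpuppet voting expensive, and the coefficient makes careless answering unprofitable: both push effort towards quality rather than volume.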

The quality evaluation rewards are summarized below:

Evaluation       3form     Naver  Yahoo  (Russian service)  Google
Question reward  0.02*R    1
Answer reward    R*log(E)
Best answer      0.03*A    10     10     10                 B


Common features comparison:

Feature                                    3form     Naver            Yahoo          (Russian service)  Google
Question bookmarking                       Y         Y                Y              Y                  Y
Question evaluation                        Y         Y                N                                 N
Answering your own question                Y         N                N              Y                  Y
Submitting multiple answers                Y         N                N              N                  Y
Search questions asked by others           N         Y                Y              Y                  Y
Search returns how many answers?           N/A       1                All            All                All
How many answers can be selected as best?  1         2                1              1                  1
Question open (days) until removed         no limit  2-15, default 5  4 answ/1 vote  5                  5
Innovation/Selection                       conc      seq              seq            seq                seq
Social networking                          N         Y                Y              Y                  Y

Yahoo Answers explicitly forbids answering your own question: “You can’t answer your own question.” It also forbids submitting another answer once one is already submitted. 3form and Google allow both. In fact, people can use Google’s system as a discussion forum, posting comments and additions to the previous answers (this requires keeping answers in the order they were received, i.e. it creates a temporal selection bias towards earlier answers).

Unique features

  • 3form doesn’t have subjective and temporal biases of evaluation present in other systems. This improves the chances that the best answers will be selected. The amount of attention each problem receives is proportional to the amount of attention its author (and other people interested in this problem) paid to solving problems of others.
  • Naver’s registration involves cell phone verification. If you want to register, you have to provide your cell phone number to Naver and then input a verification code sent to your phone. This makes sockpuppetry (the practice of establishing several accounts to influence the system) much harder than the simple email verification used in the other services, though owners of several cell phone numbers can still have multiple accounts. Another specific thing about Naver is that most of the questions ask for information rather than knowledge. Maybe this partly explains why our question about the first online social network was too unusual for Naver and didn’t receive any answers.
  • The Russian service uses cell phone messaging to auction off a limited number of featured questions. Users compete for the limited space by sending multiple IM messages from their cell phones (and paying the messaging fees); the questions of the people who sent the largest number of messages are featured on the front page. In addition, the service has a “Send thanks to the answerer” button; pressing it pops up a suggestion to send a text message from a cell phone, with each thank-you costing $1. The service also allows searching questions, however the search couldn’t find anything when I entered keywords from my question; I assume it takes time for new questions to become searchable.
  • Google shows an extended question exposure statistic: the number of times the question was shown.

A small empirical test

The purpose of this test was not to determine “the best” Q&A site, as in the MIT Tech Review comparison. My purpose was to give some information on what kind of results you can expect from free knowledge market services. I needed this test mainly to see how the websites work, and I plan to do more extensive testing later.

A recent article, How to kill a great idea, claims right from the beginning that “Jonathan Abrams created the first online social network.” Wikipedia suggests that at least two sites were created earlier than Friendster, and both definitely fall into the online social network category. This suggested that the question is non-trivial and might be a good one to post to the free knowledge market sites for a small empirical test. I was interested in whether participants of these sites would be able to suggest an earlier site, or explain why it shouldn’t be counted as a social network site. The question also asks for problem reformulation (“What are the features of an online social network, in the first place?”) and requires some research. At the same time, it is possible to verify the answers by tracing their references. So here is the question I posted to the 3form, Naver, Yahoo, Russian, and Google services:

What was the first online social network?

I am interested to know what was the name of the first online social network, who implemented it, and when. Thank you for your answers!

3form Free Knowledge Exchange:

  • Wikipedia suggests that it was a site launched in 1995. Maybe email (with address books) can be thought of as the first online social network. Email has existed since 1972, though I am not sure when the first email address books were implemented.
  • A social network is a social structure made of connected individuals. This definition suggests that the Internet itself is the first online social network.
  • “social networking on the PC based internet preceed the internet on PCs itself (having existed on “walled garden” BBS systems of that time such as Compuserve, AOL, Genie and Prodigy before they connected to the mostly then university and government used internet) being roughly 16-17 years of age as a mass market proposition (obviously early social networking existed on mainframes” link
  • I would say it was The Well.

Naver Knowledge-iN:

No answers were received from Naver.

Yahoo Answers:

  • I believe it was AOL. OK, probably not, but that was the first commercially available one.
  • Of course it depends on how you define the term “social networking site”.
  • The worldwide distributed discussion system known as Usenet (whose proper name is netnews) was developed in 1979 by Steve Bellovin, Tom Truscott, and Jim Ellis. I personally regard netnews as the first social networking site — it was certainly the first that relied on the Internet, although it also used a transport called UUCP (unix-to-unix-copy).

Answers from the Russian service (translated from Russian):

  • IMHO it is LiveJournal

Google Q&A (translated from Russian):

  • Cites a Russian article, “The first social net of the Internet”: “Who was the first? Different sources mention different social networks, however the historical records give an unambiguous answer — the first social network appeared on the internet in 1995. The website of the social network was opened to users in 1995 by Randy Conrad, the owner of Classmates Online, Inc. The website helped registered users to find and keep connections with their friends and contacts, with whom one had relationships throughout their life — preschool, school, college, work, and military service. Now it has more than 40 million registered users from the US and Canada.”
  • In addition to the previous answer: … The term “social network” was introduced by sociologist James Barnes of the Manchester School in his work “Classes and …” (This long answer lists the major social networks in the US and Russia, citing a journal article, “Social Networks in the Internet,” from ComputerPress.)
  • (The third answer lists networks currently popular in Italy, Latvia, and Estonia, and seems irrelevant to the question.)

Yahoo was the fastest to provide answers. Google was also very fast and provided the largest number of answers on the first day (3). Two answers from Yahoo and two from Google arrived less than 15 minutes after the question was posted. The remaining answers came within one day, with no answers on subsequent days, despite the fact that a question stays open for answering for 4 days in Yahoo Answers, 5 days in Google’s service, and with no time limit in 3form (the question is kept as long as at least one of the participants is interested in keeping it). I can speculate that Yahoo and Google use the recency of a question as a criterion for its salience/exposure; this directs most of the answerers’ attention towards recent questions and gets new, easy questions answered faster, at the expense of older and more difficult ones. If a problem is not solved within one day, my experience is that it is unlikely to be solved on successive days in the Yahoo/Google services unless it is reposted. In that situation, multiple reposts will be needed to answer a difficult question, and on each repost the problem-solving process starts from scratch without benefiting from older solutions (you might post a link to the old thread in the question to compensate). These services seem to be a good way to get simple questions answered quickly, but they seem unsuitable for solving problems that are somewhat more difficult. 3form, on the contrary, seems more suitable for difficult problems that are unlikely to be answered in one day. Of course, more experimentation and research are necessary to arrive at reliable generalizations; this post is just a first step in that direction.


As their name “Answers” suggests, these services are most appropriate for questions that are easy to answer, especially when the answer is needed instantly. They should be your second choice after searching Wikipedia or the web. If a question is not answered within one day, it is unlikely to be answered at all; a good idea is to post it again (maybe at a different site). Most of the answers at these sites arrive within minutes of posting. If your problem requires some time, research, and/or creativity, you might have better chances at 3form, which gives people more time to find solutions. 3form is also preferable when (1) you have an ill-defined problem that needs reformulation and/or assumptions, (2) many other people might be interested in the same problem, or (3) you lack the expertise to select the best solution out of many (the other services have strong biases in solution evaluation that often prevent them from selecting the best solutions). The Korean knowledge markets, represented here by Naver Knowledge iN, offer the richest set of features. The Russian service has the most intricate incentive system. In our small test they were not particularly helpful, but their distinctive features seem quite useful.

In summary, if you need to find certain knowledge in English, I would recommend the following sequence of steps: (1) Wikipedia search, (2) Google web search, (3) Yahoo Answers, (4) 3form. Each following step requires significantly more time than the previous one. This might change if Google makes its new Q&A service available in English.

Acknowledgments: This text benefitted greatly from the help of Hwanjo Yu and Sang-Chul Lee in collecting information on Korean knowledge markets that is not available in English.

MIT Handbook of Collective Intelligence opens up

September 5th, 2007

Since last year I have been contributing to an MIT project that attempts to create a comprehensive Handbook of Collective Intelligence. The project was initiated by the newly created MIT Center for Collective Intelligence (CCI). It is quite logical that the managers of this project decided to use a collective intelligence technique to describe collective intelligence itself. Collective intelligence techniques (e.g. human-based computation) that process natural language are ideally suited for this purpose and have successfully been used to describe themselves in the past. In 1998, I used a human-based genetic algorithm to evolve a short description to put on the website implementing it. Wikipedia has had an evolving page describing itself since at least 2002. Dr. Terence Fogarty used a human-based genetic algorithm to evolve another name for itself (“Automated Concept Evolution”) in 2003. A more recent example is Assignment Zero by Jay Rosen and collaborators, a successful experiment using collective intelligence and crowdsourcing to report on crowdsourcing itself. The MIT project with the same purpose may be even more ambitious than the previous ones, but it can’t be called a success so far.

I was initially surprised that the MIT Handbook of Collective Intelligence decided to use SocialText wiki software rather than MediaWiki (especially taking into account that Jimmy Wales is on the advisory board of the CCI). I found SocialText less convenient to work with than MediaWiki (though I am biased here, as I had been a contributor to Wikipedia for several years before trying SocialText for the first time). The accumulation of content in the Handbook was rather slow. Researchers had to request an account by email to contribute to the Handbook. In addition, it is often suggested that researchers are reluctant to contribute content to wikis because the pressures of the academic system encourage them to submit their writing to the traditional peer review system and avoid publications not officially approved as “peer-reviewed.” I contributed the majority of the content to the page on Examples of Collective Intelligence in February. Little editing of this page by others has been made since then, and other pages were not frequently updated either. The page on Examples of Collective Intelligence seems to have been the most visited page of the Handbook, with 10885 views at the time of writing. The only page I could find with a larger number of views is the main page (now missing) with 16619 views.

This summer, the Handbook of Collective Intelligence team moved the Handbook from SocialText to a new domain, changed the software from SocialText to MediaWiki, and, more importantly, opened the Handbook to public contributions as Wikipedia does: they now allow anyone to register, or to edit the content of the Handbook without registration. Apparently it was the lack of progress that motivated opening up the project for anyone to contribute. People at MIT must have thought that their project would share the success of Wikipedia once it opened itself up to anyone’s contributions. The reality so far doesn’t seem to support this. A lot of new content is indeed being contributed, and a lot of activity can be seen in the list of recent changes (at the time of writing it looks like this). My first impression was that the Handbook had gone international: I had a hard time finding anything related to collective intelligence in this list, while many irrelevant pages are created every day in different languages. The majority of the newly created pages seem to be in Simplified Chinese. In a random sample of three pages, all were Chinese and had no relation to collective intelligence whatsoever.

This returns us to the topic I discussed in my previous post, Bugs of collective intelligence: why the best ideas aren’t selected?. The common failures of collective intelligence clearly suggest that it is not a phenomenon that automatically emerges once someone sets up a shared space like a wiki and brings it to the attention of many people. Making these systems work requires understanding their dynamics, and this is especially true of wikis. There is still serious research to be done on the factors that make different collective intelligence methods effective. That is beyond the scope of this post, but here I want to give some hints as to why some wiki-based projects perform poorly.

The main weakness of wikis as a collective intelligence platform is their weak mechanism of selection, which may lead to what is known as genetic drift. The selection in current wikis is strongly biased towards the most recently contributed content (“the last edit wins”). For a wiki-based project to work, the community has to have enough people who put effort into overcoming this temporal selection bias present in the software. These people should be motivated enough to go into the revision history, reverting unhelpful changes and selecting better versions of the content (the software doesn’t encourage the ordinary user to do this). They also have to check the recent changes history to delete obvious spam pages. Deletion is necessary in wikis because there is no other way to focus people’s attention on important pages (like importance sampling does in human-based genetic algorithms). Any wiki-based project pretty much depends on a community of people to overcome the bias present in its software. The MIT CCI project so far hasn’t created a community that effectively performs these functions.

Update: I found it curious that the license under which the content of the Handbook is published prohibits its editing (link). It is the Creative Commons Attribution-NonCommercial-NoDerivs 2.5 license, which allows no derivative works, while any edit creates a derivative work. The license explicitly says “You may not alter, transform, or build upon this work,” and yet the content is provided in the editable form of a wiki. On the other hand, the same license requires attribution, and yet when the content I contributed was copied, presumably by MIT CCI employees, to the new domain, the attribution information was stripped, so no attribution is given to me or any of the other contributors. Hopefully, whoever is responsible for this project will fix this, because currently there are too many contradictions. Meanwhile, I would recommend Wikipedia as a better organized resource on the topic of collective intelligence and, importantly, a working example of the concept.

The new Google Q&A service expands to China

August 20th, 2007

Google’s free knowledge market service was initially available only in Russia (see my short review in the post Google Answers is reborn in Russia). Now China has the service as well; Haochi Chen of Googlified has more details on this. I am going to post a detailed comparison of five knowledge markets soon, including those of Naver, Yahoo, and Google.

Bugs of collective intelligence: why the best ideas aren’t selected?

August 20th, 2007

One of my early posts was about idea killers and idea helpers: common phrases that kill creativity at its origin. Recently, Matt May wrote a piece called Mind of the Innovator: Taming the Traps of Traditional Thinking, in which he identified ‘Seven Sins of Solutions,’ routine patterns of thinking that prevent people from being creative. He suggests that idea stifling is the worst sin, being the most destructive, and illustrates the point with a nice experiment:

At the off-site, there were about 75 people of varying degrees of seniority, ranging from field supervisors to senior execs. i gave the assignment, one of those group priority exercises whereby you rank a list of items individually and then as a group and compare (sort of a “wisdom of crowds” exercise to show that “we” is smarter than “me”). this specific exercise required you to rank 25 items with which you’ve crashed on the moon in relation to how important they were to your survival. nasa had compiled the correct ranking, so there was a clear answer.

I did the exercise with a twist. at each table i put a ringer. i gave the lowest-ranking person the answer. it was their job to convince the command-control types they knew the right answer.

During the group exercise, NOT A SINGLE CORRECT ANSWER GOT HEARD.

After debriefing the exercise in the regular way, i had each person to whom i had given the correct answer stand up. i announced that these individuals had offered the right answer, but their ideas had been stifled, mostly due to their source and stature and seniority, or lack thereof.

I wish i had a camera to catch the red-faced managers.

This is a good example illustrating a repeated failure of collective intelligence. Matt suggests that in collective problem-solving workshops, groups discuss the right answer and still commonly propose a wrong one as the chosen solution, “because members second-guess, stifle, dismiss and even distrust their own genius.”

Collective problem solving involves iterated innovation and selection of solutions. In his experiment, Matt decoupled the two by ensuring that the right solution was injected into the pool of solutions the group considered, and yet he repeatedly observed that the group rejected it. Apparently, the group evaluation of ideas was seriously biased towards accepting the inferior ideas of the senior members at the expense of other ideas, i.e. the senior status of the idea’s source outweighed the intrinsic merits of the idea. This is an example of subjective selection bias. Another common type is temporal bias: for example, solutions proposed earlier can be preferred to solutions proposed later (or the other way around).

We can see these sources of bias at work in many “collective intelligence” web 2.0 platforms, where people are supposed to select the fittest among several versions of content based on its merit. In reality, however, the selection is heavily biased by other factors that have little to do with the quality of the content. Yahoo Answers, for example, presents the answers for voting in the order they were received. The earliest answers appear at the top of the list and receive a disproportionately high number of votes regardless of their merit. Wikipedia exhibits the opposite kind of temporal bias, where the last edit always wins, at least until it is reverted by someone. The revert decision itself is heavily biased by the status of the person who made the edit (e.g. anonymous/registered/admin). The majority of web 2.0 sites make information about the content’s author readily available. This results in a selection bias towards content contributed by senior members of the community, in the same way as in Matt’s experiment. This is the mechanism that ensures any Kevin Rose submission ends up on the front page of Digg, while the contribution of an ordinary Digg user is unlikely to get there regardless of its merit.

When I was designing the 3form website 10 years ago, my primary goal was to select the fittest solutions regardless of who submitted them, i.e. to reduce the decision error and the subjective/temporal biases that contribute to it. As a result, the 3form interface doesn’t show the names of the authors of the submissions being evaluated (as in blind peer review). The presentation order of solutions is randomized to reduce temporal bias, so every solution has an equal chance of being placed at the top of the list for evaluation.
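The two debiasing steps described above can be sketched in a few lines. This is a minimal illustration, not 3form’s actual implementation; the `Submission` type and function names are hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class Submission:
    author: str   # known to the system, hidden from evaluators
    text: str

def evaluation_batch(submissions, sample_size=5, seed=None):
    """Return an anonymized, randomly ordered sample for evaluation.

    Hiding authors reduces subjective (status) bias, as in blind peer
    review; drawing a fresh random sample reduces temporal bias, so every
    submission has an equal chance of appearing at the top of the list.
    """
    rng = random.Random(seed)
    sample = rng.sample(submissions, min(sample_size, len(submissions)))
    return [s.text for s in sample]  # authors stripped before display
```

Each evaluator would then see only the shuffled texts, never the author names or the submission order.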

I found Matt May’s manifesto thanks to Guy Kawasaki’s post The Seven Sins of Solutions.

Human-Based Computation at Google

August 9th, 2007

Google has been actively exploring human-based computation (HBC) recently. HBC is a class of hybrid techniques in which a computational process performs its function by outsourcing certain steps to a large number of human participants. HBC is a ten-year-old concept that has become pervasive on the Internet, but it is still perceived by many as new or even revolutionary. Academic research in HBC is still in its initial stages, despite many internet projects and companies exploring these techniques widely. While HBC was developed in the context of evolutionary computation and Artificial Intelligence, it is often perceived as conflicting with the goal, and the very term, of AI, as HBC often employs natural intelligence (both the creativity and the judgment of humans) in the loop of a computational learning algorithm. The goal of AI is most often understood as creating a machine intelligence competitive with that of humans. From the HBC perspective, artificial and natural intelligence don’t have to be competitors; instead, they work best together in symbiosis. HBC is also somewhat outside the traditional focus of Human-Computer Interaction (HCI) research, even though it is perfectly compatible with the literal meaning of the HCI term. It reverses the traditional assignment of roles between computer and human: normally a person asks a computer to perform a certain task and receives the result, while in HBC it is often the other way around. As a result, some traditional concepts and terminology used in the AI and HCI fields may create difficulties when thinking about HBC.

Probably for the reason described above, Google was also somewhat late to explore this field, preferring pure AI and data-mining techniques to hybrid human-computer intelligence. Right from its inception, Google used human judgment expressed in the link structure of the web as input data for its algorithms. That is, however, different from outsourcing algorithmic functions to humans, which is the main feature of HBC. Matt Cutts, a search quality engineer at Google, said: “People think of Google as pure algorithms, we’ve recently begun trying to communicate the fact that we’re not averse to using some manual intervention. … Google does reserve the right to use humans in a scalable way” (read the full Infoworld article here). Google introduced voting buttons into its toolbar to collect user evaluations of web pages and help remove spam from the search results. However, Google wasn’t fully exploring the potential of HBC until very recently. This is quickly changing now, as Google begins to understand the potential of the technique and is willing to test various ways to allow humans not only to evaluate, but also to contribute and modify existing content. This kind of testing mostly happens outside of the US; a possible reason may be that Google perceives these as high-risk projects, since the experimental features Google offers in the US seem to be much more conservative.

In my last post, I described the Google Questions and Answers service being tested by Google Russia (I am going to review it in more detail in one of my next posts and compare it more systematically with other similar services). More recently, Google UK has been testing HBC as a way to improve the ranking and coverage of its search results. Mike Grehan noticed that Google UK now allows some users to add URLs to a set of relevant search results: “Know of a better page for digital cameras? Suggest one!” (my thanks for finding this post go to Haochi Chen of Googlified). Members of 3form will find this new Google interface very familiar, as it is nearly the same interface that 3form has used to evolve solutions to problems for about ten years now, except that Google doesn’t provide an easy way to choose the most relevant option among those already displayed (submitting one of the already displayed URLs into the suggestion box will probably work, though it is not as convenient as selecting one).

Several bloggers referred to the new feature as the beginnings of Google’s social tagging/bookmarking, a response to recent projects attempting to build open-source social search engines that allow people to edit search results, like Wikia Search and Mahalo. Google’s new feature can indeed turn Google search into a social tagging/bookmarking tool, as the query is essentially a set of tags for the contributed URL (if any), so the information Google receives through this service is essentially the same as what users contribute to any similar social bookmarking service.
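The equivalence between a query-plus-suggested-URL and a tagged bookmark can be made concrete with a tiny sketch. This is purely illustrative; the store and function names are hypothetical, not anything Google has described:

```python
from collections import defaultdict

# url -> set of tags accumulated from the queries under which it was suggested
tag_store = defaultdict(set)

def suggest_url(query, url):
    """Record a user-suggested URL; the query terms act as its tags."""
    tag_store[url].update(query.lower().split())

def urls_for_tag(tag):
    """Reverse lookup: every URL that has been 'tagged' with this query term."""
    return {url for url, tags in tag_store.items() if tag in tags}
```

Seen this way, each “Suggest one!” submission is a social bookmark whose tags are the words of the search query.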

Google Answers is reborn in Russia

June 28th, 2007

Google today announced the launch of the beta version of its Q&A service (formerly Google Answers; see my previous post Google says adieu to Google Answers from November of last year).

Today we are launching the beta version of “Questions and Answers”, the new Google service where you can ask a difficult question on a topic that interests you, get an answer from other users, and earn points by responding to other people’s questions. Or you can just chat with smart people :). We are particularly pleased to announce that Russia is the world’s first country where we are launching this service; it is not even available to English-language users.

It is remarkable that Google chose Russia to test the new service, as Russia is the country where this kind of service originated. For me as a researcher, the most interesting aspects were the details of the implementation of the new Google service and how they differ from existing services. By launching the service in Russia, Google gave me an advantage in reviewing it, as I am a native speaker of Russian.

The Google service is pseudonymous, like those of Naver and Yahoo: the nicknames of the authors of questions and answers are shown right next to their contributed content. Google uses automatic question tagging and a search box on top to help users find questions.

As with most services of this kind, Google provides an incentive system to motivate people to answer questions. It is based on assigning points for actions as follows:

Action                 Points
Registration           +100
Visit                  +5
Posting a question     -Bonus
Posting an answer      +2
Posting an evaluation  +1

Google encourages users to visit often; it is quite curious that Google’s reward for a visit is higher than for posting an answer. This seems counterintuitive to me and is unique among similar services. Points are also assigned based on the results of human evaluation:

Evaluation                      Points
Best answer                     +Bonus
Excellent evaluation (5 stars)  +10x
Good evaluation (4 stars)       +5x
OK evaluation (3 stars)         +1x
Bad evaluation (2 stars)        -3x
Awful evaluation (1 star)       -5x

Here x is the base-10 logarithm of the number of evaluations. The system of levels is quite Google-style:

Level                      Score
Newborn                    0 ~ 2^8
Child                      2^8+1 ~ 2^9
Elementary school student  2^9+1 ~ 2^10
Middle school student      2^10+1 ~ 2^11
Freshman                   2^11+1 ~ 2^12
Senior                     2^12+1 ~ 2^13
Graduate                   2^13+1 ~ 2^14
Professor                  2^14+1 ~ 2^15
Department head            2^15+1 ~ infinity
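The scoring rules above are simple enough to sketch directly. This is a reconstruction from the tables, not Google’s actual code; the function names are mine, and the handling of the zero-evaluation case is an assumption:

```python
import math

# Multipliers from the evaluation table; the award is multiplier * x,
# where x = log10(number of evaluations).
STAR_MULTIPLIER = {5: 10, 4: 5, 3: 1, 2: -3, 1: -5}

def evaluation_points(stars, num_evaluations):
    """Points an answer earns from one evaluation outcome."""
    x = math.log10(num_evaluations) if num_evaluations > 0 else 0.0
    return STAR_MULTIPLIER[stars] * x

# Level i covers scores in (2^(8+i-1), 2^(8+i)]; the last level is unbounded.
LEVELS = ["Newborn", "Child", "Elementary school student",
          "Middle school student", "Freshman", "Senior",
          "Graduate", "Professor", "Department head"]

def level(score):
    if score <= 2 ** 8:
        return LEVELS[0]
    for i in range(1, len(LEVELS) - 1):
        if score <= 2 ** (8 + i):
            return LEVELS[i]
    return LEVELS[-1]
```

For example, a 5-star rating on an answer with 100 evaluations would be worth 10 * log10(100) = 20 points under this reading of the table.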

There are three main navigation buttons: Ask, Find, and Tags. Below them there is a list of open questions (with no answers shown). On the right side there is a list of popular tags and a list of top users. There are also tabs for “closed questions” and “all questions.” I chose the question “What is web 2.0?”. It has five answers:

  1. AJAX and Google (average score 2.5)
  2. An attempt to direct the flock into the required direction (average score 3.5)
  3. (the best answer) The term appeared in the article What is Web 2.0 by Tim O’Reilly … Despite the fact that the meaning of this term is often disputed, the people who assert the existence of web 2.0 distinguish several main aspects of this phenomenon (average score 5.0)
  4. Oh, yes + reference to the wikipedia article about web2.0 (average score 3.0)
  5. I will write about it in my blog soon. I was sceptical about it, but now accepted it. This is a constellation of technologies, and Ajax is far from the first place. I would put RSS and Atom into the first place.

Apparently, Google Q&A uses a five-star system to evaluate the answers, and the average evaluation determines the winning answer. It is not possible to add another answer once the question is closed, and there is no way to reopen it (similar to Naver and Yahoo Answers).

After a quick look at the new Google Q&A service, I can say that it closely resembles Naver and Yahoo Answers technologically. The main differences I found are in the reward structure, style, and user interface. Google seems to have a cleaner user interface, which would be a reason to prefer the Google service over the others, everything else being equal. The effectiveness of such social search services depends on the community of participants they attract and on the efficiency of the technology supporting the exchange of knowledge. Of two technologically very similar services, like those provided by Yahoo and Google, the one that builds a more diverse and motivated community of participants will provide the better service.

Scientific knowledge markets: the case of InnoCentive

May 27th, 2007

Today I came across the HBS working paper “The Value of Openness in Scientific Problem Solving” by Karim Lakhani, Lars Jeppesen, Peter Lohse, and Jill Panetta (a link to the 58-page PDF is here). The paper studies InnoCentive, a knowledge market similar to 3form that corporations use to solve research problems their R&D labs could not solve internally.

InnoCentive was founded by Eli Lilly & Company in 2001 and shares a significant similarity with 3form in how it organizes the distributed problem-solving process, except that it does not broadcast the solutions it receives, keeping them private to the corporation that posted the respective problem. As a result, the innovation process at InnoCentive, while distributed, is not open: solvers can’t modify or recombine the solutions proposed earlier or learn from them, as they do at 3form. However, the working paper shows that sharing problems by itself has many advantages over the traditional corporate practice of keeping them closed.

We show that disclosure of problem information to a large group of outside solvers is an effective means of solving scientific problems. The approach solved one-third of a sample of problems that large and well-known R & D-intensive firms had been unsuccessful in solving internally.

There are many interesting observations in this paper that might be relevant to 3form as well and are likely to interest the members of the 3form community.

Problem-solving success was found to be associated with the ability to attract specialized solvers with range of diverse scientific interests. Furthermore, successful solvers solved problems at the boundary or outside of their fields of expertise, indicating a transfer of knowledge from one field to others.

Here are the results I found the most interesting:

  • the diversity of interests across solvers correlated positively with solvability; however, the diversity of interests per solver had a negative correlation
  • the further the problem was from the solvers’ field of expertise, the more likely they were to solve it; there was a 10% increase in the probability of being a winner if the problem was completely outside their field of expertise
  • the number of submissions is not a significant factor of solvability
  • very few solvers are repeated winners

The authors of the HBS paper draw an analogy to local and global search to explain the effectiveness of problem broadcasting. They suggest that each solver performs a local search, and that broadcasting the problem to outsiders makes the search global (“broadcast search” in the authors’ terminology). Indeed, if solvers don’t have access to the solutions of other solvers (the case at InnoCentive), all they can do is a local search (hill climbing). From the computational perspective, the InnoCentive problem-solving process is analogous to hill climbing with random restarts: each new solver performs a local search and returns a locally optimal solution; finally, the best of those locally optimal solutions determines the winner.
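The analogy can be made explicit with a small sketch of random-restart hill climbing, where each “restart” plays the role of an independent solver. This is my illustration of the search structure, not the paper’s model:

```python
import random

def hill_climb(start, neighbors, fitness, max_steps=1000):
    """Local search: repeatedly move to the best neighbor until stuck."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=fitness, default=current)
        if fitness(best) <= fitness(current):
            return current  # local optimum reached
        current = best
    return current

def broadcast_search(random_start, neighbors, fitness, num_solvers=20):
    """Each 'solver' hill-climbs from an independent random start;
    the best of the locally optimal solutions determines the winner."""
    results = [hill_climb(random_start(), neighbors, fitness)
               for _ in range(num_solvers)]
    return max(results, key=fitness)
```

The more solvers (restarts) there are, and the more diverse their starting points, the better the chance that one of them lands in the basin of the global optimum, which is the paper’s argument for broadcasting.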

Social websites and personality

March 10th, 2007

I made a curious observation today: the psychological concept of personality may be useful in characterizing social websites. For example, a website can be introvertive or extravertive. As in psychology, these are not absolute categories, but rather an indication of a bias toward one end or the other.

An introvertive social website draws the attention of its users towards its local content, while an extravertive social website draws the attention of its users outward, towards content on other sites of the web. Two examples are 3form and StumbleUpon, respectively. Both implement essentially the same technique, human-based evolutionary computation: people contribute items to a database, draw random samples from the population of items, and evaluate the sampled items; the software computes a fitness function from those evaluations and uses it in later sampling. However, 3form and SU use this technique in remarkably different ways.

3form samples content from its own database and provides an easy way to socially bookmark, evaluate, and comment on it. It is less easy to bookmark or comment on an external resource: you have to cut and paste its link into a web form, and not many people bother to do it. This makes the 3form community rather introspective, focused on the content found locally rather than on resources found elsewhere.

StumbleUpon, on the contrary, samples from a database containing primarily external resources, which naturally directs user attention to the world outside of SU. SU makes it very easy to bookmark and evaluate any external resource with a single click. The same is not true, however, for the local resources found on SU’s own site. When I started using SU, I initially thought that, unlike most blogs, SU ones don’t support commenting. Then I found that it is possible to comment on a post, but it is not as easy or intuitive as commenting on external resources. You first need to find the permalink to the post you want to comment on (shown as the date of the post), click on it to open the post in its own window, and only then can you use the normal SU buttons to evaluate and comment on it. Not many people take the effort to go this way, so most posts on SU blogs remain without comments.
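The shared loop that both sites implement, contribute, sample, evaluate, and feed the fitness back into later sampling, can be sketched as follows. This is a minimal illustration of human-based evolutionary computation, not the actual 3form or StumbleUpon code; the class and its optimistic prior for unevaluated items are my assumptions:

```python
import random

class Population:
    """Human-based evolutionary loop: people contribute items and evaluate
    random samples; the resulting fitness biases later sampling."""

    def __init__(self):
        self.items = {}  # item -> list of human evaluation scores

    def contribute(self, item):
        self.items.setdefault(item, [])

    def fitness(self, item):
        scores = self.items[item]
        # Unevaluated items get an optimistic prior so they still get shown.
        return sum(scores) / len(scores) if scores else 1.0

    def sample(self, k=3, rng=random):
        # Fitness-proportionate sampling: fitter items are shown more often.
        items = list(self.items)
        weights = [self.fitness(i) for i in items]
        return rng.choices(items, weights=weights, k=k)

    def evaluate(self, item, score):
        self.items[item].append(score)
```

The introvertive/extravertive distinction is then just a question of what the items are: 3form populates the dictionary with local content, while SU populates it with external URLs.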

It might be pure coincidence, but it is nevertheless interesting that the personality of a website in this case reflects the personality of its architect: my MBTI profile is INTJ (introverted), and StumbleUpon chief architect and CEO Garrett Camp is ENTP (extraverted).

What about other websites?

Wikipedia was always mildly introvertive. It was always easier to link to an internal page than to create an external link, and Wikipedia culture discourages the creation of external links. Recently, Wikipedia has become more clearly introvertive by making you solve a CAPTCHA when you try to contribute a link to an external resource or even fix a broken one. This will undoubtedly decrease the number of external references in Wikipedia. Most social bookmarking tools, on the other hand, are extravertive: their primary purpose is to direct attention to other sites. I am quite curious whether their creators are extraverts as well.

Digg seems to be pretty balanced in this respect: it requires high effort from any user because of its many CAPTCHAs, but commenting on an internal post and submitting a new story with an external reference involve about the same amount of effort.

Was Wikipedia innovation entirely social?

February 8th, 2007

Jimmy Wales, a founder of Wikipedia, suggests in his recent talks that Wikipedia is not a technological innovation but a purely social one:

When Wikipedia was started in 2001, all of its technology and software elements had been around since 1995. Its innovation was entirely social - free licensing of content, neutral point of view, and total openness to participants, especially new ones. The core engine of Wikipedia, as a result, is “a community of thoughtful users, a few hundred volunteers who know each other and work to guarantee the quality and integrity of the work.”

In his view, Wikipedia is not an emergent phenomenon of the wisdom of crowds, where thousands of independent individuals each contribute a bit of their knowledge, but is instead a relatively well-connected small community, much like any traditional organization, e.g. the one that created Encyclopedia Britannica. Even taking into account that he is a founder of Wikipedia, I am still quite skeptical about this explanation. In my opinion, it is insufficient to explain the phenomenon of Wikipedia, and it disagrees with my own experience as a Wikipedia contributor. I started to contribute in 2003 and registered in 2004, and yet I don’t know other Wikipedians personally and have rarely thought of Wikipedia as a social network, even though it can definitely support one. Reading Aaron Swartz’s post Who Writes Wikipedia made me even more skeptical.

I know that it is quite natural for entrepreneurs to focus more on organizational aspects, because that is what they deal with most of the time, just as it is common for technologists to focus mainly on technology. I am not arguing that Jimmy Wales’s point of view is wrong, but I am suggesting that it might be incomplete. I believe we don’t need to choose between the emergent-phenomenon and core-community points of view. They are not mutually exclusive, so Wikipedia can be (and, in my opinion, is) an example of both.

Jimmy suggests that the Wikipedia technology and software had been around since 1995. I didn’t find any support for this. If the technology was there in 1995, why did it take so long for large wiki-based collaborative projects to appear? I did some quick research into the history of wiki technology. It suggests that Wikipedia had no chance to succeed using the technology that existed in 1995. The elements that enabled large participatory organizations like Wikipedia were added to wiki software six years later, at approximately the same time the Wikipedia project was launched.

Early wikis lacked two important features: revision history and support for concurrent editing. These two features are crucial for the success of any mass collaboration project using a wiki.

I first discovered wikis quite late, in the summer of 2002. I quickly grasped the potential of this simple and brilliant collaboration tool by Ward Cunningham: a site with web pages that anyone can edit with very little effort. I saw it as a web extension of CVS, a revision control system that allows programmers to collaborate on the same codebase concurrently. However, as I started to explore the potential advantages of wikis, I found that the implementation I was using had a serious limitation. Everyone could edit a page, unless it was currently being edited by someone else: if I wanted to edit a page someone else was editing right then, a warning message appeared saying that the page was locked. The lock was advisory, meaning I could still go ahead and edit, disregarding the message; however, in that case either my work or other people’s would be lost. Waiting for the lock to be released quickly becomes annoying as more people start collaborating. My conclusion then was that the twiki software wasn’t ready to support collaboration among large groups of people. I searched for an implementation without this limitation but didn’t find one at that time. I even wrote a note in my TODO list to write wiki software that used CVS instead of RCS, so that it could support concurrent editing (RCS and CVS are two revision control systems, but CVS is newer and allows lock-less concurrent editing). Later, however, I found software that provided means for concurrent editing. This was the MediaWiki software, and it was the first wiki I saw that really could support mass collaboration.
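The lock-less approach can be sketched as optimistic concurrency: no lock is held while someone edits, and each save carries the revision it was based on, so a save against a stale base is rejected and the author is asked to merge rather than silently overwriting concurrent work. This is a minimal illustration of the idea, not how any particular wiki engine implements it:

```python
class EditConflict(Exception):
    pass

class Page:
    """Optimistic concurrency: no lock is held during editing.

    Each save carries the revision number it was based on. A save against
    a stale base raises EditConflict so the author can merge, instead of
    either losing work or making others wait for a lock to be released."""

    def __init__(self, text=""):
        self.revisions = [text]

    @property
    def current_rev(self):
        return len(self.revisions) - 1

    def read(self):
        """Return (revision, text); the revision is echoed back on save."""
        return self.current_rev, self.revisions[-1]

    def save(self, base_rev, new_text):
        if base_rev != self.current_rev:
            raise EditConflict("page changed since revision %d" % base_rev)
        self.revisions.append(new_text)
        return self.current_rev
```

Two people can open the page simultaneously; whoever saves second gets an `EditConflict` and merges by hand, which scales to many concurrent editors far better than advisory locking.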

Another feature crucial to the success of Wikipedia is the revision history, which provides a mechanism for reverting unhelpful changes. It was not present in the original wikis; in fact, according to Landmark changes to the Wiki, it was added in 2002. Prior to this, another mechanism (Edit Copy) was used, providing a single editable backup copy of every page. Edit Copy was clearly insufficient to save content from vandalism, as it is too easy for vandals to edit both the working and the backup copy of a page. However, according to the Internet Archive, Wikipedia already had revision history on August 8, 2001 (see View other revisions). At that time Wikipedia used the UseModWiki software written by Clifford Adams. Again, according to the archive, UseModWiki got its revision history somewhere between December 9, 2000 and February 1, 2001, which nearly coincides with the launch of the Wikipedia project (January 15, 2001).
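What makes a full revision history stronger than a single Edit Copy is that nothing is ever destroyed: a revert is just a new revision that copies an old one, so a vandal would have to outpace every watcher rather than merely edit two copies. A minimal sketch of the idea (my illustration, not UseModWiki’s implementation):

```python
class PageHistory:
    """Full revision history: every edit is kept, and reverting appends a
    copy of an earlier revision rather than deleting anything, so a
    vandalized page can always be restored."""

    def __init__(self, text=""):
        self.revisions = [text]

    def edit(self, new_text):
        self.revisions.append(new_text)

    def current(self):
        return self.revisions[-1]

    def revert_to(self, rev):
        # The revert is itself recorded as a new revision.
        self.edit(self.revisions[rev])
```

With only an Edit Copy, by contrast, there are exactly two slots, and overwriting both is enough to destroy the content for good.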

Jimmy Wales might be right in suggesting that Wikipedia was a social rather than a technological innovation, but the technology he refers to was not there in 1995. The features that made Wikipedia possible were added to UseModWiki at approximately the same time Wikipedia was launched and began to use it. That might be a lucky coincidence for Wikipedia, or those might have been new features of UseModWiki requested by the founders of Wikipedia. Maybe some of them can comment on this post.