Archive for the ‘social search’ Category

Comparison of free knowledge markets

Sunday, September 16th, 2007

A knowledge market is a distributed social mechanism that helps people identify and satisfy their demand for knowledge. It can facilitate locating existing knowledge resources, much as search engines do (the name social search refers to this). It can also stimulate the creation of new knowledge resources to satisfy the demand (something that search engines can’t do). The goal of this post is to compare several free knowledge markets created by 3form, Naver, Yahoo, Mail.ru, and Google, and to identify their common elements and differences. All these sites organize the collaborative problem solving activity of a large number of participants, providing the means and incentives for participants to contribute their intelligent abilities to the distributed problem solving process. MIT Tech Review published an attempt at a comparison of Q&A sites by Wade Roush. Unfortunately, that comparison was of low quality and too superficial to be useful (see the readers’ comments). Here is my attempt at such a comparison. Its focus is on free knowledge markets, i.e. those that don’t charge fees for participation and allow participants to build on top of the knowledge resources contributed by others.

Background

The Free Knowledge Exchange project was launched in the summer of 1998 in Russia, and its international version became available at 3form.com in February 1999. The project allows any participant to submit problems and brings those problems to the attention of other people (the 3form community) to collect hints and solutions. The credit assignment system of the website tracks the contribution of each individual participant to solving problems. It rewards the actions of the participants as well as the quality of the contributed content. In exchange for contributing to solving the problems of others, the website returns to the participant a proportional share of the collective attention directed towards solving the participant’s own problems. 3form uses the method known as human-based genetic algorithm (published in 2000). Naver Knowledge iN is a Korean free knowledge market service opened in 2002 by NHN Corp. The site is based on the same idea as 3form, though it implements it somewhat differently. This service made Naver the biggest internet destination in Korea and was a major factor allowing Naver to beat Google and Yahoo in the Korean search market. It took a couple of years for Yahoo and Google to learn their Korean lesson. Yahoo launched Yahoo Answers in December 2005, which is now the biggest free knowledge market worldwide. Mail.ru Answers is a Russian knowledge market inspired by Yahoo Answers and currently the biggest such service in Russia. Google Q&A is the newest service, being tested in Russia and China by Google. Google’s service is likely to be inspired by prior work, though I am not aware of Google acknowledging this.

Prior to 3form, two ways of collective problem solving were available on the internet. On one hand, there were free knowledge sharing forums such as Usenet and IRC, where users could ask technical questions and get help from volunteer experts whose participation was neither accounted for nor rewarded in any way. On the other hand, there were expert advice services designed around a fee-based Q&A model, where questions are answered for a fee by a limited number of paid experts. Experts Exchange (EE) made a step towards becoming a free service by allowing anyone to answer questions. It introduced “answer points” to identify experts among its volunteer answerers. The answer points were awarded based on user evaluation of the answers: the author of the problem could allocate a total amount of answer points among all the people who contributed useful ideas toward a solution. Despite being innovative on the expert side, the service remained fee-based on the user side (even though users were given some credit in “question points” independent of their contribution, allowing them to ask a limited number of questions). In other words, while the question points had monetary value, answer points had no such value (in particular, they couldn’t be counted towards question points). Once the limited amount of question points was exhausted, users had to start buying question points to continue using the system.

Korean Naver played a key role in popularizing the concept of a knowledge market. Naver, however, was not the first knowledge market in Korea. DBDiC offered an analogous service as early as October 2000. DBDiC presumably developed its technique independently of 3form, but shares a similar architecture, including the general structure and credit assignment system. There are two key differences between the DBDiC technique and that of 3form: (1) the identity of the author of a solution biases the evaluation of the solution, i.e. the high status of an author can lead to an inferior answer being accepted as the best despite the presence of a better answer contributed by a person with lower status; (2) answers positioned earlier in the list are more likely to be read, chosen, and evaluated, i.e. a great answer later in the list can easily be overlooked. These differences resulted in subjective and position-specific biases in the solution evaluation system (see also my previous post Bugs of collective intelligence: why the best ideas aren’t selected?). I could speculate that if the DBDiC designers had been more familiar with the 3form service, they could have avoided those undesirable biases, which later propagated into every knowledge market platform created subsequently.

Free knowledge markets are currently abundant, with numerous implementations. Wade Roush lists six recently created services. A ReadWriteWeb post lists 29 knowledge market services. Neither of these lists is complete, and it may not be feasible to create a complete one, as similar new services appear almost every day. However, most new services are similar to one of those reviewed here and are likely to be inspired by them.

Incentive systems

Knowledge markets differ from knowledge sharing websites by implementing knowledge evaluation and incentive systems that encourage participants to help each other. The incentive system of a typical free knowledge market is based on rewarding actions of its participants as well as rewarding the quality of their contributions. The measure of quality is normally based on user evaluation. An alternative would be computational evaluation, e.g. one based on the frequency of occurrence of different answers collected independently (see Luis von Ahn’s work, which explores this model in specialized applications like image labeling). In the case of a typical knowledge market, frequency counting is problematic due to the much larger search space.
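Such frequency-based evaluation can be sketched in a few lines. This is my own illustrative code, not taken from any of the reviewed services; the normalization into a 0–1 agreement score is an arbitrary choice:

```python
from collections import Counter

def frequency_evaluate(answers):
    """Score each distinct answer by how often it was contributed
    independently; frequent agreement is taken as evidence of quality."""
    counts = Counter(a.strip().lower() for a in answers)
    total = len(answers)
    # Normalize raw counts into [0, 1] agreement scores.
    return {answer: n / total for answer, n in counts.items()}

# Works for narrow tasks like image labeling, where many users
# independently produce the same short label.
labels = ["dog", "Dog", "puppy", "dog", "animal"]
print(frequency_evaluate(labels))  # "dog" dominates with score 0.6
```

For open-ended questions the answer space is far too large for independent answers to coincide, which is why the services reviewed here fall back on explicit user evaluation instead.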

The action rewards encourage particular actions reinforcing participant behavior that is beneficial for the system as a whole, for example, it can be as simple as visiting the website. The following table summarizes action rewards offered by different knowledge markets:

Action 3form Naver Yahoo Mail.ru Google
Join 1 100 100 100
Visit 3 1 1 5
Submit question anon 0 -20 N/A N/A N/A
Submit question pseu N/A 1 -5 -5 -B
Submit expert question N/A -50*E N/A N/A N/A
Submit answer 0.01 2 2 f(K) 2
Select best answer to your question 3 3
Evaluate answer 1 1 [S>=250] 1
Evaluate question 1 1

In this table, S refers to the current score of the participant. For example, Mail.ru will not reward new participants for provided evaluations until they reach a score of 250 points. This seems to be an effective way to guard the system from abuse. With this system in place, it becomes hard for someone to manipulate the values of the answers in Mail.ru by creating multiple accounts and submitting votes from them. It is no longer enough to create multiple accounts; it is also necessary to earn 250 points of credit for each, which protects the system from bots better than any CAPTCHA would. Mail.ru also rewards non-peer-reviewed answers differently, depending on the prior performance of their author. For this purpose the Mail.ru designers introduced the “Energy Conversion Coefficient,” which is simply the share of best answers among the total number of answers the person generated. This seems to be a good incentive to provide quality answers, and so far it is a unique feature of Mail.ru Answers. B stands for bonus, a number from 1 to 100 that is set by the author of a question. The bonus can reflect the difficulty or importance of a problem for its author, and supposedly a high bonus will motivate people to pay more attention to the question. E is the number of experts in Naver’s expert question, which can be 1 or 2 (not available in other services).
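As an illustration, the coefficient and a reward scheme built on it might look like the sketch below. The function `answer_reward` is my hypothetical guess at how f(K) could scale the base reward; Mail.ru’s actual formula is not published:

```python
def energy_conversion_coefficient(best_answers, total_answers):
    """Share of a user's answers that were selected as best (K)."""
    if total_answers == 0:
        return 0.0
    return best_answers / total_answers

def answer_reward(best_answers, total_answers, base_points=2):
    """Hypothetical f(K): scale the base answer reward by the author's
    track record, so proven answerers earn more per new answer."""
    k = energy_conversion_coefficient(best_answers, total_answers)
    return base_points * (1 + k)

# A user with 30 best answers out of 100 earns more per new answer
# than a newcomer with no track record.
print(answer_reward(30, 100))  # 2.6
print(answer_reward(0, 0))     # 2.0
```

Whatever the exact shape of f(K), the point of the design is the same: the marginal reward for answering grows with demonstrated answer quality.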

The quality evaluation rewards are summarized below:

Evaluation 3form Naver Yahoo Mail.ru Google
Question reward 0.02*R 1
Answer reward R*log(E)
Best answer 0.03*A 10 10 10 B

Features

Common features comparison:

Feature 3form Naver Yahoo Mail.ru Google
Question bookmarking Y Y Y Y Y
Question evaluation Y Y N N
Answering your own question Y N N Y Y
Submitting multiple answers Y N N N Y
Search questions asked by others N Y Y Y Y
Search returns how many answers? N/A 1 All All All
How many answers can be selected as best? 1 2 1 1 1
Question open (days) until removed 2-15, default 5 4 answ/1 vote 5 5
Innovation/Selection conc seq seq seq seq
Social networking N Y Y Y Y

Yahoo Answers explicitly forbids you to answer your own question: “You can’t answer your own question.” It also forbids submitting another answer if one is already submitted. 3form and Google allow this. In fact, people can use the Google system as a discussion forum and post comments and additions to previous answers (this requires keeping answers in the order they were received, i.e. it creates a temporal selection bias towards earlier answers).


Unique features

  • 3form doesn’t have subjective and temporal biases of evaluation present in other systems. This improves the chances that the best answers will be selected. The amount of attention each problem receives is proportional to the amount of attention its author (and other people interested in this problem) paid to solving problems of others.
  • Naver’s registration involves cell phone verification. If you want to register, you have to provide your cell phone number to Naver and then input a verification code sent to your phone. This makes sockpuppetry (the practice of establishing several accounts to influence the system) much harder than the simple email verification used in other services. Owners of several cell phone numbers can still have multiple accounts. Another specific thing about Naver is that most of the questions ask for information rather than knowledge. Maybe this partly explains why our question about the first knowledge network was too unusual for Naver, so it didn’t receive any answers.
  • Mail.ru uses cell phone messaging to auction off a limited number of featured questions. Users compete for the limited space by sending multiple text messages to a Mail.ru number from their cell phones (and paying the messaging fees). The questions of people who sent the largest number of messages are featured on the front page. In addition, Mail.ru has a button, “Send thanks to the answerer.” If this button is pressed, a message pops up suggesting to send a text message from a cell phone to a Mail.ru number; each thank-you costs $1 at Mail.ru. Mail.ru allows searching questions; however, this search couldn’t find anything when I entered keywords for my question. I assume it takes time for new questions to become searchable.
  • Google has an extended question exposure statistic: the number of times the question was shown.

A small empirical test

The purpose of this test was not to determine “the best” Q&A site, as in the MIT Tech Review comparison. My purpose is to give some information on what kind of results you can expect from using free knowledge market services. I needed this test mainly to see how the websites work, and I plan to do more extensive testing later.

A recent article at inc.com, How to kill a great idea, claims right from the beginning that “Jonathan Abrams created the first online social network.” Wikipedia suggests that at least Classmates.com and SixDegrees.com were created earlier than Friendster, and both definitely fall into the online social network category. This suggested that the question is non-trivial and might be a good one to post to the free knowledge market sites for the purpose of a small empirical test. I was interested in whether participants of these sites would be able to suggest a site earlier than Classmates.com, or explain why Classmates.com shouldn’t be counted as a social network site. The question also invites problem reformulation (“What are the features of an online social network, in the first place?”) and requires some research. At the same time, it is possible to verify the answers by tracing their references. So here is the question I posted to the 3form, Naver, Yahoo, Mail.ru, and Google services:

What was the first online social network?

I am interested to know what was the name of the first online social network, who implemented it, and when. Thank you for your answers!

3form Free Knowledge Exchange:

  • Wikipedia suggests that it was Classmates.com (1995). Maybe email (with address books) can be thought of as the first online social network. Email has existed since 1972, though I am not sure when the first email address books were implemented.
  • A social network is a social structure made of connected individuals. This definition suggests that the Internet itself is the first online social network.
  • “social networking on the PC based internet preceed the internet on PCs itself (having existed on “walled garden” BBS systems of that time such as Compuserve, AOL, Genie and Prodigy before they connected to the mostly then university and government used internet) being roughly 16-17 years of age as a mass market proposition (obviously early social networking existed on mainframes” link
  • I would say it was The Well. http://en.wikipedia.org/wiki/WELL

Naver Knowledge-iN:

No answers were received from Naver.

Yahoo Answers:

  • I believe it was AOL. Ok, probably not, but that was the first commercially available one.
  • Of course it depends on how you define the term “social networking site”.
  • The worldwide distributed discussion system known as Usenet (whose proper name is netnews) was developed in 1979 by Steve Bellovin, Tom Truscott, and Jim Ellis. I personally regard netnews as the first social networking site; it was certainly the first that relied on the Internet, although it also used a transport called UUCP (unix-to-unix copy).

Mail.ru Answers (translated from Russian):

  • IMHO it is LiveJournal

Google.ru Answers (translated from Russian):

  • Cites a Russian article, “The first social net of the Internet”: “Who was the first? Different sources mention different social networks, but the historical record gives an unambiguous answer: the first social network appeared on the internet in 1995. The website of the social network Classmates.com was opened to users in 1995 by Randy Conrad, the owner of Classmates Online, Inc. The website helped registered users to find and keep connections with friends and contacts with whom one had relationships throughout life: preschool, school, college, work, and military service. Now Classmates.com has more than 40 million registered users from the US and Canada.”
  • In addition to the previous answer: … The term “social network” was introduced by sociologist James Barnes of the Manchester School in his work “Classes and …” (This long answer lists the major social networks in the US and Russia, citing a journal article, “Social Networks in the Internet,” from ComputerPress.)
  • (The third answer lists networks currently popular in Italy, Latvia, and Estonia and seems irrelevant to the question.)

Yahoo was the fastest to provide answers. Google was also very fast and provided the largest number of answers in the first day (3). Two answers from Yahoo and two answers from Google were received less than 15 minutes after the question was posted. The remaining answers came within one day, with no answers in subsequent days, despite the fact that a question stays open for answering for 4 days in Yahoo Answers and 5 days in Google Answers, with no time limit in 3form (a question is kept as long as at least one participant is interested in keeping it). I can speculate that Yahoo and Google use the recency of a question as a criterion to determine its salience/exposure; this would direct most of the answerers’ attention towards recent questions and bring answers to new and easy questions faster, at the expense of older and more difficult questions. My experience is that if a problem is not solved within one day, it is unlikely to be solved in subsequent days in the Yahoo/Google services unless it is reposted. In this situation, multiple reposts will be needed to answer a difficult question, and on each repost the problem solving process will start from scratch without benefiting from older solutions (you might post a link to the old thread in the question to compensate for this). These services seem to be a good way to answer simple questions quickly, but they do not seem suitable for solving problems that are somewhat more difficult. 3form, by contrast, seems more suitable for addressing harder problems that are unlikely to be answered in one day. Of course, more experimentation and research are necessary to arrive at reliable generalizations. This post is just a first step in that direction.

Conclusions

As suggested by their name, “Answers” services are most appropriate for questions that are easy to answer, especially when the answer is needed instantly. They should be your second choice after searching Wikipedia or the web. If a question is not answered within one day, it is unlikely to be answered at all; a good idea is to post it again (maybe at a different site). Most of the answers at these sites arrive within minutes of posting. If your problem requires time, research, and/or creativity, you might have better chances at 3form, which gives people more time to find solutions. 3form is also preferable when (1) you have an ill-defined problem that needs reformulation and/or making assumptions, (2) many other people might be interested in the same problem, or (3) you lack the expertise to select the best solution out of many (other services have strong biases in solution evaluation that often prevent them from selecting the best solutions). Korean knowledge markets, represented here by Naver Knowledge iN, offer the richest set of features. Russian Mail.ru has the most intricate incentive system. In our small test they were not particularly helpful, but their distinctive features seem quite useful.

In summary, if you need to find certain knowledge in English, I would recommend the following sequence of steps: (1) Wikipedia search, (2) Google web search, (3) Yahoo Answers, (4) 3form. Each step requires significantly more time than the previous one. This might change if Google makes its new Q&A service available in English.

Acknowledgments: This text benefitted greatly from the help of Hwanjo Yu and Sang-Chul Lee in collecting information on Korean knowledge markets that is not available in English.

The new Google Q&A service expands to China

Monday, August 20th, 2007

Google’s free knowledge market service was initially only available in Russia (see my short review in the post Google Answers is reborn in Russia). Now China has this service as well. Haochi Chen of Googlified has more details. I am going to post a detailed comparison of five knowledge markets soon, including those of Naver, Yahoo, and Google.

Human-Based Computation at Google

Thursday, August 9th, 2007

Google has recently been actively exploring human-based computation (HBC). HBC is a class of hybrid techniques where a computational process performs its function by outsourcing certain steps to a large number of human participants. HBC is a ten-year-old concept that has become pervasive on the Internet, but it is still perceived by many as new or even revolutionary. Academic research in HBC is still in its initial stages, despite many internet projects and companies exploring these techniques widely. While HBC was developed in the context of evolutionary computation and Artificial Intelligence, it is often perceived as conflicting with the goal and the very term of AI, as HBC often employs natural intelligence (both the creativity and the judgment of humans) in the loop of a computational learning algorithm. The goal of AI is most often understood as creating a machine intelligence that is competitive with that of humans. From the HBC perspective, artificial and natural intelligences don’t have to be competitors; instead, they work best together in symbiosis. HBC is also somewhat outside the traditional focus of Human-Computer Interaction (HCI) research, even though it is perfectly compatible with the literal meaning of the HCI term. It reverses the traditional assignment of roles between computer and human. Normally a person asks a computer to perform a certain task and receives the result. In HBC it is often the other way around. As a result, some traditional concepts and terminology used in the AI and HCI fields may create difficulties when thinking about HBC.

Probably for the reason described above, Google was also somewhat late to explore this field. It preferred pure AI and data mining techniques to hybrid human-computer intelligence. From its inception, Google used human judgment expressed in the link structure of the web as input data for its algorithms. This is, however, different from outsourcing algorithmic functions to humans, which is the main feature of HBC. Matt Cutts, a search quality engineer at Google, said: “People think of Google as pure algorithms, we’ve recently begun trying to communicate the fact that we’re not averse to using some manual intervention. … Google does reserve the right to use humans in a scalable way” (read the full InfoWorld article here). Google introduced voting buttons into its toolbar to collect user evaluations of web pages and help remove spam from the search results. However, Google wasn’t fully exploring the potential of HBC until very recently. This is quickly changing now, as Google begins to understand the potential of the technique and is willing to test various ways to allow humans not only to evaluate, but also to contribute and modify existing content. This kind of testing mostly happens outside the US. A possible reason may be that Google perceives these as high-risk projects: the experimental features Google offers in the US seem to be much more conservative.

In my last post, I described the Google Questions and Answers service being tested by Google Russia (I am going to review it in more detail in one of my next posts and compare it more systematically with other similar services). More recently, Google UK has been testing HBC as a way to improve the ranking and coverage of its search results. Mike Grehan noticed that Google UK now allows some users to add URLs to a set of relevant search results: “Know of a better page for digital cameras? Suggest one!” (my thanks for finding this post go to Haochi Chen of Googlified). Members of 3form will find this new Google interface very familiar, as it is nearly the same interface that 3form has used to evolve solutions to problems for about ten years now, except that Google doesn’t provide an easy way to choose the most relevant option among those already displayed (submitting one of the already displayed URLs into the suggestion box will probably work, though it is not as convenient as selecting one).

Several bloggers referred to the new feature as the beginnings of Google’s social tagging/bookmarking, a response to recent projects attempting to build open-source social search engines that allow people to edit search results, like Wikia Search and Mahalo. Google’s new feature can indeed turn Google search into a social tagging/bookmarking tool, as the query is essentially a set of tags for the contributed URL (if any), so the information Google receives through this service is essentially the same as what users contribute to del.icio.us or any other similar service.

Google Answers is reborn in Russia

Thursday, June 28th, 2007

Google today announced the launch of the beta version of its Q&A service (formerly Google Answers; see my previous post Google says adieu to Google Answers from November last year).

Today we are launching the beta version of “Questions and Answers”. The new Google service, where you can ask a difficult question on a topic that interests you, get answers from other users, and earn points by responding to other people’s questions. Or you can just chat with smart people :). We are particularly pleased to announce that Russia is the first country in the world where we are launching this service; it is not even available to English-language users.

It is remarkable that Google chose Russia to test the new service, as Russia is the country where the concept of this kind of service originated. For me as a researcher, the most interesting part was the details of the implementation of the new Google service and how they differ from the existing services. By launching the service in Russia, Google gave me an advantage in reviewing it, as I am a native speaker of Russian.

The Google service is pseudonymous, like those of Naver and Yahoo: the nicknames of the authors of questions and answers are shown right next to their contributed content. Google uses automatic question tagging and a search box on top to help users find questions.

As with most services of this kind, Google provides an incentive system to motivate people to answer questions. It is based on assigning points for actions as follows:

Action Points
Registration +100
Visit +5
Posting a question -Bonus
Posting an answer +2
Posting an evaluation +1

Google encourages users to visit often. It is quite curious that Google’s reward for a visit is higher than for posting an answer. This seems counterintuitive to me and is unique among similar services. Points are also assigned based on the results of human evaluation:

Evaluation Points
Best answer +Bonus
Excellent evaluation (5 stars) +10x
Good evaluation (4 stars) +5x
OK evaluation (3 stars) +1x
Bad evaluation (2 stars) -3x
Awful evaluation (1 stars) -5x

Here x is the logarithm base 10 of the number of evaluations. The system of levels is quite Google-style:

Level Score
Newborn 0 ~ 2^8
Child 2^8+1 ~ 2^9
Elementary school student 2^9+1 ~ 2^10
Middle school student 2^10+1 ~ 2^11
Freshman 2^11+1 ~ 2^12
Senior 2^12+1 ~ 2^13
Graduate 2^13+1 ~ 2^14
Professor 2^14+1 ~ 2^15
Department head 2^15+1 ~ infinity
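A minimal sketch of this scoring, assuming the multipliers and power-of-two level thresholds from the tables above, and plain log10 with no rounding (the rounding Google actually applies is unknown to me):

```python
import math

# Multipliers per star rating, from the evaluation table above.
STAR_MULTIPLIER = {5: 10, 4: 5, 3: 1, 2: -3, 1: -5}

def evaluation_reward(stars, num_evaluations):
    """Points for an answer: multiplier times x, where x is log10 of
    the number of evaluations received (assumes at least one)."""
    x = math.log10(num_evaluations)
    return STAR_MULTIPLIER[stars] * x

# Upper score bound of each level, per the table above.
LEVELS = [
    ("Newborn", 2**8), ("Child", 2**9), ("Elementary school student", 2**10),
    ("Middle school student", 2**11), ("Freshman", 2**12), ("Senior", 2**13),
    ("Graduate", 2**14), ("Professor", 2**15),
]

def level_for_score(score):
    """Map a score onto the power-of-two level thresholds."""
    for name, upper in LEVELS:
        if score <= upper:
            return name
    return "Department head"

print(evaluation_reward(5, 100))  # 20.0, i.e. 10 * log10(100)
print(level_for_score(300))       # Child (2^8+1 .. 2^9)
```

Note the doubling thresholds: each level requires twice the score of the previous one, so progress gets exponentially harder, a common design for long-term engagement.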

There are three main navigation buttons: Ask, Find, and Tags. Below them there is a list of open questions (with no answers shown). On the right side there is a list of popular tags and a list of top users. There are also tabs for “closed questions” and “all questions.” I chose the question “What is web 2.0?”. It has five answers:

  1. AJAX and Google (average score 2.5)
  2. An attempt to direct the flock into the required direction (average score 3.5)
  3. (the best answer) The term appeared in the article What is Web 2.0 by Tim O’Reilly … Despite the fact that the meaning of this term is often disputed, the people who assert the existence of web 2.0 distinguish several main aspects of this phenomenon (average score 5.0)
  4. Oh, yes + reference to the wikipedia article about web2.0 (average score 3.0)
  5. I will write about it in my blog soon. I was sceptical about it, but now accepted it. This is a constellation of technologies, and Ajax is far from the first place. I would put RSS and Atom into the first place.

Apparently, Google Q&A uses a five-star system to evaluate the answers, and the average evaluation determines the winning answer. It is not possible to add another answer once the question is closed, and there is no way to reopen it (similarly to Naver and Yahoo Answers).
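Assuming a plain arithmetic mean over star ratings (my reading of the interface, not a documented rule), the selection can be sketched as:

```python
def best_answer(answers):
    """Pick the answer with the highest average star rating.
    `answers` maps answer text to a list of 1-5 star votes."""
    averages = {a: sum(votes) / len(votes) for a, votes in answers.items()}
    return max(averages, key=averages.get)

# Hypothetical votes mirroring the example question above.
votes = {
    "AJAX and Google": [2, 3],                               # average 2.5
    "The term appeared in Tim O'Reilly's article": [5, 5],   # average 5.0
    "See the wikipedia article": [3, 3],                     # average 3.0
}
print(best_answer(votes))  # the O'Reilly answer wins with average 5.0
```

Note that averaging star ratings is still vulnerable to the subjective and positional biases discussed earlier, since early answers tend to accumulate more votes.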

After a quick look at the new Google Q&A service, I can say that technologically it closely resembles Naver and Yahoo Answers. I found the main differences in reward structure, style, and user interface. Google seems to have a cleaner user interface, which would be a reason to prefer the Google service over the others, everything else being equal. The effectiveness of such social search services depends on the community of participants they attract and on the efficiency of the technology supporting the exchange of knowledge. Between two technologically very similar services, like those provided by Yahoo and Google, the one that builds a more diverse and motivated community of participants will be able to provide the better service.

Scientific knowledge markets: the case of InnoCentive

Sunday, May 27th, 2007

Today I came across the HBS working paper “The Value of Openness in Scientific Problem Solving” by Karim Lakhani, Lars Jeppesen, Peter Lohse, and Jill Panetta (a link to the 58-page PDF is here). The paper studies InnoCentive, a knowledge market similar to 3form that corporations use to solve research problems left unsolved by their corporate R&D labs.

InnoCentive was founded by Eli Lilly & Company in 2001 and shares significant similarity with 3form in organizing the distributed problem solving process, except that it does not broadcast the solutions it receives, keeping them private for the corporation that posted the respective problem. As a result, the innovation process at InnoCentive, while distributed, is not open: the solvers can’t modify or recombine solutions proposed earlier, or learn from them, as they do at 3form. However, the working paper shows that sharing problems by itself has many advantages over the traditional corporate practice of keeping them closed.

We show that disclosure of problem information to a large group of outside solvers is an effective means of solving scientific problems. The approach solved one-third of a sample of problems that large and well-known R & D-intensive firms had been unsuccessful in solving internally.

There are many interesting observations in this paper that might be relevant to 3form as well and are likely to be interesting to the members of 3form community.

Problem-solving success was found to be associated with the ability to attract specialized solvers with range of diverse scientific interests. Furthermore, successful solvers solved problems at the boundary or outside of their fields of expertise, indicating a transfer of knowledge from one field to others.

Here are the results I found the most interesting:

  • the diversity of interests across solvers correlated positively with solvability, however, the diversity of interests per solver had a negative correlation
  • the further the problem was from the solvers’ field of expertise, the more likely they were to solve it; there was a 10% increase in the probability of being a winner if the problem was completely outside their field of expertise
  • the number of submissions is not a significant factor of solvability
  • very few solvers are repeated winners

The authors of the HBS paper draw an analogy to local and global search to explain the effectiveness of problem broadcasting. They suggest that each solver performs a local search, implying that broadcasting the problem to outsiders makes the search global (“broadcast search” in the authors’ terminology). Indeed, if solvers don’t have access to the solutions of other solvers (the case at InnoCentive), all they can do is a local search (hill climbing). From a computational perspective, the InnoCentive problem solving process is analogous to hill climbing with random restarts: each new solver performs a local search and returns a locally optimal solution; finally, the best of those locally optimal solutions determines the winner.
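This analogy can be sketched as follows; the objective function, neighborhood, and solver count below are toy placeholders of my own, not anything from the paper:

```python
import random

def hillclimb(objective, start, neighbors, steps=1000):
    """Local search: one solver improving a solution until stuck."""
    current = start
    for _ in range(steps):
        candidate = max(neighbors(current), key=objective, default=current)
        if objective(candidate) <= objective(current):
            break  # local optimum reached
        current = candidate
    return current

def broadcast_search(objective, random_start, neighbors, num_solvers=20):
    """InnoCentive-style process: each outside solver runs an independent
    local search from a random start; the best local optimum wins."""
    local_optima = [hillclimb(objective, random_start(), neighbors)
                    for _ in range(num_solvers)]
    return max(local_optima, key=objective)

# A toy rugged objective on the integers 0..100 with several local optima.
f = lambda x: (x % 17) + x // 10
random_start = lambda: random.randrange(100)
step = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 100]

best = broadcast_search(f, random_start, step)
print(best, f(best))  # one of the local optima; usually the global one
```

The diversity findings above map naturally onto this picture: solvers starting from different regions of the search space (different fields of expertise) reach different local optima, which is exactly what makes the broadcast effective.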

Linkedin Answers: a knowledge market in a social network

Tuesday, January 30th, 2007

Linkedin Answers is a knowledge market by Linkedin, similar to those of 3form, Naver, Yahoo, and others. However, there are three things that make Linkedin Answers unique: its user base, its use of the network topology, and its ‘suggest an expert’ feature.

  • Linkedin is a social network for business people and professionals, so due to its positioning many of its users are experts in their specific areas. Most experts have to specialize to gain deep knowledge of a certain area; as a result, they sacrifice knowledge outside of it. This division of labor creates an ideal environment for a knowledge market.
  • Linkedin uses its social network to determine whom to ask: the idea is that you will be more willing to answer the questions of people connected to you. In a sense, this is an affinity market. Relatedness to the person asking a question gives stronger motivation and more context to provide a better answer. Using the network neighbourhood for question distribution is an interesting innovation of Linkedin.
  • The “suggest an expert” feature is another innovation of Linkedin Answers. If you don’t have a solution to the problem posted by your neighbour, you can at least suggest someone who knows the answer. This can help resolve the problem and also provides extra information about people’s expertise that can later be used for better targeting of specific questions. This is a great feature that was absent in previous implementations of knowledge markets.
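
Linkedin has not published how its question routing actually works; as a purely hypothetical sketch (the graph layout, function name, and recipient limit are all my assumptions), neighbourhood-based distribution could be a breadth-first traversal that asks direct connections before second-degree ones:

```python
from collections import deque

def route_question(graph, asker, max_recipients=5):
    """Hypothetical sketch: pick recipients by breadth-first search from
    the asker, so closer connections are asked before more distant ones."""
    seen, order = {asker}, []
    queue = deque(graph.get(asker, []))
    while queue and len(order) < max_recipients:
        person = queue.popleft()
        if person in seen:
            continue
        seen.add(person)
        order.append(person)                 # ask this person
        queue.extend(graph.get(person, []))  # then consider their connections
    return order

# Toy network: asker "a" is directly connected to "b" and "c".
graph = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": [], "e": []}
recipients = route_question(graph, "a", max_recipients=3)
```

On the toy graph, the asker’s direct connections "b" and "c" are selected before the second-degree contact "d", which matches the affinity-market intuition that nearer connections are more motivated to answer.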

Combining a knowledge market with a social network is not an idea original to Linkedin. I have an earlier blog post, Wisdom of web 2.0, discussing the potential evaluation problems this combination creates: subjective bias and information cascades. Linkedin, on the other hand, has been quite innovative in exploring the advantages of such a combination.

Previously, when people asked my opinion about the Linkedin network, I told them that Linkedin makes it easy to export your real-world connections, but doesn’t make it easy to discover new ones. I think Linkedin Answers adds that discovery element and makes Linkedin a much more valuable resource by tapping the collective intelligence and creativity of its members.

If you decide to join Linkedin, Guy Kawasaki provides some good advice:

How to discover the best people?

Saturday, January 6th, 2007

The New York Times published an article Google Answer to Filling Jobs Is an Algorithm (also available here). The article describes new algorithmic approaches for people selection adopted by Google.

It is starting to ask job applicants to fill out an elaborate online survey that explores their attitudes, behavior, personality and biographical details going back to high school.

The questions range from the age when applicants first got excited about computers to whether they have ever tutored or ever established a nonprofit organization.

The answers are fed into a series of formulas created by Google’s mathematicians that calculate a score — from zero to 100 — meant to predict how well a person will fit into its chaotic and competitive culture.

I didn’t apply for a job at Google, but I had some experience with their methods of people selection last summer. Google uses a proactive approach to hiring: in particular, they actively contact new Ph.D.s and invite them to phone interviews. Google recruiters found my resume on the web, and I was invited to three phone interviews and agreed to participate. The interviews were about 30 minutes each. There were sessions of multiple-choice questions and a problem-solving session in which I was asked to program an algorithmic solution on a piece of paper and dictate the result back to the interviewer. I found that recruiting techniques were not one of Google’s strong areas, and its approaches were far from innovative. I was puzzled that a company like Google couldn’t create a simple web application to administer those multiple-choice questions, or outsource the whole thing to a company that does it better (e.g. Brainbench). Now I see that Google is beginning to entertain the same thoughts, and maybe something will change:

“As we get bigger, we find it harder and harder to find enough people,” said Laszlo Bock, Google’s vice president for people operations. “With traditional hiring methods, we were worried we will overlook some of the best candidates.”

Last month, Haochi Chen and Christian Binderskagnaes (googlified.com) discovered Google Online Assessments, which might be a new Google tool to assess people’s skills: “The purpose of this website is still something of a secret, but it’s going to be great, whatever it is.”

We will see how good Google’s new algorithmic approach to skill assessment turns out to be. They will certainly be more efficient this way, saving their employees’ time and phone bills. But can they also be more effective? I don’t know the answer to this question. Multiple-choice questions still have a fundamental limitation: they don’t allow participants to manifest their creativity, because they provide no space for a creative solution. They test only the ability of judgement.

Another point is well taken by this reddit review:

You are creating a society within a society where you weed out undesirables using a simple algorithm. The problem is … whether creativity and innovation can rise out of homogeneity, even the type of homogeneity that Google is practicing.

Human innovation has evolutionary dynamics: cycles of change and selection. My research suggests that innovation and creativity are manifestations of an underlying evolutionary process. In this case, diversity is crucial, as it is one of the main prerequisites for evolution. This is also supported by experimental research suggesting that diverse teams of ordinary individuals can outperform homogeneous teams of elite individuals (Hong and Page, 2004). So, from an evolutionary point of view, the loss of diversity is quite dangerous. Google shares this problem with many top universities.
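
The Hong and Page result can be illustrated with a toy simulation in the spirit of their model (the ring landscape, the step-size heuristics, the team sizes, and the scoring below are my own illustrative choices, not the paper’s actual setup): agents search a rugged landscape using different heuristics, and a team of randomly picked agents is compared against a team of the individually best ones.

```python
import random
from itertools import permutations

random.seed(1)
N = 200
landscape = [random.random() for _ in range(N)]  # ring of N random values

def climb(heuristic, start):
    """An agent's local search: from the current position, try each step
    size in its heuristic (clockwise on the ring) and move on improvement."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def team_climb(team, start):
    """Relay search: agents take turns continuing from the best point so far."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in team:
            new = climb(h, pos)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

def avg_value(search, trials=100):
    starts = random.sample(range(N), trials)
    return sum(landscape[search(s)] for s in starts) / trials

# An agent is a heuristic: an ordered triple of distinct step sizes.
agents = random.sample(list(permutations(range(1, 13), 3)), 50)
ability = {h: avg_value(lambda s, h=h: climb(h, s)) for h in agents}
elite = sorted(agents, key=ability.get, reverse=True)[:8]  # best individuals
diverse = random.sample(agents, 8)                         # random individuals
elite_score = avg_value(lambda s: team_climb(elite, s))
diverse_score = avg_value(lambda s: team_climb(diverse, s))
```

Because the individually best agents tend to use similar heuristics, they get stuck at the same local optima; on runs of this kind the randomly assembled team often matches or beats the elite one, which is the diversity effect the paper formalizes.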

From another point of view (my research in social synthesis), diversity is just one way to increase the chances of achieving the complementarity of resources needed for synergetic exchange. For example, if the backgrounds of two people are too similar, they may have few misunderstandings, but they don’t have much chance to benefit from mutual learning. On the other hand, if their interests are complementary, they have a great opportunity to learn from each other, provided they can overcome their misunderstandings.

See also my previous post suggesting another approach to employee selection, one that makes it possible to identify creative solutions and people.

Reference:

Lu Hong and Scott E. Page (2004) Groups of diverse problem solvers can outperform groups of high-ability problem solvers, Proceedings of the National Academy of Sciences, 101(46), 16385-16389 [link]

So how do you pick good programmers if you’re not a programmer?

Friday, January 5th, 2007

I borrowed a title of this post from Paul Graham’s essay The 18 Mistakes That Kill Startups:

So how do you pick good programmers if you’re not a programmer? I don’t think there’s an answer. I was about to say you’d have to find a good programmer to help you hire people. But if you can’t recognize good programmers, how would you even do that?

It follows logically that a software engineering startup should have at least one good programmer to begin with.

This was the case (at least we, the founders, believed so) for our first startup in 1991. But we faced an opposite yet analogous problem: “How can our new startup pick a good accountant if we are not accountants?” The best solution we found at the time was to pick one of us to learn accounting. This general problem is common to many new startups: “How can a group of people pick a good expert in an area where none of the existing members has any expertise?” The previous solution becomes more difficult if you need many experts in different areas and have only a few founders.

I found another solution unexpectedly several years later. I was using multiple-choice tests to screen candidates for employment, and I was unsatisfied with the inability of such tests to identify creative people, exactly the kind of people we wanted to hire. The problem is that multiple-choice questions offer no opportunity for a creative answer. Therefore, I was experimenting with ways of accepting and efficiently grading write-in answers. My friends also asked me to prepare some finance questions to test candidates for their company. While helping them and writing questions, I realized that this work could easily be outsourced to the candidates themselves. Not only would it save my time, it would also save theirs: instead of solving useless test problems that I made up for them, they would solve real problems that their peers needed to solve anyway. In other words, it would not be testing for testing’s sake, but a collaborative problem-solving process that tests expertise as a side effect: we invite many candidates and let them ask each other questions about problems they are facing, submit solutions to those problems, and peer-review and evaluate the submitted solutions, concurrently and anonymously, using a web service.
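
A minimal sketch of the scoring side of such a process (the data layout, names, and the mean-rating rule below are illustrative assumptions, not the actual mechanism used at 3form): candidates’ solutions are rated anonymously by their peers, and an expertise score is derived from those ratings.

```python
from collections import defaultdict
from statistics import mean

# (author, rating) pairs: each anonymous peer review of a submitted
# solution, recorded against the solution's author.
reviews = [
    ("alice", 5), ("alice", 4),
    ("bob", 2), ("bob", 3),
    ("carol", 4), ("carol", 5), ("carol", 5),
]

def expertise_scores(reviews):
    """Score each candidate by the mean peer rating of their solutions."""
    by_author = defaultdict(list)
    for author, rating in reviews:
        by_author[author].append(rating)
    return {author: mean(ratings) for author, ratings in by_author.items()}

scores = expertise_scores(reviews)
ranked = sorted(scores, key=scores.get, reverse=True)
```

The appeal of the scheme is that the same peer reviews serve double duty: they rank the candidates and, as a by-product, build a repository of real problems and vetted solutions.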

At the very beginning, I was not sure that the idea would work. But when it was finally implemented, it turned out surprisingly effective. Not only did it discover good experts, it also created a useful repository of problems and solutions, some of them very innovative. Some participants suggested that the knowledge exchange and learning experience the system provides are no less important than the test results. This led to the creation of this website in summer 1998: 3form Free Knowledge Exchange was created to give everyone the opportunity to participate in this process. Over several years, I did research attempting to explain the operation of this system in terms of human-based evolutionary computation and to identify the factors that contribute to the efficiency of such computation. The model I developed is equally applicable to the processes happening in another evolving knowledge repository, Wikipedia.

Web service needs and business trends

Sunday, December 31st, 2006

Richard MacManus has a post about the biggest web trend predictions for 2007. It is somewhat relevant to my previous post, What internet services do you need? That post was about the needs people have, regardless of whether any entrepreneurial trend exists to satisfy them. On the other hand, needs determine many future trends, even if it often takes years for a need to become a trend. So I decided to post here the list of top needs that 3form members would like internet entrepreneurs to address:

  1. Free Internet University, where you can take courses on any subject of your interest, without any obligations or registration. This would allow many people to discover the fun of learning for themselves.
  2. Internet facility should be more free, commercially, and more magnanimous in nature like, the ocean, air, sun for the use by the global human-race as the knowledge is the natural heritage of mankind. True, not in empty stomach but commerce should be ‘just’ for the ‘just’ food-shelter-safety, with an attitude of spiritualizing all the act of our lives.
  3. Free searchable repository of common algorithms (e.g. in C++)
  4. A service that would help you to discover your talents by comparing your abilities with abilities of other people in different areas and telling you what unique combination of skills you possess.
  5. Free email forwarding service with spam-filtering capability. It would even be ok for it to insert 1% of its own advertisements once in a while, as long as it removed the 99% of spam I am receiving.

Again, I will try to list some services that partially satisfy those needs (and you can help me with your comments):

  1. Wikipedia and MIT Open courseware
  2. Google wifi and featherwifi.net
  3. Google code search
  4. Bix.com, recently a part of Yahoo
  5. Gmail, Ymail, and most online services provide spam filtering, but it only works if you read your mail online

Although there are services addressing these needs partially, the supply doesn’t seem to fit the demand adequately at this time.

What internet service do you need?

Sunday, December 31st, 2006

I stumbled on the question “What internet service [do] you need, but still searching… ?” at 3form today. Somebody posted this question back in 1999. As it is one of those questions that are of continued interest to many people (esp. internet entrepreneurs), it pops up from time to time; here is the version that I got today:

Q4: What internet service you need, but still searching… ?

  1. There are many knowledge only available in the book today, but you can’t find them on the web. For example, photoshop tutorial. So far, I think the free online tutorial can’t compare with those in the book.
  2. Internet facility should be more free, commercially, and more magnanimous in nature like, the ocean, air, sun for the use by the global human-race as the knowledge is the natural heritage of mankind. True, not in empty stomach but commerce should be ‘just’ for the ‘just’ food-shelter-safety, with an attitude of spiritualizing all the act of our lives.
  3. Collaborative platform like www.wiki.org or www.twiki.org that is scalable, in a way that allows people to edit the same pages simultaneously without waiting to obtain lock, etc.
  4. Search engine that actively uses feedback from its multiple users to arrive at better search results.
  5. <A place to add your option>

All these options were posted several years ago. It is interesting to look at how those needs have been addressed by internet entrepreneurs and businesses since these options were posted at 3form. For every option, at least one internet service now provides or aims to provide it:

  1. The closest service I know is Google book search, but the quality of online tutorials improved as well (maybe partly due to Google Adsense providing an easy way to monetize and reward high quality content)
  2. There are some initiatives that are aimed to provide free internet service, like Google wifi and Featherwifi.net. In addition, many people leave their wireless routers open to share their internet access with others, esp. in Boston, however, the Internet is still not free in most places
  3. Mediawiki software by Wikimedia foundation solved this and removed many barriers to mass collaboration present in early wikis. It is now powering Wikipedia and many other wiki sites.
  4. I don’t know a search engine that is doing exactly this, but Google started experimenting with user evaluation recently (see also this blog post), maybe there are other search engines playing with these ideas. Please post a comment if you know more. This need, however, is more fully satisfied by websites that don’t call themselves search engines: digg and stumbleupon, both heavily based on user evaluation.

Only one of these four needs that 3form members identified and reported has been fully satisfied over these years (the one about locks in wikis). It is also quite interesting that the need was satisfied not by a business, but by a non-profit organization. Honestly, I am so used to mediawiki software now that I almost forget that early wikis were quite different from what we call a wiki today. In an early wiki, you had to obtain a write lock to edit an article, and other users couldn’t edit it while the lock was in place. This mechanism is feasible only for small groups of people editing infrequently. Early wikis also didn’t have revision history, so there was no reverting mechanism, which has proven crucial for dealing with vandalism in Wikipedia.
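
The lock-free editing model can be sketched as optimistic concurrency with a revision history (a generic illustration of the idea, not MediaWiki’s actual implementation): a save succeeds only if it was based on the latest revision, and every revision is kept, so reverting is a one-step operation.

```python
class Page:
    """Toy wiki page with optimistic concurrency and revision history."""
    def __init__(self, text=""):
        self.history = [text]  # every revision is kept, so reverts are trivial

    @property
    def latest(self):
        return len(self.history) - 1

    def save(self, base_revision, new_text):
        """Succeed only if the edit was based on the latest revision;
        on conflict the caller merges with the newer text and retries."""
        if base_revision != self.latest:
            return False  # edit conflict: note that no lock was ever held
        self.history.append(new_text)
        return True

    def revert(self, revision):
        """Restore an old revision (e.g. to undo vandalism)."""
        self.history.append(self.history[revision])

page = Page("first draft")
base = page.latest                    # two editors read revision 0 concurrently
assert page.save(base, "edit A")      # first save wins
assert not page.save(base, "edit B")  # second save conflicts, must merge
page.revert(0)                        # vandalism-style undo: back to the draft
```

Unlike write locks, this lets any number of people edit concurrently: conflicts are detected only at save time, and a kept history makes every change reversible.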

It still takes years to go from realizing a need to satisfying it. One of the goals of the 3form Free Knowledge Exchange project is to shorten this period and make innovation happen faster. I hope next year will bring more cool internet services that help us satisfy our needs better.

Happy New Year!