
Insights on Collective Problem-Solving, Part 2: StackExchange and Online Q&A

Henry Farrell — April 25, 2016

Over the last two years, a group of scholars from disciplines including political science, political theory, cognitive psychology, information science, statistics and computer science have met under the auspices of the MacArthur Foundation Research Network on Opening Governance. The goal of these meetings has been to bring the insights of different disciplines to bear on fundamental questions of collective problem-solving. How do we best solve collective problems? How should we study and think about collective intelligence? How can we apply these insights to real-world problems? A wide body of work suggests that complex problems are most likely to be solved when people with different viewpoints and sets of skills come together. By the same token, we can expect the science of collective problem-solving itself to improve when people from diverse disciplinary perspectives work together to generate new insights on shared problems.

Kristina Lerman is an Associate Research Professor in the Computer Science Department at the University of Southern California’s Viterbi School of Engineering. One of her major research interests is online problem-solving sites such as StackExchange, where people ask questions aimed at solving, for example, programming problems, and others try to help them. These sites provide rich data that we can use to study problem-solving in the wild. I asked her a series of questions aimed at discovering what we can learn from StackExchange, and how its users face characteristic problems in identifying good answers.

Henry: What is StackExchange?

Kristina: StackExchange is a question-and-answer forum where participants ask questions on a variety of topics for other participants to answer.

Henry: What can researchers interested in collective problem-solving learn from studying it and other similar sites?

Kristina: StackExchange has many features for collaborative knowledge creation, which help the community as a whole discover the best questions and answers. Specifically, participants can vote for answers (and questions) they find helpful, and question askers can accept the answer they consider the best. StackExchange then prominently features the answers the community liked, making the best answers “float to the top,” so to speak.
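
To make that mechanism concrete, here is a minimal Python sketch of how a vote-driven forum might order answers. This is my own illustration rather than StackExchange’s actual code; the Answer type and the rank_answers function are assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        upvotes: int
        downvotes: int
        accepted: bool = False

    def rank_answers(answers):
        """Pin the accepted answer to the top, then order the rest by
        net score (upvotes minus downvotes), highest first."""
        return sorted(answers,
                      key=lambda a: (a.accepted, a.upvotes - a.downvotes),
                      reverse=True)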

Henry: Your research finds that the more answers there are to a given question, the more likely it is that users will rely on simple heuristics to figure out what the best answers are. Which heuristics do they use, and why might this be problematic?

Kristina: The order in which the answer is presented affects how many votes it receives. That’s because people pay more attention to items in top positions – this is known as position bias to psychologists. People are also more likely to choose answers that occupy a larger fraction of screen space as the best answer. We link this to an ‘availability heuristic’ – wordier answers are considered more salient. Next in importance is social proof bias – after an answer is accepted by the asker, voters are more likely to vote for it than before. While these heuristics could sometimes be useful proxies for answer quality – for example, wordier answers may be more comprehensive, that’s why people like them – it might be better for participants to read and evaluate all answers to the question, not just to automatically choose the most visible answer.
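
One way to see how such biases could show up in data is to predict whether an answer receives a vote from its screen position, its length, and whether it has been accepted. The sketch below is a toy example with synthetic vote logs, not Lerman’s actual analysis; the feature names, the simulated effect sizes, and the data itself are assumptions made for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for real vote logs: one row per answer impression.
    rng = np.random.default_rng(0)
    n = 5000
    position = rng.integers(1, 11, n)   # rank on the page (1 = top)
    length = rng.integers(1, 21, n)     # answer length, in hundreds of characters
    accepted = rng.integers(0, 2, n)    # 1 if the asker accepted the answer

    # Bake in the biases described above: top placement, more text, and the
    # accepted check mark all raise the chance that a viewer upvotes the answer.
    logit = 0.5 - 0.3 * position + 0.08 * length + 0.7 * accepted
    voted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    # Recover the biases as regression weights on the three features.
    X = np.column_stack([position, length, accepted])
    model = LogisticRegression(max_iter=1000).fit(X, voted)
    print(dict(zip(["position", "length", "accepted"], model.coef_[0])))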

Henry: How can the builders of collective problem-solving sites and institutions mitigate these problems?

Kristina: Knowing the limitations of collective computation points to designs that could mitigate them. For example, perhaps StackExchange should initially randomize the order in which answers are presented, to ensure that all answers are thoroughly evaluated. As enough evidence about answer quality accumulates, it could then rank-order the answers to make it easier to discover good ones.
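
A minimal sketch of that idea, assuming a fixed vote threshold of my own choosing and a simple dictionary representation of answers (this is an illustration, not an actual StackExchange feature): shuffle the display order while vote evidence is sparse, then switch to ranking by net score once enough votes have accumulated.

    import random

    def display_order(answers, min_total_votes=30):
        """Randomize the display order while vote evidence is sparse; once
        enough votes have accumulated, rank answers by net score.
        Each answer is assumed to be a dict with "upvotes" and "downvotes" counts."""
        total_votes = sum(a["upvotes"] + a["downvotes"] for a in answers)
        if total_votes < min_total_votes:
            shuffled = answers[:]
            random.shuffle(shuffled)
            return shuffled
        return sorted(answers,
                      key=lambda a: a["upvotes"] - a["downvotes"],
                      reverse=True)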

Click here for Part 1.