On May 27th and 28th, a group of scholars met under the auspices of the MacArthur Foundation's Opening Governance Research Network at the Center for Advanced Study in the Behavioral Sciences at Stanford University. Over the past several years, researchers in statistics, computer science, political science, economics, and political theory have confronted fundamental problems of collective intelligence and problem solving in different ways. These are the problems that plague open problem solving more generally; solving them will require not only practical experiments but also serious, rigorous thinking across the academic disciplines. The workshop was designed to bring people from very different backgrounds together. Each participant was invited to write a short paper on key problems in collective intelligence, and participants were then grouped so as to encourage conversation and argument across different perspectives. The workshop thus treated the question of how best to harness multiple perspectives for problem solving as itself a problem best solved by bringing multiple perspectives together. The high quality of the ensuing conversation demonstrated the plausibility of this approach. Over the two days, the conversation identified some key questions for the broader enterprise of building a science (or sciences) of collective problem solving.

We need a new science of institutions

Existing approaches to institutions tell us very little about how good institutions are at problem solving, because they tend to assume simple, straightforward problems. Collective problem solving, however, has to start from the premise that many important problems are fundamentally complex. We cannot simply assume away the difficulty of problems, whether we are political theorists, economists, data scientists, or, most importantly, policy makers. Complex problems have many moving parts, which interact in highly complicated ways. We don't have good theories of when institutions will or will not be good at helping people solve such problems. We do have theories suggesting that the best resources for problem solving are the different perspectives and understandings of a problem distributed across the human population; bringing these perspectives to bear so that they usefully inform each other will help us better solve truly complex problems. What we need is a computational theory of institutions: an account of how well different institutions host or facilitate the social computation that extracts, organizes, and makes use of information dispersed across the population. Such a theory would ideally distinguish between different kinds of problems, offering guidance as to which institutional framework (and perhaps which population of problem solvers) is best suited to which problem.
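The claim that dispersed perspectives are a problem-solving resource can be made concrete with a toy model. The sketch below is a minimal illustration in the spirit of heuristic-diversity models (not any participant's actual model): agents with different "perspectives" (different sets of step sizes) hill-climb on the same random landscape, and a team that pools its heuristics relay-style reaches solutions at least as good as the best individual agent. All names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)

N = 200  # a circular landscape of candidate solutions, each with a random value
landscape = [random.random() for _ in range(N)]

def climb(start, heuristic):
    """Hill-climb from `start`, trying each step size in `heuristic`
    (a tuple of forward jumps) until no step improves the solution."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def team_climb(start, heuristics):
    """Agents take turns applying their heuristics to a shared solution,
    relay-style, until no agent can improve it further."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in heuristics:
            new = climb(pos, h)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

heuristics = [(1, 2), (3, 5), (7, 11)]  # three different 'perspectives'
starts = range(N)

solo_best = max(
    sum(landscape[climb(s, h)] for s in starts) / N for h in heuristics
)
team_avg = sum(landscape[team_climb(s, heuristics)] for s in starts) / N

print(f"best solo agent: {solo_best:.3f}")
print(f"diverse team:    {team_avg:.3f}")
```

Because the team can always apply any single member's full climb, its result is guaranteed to be at least as good as the best individual heuristic from the same starting point; the interesting empirical question, which such models are used to probe, is how large the gap is and when it closes.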

Such a theory will require better accounts of learning and innovation

We do not know very much about how human beings actually learn from each other and engage in problem solving in the real world. Our knowledge is a mishmash of experimental results and agent-based modeling at the micro level, and simulations of social influence and contagion at the macro level. Social media provide swathes of new data that let us see how human beings solve problems in the wild, but current statistical techniques have enormous difficulty drawing helpful inferences from these data. What we need are: better tools of statistical inference; better experimental evidence about what works and what does not; better methods for simulating problem solving; and better feedback between academia and policy makers about what works and what does not.
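To show what the micro-level simulations mentioned above look like in miniature, here is a hedged sketch of a standard social-learning model (not one endorsed by the workshop): agents on a ring network either copy the best solution visible among their neighbours or experiment on their own, and the population's mean solution quality is tracked over time. The network shape, copy probability, and scoring are all illustrative assumptions.

```python
import random

random.seed(1)

n_agents = 50
rounds = 30
p_copy = 0.8  # probability an agent copies a neighbour vs. experiments alone

# each agent holds a 'solution quality' score in [0, 1]
quality = [random.random() for _ in range(n_agents)]

def neighbours(i):
    # ring network: each agent observes its two adjacent neighbours
    return [(i - 1) % n_agents, (i + 1) % n_agents]

history = [sum(quality) / n_agents]
for _ in range(rounds):
    new = quality[:]
    for i in range(n_agents):
        if random.random() < p_copy:
            # social learning: adopt the best visible solution if it is better
            best = max(neighbours(i), key=lambda j: quality[j])
            new[i] = max(quality[i], quality[best])
        else:
            # individual experimentation: keep whichever of the old and
            # a noisy new attempt scores higher
            new[i] = max(quality[i], random.random())
    quality = new
    history.append(sum(quality) / n_agents)

print(f"mean quality: {history[0]:.3f} -> {history[-1]:.3f}")
```

Varying `p_copy` in a model like this is one simple way to explore the trade-off between exploiting others' solutions and exploring new ones; real empirical work, of course, has to estimate such parameters from messy observational data.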

Assessment is a hard challenge

The task of building a new science of institutions, and of learning and innovation, is a tough one. But it is not the only challenge. Not only is it hard to find optimal solutions to complex problems; it is hard even to assess solutions, to tell whether they are optimal, or very good in an absolute sense. There is no easy external metric for success or failure, and this is compounded by the fact that we often lack universal standards by which to judge them: a success by some people's measure may be a dismal failure by others'. Both new mathematical techniques and older political-theoretic approaches (such as the pragmatism of John Dewey) face the challenge of having to pull themselves up by their own bootstraps: devising approaches that will lead to better problem solving without any universal definition of what 'better problem solving' involves. The task becomes even more difficult when we have to evaluate not only solutions, but the relative efficacy of the institutions that produce them. Practical dialogue among people with different disciplinary backgrounds can help us arrive at better metrics and means of assessment, but these will be far from perfect.
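The point that there is no single external metric can be made vivid with a trivial example. In the sketch below (purely illustrative, with made-up candidates and scores), two stakeholders score the same candidate policies on different dimensions and end up with opposite rankings, so "the best solution" is undefined without first agreeing on a standard.

```python
# three hypothetical candidate policies, each scored on two made-up
# dimensions: (equity, efficiency)
candidates = {"A": (3, 9), "B": (7, 4), "C": (5, 6)}

rank_by_equity = sorted(candidates, key=lambda c: candidates[c][0], reverse=True)
rank_by_efficiency = sorted(candidates, key=lambda c: candidates[c][1], reverse=True)

print("ranked by equity:    ", rank_by_equity)       # ['B', 'C', 'A']
print("ranked by efficiency:", rank_by_efficiency)   # ['A', 'C', 'B']
```

Each stakeholder's ranking is internally coherent; the disagreement lies entirely in which dimension counts as success, which is exactly the bootstrapping problem described above.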
