Whether to refine a hypothesis or identify possible blind spots, social scientists can benefit from gathering predictions from their peers in advance of conducting a research study. And by comparing research results to what was forecast, researchers can shed new light on gaps in understanding. Yet until recently, there was no tool specifically designed to facilitate the collection of forecasts about research in economics, political science, psychology, or other social sciences.
That has now changed, thanks to the development of the Social Science Prediction Platform, a new resource designed to help researchers and policymakers gather predictions about their findings in advance. Led by Stefano DellaVigna, the Daniel Koshland, Sr. Distinguished Professor of Economics and Professor of Business Administration at UC Berkeley, the prediction platform is designed to “allow for the systematic collection and assessment of expert forecasts of the effects of untested social programs,” according to the platform's website. “In turn, this should help both policy makers and social scientists by improving the accuracy of forecasts, allowing for more effective decision-making and improving experimental design and analysis.”
The Social Science Prediction Platform was developed as part of the Berkeley Initiative for Transparency in the Social Sciences, and was spearheaded by DellaVigna, together with Eva Vivalt, Assistant Professor in the Department of Economics at the University of Toronto, whose work at the World Bank and the Inter-American Development Bank included collecting predictions from researchers, officials, and practitioners. The project is funded by the Alfred P. Sloan Foundation and an anonymous foundation, whose support enables forecasters to receive compensation as an incentive for participating. We interviewed Stefano DellaVigna to learn more about this new resource. (Note the interview has been lightly edited.)
What led to the development of the Social Science Prediction Platform?
Our motivation was the experience of being at a typical academic conference, where you present a set of results you have worked on for years, and then somebody raises their hand and says, hey, we knew that already, or that was expected. Often this is pure hindsight bias, but it’s a very deflating experience. Someone could say that the ride-sharing service Uber is an obvious idea for a company, but only after it’s created. The only way to address this is to tell people, before any of the results are known: this is what we're going to do, we don't know the results yet, what do you think we will find? If everybody says you'll find x and you find y, people can’t say, we expected y.
This is useful in a number of other ways when we think about research. A researcher is always in a process of updating: there is an initial view, there is a new piece of research, and we update based on that. This honors that perspective and says, let’s capture people's views ex ante, so when a new result comes in, we can see how much those views should be updated. In principle, this can even help in the design of an experiment. It's a very simple idea, and it’s somewhat surprising that it doesn't already happen, which is probably why there has been so much positive response.
There has been some work in psychology where people set up prediction markets to forecast which experimental results would hold up when replicated. It turns out people have a reasonably good intuitive sense of which results will replicate and which will not. Our idea is that this is much more broadly applicable: whenever you have a set of studies, it’s often very useful to be able not only to say what you find, but also how it compares to what most people in the discipline believed ex ante.
What’s an example of how somebody would use this to inform their research?
As part of my own research, I've tried to better understand what people do when they're unemployed. We don't typically have information on people's search efforts, how hard they search for employment, because there is no way to measure that. So we spent years planning a survey of unemployed workers in Germany to trace how people change their search efforts over an unemployment spell, and as benefits expire, because it informs a number of models. We asked labor experts, what do you think our findings will be? In our study, we did not find any evidence that people who had a job offer waited to start until their benefits expired. But almost everybody was expecting they would do this — including ourselves. It was useful to know that it wasn't just us, and this was a more unexpected result.
Do you find that researchers change their research questions based on the initial predictions they receive — for example, if they get the same prediction from everybody?
It’s really interesting, because it could go either way. As an example, consider Marianne Bertrand and Sendhil Mullainathan's famous resume study, where they randomized the names on resumes to be either African-American or White names. They found the callback rate for African-American names was about 30% lower. Afterward, Mullainathan said, I almost didn't run this study, because I asked my colleagues at MIT, what do you think I will find? And they said, you're probably going to find reverse discrimination, that Blacks will get more callbacks, and it may not look good to publish that. And so they ran the experiment and found the opposite. When he went back to those colleagues, he recollects that many of them said, yeah, I told you so. This is an example of hindsight bias.
Even if everybody has the same prior, you might still want to run it, because what they're agreeing on may not necessarily be true. All the evidence we have collected so far suggests that more accomplished researchers don't really do any better than PhD students in predicting. So you might want to use that information and crowdsource your forecasts to more PhD students, rather than trying to find one famous expert professor.
Does the survey allow people to explain why they think something will happen?
We always have an open box at the end for feedback. It's an interesting trade-off. We want to tell authors to be very mindful of people's time; people’s attention fades quickly, so we encourage them to have a small number of key questions. But in my experience, forecasters often provide really valuable comments in their responses. I should also mention that when a survey forecast period closes, the forecasters can see where their forecast was in the distribution of forecasts. So even before we know the result, we have a tool where you can see, I was just like everybody else, or, I was actually more optimistic about this. That's an intermediate piece of feedback we can give people that can be valuable.
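To make that intermediate feedback concrete, here is a minimal sketch of the underlying idea. This is a hypothetical illustration, not platform code: the function name, the forecast values, and the quantity being forecast are all invented for the example.

```python
# Hypothetical illustration (not platform code): once a forecast window closes,
# each forecaster can be shown where their answer sits among all collected forecasts.
# The forecast values below are invented for the example.

def percentile_rank(my_forecast, all_forecasts):
    """Percentage of forecasts at or below my_forecast."""
    at_or_below = sum(f <= my_forecast for f in all_forecasts)
    return 100.0 * at_or_below / len(all_forecasts)

# Forecasts of, say, a treatment effect in percentage points
all_forecasts = [1.5, 2.0, 2.5, 3.0, 3.0, 4.0, 5.5, 6.0]
my_forecast = 5.5

print(f"Percentile rank of my forecast: {percentile_rank(my_forecast, all_forecasts):.1f}")
# A high rank suggests "I was more optimistic than most"; a middling rank,
# "I was just like everybody else."
```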
You’ve talked about this potentially being used in policymaking or in the development of social programs. How would that work?
This approach has been used quite a bit in the context of policy-relevant experiments. Suppose that you're going to do some conditional cash transfer program or some intervention to improve education in, say, a rural area in Kenya, and you have funding to run three arms, but you have five things you'd like to do. You could rely on your intuition alone, or combine that with predictions from people about what might happen in the different arms. You might decide to run the arms for which people have more dispersed priors, or where people think the chances of having an impact are higher. Almost all of us as researchers have more ideas than we can bring to the field, and it is frustrating when the decision rests on intuition alone rather than data. Here, at least, you can say, we selected it based on some kind of preliminary feedback.
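As a rough sketch of one possible selection rule along these lines, the example below ranks candidate arms by how much the collected forecasts disagree. The arm names, forecast values, and the standard-deviation criterion are my own illustrative assumptions, not something prescribed by the platform.

```python
# Hypothetical sketch: rank candidate experimental arms by how dispersed the
# collected forecasts are. Arm names and forecast values are invented.
from statistics import mean, stdev

forecasts_by_arm = {
    "cash_transfer":    [0.05, 0.06, 0.05, 0.07],  # forecasters largely agree
    "teacher_training": [0.00, 0.10, 0.02, 0.12],  # priors are widely dispersed
    "school_meals":     [0.03, 0.04, 0.03, 0.05],
}

# Most disagreement first: running these arms is arguably most informative,
# since the result will surprise a larger share of the field.
ranked = sorted(forecasts_by_arm.items(), key=lambda kv: stdev(kv[1]), reverse=True)

for arm, forecasts in ranked:
    print(f"{arm}: mean forecast {mean(forecasts):.3f}, dispersion (sd) {stdev(forecasts):.3f}")
```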
What do researchers need to do before submitting a request for forecasts?
The researcher has to develop a Qualtrics survey that is easy to go through and really shouldn't take longer than 15 minutes to complete. Then it gets vetted, and once it's ready to go, anybody can click on it, go through the survey, and have their response recorded. There is a minimal obligation for the author of the study to come back later and fill in what they found, so that eventually we're able to use all the studies to answer questions such as how accurate the predictions were. We flag some key prediction questions, but otherwise, it's pretty simple.
Right now, we're able to offer some incentives, so if you forecast a number of studies as a graduate student, you get some reward. We think this can be gratifying and interesting, but it would be really good to have a pool of graduate students so that when a study comes in, they're like, oh that’s interesting, here are my priors — as opposed to the researcher targeting people who maybe don't necessarily want to be targeted. We actually have more than 1,000 people signed up to do forecasts. Not all the accounts are active, but it's grown a lot. We started thinking about this three or four years ago, but the platform has only been operational for the last couple of months. It’s a baby, but it's a baby that is growing fast.
Learn more about the Social Science Prediction Platform at https://socialscienceprediction.org/.