Call for Participation

Register as a Research Team and contribute your algorithm for prediction aggregation to the Wisdom of the Crowd: Crowd Analysis Project.

Download Call for Participation

Overview

The term “wisdom of the crowd” describes the tendency for the collective intelligence or judgment of a group to be more accurate than that of any single member of the group. We are conducting the Wisdom of the Crowd - Crowd Analysis Project to determine which aggregation mechanisms best harness this effect to predict future outcomes. We invite anyone who has experience in fields such as (but not limited to) decision science, behavioral science, economics, psychology or computer science to propose algorithms that they believe will produce the most accurate aggregate predictions in a number of domains. Participation does not require a large time commitment and can be done individually or in a research team (RT) of up to two people.

Description

The project coordinators will recruit a large panel of gender-balanced, US-based participants (predictors) from Prolific and administer monthly prediction surveys to them for six consecutive months. Each month, these predictors will be asked to make a set of related predictions about outcomes one month in the future in the domains of economics, climate, politics and sports. Eligible RTs will submit up to four different algorithms that will be used to aggregate the predictions in each domain. We accept algorithms at any level of complexity, from a few lines of code to elaborate models. In addition to the main predictions, RTs will be provided with various additional measures about the predictors, such as confidence in their prediction, domain-relevant knowledge, and demographic characteristics, as well as their predictions from previous months of data collection, which can be used to gauge each predictor's track record of accuracy. Finally, RTs will also be given a chance to propose additional measures to include in the prediction survey. RTs will be able to use all available measures in their algorithms (e.g., as weighting criteria). RTs will not be involved in the data collection or analysis.
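To illustrate what an aggregation algorithm might look like, here is a minimal sketch of a confidence-weighted mean. The field names ("prediction", "confidence") and the 1-5 confidence scale are illustrative assumptions, not the project's actual codebook; any of the available measures could serve as weights instead.

```python
# Hypothetical sketch: aggregate individual predictions into one crowd
# prediction, weighting each predictor by self-reported confidence.
# Field names and scales are assumptions for illustration only.

def aggregate(predictors):
    """Return the confidence-weighted mean of the individual predictions."""
    total_weight = sum(p["confidence"] for p in predictors)
    weighted_sum = sum(p["prediction"] * p["confidence"] for p in predictors)
    return weighted_sum / total_weight

# Example: three predictors with confidence ratings on a 1-5 scale.
crowd = [
    {"prediction": 3.1, "confidence": 5},
    {"prediction": 2.4, "confidence": 2},
    {"prediction": 2.9, "confidence": 3},
]
print(aggregate(crowd))
```

A submitted algorithm could of course be far more elaborate, e.g., down-weighting predictors with a poor track record from previous months.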

Workload

RTs will be responsible for two tasks: (1) They will submit one algorithm per domain. A demo dataset and a testing environment will be available on the project website to help RTs develop and test these algorithms before submission. (2) They will be asked to evaluate several algorithms proposed by other RTs based on a short algorithm description (it will not be necessary to read code). Finally, teams will have the option to propose additional measures to be added to the prediction survey.

Competition

The paper will include a list of all participating RTs and their members. Specifically, all RT members will become consortium co-authors. The members of the RTs that submit the most accurate prediction algorithms (per domain, hence maximum of four winning RTs) will receive a monetary award of EUR 2,500 per team (WoCCAP Award).

We determine the most accurate prediction algorithm in each domain as follows: We apply the algorithm to each question of each run of each wave. For each one, we determine the absolute deviation from the respective true value. Then, we take the mean of the 128 (4 questions × 8 runs × 4 waves) absolute deviations. In each domain, the team that submitted the algorithm with the lowest mean absolute deviation wins.
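The scoring rule above can be sketched in a few lines. The flat-list data layout here is an assumption for illustration; the actual evaluation pipeline is run by the project coordinators.

```python
# Sketch of the scoring rule: for each of the
# 4 questions x 8 runs x 4 waves = 128 cases, take the absolute
# deviation of the algorithm's output from the true value, then
# average. The list-based layout is an illustrative assumption.

N_CASES = 4 * 8 * 4  # questions x runs x waves = 128

def mean_absolute_deviation(forecasts, truths):
    """forecasts and truths are flat lists of 128 values each,
    one per (question, run, wave) combination."""
    assert len(forecasts) == len(truths) == N_CASES
    return sum(abs(f - t) for f, t in zip(forecasts, truths)) / N_CASES
```

Within each domain, the winning algorithm is simply the one whose 128-case mean absolute deviation is smallest.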

Eligibility criteria

RTs can have up to two researchers; each researcher can only participate in one RT. At least one member of the RT should have a PhD in a related field or an affiliation with an academic or research institution. RT members do not need to have experience in Wisdom of the Crowd research specifically.


Your Task

As a Research Team, you will design algorithms that aggregate predictions from 80 individuals for four different topics: politics, economics, climate change, and sports. For each topic, your algorithm will be applied to four separate prediction questions.

You will receive codebooks and demo datasets to develop and test your algorithms.

Our Part

Over the course of six months, we will collect monthly predictions from hundreds of individuals (predictors) in the four domains.

Once data collection is complete, we will run all submitted aggregation mechanisms on the crowdsourced predictions. We will rank the algorithms by their prediction accuracy and publish the results; in line with our strict anonymization policy, the published rankings will not be linked to the names of the team members.

Incentives

All participating Research Teams will become WoCCAP consortium co-authors on the peer-reviewed publication of the project results.

The Research Team with the best-performing aggregation mechanism in each of the four domains will receive a prize of EUR 2,500 (for a maximum total of EUR 10,000).

Requirements

Research Teams consist of at most two researchers.

At least one team member must hold a PhD or equivalent degree, or be affiliated with a research institution.