A new "mega-study" consisting of dozens of simultaneous, independently designed experiments–shows that competitions have no automatic impact on our morality.
Academic research is tapping into the power of the crowd, thanks to an emerging paradigm known as the “mega-study.” Instead of one experimental design with a single set of parameters, mega-studies utilize a diverse array of simultaneous studies submitted by independent research teams, with a shared research question and participant pool.
In addition to efficiencies of scale, mega-studies can help scholars more quickly come to grips with the slippery issue of generalizability: the degree to which findings in one context apply to other contexts. For example, what motivates people to do one prosocial behavior (e.g., recycling) might differ from what motivates them to do another (e.g., donating to charity). Observing the full range of results across dozens of studies provides a clearer sense of how different environmental conditions may affect research outcomes.
The mega-study model is ideal for the field of behavioral economics, which explores how various factors in the world around us influence our daily decision-making. Human behavior, after all, is extremely complicated and changeable.
Einav Hart, an assistant professor of management at George Mason University School of Business, participated in the first-ever crowdsourced mega-study in behavioral economics, recently published in Proceedings of the National Academy of Sciences (PNAS). Hart’s experimental design was one of 45 selected for the mega-study. The broad research question was “Does competition erode, promote or not affect moral behavior?”
The mega-study was conducted to better understand and reconcile previous findings about the relationship between competition and moral behavior. Earlier stand-alone studies have yielded mixed results: some show that competition promotes moral behaviors such as trust and reciprocity, while others point to moral erosion.
In the mega-study, 18,123 online participants were randomly assigned to the 45 research designs. Notably, the 45 experiments were quite eclectic in their interpretations of "competition" and "moral behavior." One research team, for instance, proposed a game in which participants solved puzzles either in or out of a competitive scenario and were asked to self-report their performance. In this case, the honesty or dishonesty of their self-reported scores was used to measure "moral behavior." Another proposed design was an online game in which an "investor" granted points to an "investee," who could then choose whether or not to return the favor.
The pooled data from all 45 experiments showed that competitive conditions led to a minor decline in participants’ moral behavior. However, the size of the decline varied considerably from study to study. Further analysis revealed that the majority of the variance was due to the design differences among the 45 experiments.
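To give a sense of how a pooled analysis can separate an overall effect from study-to-study variation, here is a minimal, hypothetical sketch in Python. It is not the authors' actual analysis: the per-design effect sizes and variances are made up, and the DerSimonian-Laird random-effects model shown is just one standard way to quantify heterogeneity across studies.

```python
# Hypothetical sketch (not the PNAS authors' analysis): a DerSimonian-Laird
# random-effects meta-analysis over per-design effect sizes, illustrating how
# a pooled effect of competition can be separated from between-design
# heterogeneity (tau^2, I^2). All numbers below are invented.
import numpy as np

def random_effects_meta(effects, variances):
    """Pool per-study effects and quantify between-study heterogeneity."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                      # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
    k = len(effects)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)       # between-study variance (DerSimonian-Laird)
    w_star = 1.0 / (variances + tau2)        # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # % variance from heterogeneity
    return pooled, tau2, i2

# Toy example: 45 made-up per-design effects of competition on moral behavior
rng = np.random.default_rng(0)
effects = rng.normal(-0.1, 0.2, size=45)     # small average decline, wide spread
variances = np.full(45, 0.01)                # assumed sampling variance per design
pooled, tau2, i2 = random_effects_meta(effects, variances)
print(f"pooled effect: {pooled:.3f}, tau^2: {tau2:.3f}, I^2: {i2:.1f}%")
```

The print statement reports the pooled effect alongside tau² and I², the share of total variation attributable to differences between designs rather than sampling noise, which is the kind of decomposition that lets researchers say how much of the variance comes from how each experiment was set up.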
The authors conclude that when it comes to broad research questions that admit a wide range of interpretations, there may be limits to what a single study with a specific context, even one with a very large sample size, can reveal. Hart says, “This shows that in any individual study, you could observe very different results even with the same ‘ground truth,’ and this variance is largely dependent on how competition and moral behavior are operationalized.” In these cases, true generalizability may arise only from a greater diversity of experimental approaches, such as the mega-study and other crowdsourced research paradigms. As the PNAS paper states, “Our findings provide an argument for moving toward much larger data collections and more team science.”