Subject Pools and Deception in Agricultural and Resource Economics Experiments

Environmental and Resource Economics

Abstract

The use of student subjects and deception in experiments are two controversial issues that often raise concerns among editors and reviewers, which might prevent quality research from being published in agricultural and resource economics (ARE) journals. We provide a self-contained methodological discussion of these issues. We argue that field professionals are the most appropriate subjects for questions related to policy or measurement, and students are the most appropriate subjects for scientific research questions closely tied to economic theory. Active deception, where subjects are provided with explicitly misleading information, has been avoided in the mainstream economics discipline because it can lead to a loss of experimental control, introduce subject selection bias, and impose negative externalities on other researchers. Disciplinary ARE journals may want to abide by these norms against deception to maintain credibility. Interdisciplinary ARE journals may have more flexibility, although it is important to provide guidelines to avoid too much reviewer-specific variation in standards. For ARE researchers, we suggest employing a deception-free experimental design whenever possible because we know of no field in which deception is encouraged.


Notes

  1. In the authors’ experience, it is not unusual to receive referee reports that are broadly critical of the use of students as subjects rather than more specific aspects of a paper. For example, a report received by one of the authors did not mention anything specific to the paper but included the statement: “It is a matter of taste, I suppose, but I find experiments like these [using college students] uninteresting (unconvincing). I don’t think the play of these games, where little is at stake, people don’t have long to think about their strategies, and where the rules of the game are much clearer than in real life, tell us much about the real world”.

  2. For example, student subjects are better able to follow neutral and abstract instructions relative to field professionals, who appear to find non-neutral framing helpful since it allows them to draw on their experience (Cooper et al. 1999; Alatas et al. 2009). Non-neutral framing is more likely to activate experimenter demand effects, however, and can lead to reduced control (Zizzo 2010).

  3. External validity refers to the ability of a causal relation identified in the experiment to generalize over subjects and environments (Fréchette 2015).

  4. Al-Ubaydli and List (2015) point out that natural field experiments provide researchers with more control over who actually participates in an experiment because selection into participation is irrelevant. Subjects all participate because they do not realize they are in an experiment. Selection into participation cannot affect treatment comparisons, however, for laboratory experiments that employ random assignment to treatments.

  5. In his handbook chapter, Fréchette (2016) identifies four main disadvantages of studying representative samples and professionals: high financial costs, subject availability, replicability, and limits to control.

  6. Palacios-Huerta and Volij (2008) compare students and professional soccer players in zero-sum games with only mixed-strategy equilibria, concluding that the professionals conform more closely to equilibrium play. Wooders (2010) re-analyzes the same data and reaches the opposite conclusion: that students conform more closely to the mixed-strategy equilibrium.

  7. In a very recent study comparing students to a representative sample of Danes in a carefully controlled lab experiment, however, Fosgaard (2018) finds that students’ cooperation levels decay much further toward the selfish level predicted by standard economic theory.

  8. For example, subjects might be rematched into new pairs each round to play a 2-person game, with no information provided about their pair in order to avoid reputation formation and minimize repeated-game incentives. Subjects might participate in groups of 24 in the lab, but rematching occurs only within two separate subgroups of 12 subjects. Because the two subgroups never interact, this yields two independent observations. The detail about the subgroups is simply omitted from the instructions.

  9. Subjects could consider aspects of a surprise restart objectionable, however, if earnings opportunities in the second part depended on behavior in the first part in ways that were not revealed. In such cases the omission could be considered deceptive. For example, suppose that in a social dilemma such as a trust game or a public goods provision game, subjects were regrouped in part 2 so that the most cooperative players in part 1 all interacted with one another, and therefore earned considerably more, in part 2. Had subjects known about this matching procedure they might have changed their part 1 behavior. The omission of the matching group size described in the previous footnote is much less likely to have an impact on behavior.

  10. We thank an audience member at the 2018 World Congress of Environmental and Resource Economists for bringing these points to our attention.

  11. “…we only consider studies that do not employ deception of participants…” (https://link.springer.com/journal/10683, accessed 24 August 2018).

  12. The code states “8.07(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible” (APA 2010, p. 11).
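The within-subgroup rematching protocol described in note 8 can be sketched in code. This is a minimal illustration only: the function name, subject IDs, and group sizes are assumptions chosen to match the 24-subject example, not taken from any actual experiment software.

```python
import random

def rematch_within_subgroups(subjects, subgroup_size=12):
    """Randomly pair subjects each round, but only within fixed subgroups.

    A 24-person session is split into two subgroups of 12; pairing occurs
    only within a subgroup, so the two subgroups never interact and remain
    statistically independent observations. (Sizes are illustrative.)
    """
    pairs = []
    for start in range(0, len(subjects), subgroup_size):
        group = list(subjects[start:start + subgroup_size])
        random.shuffle(group)  # new random pairing each round
        pairs.extend((group[i], group[i + 1])
                     for i in range(0, len(group), 2))
    return pairs

# Hypothetical session: IDs 0-11 form one subgroup, 12-23 the other.
session = list(range(24))
round_pairs = rematch_within_subgroups(session)
```

Every pair drawn this way stays inside its own subgroup of 12, which is what preserves the independence of the two groups across rounds.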

References

  • Alatas V, Cameron L, Chaudhuri A, Erkal N, Gangadharan L (2009) Subject pool effects in a corruption experiment: a comparison of Indonesian public servants and Indonesian students. Exp Econ 12:113–132

  • Allcott H (2015) Site selection bias in program evaluation. Q J Econ 130:1117–1165

  • Al-Ubaydli O, List JA (2015) Do natural field experiments afford researchers more or less control than laboratory experiments? Am Econ Rev (Papers Proc) 105:462–466

  • American Psychological Association (2010) Ethical Principles of Psychologists and Code of Conduct, APA 0003-066X, effective June 1, 2003, amended effective June 1, 2010. https://www.apa.org/ethics/code/principles.pdf. Accessed 24 May 2017

  • Andersen S, Harrison GW, Lau MI, Rutström EE (2010) Preference heterogeneity in experiments: comparing the field and laboratory. J Econ Behav Organ 73:209–224

  • Butler JM, Vossler CA (2018) What is an unregulated and potentially misleading label worth? The case of ‘natural’-labeled groceries. Environ Resour Econ 70:545–564

  • Camerer CF (2015) The promise and success of lab-field generalizability in experimental economics: a critical reply to Levitt and List. In: Fréchette GR, Schotter A (eds) Handbook of experimental economic methodology. Oxford University Press, Oxford, pp 249–295

  • Carpenter J, Seki E (2011) Do social preferences increase productivity? Field experimental evidence from fishermen in Toyama Bay. Econ Inq 49:612–630

  • Cason TN, de Vries FP (2018) Dynamic efficiency in experimental emissions trading markets with investment uncertainty. Environ Resour Econ. https://doi.org/10.1007/s10640-018-0247-7

  • Cason TN, Plott CR (2014) Misconceptions and game form recognition: challenges to theories of revealed preference and framing. J Polit Econ 122:1235–1270

  • Chamberlin EH (1948) An experimental imperfect market. J Polit Econ 56:95–108

  • Charness G, Villeval M-C (2009) Cooperation and competition in intergenerational experiments in the field and in the laboratory. Am Econ Rev 99:956–978

  • Colson G, Corrigan JR, Grebitus C, Loureiro ML, Rousu MC (2015) Which deceptive practice, if any, should be allowed in experimental economics research? Results from surveys of applied experimental economists and students. Am J Agric Econ 98:610–621

  • Cooper DJ (2014) A note on deception in economic experiments. J Wine Econ 9:111–114

  • Cooper DJ, Kagel J, Lo W, Liang Gu Q (1999) Gaming against managers in incentive systems: experimental results with Chinese students and Chinese managers. Am Econ Rev 89:781–804

  • Falk A, Heckman JJ (2009) Lab experiments are a major source of knowledge in the social sciences. Science 326:535–538

  • Fosgaard TR (2018) Cooperation stability: a representative sample in the lab. IFRO Working Paper No. 2018/08, University of Copenhagen

  • Fréchette GR (2015) Laboratory experiments: professionals versus students. In: Fréchette GR, Schotter A (eds) Handbook of experimental economic methodology. Oxford University Press, Oxford, pp 360–390

  • Fréchette GR (2016) Experimental economics across subject populations. In: Kagel JH, Roth AE (eds) Handbook of experimental economics, vol 2. Princeton University Press, Princeton, pp 435–480

  • Harrison GW, List JA (2004) Field experiments. J Econ Lit 42:1009–1055

  • Henrich J, Heine SJ, Norenzayan A (2010) The weirdest people in the world? Behav Brain Sci 33:61–135

  • Herberich DH, List JA (2012) Digging into background risk: experiments with farmers and students. Am J Agric Econ 94:457–463

  • Higgins N, Hellerstein D, Wallander S, Lynch L (2017) Economic experiments for policy analysis and program design: a guide for agricultural decision makers. Economic Research Report 236, United States Department of Agriculture-Economic Research Service

  • Jamison J, Karlan D, Schechter L (2008) To deceive or not to deceive: the effect of deception on behavior in future laboratory experiments. J Econ Behav Organ 68:477–488

  • Just DR, Wu SY (2009) Experimental economics and the economics of contracts. Am J Agric Econ 91:1382–1388

  • Kessler JB, Vesterlund L (2015) The external validity of laboratory experiments: the misleading emphasis on quantitative effects. In: Fréchette GR, Schotter A (eds) Handbook of experimental economic methodology. Oxford University Press, Oxford, pp 390–406

  • Kröll M, Rustagi D (2016) Shades of dishonesty and cheating in informal milk markets in India. SAFE Working Paper No. 134, Goethe University

  • Kuhfuss L, Préget R, Thoyer S, Hanley N (2016) Nudging farmers to enrol land into agri-environmental schemes: the role of a collective bonus. Eur Rev Agric Econ 43:609–636

  • Levitt SD, List JA (2007) What do laboratory experiments measuring social preferences reveal about the real world? J Econ Perspect 21:153–174

  • Lusk JL (2018, forthcoming) The costs and benefits of deception in economic experiments. Food Policy

  • Maart-Noelck SC, Musshoff O (2013) Measuring the risk attitude of decision-makers: are there differences between groups of methods and persons? Aust J Agric Resour Econ 58:336–352

  • Maniadis Z, Tufano F, List JA (2017) To replicate or not replicate? Exploring reproducibility in economics through the lens of a model and pilot study. Econ J 127:F209–F235

  • Mitra A, Moore MR (2018) Green electricity markets as mechanisms of public-goods provision: theory and experimental evidence. Environ Resour Econ. https://doi.org/10.1007/s10640-017-0136-5

  • Nichols L, Brako L, Rivera SM, Tahmassian A, Jones MF, Pierce HH, Bierer BE (2017) What do revised U.S. rules mean for human research? Science 357:650–651

  • Offerman T (2015) Discussion of ‘Psychology and economics: areas of convergence and divergence. In: Fréchette GR, Schotter A (eds) Handbook of experimental economic methodology. Oxford University Press, Oxford, pp 200–204

  • Ortmann A (2018, forthcoming) Deception. In: Schram A, Ule A (eds) Handbook of research methods and applications in experimental economics

  • Ortmann A, Hertwig R (2002) The costs of deception: evidence from psychology. Exp Econ 5:111–131

  • Palacios-Huerta I, Volij O (2008) Experientia Docet: professionals play minimax in laboratory experiments. Econometrica 76:71–115

  • Roe BE (2015) The risk attitudes of U.S. farmers. Appl Econ Perspect Policy 37:553–574

  • Roth AE (2001) Form and function in experimental design. Behav Brain Sci 24:427–428

  • Rousu MC, Colson G, Corrigan JR, Grebitus C, Loureiro ML (2015) Deception in experiments: towards guidelines on use in applied economics research. Appl Econ Perspect Policy 37:524–536

  • Safarzynska K (2018) The impact of resource uncertainty and intergroup conflict on harvesting in the common-pool resource experiment. Environ Resour Econ. https://doi.org/10.1007/s10640-017-0193-9

  • Smith VL (1962) An experimental study of competitive market behavior. J Polit Econ 70:111–137

  • Smith VL (1982) Microeconomic systems as an experimental science. Am Econ Rev 72:923–955

  • Suter JF, Vossler CA (2013) Towards an understanding of the performance of ambient tax mechanisms in the field: evidence from upstate New York dairy farmers. Am J Agric Econ 96:92–107

  • Svorencik A (2015) The experimental turn in economics: a history of experimental economics. Ph.D. dissertation, University of Utrecht

  • Tyler TR, Amodio DM (2015) Psychology and economics: areas of convergence and difference. In: Fréchette GR, Schotter A (eds) Handbook of experimental economic methodology. Oxford University Press, Oxford, pp 181–196

  • Wilson BJ (2016) The meaning of Deceive in experimental economic science. In: DeMartino G, McCloskey D (eds) Oxford handbook of professional economic ethics. Oxford University Press, Oxford

  • Wooders J (2010) Does experience teach? Professionals and minimax play in the lab. Econometrica 78:1143–1154

  • Zizzo DJ (2010) Experimenter demand effects in economic experiments. Exp Econ 13:75–98

Acknowledgements

For helpful comments, we would like to thank (without implicating) participants at the CBEAR-MAAP and WCERE conferences, and Simanti Banerjee, Carola Grebitus, David Cooper, Guillaume Fréchette, Nick Hanley, Leah Palm-Forster, Marco Palma, Sharon Raszap, Stephanie Rosch, Matt Rousu, Christian Vossler, and two anonymous referees. Wu gratefully acknowledges financial support from USDA-NIFA HATCH project IND010580.

Corresponding author

Correspondence to Timothy N. Cason.


Cite this article

Cason, T.N., Wu, S.Y. Subject Pools and Deception in Agricultural and Resource Economics Experiments. Environ Resource Econ 73, 743–758 (2019). https://doi.org/10.1007/s10640-018-0289-x
