Abstract
Methodological knowledge on surveying young adolescents is scarce, and researchers often rely on theories and methodological studies based on adult respondents. However, young adolescents are still developing their cognitive, psychological, emotional, and social skills, and therefore present a unique set of considerations. Question characteristics — including question type and format, question difficulty, wording, ambiguity, the number of response options, and the inclusion of a neutral mid-point — play a pivotal role in the response quality of young adolescents. Failure to address these factors is likely to encourage young adolescents to use satisficing techniques. In this article, we provide a science-based guide for developing surveys for use with adolescents aged 11–16 years. The guide considers the characteristics and developmental stages of adolescents as survey responders and incorporates advice on appropriate question characteristics, survey layout and question sequence, approaches to pre-testing surveys, and mode of survey administration. The guide provides recommendations for developmentally appropriate survey design to improve response quality in survey research with young adolescents.
Funding
There were no forms of financial support, funding, or involvement.
Contributions
AO created the first draft of the article, and JWS, JS and NB provided substantive feedback on subsequent drafts. After several iterations where all authors contributed, all authors approved the final version of the article.
Conflict of interest
The authors report no conflicts of interest.
Cite this article
Omrani, A., Wakefield-Scurr, J., Smith, J. et al. Survey Development for Adolescents Aged 11–16 Years: A Developmental Science Based Guide. Adolescent Res Rev 4, 329–340 (2019). https://doi.org/10.1007/s40894-018-0089-0