Some Methodological and Statistical “Bugs” in Research on Children’s Learning

Chapter in *Cognitive Learning and Memory in Children*

Part of the book series: Springer Series in Cognitive Development (2116)

Abstract

This chapter provides me with the opportunity to discuss a number of methodological and statistical “bugs” that I have detected creeping into psychological research in general, and into research on children’s learning in particular. Naturally, one cannot hope to exterminate all such bugs with but a single essay. Rather, it is hoped that this chapter will leave a trail of pellets that is sufficiently odorific to get to the source of these potentially destructive little creatures. It also goes without saying that different people in this trade have different entomological lists that they would like to see presented. Although all cannot be presented here, I intend to introduce you to nearly 20 of my own personal favorites. At the same time, it must be stated at the outset that present space limitations do not permit a complete specification and resolution of the problems that these omnipresent bugs can create for cognitive-developmental researchers. Consequently, in most cases I will only allude to a problem and its potential remedies, placing the motivation for additional inquiry squarely in the lap of the curious reader.




Copyright information

© 1985 Springer-Verlag New York Inc.

Cite this chapter

Levin, J.R. (1985). Some Methodological and Statistical “Bugs” in Research on Children’s Learning. In: Pressley, M., Brainerd, C.J. (eds) Cognitive Learning and Memory in Children. Springer Series in Cognitive Development. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-9544-7_7


  • Print ISBN: 978-1-4613-9546-1

  • Online ISBN: 978-1-4613-9544-7
