
Precision of Curriculum-Based Measurement Reading Data: Considerations for Multiple-Baseline Designs

Original Paper · Journal of Behavioral Education

Abstract

Single-case designs provide an established technology for evaluating the effects of academic interventions. Researchers interested in studying the long-term effects of reading interventions often use curriculum-based measures of reading (CBM-R) because they possess many of the characteristics desirable for time-series designs. The reliability of CBM-R scores is typically supported by research from group designs, but idiographic interpretations of change in a student's oral reading rate require attention to the precision of both static scores and growth estimates. The purpose of this paper is twofold. First, we discuss how recent empirical work on the technical adequacy of CBM-R scores has revealed multiple threats to data-evaluation validity when CBM-R passages are used to measure oral reading rate. Second, we identify pertinent considerations for conducting visual analysis of intervention effects based on CBM-R data. We conclude with a brief discussion of implications for researchers considering the use of CBM-R within multiple-baseline designs.
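The abstract's distinction between the precision of static scores and the precision of growth estimates can be sketched numerically. The example below is a minimal illustration, not the paper's method: the weekly words-read-correctly-per-minute (WRCM) scores and the assumed standard error of measurement (SEM) are hypothetical, and the slope's standard error is computed with ordinary least squares, one common way to quantify uncertainty in a CBM-R growth estimate.

```python
import math

# Hypothetical weekly CBM-R scores (words read correctly per minute).
weeks = [0, 1, 2, 3, 4, 5, 6, 7]
wrcm = [42, 45, 44, 49, 51, 50, 55, 57]

# 1) Precision of a static score: an approximate 95% interval around one
#    observation, assuming an SEM of 8 WRCM (an illustrative value).
sem = 8.0
obs = wrcm[-1]
ci_low, ci_high = obs - 1.96 * sem, obs + 1.96 * sem

# 2) Precision of a growth estimate: OLS slope and its standard error.
n = len(weeks)
xbar = sum(weeks) / n
ybar = sum(wrcm) / n
sxx = sum((x - xbar) ** 2 for x in weeks)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(weeks, wrcm)) / sxx
intercept = ybar - slope * xbar
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(weeks, wrcm))
se_slope = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)

print(f"score {obs} WRCM, 95% interval [{ci_low:.1f}, {ci_high:.1f}]")
print(f"slope {slope:.2f} WRCM/week, SE {se_slope:.2f}")
```

Even with a visually steady upward trend, the interval around a single score spans roughly 30 WRCM here, which is why the paper argues that idiographic decisions should weigh measurement precision rather than rely on the observed data path alone.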

Fig. 1
Fig. 2



Author information

Correspondence to David A. Klingbeil.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and Animal Rights

This article does not contain any studies with human participants or animals performed by any of the authors.


Cite this article

Klingbeil, D.A., Van Norman, E.R. & Nelson, P.M. Precision of Curriculum-Based Measurement Reading Data: Considerations for Multiple-Baseline Designs. J Behav Educ 26, 433–451 (2017). https://doi.org/10.1007/s10864-017-9282-7
