Fit for purpose and modern validity theory in clinical outcomes assessment

  • Special Section: Test Construction (by invitation only)
Quality of Life Research

“In casual terms, we can define validity as measuring the right thing, and reliability as measuring the thing right.” [1] (p. 11).

Abstract

Purpose

The US Food and Drug Administration (FDA), as part of its regulatory mission, is charged with determining whether a clinical outcome assessment (COA) is “fit for purpose” when used in clinical trials to support drug approval and product labeling. In this paper, we will provide a review (and some commentary) on the current state of affairs in COA development/evaluation/use with a focus on one aspect: How do you know you are measuring the right thing? In the psychometric literature, this concept is referred to broadly as validity and has itself evolved over many years of research and application.

Review

After a brief introduction, the first section will review current ideas about “fit for purpose” and how it has been viewed by FDA. This section will also describe some of the unique challenges to COA development/evaluation/use in the clinical trials space. Following this, we provide an overview of modern validity theory as it is currently understood in the psychometric tradition. This overview will focus primarily on the perspective of validity theorists such as Messick and Kane, whose work forms the backbone for the bulk of high-stakes assessment in areas such as education, psychology, and health outcomes.

Conclusions

We situate the concept of fit for purpose within the broader context of validity. By comparing and contrasting the approaches and the situations where they have traditionally been applied, we identify areas of conceptual overlap as well as areas where more discussion and research are needed.

Notes

  1. What a test measures goes by many names: construct, trait, latent variable, dimension, or domain. We use “construct” throughout the remainder of this document as the generic referent to what tests measure. It is a commonly used term and nicely conveys the core idea that what we are trying to measure is a theoretical construction.

  2. We use terms like assessment, scale, inventory, and test interchangeably in this paper. While “test” is the dominant term in the educational arena (from which much validity theory has emanated), it is generic with respect to the larger points being made here.

References

  1. Thissen, D., & Wainer, H. (2001). Test scoring. Mahwah, NJ: Lawrence Erlbaum Associates.

  2. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health. (2009). Guidance for industry: Patient-reported outcome measures: Use in medical product development to support labeling claims. Retrieved January 30, 2017, from http://www.fda.gov/downloads/Drugs/Guidances/UCM193282.pdf. Published December 2009.

  3. FDA-NIH Biomarker Working Group. (2016). BEST (Biomarkers, EndpointS, and other Tools) Resource. Retrieved January 30, 2017, from https://www.ncbi.nlm.nih.gov/books/NBK338448/

  4. Patrick, D. L., Burke, L. B., Gwaltney, C. J., Kline Leidy, N., Martin, M. L., Molsen, E., et al. (2011). Content validity—establishing and reporting the evidence in newly-developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO good research practices task force report: Part 1—eliciting concepts for a new PRO instrument. Value in Health, 14, 967–977.

  5. Patrick, D. L., Burke, L. B., Gwaltney, C. J., Kline Leidy, N., Martin, M. L., Molsen, E., et al. (2011). Content validity—establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO good research practices task force report: Part 2—assessing respondent understanding. Value in Health, 14, 978–988.

  6. U.S. Department of Health and Human Services, Food and Drug Administration. (2016). Clinical outcome assessment (COA): Frequently asked questions. Retrieved January 30, 2017, from http://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugDevelopmentToolsQualificationProgram/ucm370261.htm

  7. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research. (2015). Gastroparesis: Clinical evaluation of drugs for treatment. Guidance for industry. Retrieved January 30, 2017, from https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM455645.pdf

  8. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

  9. Thorndike, E. L. (1918). The nature, purposes, and general methods of measurements of educational products. In G. M. Whipple (Ed.), The measurement of educational products. Seventeenth yearbook of the National Society for the Study of Education, Part II (pp. 16–24). Bloomington, IL: Public School Publishing Company.

  10. American Psychological Association. (1954). Technical recommendations for psychological tests and diagnostic techniques. Psychological Bulletin Supplement, 51(2), 1–38.

  11. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

  12. Pitoniak, M. J., Sireci, S. G., & Luecht, R. M. (2002). A multitrait-multimethod validity investigation of scores from a professional licensure examination. Educational and Psychological Measurement, 62(3), 498–516.

  13. Ebel, R. L. (1956). Obtaining and reporting evidence on content validity. Educational and Psychological Measurement, 16(3), 269–282.

  14. Sireci, S. G. (1998). The construct of content validity. Social Indicators Research, 45, 83–117.

  15. Messick, S. (1975). The standard problem: Meaning and values in measurement and evaluation. American Psychologist, 30, 955–966.

  16. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

  17. Messick, S. (1988). The once and future issues of validity: Assessing the meaning and consequences of measurement. In H. Wainer & H. Braun (Eds.), Test validity (pp. 33–45). Hillsdale, NJ: Lawrence Erlbaum.

  18. Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York, NY: American Council on Education and Macmillan.

  19. Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.

  20. Kane, M. T. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38(4), 319–342.

  21. Cronbach, L. J. (1980). Selection theory for a political world. Public Personnel Management, 9(1), 37–50.

  22. House, E. R. (1980). Evaluating with validity. Beverly Hills, CA: Sage.

  23. Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer & H. Braun (Eds.), Test validity (pp. 3–17). Hillsdale, NJ: Lawrence Erlbaum.

  24. Kane, M. T. (1992). An argument-based approach to validation. Psychological Bulletin, 112, 527–535.

  25. Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.

  26. Kane, M. (2006). Validation. In R. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport, CT: American Council on Education and Praeger.

  27. Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061–1071.

  28. Hays, R. D., & Hadorn, D. (1992). Responsiveness to change: An aspect of validity, not a separate dimension. Quality of Life Research, 1, 73–75.

  29. Terwee, C. B., Dekker, F. W., Wiersinga, W. M., Prummel, M. F., & Bossuyt, P. M. (2003). On assessing responsiveness of health-related quality of life instruments: Guidelines for instrument evaluation. Quality of Life Research, 12(4), 349–362.


Author information

Corresponding author

Correspondence to Michael C. Edwards.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Ethical approval

This article does not contain any studies with human participants performed by the authors.

Additional information

Ashley Slagle is a former FDA employee. The regulatory perspective offered in this manuscript is her own and, while reflecting her experience with FDA, is not intended to present any official FDA position.

Cite this article

Edwards, M.C., Slagle, A., Rubright, J.D. et al. Fit for purpose and modern validity theory in clinical outcomes assessment. Qual Life Res 27, 1711–1720 (2018). https://doi.org/10.1007/s11136-017-1644-z
