Process Control and Quality Measures

Practical Tools for Designing and Weighting Survey Samples

Part of the book series: Statistics for Social and Behavioral Sciences ((SSBS,volume 51))


Abstract

So far we have described a wide variety of tools and tasks necessary for sampling and weighting. Key to a successful project, however, is not only mastery of the tools and knowing which tool to use when, but also monitoring of the actual process, careful documentation of the steps taken, and the ability to replicate each of those steps. For any project, certain quality control measures should be taken before data collection, during sample frame construction and sample selection, and after data collection, during editing, weight calculation, and database construction. Well-planned projects are designed so that quality control is possible during the data collection process and so that steps to improve quality can be taken before the end of the data collection period. Obviously, the specific quality control measures will vary by the type of project conducted. For example, repeated longitudinal data collection efforts allow comparisons to prior years, whereas one-time cross-sectional surveys often suffer from uncertainty with respect to procedures and outcomes. However, we have found a core set of tools to be useful for almost all survey designs and will introduce those in this chapter. We do want to emphasize that while it is tempting to think that assurance of reproducibility and good documentation is only worth the effort for complex surveys that will be repeated, in our experience even the smallest survey “runs” better when the tools introduced here are used.

This is a preview of subscription content, log in via an institution to check access.

Access this chapter

Chapter
USD 29.95
Price excludes VAT (USA)
  • Available as PDF
  • Read on any device
  • Instant download
  • Own it forever
eBook
USD 64.99
Price excludes VAT (USA)
  • Available as EPUB and PDF
  • Read on any device
  • Instant download
  • Own it forever
Softcover Book
USD 84.99
Price excludes VAT (USA)
  • Compact, lightweight edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info

Tax calculation will be finalised at checkout

Purchases are for personal use only

Institutional subscriptions

Notes

  1. http://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

  2. http://www.ons.gov.uk/ons/guide-method/best-practice/gss-best-practice/gss-quality-good-practice/index.html

  3. http://www.aapor.org/Best_Practices1.htm

  4. http://www.hcahpsonline.org/home.aspx

  5. http://nces.ed.gov/surveys/pisa/pdf/2011025.pdf

  6. http://ccsg.isr.umich.edu/quality.cfm

  7. A free online lecture on using CPM with a survey example can be accessed here: http://gunston.gmu.edu/healthscience/ProjectManagementInIT/CriticalPathMethod.asp

  8. http://www.processdox.com/pix/ImprovingQuality.pdf

  9. In R this would be: duplicated(x, incomparables = FALSE).

  10. In the case of unequal sampling variance, this equation changes to reflect the design weights. Sampling weights are not included in the estimation of the propensity models but are used when the variance is constructed.

  11. \(1600/(1 + 0.09 \times 74) = 208.88\) and \(1600/(1 + 0.0130 \times 74) = 815.49\)

  12. PIAAC Technical Standards and Guidelines, Second Draft presented at the Meeting of the National Project Managers, 23–27 March 2009, Barcelona, Spain.
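The duplicate check mentioned in note 9 for R can be mirrored in other environments. A minimal Python sketch (hypothetical frame IDs; the function name simply echoes R's duplicated()) that flags second and later occurrences of an identifier:

```python
# Flag second and later occurrences of each ID, mirroring the semantics
# of R's duplicated(x): first occurrence -> False, any repeat -> True.
def duplicated(values):
    seen = set()
    flags = []
    for v in values:
        flags.append(v in seen)
        seen.add(v)
    return flags

# Hypothetical sample-frame IDs; the second "A103" is flagged.
frame_ids = ["A101", "A102", "A103", "A103", "A104"]
print(duplicated(frame_ids))  # → [False, False, False, True, False]
```

Running such a check on the frame's case identifiers before sample selection catches duplicate listings early, when they are still cheap to fix.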
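The arithmetic in note 11 follows the usual effective-sample-size formula \(n_{\mathit{eff}} = n/(1 + \rho(m-1))\); a quick check in Python, assuming (as the note's numbers imply) n = 1600 interviews and a factor of m − 1 = 74 multiplying the intraclass correlation:

```python
# Effective sample size under clustering: n_eff = n / (1 + rho * (m - 1)).
# Inputs taken from note 11: n = 1600 and m - 1 = 74.
def effective_sample_size(n, rho, m_minus_1):
    return n / (1 + rho * m_minus_1)

print(round(effective_sample_size(1600, 0.09, 74), 2))    # → 208.88
print(round(effective_sample_size(1600, 0.0130, 74), 2))  # → 815.49
```

The two intraclass correlations (0.09 vs. 0.0130) show how sharply clustering at different levels can shrink the information content of the same 1,600 interviews.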



Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Valliant, R., Dever, J.A., Kreuter, F. (2013). Process Control and Quality Measures. In: Practical Tools for Designing and Weighting Survey Samples. Statistics for Social and Behavioral Sciences, vol 51. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6449-5_18
