
A Note on Weaker Conditions for Identifying Restricted Latent Class Models for Binary Responses

  • Theory and Methods
  • Published in Psychometrika

Abstract

Restricted latent class models (RLCMs) are an important class of methods that provide researchers and practitioners in the educational, psychological, and behavioral sciences with fine-grained diagnostic information to guide interventions. Recent research established sufficient conditions for identifying RLCM parameters. A current challenge that limits widespread application of RLCMs is that existing identifiability conditions may be too restrictive for some practical settings. In this paper we establish a weaker condition for identifying RLCM parameters for multivariate binary data. Although the new results weaken identifiability conditions for general RLCMs, they do not relax existing necessary and sufficient conditions for the simpler DINA/DINO models. Theoretically, we introduce a new form of latent structure completeness, referred to as dyad-completeness, and prove identification by applying Kruskal’s Theorem for the uniqueness of three-way arrays. The new condition is more likely to be satisfied in applied research, and the results provide researchers and test developers with guidance for designing diagnostic instruments.


References

  • Allman, E. S., Matias, C., & Rhodes, J. A. (2009). Identifiability of parameters in latent structure models with many observed variables. Annals of Statistics, 37, 3099–3132.

  • Balamuta, J. J., & Culpepper, S. A. (2021). Exploratory restricted latent class models with monotonicity requirements under Pólya-gamma data augmentation. Psychometrika.

  • Chen, Y., & Culpepper, S. A. (2020). A multivariate probit model for learning trajectories: A fine-grained evaluation of an educational intervention. Applied Psychological Measurement, 44(7–8), 515–530. https://doi.org/10.1177/0146621620920928.

  • Chen, Y., Culpepper, S. A., Chen, Y., & Douglas, J. (2018). Bayesian estimation of the DINA Q-matrix. Psychometrika, 83, 89–108.

  • Chen, Y., Culpepper, S., & Liang, F. (2020). A sparse latent class model for cognitive diagnosis. Psychometrika, 85, 121–153.

  • Chen, Y., Culpepper, S. A., Wang, S., & Douglas, J. (2018). A hidden Markov model for learning trajectories in cognitive diagnosis with application to spatial rotation skills. Applied Psychological Measurement, 42(1), 5–23.

  • Chen, Y., Liu, Y., Culpepper, S. A., & Chen, Y. (2021). Inferring the number of attributes for the exploratory DINA model. Psychometrika, 86(1), 30–64.

  • Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110(510), 850–866.

  • Chiu, C. Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74(4), 633–665.

  • Culpepper, S. A. (2015). Bayesian estimation of the DINA model with Gibbs sampling. Journal of Educational and Behavioral Statistics, 40(5), 454–476.

  • Culpepper, S. A. (2019). Estimating the cognitive diagnosis Q matrix with expert knowledge: Application to the fraction-subtraction dataset. Psychometrika, 84, 333–357. https://doi.org/10.1007/s11336-018-9643-8.

  • Culpepper, S. A. (2019). An exploratory diagnostic model for ordinal responses with binary attributes: Identifiability and estimation. Psychometrika, 84(4), 921–940.

  • de la Torre, J. (2009). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34(1), 115–130.

  • de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76(2), 179–199.

  • de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69(3), 333–353.

  • DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35(1), 8–26.

  • Fang, G., Liu, J., & Ying, Z. (2019). On the identifiability of diagnostic classification models. Psychometrika, 84(1), 19–40.

  • Gu, Y., & Xu, G. (2019). The sufficient and necessary condition for the identifiability and estimability of the DINA model. Psychometrika, 84(2), 468–483.

  • Gu, Y., & Xu, G. (2020). Partial identifiability of restricted latent class models. The Annals of Statistics, 48(4), 2082–2107.

  • Gu, Y., & Xu, G. (2021). Sufficient and necessary conditions for the identifiability of the Q-matrix. Statistica Sinica, 31, 449–472.

  • Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26(4), 301–321.

  • Hartz, S. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality (Unpublished doctoral dissertation). University of Illinois at Urbana-Champaign.

  • Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74(2), 191–210.

  • Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25(3), 258–272.

  • Köhn, H. F., & Chiu, C. Y. (2016). A proof of the duality of the DINA model and the DINO model. Journal of Classification, 33(2), 171–184.

  • Kruskal, J. B. (1976). More factors than subjects, tests and treatments: An indeterminacy theorem for canonical decomposition and individual differences scaling. Psychometrika, 41(3), 281–293.

  • Kruskal, J. B. (1977). Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2), 95–138.

  • Liu, J., Xu, G., & Ying, Z. (2012). Data-driven learning of Q-matrix. Applied Psychological Measurement, 36(7), 548–564.

  • Madison, M. J., & Bradshaw, L. P. (2018). Assessing growth in a diagnostic classification model framework. Psychometrika, 83, 963–990.

  • Masiero, B., & Nascimento, V. H. (2017). Revisiting the Kronecker array transform. IEEE Signal Processing Letters, 24(5), 525–529.

  • Shute, V. J., Hansen, E. G., & Almond, R. G. (2008). You can’t fatten a hog by weighing it-or can you? Evaluating an assessment for learning system called ACED. International Journal of Artificial Intelligence in Education, 18(4), 289–316.

  • Sorrel, M. A., Olea, J., Abad, F. J., de la Torre, J., Aguado, D., & Lievens, F. (2016). Validity and reliability of situational judgement test scores: A new approach based on cognitive diagnosis models. Organizational Research Methods, 19(3), 506–532.

  • Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11(3), 287.

  • von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61(2), 287–307.

  • Wang, S., Yang, Y., Culpepper, S. A., & Douglas, J. (2017). Tracking skill acquisition with cognitive diagnosis models: A higher-order hidden Markov model with covariates. Journal of Educational and Behavioral Statistics, 43(1), 57–87.

  • Xu, G. (2017). Identifiability of restricted latent class models with binary responses. Annals of Statistics, 45(2), 675–707.

  • Xu, G., & Shang, Z. (2018). Identifying latent structures in restricted latent class models. Journal of the American Statistical Association, 113(523), 1284–1295.


Acknowledgements

This research was partially supported by National Science Foundation Methodology, Measurement, and Statistics program Grants 1758631, 1951057, and 21-50628.

Author information

Corresponding author

Correspondence to Steven Andrew Culpepper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

1.1 Proof of Proposition 1

Without loss of generality, suppose that items X and \(X'\) form a saturated dyad so that \({\varvec{q}}^\top {\varvec{q}}'=2\) and \(\Vert {\varvec{q}}\Vert =\Vert {\varvec{q}}'\Vert =2\) and \(\mathcal C({\varvec{q}})=\mathcal C({\varvec{q}}')=\{c_0,c_1,c_2,c_3\}\). The \(4\times 4\) sub-table of probabilities for the item response patterns and attribute profile configurations for the base columns, \(({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')}=\varvec{\theta }_{\mathcal C({\varvec{q}})}{\varvec{*}}\varvec{\theta }_{\mathcal C({\varvec{q}}')}'\), is

$$\begin{aligned} \begin{bmatrix} (1-\theta _{c_0})(1-\theta _{c_0}') &{} (1-\theta _{c_1})(1-\theta _{c_1}') &{} (1-\theta _{c_2})(1-\theta _{c_2}') &{} (1-\theta _{c_3})(1-\theta _{c_3}')\\ (1-\theta _{c_0})\theta _{c_0}' &{} (1-\theta _{c_1})\theta _{c_1}' &{} (1-\theta _{c_2})\theta _{c_2}' &{} (1-\theta _{c_3})\theta _{c_3}'\\ \theta _{c_0}(1-\theta _{c_0}') &{} \theta _{c_1}(1-\theta _{c_1}') &{} \theta _{c_2}(1-\theta _{c_2}') &{} \theta _{c_3}(1-\theta _{c_3}')\\ \theta _{c_0}\theta _{c_0}' &{} \theta _{c_1}\theta _{c_1}' &{} \theta _{c_2}\theta _{c_2}' &{} \theta _{c_3}\theta _{c_3}' \end{bmatrix}. \end{aligned}$$
(A1)

We show \(({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')}\) is full column rank by showing \(\det \left( ({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')} \right) \ne 0\). In particular, we define

$$\begin{aligned} \mathbf{A} = \mathbf{O}_3 \mathbf{O}_2 \mathbf{O}_1 \left( ({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')} \right) \end{aligned}$$
(A2)

and note that, if \(\det \left( \mathbf{O}_1\right) =\det \left( \mathbf{O}_2\right) =\det \left( \mathbf{O}_3\right) =1\), then \(\det \left( \mathbf{A}\right) =\det \left( \varvec{\theta }_{\mathcal C({\varvec{q}})}{\varvec{*}}\varvec{\theta }_{\mathcal C({\varvec{q}}')}'\right) \). Consequently, \(({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')}\) is full rank if \(\det \left( \mathbf{A}\right) \ne 0\). Let \(\mathbf{O}_1\), \(\mathbf{O}_2\), and \(\mathbf{O}_3\) be defined as

$$\begin{aligned} \mathbf{O}_1= \begin{bmatrix} 1 &{} 1 &{} 1 &{} 1\\ 0 &{} 1 &{} 0 &{} 1\\ 0 &{} 0 &{} 1 &{} 1\\ 0 &{} 0 &{} 0 &{} 1 \end{bmatrix},\quad \mathbf{O}_2= \begin{bmatrix} 1 &{} 0 &{} 0 &{} 0\\ -\theta _{c_0}' &{} 1 &{} 0 &{} 0\\ -\theta _{c_0} &{} 0 &{} 1 &{} 0\\ -\theta _{c_0}\theta _{c_0}' &{} 0 &{} 0 &{} 1 \end{bmatrix},\quad \mathbf{O}_3= \begin{bmatrix} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} -\frac{\theta _{c_1}-\theta _{c_0}}{\theta _{c_1}'-\theta _{c_0}'} &{} 1 &{} 0\\ 0 &{} -\theta _{c_1} &{} -\theta _{c_0}' &{} 1 \end{bmatrix}, \end{aligned}$$
(A3)

where \(\mathbf{O}_3\) assumes that \(\theta _{c_1}'\ne \theta _{c_0}'\). Applying \(\mathbf{O}_1\) and then \(\mathbf{O}_2\) reduces column \(c\) of the table to \(\left( 1,\ \theta _{c}'-\theta _{c_0}',\ \theta _{c}-\theta _{c_0},\ \theta _{c}\theta _{c}'-\theta _{c_0}\theta _{c_0}'\right) ^\top \), and applying \(\mathbf{O}_3\) eliminates the remaining nonzero entries in column \(c_1\). We therefore find that \(\mathbf{A}\) and \(\det \left( ({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')} \right) \) are:

$$\begin{aligned} \mathbf{A}= \begin{bmatrix} 1 &{} 1 &{} 1 &{} 1\\ 0 &{} \theta _{c_1}'-\theta _{c_0}' &{} \theta _{c_2}'-\theta _{c_0}' &{} \theta _{c_3}'-\theta _{c_0}'\\ 0 &{} 0 &{} (\theta _{c_2}-\theta _{c_0})-\frac{\theta _{c_1}-\theta _{c_0}}{\theta _{c_1}'-\theta _{c_0}'}(\theta _{c_2}'-\theta _{c_0}') &{} (\theta _{c_3}-\theta _{c_0})-\frac{\theta _{c_1}-\theta _{c_0}}{\theta _{c_1}'-\theta _{c_0}'}(\theta _{c_3}'-\theta _{c_0}')\\ 0 &{} 0 &{} (\theta _{c_2}'-\theta _{c_0}')(\theta _{c_2}-\theta _{c_1}) &{} (\theta _{c_3}'-\theta _{c_0}')(\theta _{c_3}-\theta _{c_1}) \end{bmatrix}, \end{aligned}$$
(A4)
$$\begin{aligned} \det \left( ({\varvec{\theta }}{\varvec{*}}{\varvec{\theta }}')_{\mathcal C({\varvec{q}},{\varvec{q}}')} \right) =&\left[ (\theta _{c_1}'-\theta _{c_0}')(\theta _{c_2}-\theta _{c_0})-(\theta _{c_1}-\theta _{c_0})(\theta _{c_2}'-\theta _{c_0}')\right] (\theta _{c_3}'-\theta _{c_0}')(\theta _{c_3}-\theta _{c_1})\\&-\left[ (\theta _{c_1}'-\theta _{c_0}')(\theta _{c_3}-\theta _{c_0})-(\theta _{c_1}-\theta _{c_0})(\theta _{c_3}'-\theta _{c_0}')\right] (\theta _{c_2}'-\theta _{c_0}')(\theta _{c_2}-\theta _{c_1}). \end{aligned}$$
(A5)
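As a numeric sanity check on this derivation, the sketch below (with arbitrary, hypothetical values for the item response probabilities \(\theta _c\) and \(\theta _c'\)) confirms that the closed-form expression in (A5) equals the determinant of the \(4\times 4\) table in (A1):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.05, 0.95, 4)   # theta_{c_0},...,theta_{c_3} for item X
tp = rng.uniform(0.05, 0.95, 4)  # theta'_{c_0},...,theta'_{c_3} for item X'

# 4x4 table from (A1): rows are the joint response patterns (00, 01, 10, 11),
# columns are the attribute classes c_0,...,c_3
M = np.array([(1 - t) * (1 - tp),
              (1 - t) * tp,
              t * (1 - tp),
              t * tp])

# closed-form determinant from (A5)
d = (((tp[1]-tp[0])*(t[2]-t[0]) - (t[1]-t[0])*(tp[2]-tp[0])) * (tp[3]-tp[0]) * (t[3]-t[1])
     - ((tp[1]-tp[0])*(t[3]-t[0]) - (t[1]-t[0])*(tp[3]-tp[0])) * (tp[2]-tp[0]) * (t[2]-t[1]))

assert np.isclose(np.linalg.det(M), d)
```

Because the row operations \(\mathbf{O}_1\), \(\mathbf{O}_2\), \(\mathbf{O}_3\) are triangular with unit diagonals, the check holds for any admissible probabilities, not only the values drawn here.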

Appendix B

1.1 Proof of Proposition 3

We prove Proposition 3 by first considering the case where K is even. Let \(\mathbb {P}_{jj'}=(\varvec{\theta }_j{\varvec{*}}\varvec{\theta }_{j'})_{\mathcal C({\varvec{q}}_j,{\varvec{q}}_{j'})}\) be the \(4\times 4\) matrix of response probabilities by classes for items \(X_j\) and \(X_{j'}\). The columns of \(\mathbb {P}_{jj'}\) correspond with the four possible patterns for the two attributes that load onto \(X_j\) and \(X_{j'}\). Note that \(\mathrm{rank}(\mathbb {P}_{jj'})=4\) given \(X_j\) and \(X_{j'}\) form a full rank dyad. Consequently, if K is even, \(\mathbb {P}_{12},\mathbb {P}_{34},\dots ,\mathbb {P}_{K-1,K}\) each correspond with distinct pairs of items and attributes, which implies that

$$\begin{aligned} \mathbf{T}_1=\bigotimes _{d=1}^{K/2} \mathbb {P}_{2d-1,2d} \end{aligned}$$
(B1)

and \(\mathrm{rank}\left( \mathbf{T}_1\right) =2^K\).
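The rank claim follows from the multiplicativity of rank under Kronecker products, \(\mathrm{rank}(\mathbf{A}\otimes \mathbf{B})=\mathrm{rank}(\mathbf{A})\,\mathrm{rank}(\mathbf{B})\). A quick numeric illustration for K = 4, with arbitrary full-rank \(4\times 4\) matrices standing in for the dyad tables \(\mathbb {P}_{12}\) and \(\mathbb {P}_{34}\):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-ins for the dyad tables P_12 and P_34 (full rank almost surely)
P12 = rng.uniform(size=(4, 4))
P34 = rng.uniform(size=(4, 4))

# T_1 for K = 4 as the Kronecker product of the dyad tables
T1 = np.kron(P12, P34)

# rank(A ⊗ B) = rank(A) * rank(B), so T_1 attains full rank 2^K = 16
assert np.linalg.matrix_rank(T1) == 16
```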

If K is odd there are two cases defined by \(a\in \{0,1\}\). The \(a=0\) case is a direct extension of the even K case. Specifically, \(a=0\) implies that \(\varvec{\theta }_K\) for \(X_K\) has a simple structure pattern so that \(\varvec{\theta }_{\mathcal C({\varvec{q}}_K)}\) is a \(2\times 2\) matrix with \(\det \left( \varvec{\theta }_{\mathcal C({\varvec{q}}_K)}\right) =(\theta _{K,2^{K-1}}-\theta _{K0})\) and \(\mathrm{rank}(\varvec{\theta }_{\mathcal C({\varvec{q}}_K)})=2\) if \(\theta _{K,2^{K-1}}\ne \theta _{K0}\). The dyad-complete structure for the first \(K-1\) items implies that

$$\begin{aligned} \mathbb {P}_{1:(K-1)}=\bigotimes _{d=1}^{(K-1)/2} \mathbb {P}_{2d-1,2d} \end{aligned}$$
(B2)

and \(\mathrm{rank}\left( \mathbb {P}_{1:(K-1)}\right) =2^{K-1}\). Note that \(\mathbb {P}_{1:(K-1)}\) describes how \(X_1,\dots , X_{K-1}\) relate to attributes \(\alpha _2,\dots ,\alpha _K\) and \(X_1,\dots , X_{K-1}\) are unrelated to \(\alpha _1\). The fact that \(X_K\) is the only item loading onto \(\alpha _1\) implies that

$$\begin{aligned} \mathbf{T}_1=\varvec{\theta }_{\mathcal C({\varvec{q}}_K)}\otimes \left( \bigotimes _{d=1}^{(K-1)/2} \mathbb {P}_{2d-1,2d}\right) \end{aligned}$$
(B3)

and \(\mathrm{rank}\left( \mathbf{T}_1\right) =2\cdot 2^{K-1}=2^K\).

For \(a=1\), \(X_K\) loads onto both \(\alpha _1\) and \(\alpha _2\), so, as shown in Example 2, the \(2\times 2^K\) matrix \(\varvec{\theta }_K\) has four unique values so that

$$\begin{aligned} \varvec{\theta }_K=\left( \varvec{\theta }_{K0},\varvec{\theta }_{K,2^{K-2}},\varvec{\theta }_{K,2^{K-1}},\varvec{\theta }_{K,3\cdot 2^{K-2}}\right) \otimes {\varvec{1}}_{2^{K-2}}^\top . \end{aligned}$$
(B4)

Note that we can partition the attribute profile as \({\varvec{\alpha }}=(\alpha _1,{\varvec{\alpha }}_{2:K})\) so that \((0,{\varvec{\alpha }}_{2:K})\) corresponds with \(2^{K-1}\) patterns for \({\varvec{\alpha }}_{2:K}\) when \(\alpha _1=0\) and \((1,{\varvec{\alpha }}_{2:K})\) similarly includes \(2^{K-1}\) patterns for \({\varvec{\alpha }}_{2:K}\) with \(\alpha _1=1\). The fact that \(X_1,\dots ,X_{K-1}\) are unrelated to \(\alpha _1\) implies that the probability of response patterns for \(X_1,\dots ,X_{K-1}\) given \((0,{\varvec{\alpha }}_{2:K})\) is \(\mathbb {P}_{1:(K-1)}\) as is the probability of \(X_1,\dots ,X_{K-1}\) given \((1,{\varvec{\alpha }}_{2:K})\). Notice that \((0,{\varvec{\alpha }}_{2:K}^\top ){\varvec{v}}\in \{0,\dots ,2^{K-1}-1\}\) and \((1,{\varvec{\alpha }}_{2:K}^\top ){\varvec{v}}\in \{2^{K-1},\dots ,2^K-1\}\). Therefore, the \(2^{K-1}\times 2^K\) matrix \(\varvec{\theta }_{1:(K-1)}\) for \(X_1,\dots , X_{K-1}\) is defined as

$$\begin{aligned} \varvec{\theta }_{1:(K-1)} = {\varvec{1}}_{2}^\top \otimes \mathbb {P}_{1:(K-1)}. \end{aligned}$$
(B5)

So, \(\varvec{\theta }_{1:(K-1)}\) is a block matrix with the first and last \(2^{K-1}\) columns equal to \(\mathbb {P}_{1:(K-1)}\). Therefore, we can write \(\mathbf{T}_1\) as

$$\begin{aligned} \mathbf{T}_1 = \begin{bmatrix} \mathbb {P}_{1:(K-1)}\mathbf{D}_{11} &{} \mathbb {P}_{1:(K-1)}\mathbf{D}_{12}\\ \mathbb {P}_{1:(K-1)}\mathbf{D}_{21} &{} \mathbb {P}_{1:(K-1)}\mathbf{D}_{22} \end{bmatrix} \end{aligned}$$
(B6)

where

$$\begin{aligned} \mathbf{D}_{11}&= \begin{bmatrix} 1-\theta _{K0} &{} 0\\ 0 &{} 1-\theta _{K,2^{K-2}} \end{bmatrix}\otimes \mathbf{I}_{2^{K-2}}, \end{aligned}$$
(B7)
$$\begin{aligned} \mathbf{D}_{12}&= \begin{bmatrix} 1-\theta _{K,2^{K-1}} &{} 0\\ 0 &{} 1-\theta _{K,3\cdot 2^{K-2}} \end{bmatrix}\otimes \mathbf{I}_{2^{K-2}}, \end{aligned}$$
(B8)
$$\begin{aligned} \mathbf{D}_{21}&= \begin{bmatrix} \theta _{K0} &{} 0\\ 0 &{} \theta _{K,2^{K-2}} \end{bmatrix}\otimes \mathbf{I}_{2^{K-2}}, \end{aligned}$$
(B9)
$$\begin{aligned} \mathbf{D}_{22}&= \begin{bmatrix} \theta _{K,2^{K-1}} &{} 0\\ 0 &{} \theta _{K,3\cdot 2^{K-2}} \end{bmatrix}\otimes \mathbf{I}_{2^{K-2}}. \end{aligned}$$
(B10)

Recall the determinant of a block matrix

$$\begin{aligned} \mathbf{Y} = \begin{bmatrix} \mathbf{A} &{} \mathbf{B}\\ \mathbf{C} &{} \mathbf{D} \end{bmatrix} \end{aligned}$$
(B11)

is \(\det (\mathbf{Y})=\det (\mathbf{A})\det (\mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})\) when \(\mathbf{A}\) is invertible. Therefore,

$$\begin{aligned} \det \left( \mathbf{T}_1\right)&=\det \left( \mathbb {P}_{1:(K-1)}\mathbf{D}_{11}\right) \det \left( \mathbb {P}_{1:(K-1)}\mathbf{D}_{22}-\mathbb {P}_{1:(K-1)}\mathbf{D}_{21}\mathbf{D}_{11}^{-1}\mathbf{D}_{12}\right) \\&=\det \left( \mathbb {P}_{1:(K-1)}\right) ^2\det \left( \mathbf{D}_{11}\mathbf{D}_{22}-\mathbf{D}_{21}\mathbf{D}_{12}\right) \\&=\det \left( \mathbb {P}_{1:(K-1)}\right) ^2\left[ \left( \theta _{K,2^{K-1}}-\theta _{K0}\right) \left( \theta _{K,3\cdot 2^{K-2}}-\theta _{K,2^{K-2}}\right) \right] ^{2^{K-2}}, \end{aligned}$$
(B12)

which is nonzero if the conditions for \(a=1\) of Proposition 3 are satisfied.
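The block-determinant computation can be checked numerically. The sketch below takes K = 3 with arbitrary, hypothetical values standing in for \(\mathbb {P}_{1:(K-1)}\) and \(\varvec{\theta }_K\), assembles \(\mathbf{T}_1\) from the blocks \(\mathbb {P}_{1:(K-1)}\mathbf{D}_{uv}\), and verifies that its determinant reduces to \(\det \left( \mathbb {P}_{1:(K-1)}\right) ^2\left[ (\theta _{K,2^{K-1}}-\theta _{K0})(\theta _{K,3\cdot 2^{K-2}}-\theta _{K,2^{K-2}})\right] ^{2^{K-2}}\):

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.uniform(size=(4, 4))  # stand-in for P_{1:(K-1)} with K = 3 (2^{K-1} = 4)

# theta_{K0}, theta_{K,2^{K-2}}, theta_{K,2^{K-1}}, theta_{K,3*2^{K-2}} (arbitrary)
th0, th2, th4, th6 = 0.2, 0.3, 0.8, 0.9
I2 = np.eye(2)  # I_{2^{K-2}} for K = 3

# diagonal blocks from (B7)-(B10)
D11 = np.kron(np.diag([1 - th0, 1 - th2]), I2)
D12 = np.kron(np.diag([1 - th4, 1 - th6]), I2)
D21 = np.kron(np.diag([th0, th2]), I2)
D22 = np.kron(np.diag([th4, th6]), I2)

# T_1 assembled from the block structure used in the proof
T1 = np.block([[P @ D11, P @ D12],
               [P @ D21, P @ D22]])

lhs = np.linalg.det(T1)
# closed form: det(P)^2 * [(th4 - th0) * (th6 - th2)]^{2^{K-2}}, exponent 2 here
rhs = np.linalg.det(P)**2 * ((th4 - th0) * (th6 - th2))**2
assert np.isclose(lhs, rhs)
```

Because the \(\mathbf{D}_{uv}\) are diagonal and commute, the Schur complement factors exactly as in the proof, so the identity holds for any invertible stand-in for \(\mathbb {P}_{1:(K-1)}\).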

About this article

Cite this article

Culpepper, S.A. A Note on Weaker Conditions for Identifying Restricted Latent Class Models for Binary Responses. Psychometrika 88, 158–174 (2023). https://doi.org/10.1007/s11336-022-09875-5

