
DeepConnection: classifying momentary relationship state from images of romantic couples

  • Research Article
  • Journal of Computational Social Science

Abstract

Detecting the momentary relationship state and quality of romantic couples is an important endeavor for relationship research, couple therapy, and, of course, couples themselves. Yet current methods are intrusive, asynchronous, and plagued by ceiling effects, and they assess only subjective questionnaire responses while trying to capture the objective state of a relationship. According to social appraisal theory, human beings rely on emotional responses to assess interpersonal situations, a key element of relationship functioning in couples. Romantic couples are particularly well suited for this approach, as romantic relationships trigger strong emotional reactions. Here, we employ deep learning methods to assess the momentary relationship state of romantic couples from predominantly stock images via facial and bodily emotion expression and other features. Our new model, DeepConnection, combines pre-trained residual neural networks, spatial pyramid pooling layers, and power mean transformations to extract relevant image features for binary classification. With this architecture, we achieved an average accuracy of nearly 97% on a separate validation dataset. We also interpreted the model using Gradient-weighted Class Activation Mapping (Grad-CAM) to identify which features allow DeepConnection to detect binarized momentary relationship state. To demonstrate generalizability and robustness, we used the trained DeepConnection model to analyze videos of couples exhibiting a range of postures and facial expressions, achieving an average accuracy of about 85%. The work presented here could inform couples, advance relationship research, and assist therapists in couple therapy.
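The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of how a DeepConnection-style classifier could be assembled. The ResNet-34 backbone, pyramid levels, power-mean exponent, and dropout rate are illustrative assumptions rather than the authors' exact configuration (the 100 × 2 final layer matches the head referenced in Figs. 7 and 8 of the Appendix); consult the linked repository for the actual implementation.

```python
# Minimal sketch of a DeepConnection-style classifier, assuming a torchvision
# ResNet-34 backbone; pyramid levels, power-mean exponent, and dropout rate
# are illustrative, not the authors' exact values.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SpatialPyramidPooling(nn.Module):
    """Pool feature maps at several grid sizes and concatenate (He et al., 2014)."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):  # x: (N, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, lvl).flatten(1) for lvl in self.levels]
        return torch.cat(pooled, dim=1)  # (N, C * sum(lvl**2 for lvl in levels))

class PowerMean(nn.Module):
    """Signed power transformation as a stand-in for the power-mean
    non-linearity of Zhang & Wu (2019); the exact formulation may differ."""
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        return torch.sign(x) * torch.abs(x).pow(self.p)

class DeepConnectionSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.spp = SpatialPyramidPooling(levels=(1, 2, 4))
        feat_dim = 512 * (1 + 4 + 16)  # 512 channels in the last ResNet-34 stage
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 100),
            PowerMean(p=0.5),
            nn.Dropout(p=0.5),
            nn.Linear(100, num_classes),  # happy vs. unhappy
        )

    def forward(self, x):
        return self.head(self.spp(self.features(x)))
```

Because the SPP layer pools to fixed grid sizes regardless of input resolution, such a head accepts images of varying sizes and aspect ratios, which is convenient for stock photographs and video frames.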


Availability of data and material

All data and materials can be obtained via the code provided below.


Funding

No funding was involved in this research.

Author information

Corresponding author

Correspondence to Daniel Bojar.

Ethics declarations

Conflicts of interest

The authors declare no conflict of interest.

Ethics statement

The individuals involved in this manuscript have given written informed consent to publish these case details.

Code availability

All code and trained models used for this work can be found at: https://github.com/Bribak/DeepConnection

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Table 3 and Figs. 4, 5, 6, 7, 8 and 9 below.

Table 3 Performance metrics of alternative deep learning models used to classify momentary relationship state of romantic couples
Fig. 4

Interpretation of ResNet-34 model predictions using Grad-CAM. Note: Four representative images of couples not used in training (two labeled as happy (a, b) and two as unhappy (c, d)) were used as inputs for a trained ResNet-34 model. Gradients at the last convolutional layer were used to generate class-specific saliency heatmaps, which were layered on top of the original images together with the predicted class label and the model confidence in percent. Heatmaps of predicted class labels are always depicted as the left-hand image, irrespective of the true label

Fig. 5

Interpretation of SPP model predictions using Grad-CAM. Note: Four representative images of couples not used in training (two labeled as happy (a, b) and two as unhappy (c, d)) were used as inputs for a trained spatial pyramid pooling (SPP) model. Gradients at the last convolutional layer were used to generate class-specific saliency heatmaps, which were layered on top of the original images together with the predicted class label and the model confidence in percent. Heatmaps of predicted class labels are always depicted as the left-hand image, irrespective of the true label

Fig. 6

Interpretation of DeepConnection model predictions using Grad-CAM. Note: Four representative images of couples not used in training (two labeled as happy (a, b) and two as unhappy (c, d)) were used as inputs for the trained DeepConnection model. Gradients at the last convolutional layer were used to generate class-specific saliency heatmaps, which were layered on top of the original images together with the predicted class label and the model confidence in percent. Heatmaps of predicted class labels are always depicted as the left-hand image, irrespective of the true label
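Figures 4, 5 and 6 all apply the same Grad-CAM procedure (Selvaraju et al., 2016). As a reading aid, here is a minimal sketch of that procedure, assuming a PyTorch model whose last convolutional block is exposed as model.features, as in the architecture sketch after the abstract; the hook placement, upsampling, and normalization choices are illustrative, not necessarily the paper's.

```python
# Minimal Grad-CAM sketch (Selvaraju et al., 2016), assuming a model whose
# last convolutional block is accessible as model.features[-1]; upsampling
# and normalization choices here are illustrative.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """Return an (H, W) saliency map for target_class on one (C, H, W) image."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0]

    layer = model.features[-1]  # last convolutional block of the backbone
    handles = [layer.register_forward_hook(fwd_hook),
               layer.register_full_backward_hook(bwd_hook)]

    logits = model(image.unsqueeze(0))  # (1, num_classes)
    model.zero_grad()
    logits[0, target_class].backward()  # gradients w.r.t. the chosen class
    for h in handles:
        h.remove()

    acts, grads = activations["value"], gradients["value"]  # each (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # average-pool the gradients
    cam = F.relu((weights * acts).sum(dim=1))       # weighted channel sum, (1, h, w)
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
```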

Fig. 7

Activations and weights of the last fully connected layer of the SPP model for selected images. Note: Four representative images of couples not used in training (two labeled as happy (a, b) and two as unhappy (c, d)) were used as inputs for the trained spatial pyramid pooling (SPP) model. Plotted points represent the weights of the last fully connected layer of size 100 × 2 in the trained SPP model, which, together with their inputs, determine the class probabilities. Points were colored from yellow to red by the magnitude of the activations from the presented image directly prior to the last fully connected layer. To determine class probabilities, activations are multiplied by class weights and summed

Fig. 8

Activations and weights of the last fully connected layer of the DeepConnection model for selected images. Note: Four representative images of couples not used in training (two labeled as happy (a, b) and two as unhappy (c, d)) were used as inputs for the trained DeepConnection model. Plotted points represent the weights of the last fully connected layer of size 100 × 2 in the trained DeepConnection model, which, together with their inputs, determine the class probabilities. Points were colored from yellow to red by the magnitude of the activations from the presented image directly prior to the last fully connected layer. To determine class probabilities, activations are multiplied by class weights and summed
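Producing the data behind Figs. 7 and 8 requires capturing both the 100-dimensional activations entering the final fully connected layer and that layer's 100 × 2 weights. Below is a minimal sketch of one way to do this with a forward hook, assuming the head layout from the architecture sketch above (model.head ending in a Linear(100, 2)); variable names are ours, not the authors'.

```python
# Minimal sketch of gathering the data behind Figs. 7 and 8, assuming the
# head layout of the architecture sketch above (model.head ends in a
# Linear(100, 2)); variable names are illustrative.
import torch

def head_activations_and_weights(model, image):
    """Capture the 100-dim input to the final linear layer and its weights."""
    captured = {}
    final_linear = model.head[-1]  # Linear(100, 2): weight shape (2, 100)

    def hook(module, inputs, output):
        captured["acts"] = inputs[0].detach()  # (1, 100) incoming activations

    handle = final_linear.register_forward_hook(hook)
    with torch.no_grad():
        probs = model(image.unsqueeze(0)).softmax(dim=1)
    handle.remove()

    # Plot each weight colored by the magnitude of its incoming activation;
    # class scores are the activation-weight dot products (plus bias).
    return captured["acts"][0], final_linear.weight.detach(), probs[0]
```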

Fig. 9

Training the DeepConnection model is accelerated by multi-sample dropout. Note: Exchanging the dropout layer immediately prior to the last fully connected layer in DeepConnection for a multi-sample dropout layer (using eight dropout channels) enabled faster training convergence, both in the number of epochs (a) and in the total training time required (b). Data shown were gathered from five training runs per dropout method and are plotted as mean ± standard deviation. p-values were calculated using a two-tailed Student's t-test
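Multi-sample dropout (Inoue, 2019) pushes the same features through several independent dropout masks in one forward pass and aggregates the resulting logits, which tends to accelerate convergence, as Fig. 9 shows. Below is a minimal sketch with eight dropout channels; averaging the per-channel logits is one common aggregation, and the authors' exact variant may differ.

```python
# Minimal multi-sample dropout head (Inoue, 2019) with eight dropout channels,
# as in Fig. 9; averaging the per-channel logits is one common aggregation.
import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, in_features=100, num_classes=2, num_samples=8, p=0.5):
        super().__init__()
        self.dropouts = nn.ModuleList(nn.Dropout(p) for _ in range(num_samples))
        self.fc = nn.Linear(in_features, num_classes)  # shared across all channels

    def forward(self, x):
        # Apply the shared classifier under several independent dropout masks
        # of the same features and average the logits; at evaluation time the
        # dropout layers act as identities, so all channels agree.
        return torch.stack([self.fc(d(x)) for d in self.dropouts]).mean(dim=0)
```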


Cite this article

Uhlich, M., Bojar, D. DeepConnection: classifying momentary relationship state from images of romantic couples. J Comput Soc Sc 4, 631–653 (2021). https://doi.org/10.1007/s42001-021-00102-2

