
Search for Galaxy Cluster Candidates in the Cosmic Microwave Background Maps of the Planck Space Mission Using a Convolutional Neural Network Based on the Method of Tracing the Sunyaev–Zeldovich Effect

Published in Astrophysical Bulletin.

Abstract—We propose a method of searching for radio sources exhibiting the Sunyaev–Zeldovich effect in the multi-frequency emission maps of the Planck mission using a convolutional neural network. A catalog for recognizing radio sources is compiled using the GLESP pixelization scheme at the frequencies of 100, 143, 217, 353, and 545 GHz. The quality of the proposed approach is evaluated, and the dependence of the model performance on the S/N ratio is estimated. We show that the presented neural network approach allows the detection of sources with the Sunyaev–Zeldovich effect. The proposed method can be used to find the most likely galaxy cluster candidates at large redshifts.


Notes

  1. https://pla.esac.esa.int/#home.

  2. http://sed.sao.ru/vo/planck_maps/.

  3. https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py.

  4. https://github.com/SunnyientDev/SZ-detection.


Author information


Corresponding author

Correspondence to A. P. Topchieva.

Ethics declarations

The authors declare no conflict of interest.

Additional information

Translated by E. Chmyreva

APPENDIX

This work is based on metrics defined for two classes. Let us call the objects that have the SZ effect the positive class (P), and the objects without the effect the negative class (N). We shall refer to the responses that we know beforehand as ground truth (gt), and to the responses that we predict as prediction (pred).

Let us first consider a single object of the sample. We know the correct answer beforehand (P or N), and we predict a certain response (P or N). There are four possible outcomes:

gt = P, pred = P: the SZ effect really was present, and we predicted it. We shall call this a true positive (TP);

gt = N, pred = N: there was no SZ effect, and we predicted its absence. This is a true negative (TN);

gt = P, pred = N: the SZ effect was present, but we did not notice it. This is a false negative (FN);

gt = N, pred = P: there was no SZ effect, but we think it was present. This is a false positive (FP).

For a list of sample objects we thus have a list of correct responses (e.g., gt = PPPNNNN) and a list of our predictions (e.g., pred = PNNPPPN). Each object corresponds to one of the four cases: TP, TN, FP, or FN.
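As a minimal sketch (assuming gt and pred are given as equal-length strings of 'P'/'N' characters, as in the example above; the helper name is ours, not from the paper), the four counts can be computed as:

```python
def confusion_counts(gt, pred):
    """Count (TP, TN, FP, FN) for two equal-length strings of 'P'/'N' labels."""
    tp = sum(g == "P" and p == "P" for g, p in zip(gt, pred))
    tn = sum(g == "N" and p == "N" for g, p in zip(gt, pred))
    fp = sum(g == "N" and p == "P" for g, p in zip(gt, pred))
    fn = sum(g == "P" and p == "N" for g, p in zip(gt, pred))
    return tp, tn, fp, fn

# The example lists from the text: gt = PPPNNNN, pred = PNNPPPN
tp, tn, fp, fn = confusion_counts("PPPNNNN", "PNNPPPN")  # → (1, 1, 3, 2)
```

All of the metrics below are simple functions of these four counts.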

Based on these principles, we can describe all metrics that are possible for two classes. We are primarily interested in the following:

Accuracy: the fraction of all objects in the sample whose class (presence or absence of the SZ effect) is identified correctly, i.e.,

$$\frac{{{\text{TP}} + {\text{TN}}}}{{{\text{TP}} + {\text{TN}} + {\text{FP}} + {\text{FN}}}},$$

Recall: the fraction of objects with gt = P that are predicted as P, i.e.,

$$\frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}};$$

Precision: the fraction of objects predicted as P that actually have gt = P, i.e.,

$$\frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FP}}}}.$$

F1: the harmonic mean of Precision and Recall, i.e.,

$$\frac{{2{\text{TP}}}}{{2{\text{TP}} + {\text{FP}} + {\text{FN}}}}$$

ROC AUC: the area under the ROC curve, which for a single hard prediction equals

$$\frac{{1 + {\text{TPR}} - {\text{FPR}}}}{2},$$

where the True Positive Rate (TPR) is the fraction of positive (P) objects classified correctly by our algorithm, and the False Positive Rate (FPR) is the fraction of negative (N) objects classified incorrectly. The behavior of the Accuracy metric can be demonstrated using the following examples:

gt = PPPPNNNN, pred = NNNNNNNN and Accuracy = 0.5;

gt = PPPPNNNN, pred = PNPNPNPN and Accuracy = 0.5;

gt = PNNNNNNN, pred = NNNNNNNN and Accuracy = 7/8.

As we can see, for balanced samples (when gt contains roughly equal numbers of P and N), a constant prediction of always N, or random guessing, yields an Accuracy of about 0.5, so any useful model must do better than that. For imbalanced classes, however, such trivial predictions can yield a much higher value, which then reflects not the quality of the predictions but the balance of the sample. The conclusion: Accuracy is informative only for balanced samples.
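The three Accuracy examples above can be checked with a short sketch (the helper below is illustrative, not code from the paper):

```python
def accuracy(gt, pred):
    """Fraction of objects whose predicted class matches the ground truth."""
    return sum(g == p for g, p in zip(gt, pred)) / len(gt)

print(accuracy("PPPPNNNN", "NNNNNNNN"))  # 0.5  (constant N on a balanced sample)
print(accuracy("PPPPNNNN", "PNPNPNPN"))  # 0.5  (uninformative alternating guess)
print(accuracy("PNNNNNNN", "NNNNNNNN"))  # 0.875, i.e., 7/8 (imbalanced sample)
```

The last line shows the pitfall: the constant-N predictor scores 7/8 purely because the sample is dominated by N.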

The limitations of the Recall metric can be demonstrated with the following examples:

gt = PPPPNNNN, pred = NNNNNNNN and Recall = 0;

gt = PPPPNNNN, pred = PNPNPNPN and Recall = 0.5;

gt = PPPPNNNN, pred = PPPPPPPP and Recall = 1;

gt = PNNNNNNN, pred = PPPPPPPP and Recall = 1.

Obtaining Recall = 1 is not hard: it suffices to always predict P. The Recall metric therefore cannot be considered in isolation from the others: otherwise one may not notice that the model simply outputs P everywhere instead of making meaningful predictions.
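The Recall examples above, including the always-P degenerate case, can be sketched as follows (again an illustrative helper, assuming 'P'/'N' label strings):

```python
def recall(gt, pred):
    """TP / (TP + FN): the fraction of true P objects that were predicted as P."""
    tp = sum(g == "P" and p == "P" for g, p in zip(gt, pred))
    fn = sum(g == "P" and p == "N" for g, p in zip(gt, pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

print(recall("PPPPNNNN", "NNNNNNNN"))  # 0.0
print(recall("PPPPNNNN", "PNPNPNPN"))  # 0.5
print(recall("PPPPNNNN", "PPPPPPPP"))  # 1.0 -- always predicting P trivially maximizes Recall
```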

The metrics are applied according to the following principles:

• The sample is balanced, therefore we can use Accuracy.

• When comparing with the catalogs, note that a catalog contains only correctly detected objects (TP), and its quality can be judged only by the objects it misses (FN). Accuracy also requires TN and FP, which are unavailable here; we can therefore use only Recall.

• We can calculate the Recall for the MMF1, MMF3, and PwS catalogs, since they were compiled based on Recall maximization.

Our neural network outputs a number between 0 and 1: the probability that an object belongs to the positive class, whereas we need a P or N answer. We therefore need to impose a threshold above which the prediction is P. We divide the dataset into three parts:

• Train: the subsample on which we learn, i.e., adjust the network weights so as to minimize the loss function;

• Validation: the subsample used for setting the threshold, i.e., thresholds from 0 to 1 are examined, and the one that gives the best average Accuracy on the validation set is selected;

• Test: the subsample used to measure the final Accuracy and Recall, which are then used in the table for comparison.
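The threshold selection on the validation subsample can be sketched as follows. The probabilities and labels below are made up for illustration; the paper's actual model outputs are not reproduced here:

```python
# Hypothetical validation outputs: predicted probabilities and true labels.
val_probs = [0.10, 0.40, 0.35, 0.80, 0.65, 0.20]
val_labels = ["N", "N", "P", "P", "P", "N"]

def accuracy_at(threshold, probs, labels):
    """Accuracy obtained when predicting P for every probability >= threshold."""
    preds = ["P" if p >= threshold else "N" for p in probs]
    return sum(a == b for a, b in zip(preds, labels)) / len(labels)

# Scan thresholds from 0 to 1 in steps of 0.01 and keep the one with the best
# validation Accuracy; this threshold is then frozen and applied to the test set.
best_threshold = max((t / 100 for t in range(101)),
                     key=lambda t: accuracy_at(t, val_probs, val_labels))
```

In this toy example no threshold separates the classes perfectly (one N object has probability 0.40, above the lowest P probability 0.35), so the best achievable validation Accuracy is 5/6.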


Cite this article

Verkhodanov, O.V., Topchieva, A.P., Oronovskaya, A.D. et al. Search for Galaxy Cluster Candidates in the Cosmic Microwave Background Maps of the Planck Space Mission Using a Convolutional Neural Network Based on the Method of Tracing the Sunyaev–Zeldovich Effect. Astrophys. Bull. 76, 123–131 (2021). https://doi.org/10.1134/S1990341321020103
