Explainable AI and Its Applications in Healthcare

Chapter in:
Explainable AI: Foundations, Methodologies and Applications

Part of the book series: Intelligent Systems Reference Library ((ISRL,volume 232))

Abstract

Before high-end graphics and tensor processing units became available, deep neural networks could not be deployed as state-of-the-art Artificial Intelligence (AI) algorithms. Simpler linear models were preferred instead, because they were easy to understand and interpret. This began to change over the last decade with the advent of more powerful processing units, as AI algorithms took on real-world problems and models grew bigger and better.
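The abstract's point about linear models being easy to interpret can be made concrete: a linear prediction decomposes exactly into one additive contribution per feature (weight times value), so the model explains itself. The sketch below is illustrative only and not from the chapter; the feature names and weights are hypothetical.

```python
def explain_linear(weights, bias, x):
    """Decompose a linear model's prediction into per-feature contributions.

    Each contribution is simply weight * feature value, which is why
    linear models are considered inherently interpretable.
    """
    contributions = {name: weights[name] * x[name] for name in weights}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical toy risk score with made-up weights
weights = {"age": 0.03, "bmi": 0.10, "smoker": 0.80}
bias = -2.0
patient = {"age": 50, "bmi": 28.0, "smoker": 1}

pred, contrib = explain_linear(weights, bias, patient)
# contrib holds each feature's exact additive share of the prediction,
# e.g. contributions of roughly 1.5 (age), 2.8 (bmi), 0.8 (smoker)
```

A deep network offers no such exact decomposition, which is what motivates post-hoc explanation methods such as saliency maps, LRP, and SHAP discussed in the XAI literature.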



Author information

Correspondence to Arjun Sarkar.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Sarkar, A. (2023). Explainable AI and Its Applications in Healthcare. In: Mehta, M., Palade, V., Chatterjee, I. (eds) Explainable AI: Foundations, Methodologies and Applications. Intelligent Systems Reference Library, vol 232. Springer, Cham. https://doi.org/10.1007/978-3-031-12807-3_6
