Abstract
Public consumption of artificial intelligence (AI) technologies has rarely been investigated from the perspective of data surveillance and security. We show that the technology acceptance model (TAM), when modified to include security and surveillance fears about AI, offers insight into how individuals begin to use, accept, or evaluate AI and its automated decisions. Across two studies, we found positive roles for perceived ease of use (PEOU) and perceived usefulness (PU). AI security concern, however, negatively affected PEOU and PU, resulting in lower acceptance of AI in terms of (1) use, (2) preference, and (3) participation. AI surveillance concern also negatively affected the credibility of AI and its recommendations. Integrating the extant literature on socio-demographic differences, we show how AI acceptance rests on an individual's weighing of (1) technological risks (security/surveillance) against (2) benefits (PEOU/PU), alongside other socio-demographic and contextual factors.
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Data availability statement
No data set is associated with this work, so none is publicly available.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Park, Y.J., Jones-Jang, S.M. Surveillance, security, and AI as technological acceptance. AI & Soc 38, 2667–2678 (2023). https://doi.org/10.1007/s00146-021-01331-9