Abstract
The suspension of the chatbot service “Luda” has raised numerous questions for South Korean society about ethical issues in dealing with artificial intelligence (AI). While the primary reason for the suspension was the chatbot’s hate speech against the LGBT community, the chatbot itself was also a target of sexual harassment by manipulative users. Moreover, the service provider Scatter Lab was investigated for possible violations of the Personal Information Protection Act.
The aim of this research is to systematically examine the public’s expectations and concerns about AI and to derive implications from the perspective of human–AI interaction. Using big data analysis incorporating natural language processing, the research analyzed news articles and comments related to Luda and found that the public’s main concern about data centered on its collection process and content. A qualitative analysis of the literature indicates that the main source of such concerns was enhanced AI literacy, which was itself induced by the complex ethical issues surrounding Luda.
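The paper itself does not include code, but the kind of semantic network analysis it describes (and whose results appear in the appendix) can be illustrated with a minimal sketch: tokenize each comment, count which word pairs co-occur within the same comment, and keep recurring pairs as network edges. All data, thresholds, and variable names below are hypothetical stand-ins, not the authors' actual pipeline.

```python
from collections import Counter
from itertools import combinations

# Illustrative English snippets standing in for the news-comment corpus.
comments = [
    "chatbot data collection privacy",
    "ai ethics data privacy",
    "chatbot hate speech ethics",
]

# Count how often each word pair co-occurs within the same comment.
edge_counts = Counter()
for comment in comments:
    words = sorted(set(comment.split()))  # deduplicate, fix pair ordering
    for pair in combinations(words, 2):
        edge_counts[pair] += 1

# Pairs appearing in more than one comment form the semantic network's edges.
network = {pair: n for pair, n in edge_counts.items() if n > 1}
print(network)
```

In a real study the tokenizer would handle Korean morphology (the notes mention Optimind 3.0 as the analysis tool) and the co-occurrence threshold would be tuned to the corpus size, but the edge-counting idea is the same.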
Appendix
1.1 The Original Semantic Network of News Articles
1.2 The Original Semantic Network of News Comments
© 2021 Springer Nature Switzerland AG
Kim, Y., Kim, J.H. (2021). The Impact of Ethical Issues on Public Understanding of Artificial Intelligence. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 - Posters. HCII 2021. Communications in Computer and Information Science, vol 1420. Springer, Cham. https://doi.org/10.1007/978-3-030-78642-7_67
Print ISBN: 978-3-030-78641-0
Online ISBN: 978-3-030-78642-7