Between Overload and Indifference: Detection of Fake Accounts and Social Bots by Community Managers

  • Conference paper
  • First Online:
Disinformation in Open Online Media (MISDOOM 2019)

Abstract

In addition to expanding opportunities for citizens to participate in society, participatory online journalistic platforms offer opportunities for the dissemination of online propaganda through fake accounts and social bots. Community managers are expected to separate genuine expressions of opinion from statements manipulated via fake accounts and social bots. However, little is known about the criteria by which community managers distinguish between “real” and “fake” users. The present study addresses this gap with a series of expert interviews. The results show that community managers have widespread experience with fake accounts but have difficulty assessing the degree of automation. The criteria by which an account is classified as “fake” can be described along a micro-meso-macro structure: recourse to indicators at the macro level is rare and partly stereotyped, while impression-forming processes at the micro and meso levels predominate. We discuss the results with a view to possible long-term consequences for collective participation.



Author information

Corresponding author

Correspondence to Svenja Boberg.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Boberg, S., Frischlich, L., Schatto-Eckrodt, T., Wintterlin, F., Quandt, T. (2020). Between Overload and Indifference: Detection of Fake Accounts and Social Bots by Community Managers. In: Grimme, C., Preuss, M., Takes, F., Waldherr, A. (eds.) Disinformation in Open Online Media. MISDOOM 2019. Lecture Notes in Computer Science, vol. 12021. Springer, Cham. https://doi.org/10.1007/978-3-030-39627-5_2

  • DOI: https://doi.org/10.1007/978-3-030-39627-5_2

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-39626-8

  • Online ISBN: 978-3-030-39627-5

  • eBook Packages: Computer Science, Computer Science (R0)
