
Creating an Influencer-Relationship Model to Locate Actors in Environmental Communications

Research Methods for the Digital Humanities

Abstract

This chapter describes a method for creating an influencer-relationship model from newspaper articles and illustrates each step required to develop the model. These steps include collecting articles from disparate sources, locating relevant actors in the articles, compiling and querying a MySQL database, and creating visualizations to assist with analysis. Once assembled, the article archive can be searched and modeled to find relationships between people who influence the production of public environmental knowledge. My area of focus is environmental communication regarding groundwater debates in Texas; the chapter takes public communications about groundwater during the drought years of 2010–2014 as a case study. It concludes with a few thoughts on how to improve the method.


Notes

  1. Franco Moretti, Distant Reading (London and New York: Verso, 2013).

  2. I repeated the process of creating a conceptual model many times to arrive at a workable method and found it helpful to document each question and approach along the way, for two reasons. First, research practices should be transparent, and it is easy to forget to record critical choices. Second, the documentation can help to clarify the results later in the research process.

  3. Mats Ekström, “Epistemologies of TV Journalism,” Journalism: Theory, Practice & Criticism 3, no. 3 (2002), 259–282.

  4. Tom Rosenstiel, Amy Mitchell, Kristen Purcell, and Lee Rainie, “How People Learn About Their Local Community,” Pew Research Center, 2011; Maxwell T. Boykoff and Jules M. Boykoff, “Balance as Bias: Global Warming and the US Prestige Press,” Global Environmental Change 14, no. 2 (2004), 125–136.

  5. Tom Rosenstiel, Amy Mitchell, Kristen Purcell, and Lee Rainie, “How People Learn About Their Local Community,” Pew Research Center, 2011.

  6. Klaus Krippendorff, Content Analysis: An Introduction to Its Methodology, 2nd ed. (Thousand Oaks, CA: Sage, 2004); J. Macnamara, “Media Content Analysis: Its Uses, Benefits and Best Practice Methodology,” Asia Pacific Public Relations Journal 6, no. 1 (2005), 1–34; Bernard Berelson and Paul Lazarsfeld, Content Analysis in Communications Research (New York: Free Press, 1946).

  7. Steve Stemler, “An Overview of Content Analysis,” Practical Assessment, Research & Evaluation 7, no. 17 (2001).

  8. Klaus Krippendorff, Content Analysis: An Introduction to Its Methodology, 2nd ed. (Thousand Oaks, CA: Sage, 2004).

  9. The metadata for a text document contains the article’s publication name, city, date, author, word count, and other identifying information.

  10. A negative keyword is any word that should not return results; it is used to narrow a search. For example, sports terms were made negative keywords. One unintended and unofficial finding of this project is that, in local papers, “drought” more often describes a basketball team’s win/loss record than a meteorological condition.
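The filtering this note describes can be sketched in a few lines of Python; the keyword lists below are hypothetical stand-ins for the project’s actual search terms, not taken from the chapter.

```python
# Minimal sketch of negative-keyword filtering (hypothetical keyword lists).
POSITIVE = {"drought", "groundwater", "aquifer"}
NEGATIVE = {"basketball", "playoff", "scoring"}  # sports terms excluded

def is_relevant(article_text: str) -> bool:
    """Keep an article only if it mentions at least one positive keyword
    and none of the negative keywords."""
    words = set(article_text.lower().split())
    return bool(words & POSITIVE) and not (words & NEGATIVE)

print(is_relevant("The drought lowered the aquifer level"))  # True
print(is_relevant("The team ended its scoring drought"))     # False
```

A real pipeline would also strip punctuation and handle multi-word phrases, but the set intersection captures the basic narrowing logic.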

  11. A regular expression is a sequence of characters used to find patterns within strings of text. Regular expressions are commonly used in online forms to ensure fields are correctly filled out; for example, a programmer may use one to confirm that a field follows the proper format for a phone number or address, returning an error if it does not. There are numerous online tutorials on writing regular expressions. I used regex101.com and regexr.com to help write the expressions needed for this project.
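The form-validation example in this note can be illustrated with a short Python sketch; the phone-number pattern here is illustrative, not one of the expressions used in the project.

```python
import re

# A simple US phone-number pattern: optional parentheses around the area
# code, optional separator (dash, dot, or space) between digit groups.
phone = re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

print(bool(phone.match("(512) 555-0137")))  # True
print(bool(phone.match("555-0137")))        # False (no area code)
```

Sites like regex101.com show a live breakdown of each token in a pattern like this, which is how such expressions are typically debugged.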

  12. Austin Meyers, founder of AK5A.com, wrote the PHP scripts used in the application. Austin and I have collaborated over the past 15 years on numerous technical projects and applications. The process of web scraping is a method for extracting objects or text from HTML websites. There are many software companies producing applications to assist in mining website data. Researchers may also choose to build a web scraper for specialized research projects.

  13. Web scraping is a technique for transferring the content of a webpage to another format.
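The extraction step of web scraping can be sketched with Python’s standard library; the chapter’s scraper was written in PHP, so this stand-in only illustrates the general technique of pulling article text out of already-downloaded HTML.

```python
from html.parser import HTMLParser

# Collect the text of <p> elements from an HTML page, the typical
# container for newspaper article body text.
class ParagraphExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.paragraphs[-1] += data

page = "<html><body><h1>Headline</h1><p>First paragraph.</p><p>Second.</p></body></html>"
extractor = ParagraphExtractor()
extractor.feed(page)
print(extractor.paragraphs)  # ['First paragraph.', 'Second.']
```

A production scraper adds page downloading, politeness delays, and per-site rules for locating the article container, but the parse-and-extract core looks like this.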

  14. Import.io is a free web scraping application that converts a webpage into a table. It can be automated to run against multiple websites or used to search within large sites. For example, the Brownsville Herald website has over 118,000 pages (as of December 19, 2017), and it would be impractical to search the entire site and copy and paste individual articles. The website accompanying this book has a video on how Import.io can be used to gather newspaper articles into a database.

  15. Klaus Krippendorff, Content Analysis: An Introduction to Its Methodology, 2nd ed. (Thousand Oaks, CA: Sage, 2004).

  16. A Boolean phrase match allows a user to search for phrases using operators such as AND, OR, and NOT to refine searches.
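The effect of AND/OR/NOT operators can be sketched with a small Python helper; this is a hypothetical illustration of how such operators narrow results, not the database’s own query engine.

```python
# AND: every word in all_of must appear; OR: at least one word in any_of;
# NOT: no word in none_of may appear.
def matches(text, all_of=(), any_of=(), none_of=()):
    words = set(text.lower().split())
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))

doc = "groundwater levels fell during the drought"
print(matches(doc, all_of=["groundwater", "drought"]))           # True
print(matches(doc, all_of=["drought"], none_of=["basketball"]))  # True
print(matches(doc, all_of=["drought"], none_of=["groundwater"])) # False
```

MySQL exposes the same logic through its Boolean full-text mode, where `+word` and `-word` play the roles of AND and NOT.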

  17. Allen Ritter is the Texas State Representative for District 21 and current chairman of the Texas House Committee on Natural Resources.

  18. Robert Philip Weber, Basic Content Analysis, 2nd ed., Sage University Papers Series, no. 07-049 (Newbury Park, CA: Sage, 1990).

  19. Klaus Krippendorff, Content Analysis: An Introduction to Its Methodology, 2nd ed. (Thousand Oaks, CA: Sage, 2004); Johnny Saldaña, The Coding Manual for Qualitative Researchers, 3rd ed. (Los Angeles, CA and London, New Delhi, Singapore, Washington DC: Sage, 2016); Sharan B. Merriam and Elizabeth J. Tisdell, Qualitative Research: A Guide to Design and Implementation, 4th ed. (San Francisco, CA: Jossey-Bass, 2016).

  20. A local server is a MySQL database hosted on the user’s computer rather than by a provider. Running the database on a local machine, as opposed to online, reduces risk and allows the user to experiment without worrying about security or performance issues. Instructions, best practices, and links to help you get started with a MySQL database are on the website accompanying this book.

  21. There are many ways to construct a MySQL query. I used a query similar to the one in Fig. 5.2 because it fit into my workflow; it was easy for me to find the keyword_id associated with the keywords I was interested in. Another researcher might write the query differently but still arrive at the same output.
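The kind of keyword_id join described here can be sketched as follows, using Python’s built-in SQLite as a stand-in for MySQL; the table and column names are hypothetical, not the project’s actual schema.

```python
import sqlite3

# In-memory stand-in for the article database: a keywords table and a
# linking table that records which keyword appears in which article.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE keywords (keyword_id INTEGER PRIMARY KEY, keyword TEXT);
    CREATE TABLE article_keywords (article_id INTEGER, keyword_id INTEGER);
    INSERT INTO keywords VALUES (1, 'groundwater'), (2, 'drought');
    INSERT INTO article_keywords VALUES (10, 1), (11, 2), (12, 1);
""")

# Find every article tagged with keyword_id 1 ('groundwater').
rows = con.execute("""
    SELECT ak.article_id, k.keyword
    FROM article_keywords AS ak
    JOIN keywords AS k ON k.keyword_id = ak.keyword_id
    WHERE ak.keyword_id = 1
    ORDER BY ak.article_id
""").fetchall()
print(rows)  # [(10, 'groundwater'), (12, 'groundwater')]
```

Filtering on the integer keyword_id rather than the keyword string is what makes verification fast, as the following note observes.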

  22. There are other ways to accomplish the same goal in MySQL; the only requirement for this type of project is that the results are accurate. Using identification numbers rather than text searches sped up the verification process.

  23. Edward Tufte has written extensively on data visualization beginning in the 1970s (Tufte 2001, 2006). Ben Fry has not only written on the subject (Fry 2008) but has also co-developed a programming language for data visualization called Processing.

  24. Marian Dörk, Christopher Collins, Patrick Feng, and Sheelagh Carpendale, “Critical InfoVis: Exploring the Politics of Visualization,” in CHI’13 Extended Abstracts on Human Factors in Computing Systems, edited by Wendy E. Mackay and Association for Computing Machinery (New York: ACM, 2013).

  25. Johanna Drucker, “Humanities Approaches to Graphical Display,” Digital Humanities Quarterly 5, no. 1 (2011).

  26. Lisa Otty and Tara Thomson, “Data Visualization in the Humanities,” in Research Methods for Creating and Curating Data in the Digital Humanities, edited by Matt Hayler and Gabriele Griffin, Research Methods for the Arts and Humanities (Edinburgh: Edinburgh University Press, 2016).

  27. Ibid.

  28. Marian Dörk, Christopher Collins, Patrick Feng, and Sheelagh Carpendale, “Critical InfoVis: Exploring the Politics of Visualization,” in CHI’13 Extended Abstracts on Human Factors in Computing Systems, edited by Wendy E. Mackay and Association for Computing Machinery (New York: ACM, 2013).

  29. Maxwell T. Boykoff and Jules M. Boykoff, “Balance as Bias: Global Warming and the US Prestige Press,” Global Environmental Change 14, no. 2 (2004), 125–136.

  30. See Lance Bennett, The Politics of Illusion, 10th ed. (Chicago: University of Chicago Press, 2016).

  31. The specific steps required to create these charts are available on the website accompanying this book.

  32. Python is a general-purpose, high-level programming language ideal for small-scale applications. Python’s Natural Language Toolkit (NLTK) is a platform that allows Python applications to work with English-language data. NLTK is a free, open-source project used by researchers across disciplines.
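The kind of word-frequency analysis NLTK supports can be sketched with only the standard library; NLTK itself offers far richer tokenizers, stemmers, and taggers than this minimal stand-in, and the sample sentence is invented for illustration.

```python
import re
from collections import Counter

# Tokenize a sentence into lowercase words and count occurrences,
# the first step of most corpus frequency analyses.
text = "Groundwater debates shaped drought coverage; drought dominated."
tokens = re.findall(r"[a-z]+", text.lower())
print(Counter(tokens).most_common(2))  # [('drought', 2), ('groundwater', 1)]
```

Across an archive of thousands of articles, counts like these are what surface the actors and terms worth modeling.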

References

  • Berelson, Bernard, and Paul Lazarsfeld. Content Analysis in Communications Research. New York: Free Press, 1946.


  • Boykoff, Maxwell T., and Jules M. Boykoff. “Balance as Bias: Global Warming and the US Prestige Press.” Global Environmental Change 14, no. 2 (2004): 125–136. https://doi.org/10.1016/j.gloenvcha.2003.10.001.


  • Dörk, Marian, Christopher Collins, Patrick Feng, and Sheelagh Carpendale. “Critical InfoVis: Exploring the Politics of Visualization.” In CHI’13 Extended Abstracts on Human Factors in Computing Systems, edited by Wendy E. Mackay and Association for Computing Machinery. New York: ACM, 2013. http://mariandoerk.de/criticalinfovis/altchi2013.pdf.

  • Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5, no. 1 (2011). http://www.digitalhumanities.org/dhq/vol/5/1/000091/000091.html#p4.

  • Ekström, Mats. “Epistemologies of TV Journalism.” Journalism: Theory, Practice & Criticism 3, no. 3 (2002): 259–282.


  • Fry, Ben. Visualizing Data. Sebastopol, CA: O’Reilly Media, Inc., 2008.


  • Krippendorff, Klaus. Content Analysis: An Introduction to Its Methodology. 2nd ed. Thousand Oaks, CA: Sage, 2004.


  • Macnamara, J. “Media Content Analysis: Its Uses, Benefits and Best Practice Methodology.” Asia Pacific Public Relations Journal 6, no. 1 (2005): 1–34.


  • Merriam, Sharan B., and Elizabeth J. Tisdell. Qualitative Research: A Guide to Design and Implementation. 4th ed. San Francisco, CA: Jossey-Bass, 2016.


  • Moretti, Franco. Distant Reading. London, New York: Verso, 2013.


  • Otty, Lisa, and Tara Thomson. “Data Visualization in the Humanities.” In Research Methods for Creating and Curating Data in the Digital Humanities, edited by Matt Hayler and Gabriele Griffin. Research Methods for the Arts and Humanities. Edinburgh: Edinburgh University Press, 2016.


  • Rosenstiel, Tom, Amy Mitchell, Kristen Purcell, and Lee Rainie. “How People Learn About Their Local Community.” Pew Research Center, 2011. http://www.journalism.org/2011/09/26/local-news/.

  • Saldaña, Johnny. The Coding Manual for Qualitative Researchers. 3rd ed. Los Angeles, CA and London, New Delhi, Singapore, Washington DC: Sage, 2016.


  • Stemler, Steve. “An Overview of Content Analysis.” Practical Assessment, Research & Evaluation 7, no. 17 (2001). http://PAREonline.net/getvn.asp?v=7&n=17.

  • Tufte, Edward R. The Visual Display of Quantitative Information. 2nd ed. Cheshire, CT: Graphics Press, 2001.


  • ———. Beautiful Evidence. Cheshire, CT: Graphics Press, 2006.


  • ———. Envisioning Information. Fourteenth printing. Cheshire, CT: Graphics Press, 2013.


  • Weber, Robert Philip. Basic Content Analysis. 2nd ed. Sage University Papers Series, no. 07-049. Newbury Park, CA: Sage, 1990.




Copyright information

© 2018 The Author(s)

About this chapter


Cite this chapter

Rheams, D. (2018). Creating an Influencer-Relationship Model to Locate Actors in Environmental Communications. In: levenberg, l., Neilson, T., Rheams, D. (eds) Research Methods for the Digital Humanities. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-96713-4_5
