Many of my computer science students, and even some teaching colleagues, struggle to recognise the epistemological distinction between the words quantitative and objective. As they work on their research dissertations, inventing the software technologies that will become the basis of the next generation of mobile apps, social media start-ups, and internet infrastructure, they are cautioned that their design work must be evaluated quantitatively. This advice is taken very seriously, even where the goals of the project might be health (quantified), empathetic emotion (quantified), creative arts (quantified), or personal trust and security (naturally, quantified). The conflation of quantification with objectivity can lead to faintly ridiculous research conventions.

If a computer science student asks a research volunteer to describe how some new wearable technology made them feel, the verbal response, no matter how subtle or insightful, is likely to be denigrated as purely qualitative or perhaps “anecdotal” data. The recommendation for a student designing such technology is that they should instead ask the volunteer to supply a number expressing how they feel on a scale of 1 to 7. Aggregations of such Likert-scale values are preferred as scientific evidence, despite their clear inadequacy in comparison to plain speech, because the numbers are seen as objective: a ludicrous claim, considering the perfectly plain fact that the “feelings” being studied are by definition subjective.

It is this strategy of using numbers to avoid human subjectivity that makes computer science and engineering attractive to many young people, with the promise that the messy ambiguities of social and emotional life might be resolved through immersion in quantitative study. Those who as children struggled to understand social nuance, or whose belief systems lead them to expect strictly defined boundaries of classification and behaviour, seem particularly likely to choose such areas of study. And of course, lack of nuance is celebrated by prominent technology entrepreneurs and other extremists who serve as role models to such students. To accumulate wealth, every attribute to be valued must ultimately be quantified. If not, how would it be possible to define the conversion rate to dollars?

When viewed from outside, these tendencies of thought are sometimes attributed to the fundamentally binary nature of digital data storage and processing, as though binary encoding, rather than the desires and motivations of those who build the systems, were the underlying reason for the loss of nuance and ambiguity. Although there may once have been a time when computers could be observed at the binary level, decades of evolving complexity mean that this is now the wrong level of abstraction at which to develop critiques of the digital.

Instead, we need to consider the drivers of iteration in terms of Shannon’s information theory, which measures in bits the extent to which data are surprising rather than repetitiously redundant [1]. Importantly, we must recognise that information has both value and cost, and that it is acquired from individuals at real cost to those persons. Repetition is not a service, because no new information is created. But the consumption of mindlessly repeated statements is costly to every person who must filter noise from knowledge. The commercial dynamic of today’s online monopolies is driven by the mundane profit of delivering repetition while consuming attention.
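To make the measure concrete, in Shannon’s standard formulation the information conveyed by an observation $x$ that occurs with probability $p(x)$ is its surprisal,

$$I(x) = -\log_2 p(x) \ \text{bits},$$

so an outcome that was already certain, with $p(x) = 1$, conveys exactly zero bits. This is the precise sense in which a repeated statement creates no new information: it was fully predictable before it was delivered.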

By one analysis, our collective investment in informationally efficient infrastructure has led inevitably to a commercial imperative that rewards iteration rather than understanding. The consequence of reshaping knowledge to fit such an infrastructure has been, as the other contributions to this issue clearly demonstrate, an epistemological shift away from (informationally costly) discourse and consensus towards the cheaper alternatives of measurement and quantitative aggregation, which are laughably characterised as “artificial intelligence” when in fact they serve only to make us all more stupid.