Introduction

Although mechanical aids to calculation have existed for millennia and automatic office machinery for centuries (Solla Price 1959; Campbell-Kelly and Aspray 2004), computing started to develop as a distinct academic discipline only in the mid-1900s, after the birth of the stored-program paradigm (Tedre 2006; Tedre and Sutinen 2009). Throughout the short disciplinary history of modern computing, there has been a great variety of approaches, definitions, and outlooks on computing as a discipline. Arguments about the content of the field, its methods, and its aims have sometimes been fierce, and the field’s rapid expansion has made it even harder to define computer science faithfully. The most heated debates have revolved around the question of whether computing is a scientific discipline (Tedre 2006).

In the 1930s and 1940s automatic computing was in a pre-science state (cf. Kuhn 1996): there were competing theories and techniques for automatic computing, but none of those theories or techniques had established superiority over the others. A number of developments were, however, under way. On the one hand, theoreticians such as Church, Post, and Turing made pioneering contributions to computability theory (Church 1936; Post 1936; Turing 1936). On the other hand, a number of researchers experimented with electronic circuit elements and fully electronic computing, even though many older members of the computing establishment defended analog and hybrid computing (Campbell-Kelly and Aspray 2004, 70–83; Flamm 1988, 48).

In the first half of the 1940s, relatively young researchers at the Moore School of Electrical Engineering at the University of Pennsylvania gradually came to understand, firstly, the advantages of fully electronic computing and, secondly, the stored-program concept (Ceruzzi 2003). Copies of their drafts (Neumann 1975) were circulated widely, and after a few successful implementations of fully electronic stored-program computers, the computing community gradually adopted what might be called the stored-program paradigm (Tedre and Sutinen 2009). Since its inception, the stored-program paradigm has gained enough momentum that it is, despite its limitations, largely taken as an unquestioned foundation for automatic computing.

In the 1950s the professional identity of computing professionals was somewhat unclear (Weiss and Corley 1958), and the field was in the process of developing a status as a discipline distinct from mathematics and electrical engineering. Although the first societies for computing professionals were formed as early as 1946, although there were elaborate defenses of computer science already in the 1960s, and even though the first departments of computer science were established in the early 1960s (Newell et al. 1967; Wood 1995; Rice and Rosen 2004), it took until the 1970s for the field to gain a stable foothold as an independent discipline. In 1974, the National Science Foundation (NSF) of the U.S. affirmed the distinction of computer science from all other science and engineering disciplines and recommended that the NSF make that distinction manifest in its programmatic activities (Galler 1974).

Yet, while an acknowledgment of computing as a new discipline allowed it to be distinguished from its originating fields, the new discipline itself was being stretched in various directions. Towards the end of the 1970s, there grew an understanding that computer science is an interdisciplinary field: it is partly a scientific discipline, partly a mathematical discipline, and partly a technological discipline (Wegner 1976). Ever since the 1970s, computing has typically been characterized as a combination of empirical, mathematical, and engineering traditions (Tedre and Sutinen 2008). Note, however, that a discipline formed of three intertwined but inseparable aspects is not unique among the sciences—it can easily be argued that, say, quantum mechanics is also an inseparable combination of mathematical, scientific, and engineering aspects.

During the last 70 years, researchers in the field of computing have brought together a wide variety of academic disciplines and methodological viewpoints. The resulting discipline, computer science, offers a variety of unique concepts, models, theories, and techniques. But although many constituents of modern computer science—such as the Church-Turing Thesis, the stored-program paradigm, and the concept of algorithm—are quite stable and date back a long way (see, e.g. Knuth 1972), the field at large has changed radically between the 1940s and 2010s. The diversification of the field has led to ambiguity about the proper topic and proper methods of computer science, as well as about how the term computer science should be operationalized.

Firstly, it is not certain what kinds of topics should be considered to be computer science. Should computer science be defined by its subject matter, its methods and techniques, its key concepts and theories, its aims, or something else? Secondly, it is difficult to come up with a common understanding of how research in computer science should ideally be done. If a generic set of rules for quality research in computer science were formulated, it is not clear whether those rules should cover research in fields such as software engineering, complexity theory, usability, the psychology of programming, management information systems, virtual reality, and architectural design. The subjects that computer scientists study include, for instance, programs, logic, formulae, programmers, machines, usability, complexity, and information systems. It can be debated whether an overarching, all-inclusive definition of computer science is even necessary.

Thirdly, and most importantly from the viewpoint of this article, it is not certain if computing can even be a science in the same sense that physics, chemistry, biology, and astronomy are sciences. The question about the scientific nature of computing has vexed computer scientists for decades, and there is no agreement on the issue. In this article, competing viewpoints on the scientific nature of computing as a discipline are presented, those viewpoints are analyzed, and it is proposed that the problems in those discussions arise from deep conceptual uncertainty about sciences in general as well as computing in particular.

Defense of Computing as a Science

In the debates about the relationship between computing as a discipline and mathematics, engineering, and the natural sciences, the scientific nature of computing has been debated the most (Tedre 2006). The question is usually about whether computer science can be considered a science in the same sense natural sciences are. It is sometimes argued that computer scientists do theoretical work similar to mathematics, and hence, like mathematics, computing is not a natural science (Dijkstra 1987). Others have asked computer scientists to be honest about the engineering nature of their work (McKee 1995). Still others have argued that computer scientists should indeed aspire to work the way physicists and other natural scientists do. Peter J. Denning, who has for 30 years been at the forefront of the public discussions about the disciplinary identity of computing, even argued that computer science is a natural science in the sense that it studies naturally (but also artificially) occurring information processes (Denning 2005, 2007). For their part, Knuth (2001, 167) called computer science an unnatural science and Simon (1981) called it an artificial science.

Often the pro-science argument is that although computer science might not be a natural science, it is still an empirical or experimental science, because computer scientists follow the scientific method (they explore and observe phenomena, form hypotheses, and empirically test those hypotheses). Paul Rosenbloom argued that computer science is a new, fourth domain of science, distinct from the physical sciences, which focus on nonliving matter; the life sciences, which focus on living matter; and the social sciences, which focus on humans and their societies (Rosenbloom 2004). Note that many other interdisciplinary fields, such as cognitive science, span the three domains, yet Rosenbloom considered computer science so special that it constitutes a whole new domain.

Separating Computing from Mathematics

After the birth of the stored-program paradigm, computing specialists started to develop a disciplinary identity for their field (Tedre 2006). Separating the discipline of computing from mathematics took the combined effort of a number of computing pioneers, such as Forsythe (e.g. 1968), Knuth (e.g. 1974b), and Dijkstra (e.g. 1974). For instance, Donald Knuth—who is not only a pioneer of theoretical computer science but also a recognized mathematician—explained the differences between computer science and mathematics in a 1974 issue of American Mathematical Monthly. That difference, in Knuth’s (1974b) opinion, is in the subject matter and approach. He wrote that whereas mathematics deals more or less with theorems, infinite processes, and static relationships, computer science deals more or less with algorithms, finitary constructions, and dynamic relationships (Knuth 1974b). In Knuth’s view, there are three salient features of computer science that differentiate it from mathematics: finiteness is a prerequisite of realizability, dynamic relationships are a prerequisite for modeling the dynamic world, and algorithms shift the focus from static models towards processes and automation.

Two months after Knuth’s elaboration of computer science, Edsger Dijkstra wrote, in the same journal, about the same topic. Dijkstra (1974) called programming “an activity of mathematical nature”, but pointed out that the cognitive skills needed in programming are different from the cognitive skills needed in mathematics. In Dijkstra’s opinion, computer scientists should have concept-creating skills instead of just holding a standard collection of concepts; they should learn to invent ad-hoc notations instead of just learning the standard notation; and they should learn to work with a deep hierarchy instead of just working on a single semantic level (Dijkstra 1974).

The process of distinguishing computing from mathematics and developing a disciplinary identity for it took a while, but at the turn of the 1980s many computer scientists believed that “there is nothing laughable about calling computer science a science [anymore]” (Ralston and Shaw 1980). However, although a large number of characterizations of computer science employed the term science, most of those characterizations did not specify what they meant by science. Various accounts of computing as a science were presented, but none of them won out over the others.

Formulations of the Scientific View

At the same time as computer science achieved independence from electrical engineering and mathematics, there was a push towards a view of computing as a scientific discipline. In that effort, one of the first questions was: if botany is the study of plants, zoology the study of animals, and astronomy the study of stars, what is computer science the study of (Newell et al. 1967)? Is it the study of algorithms, automation, programming, information, classes of computations, computers, usability, uses of computing, all of those, or something else? That question was answered in one of the earliest arguments for the scientific nature of computer science: Newell et al.’s defense of computer science in a 1967 issue of Science. Newell et al. (1967) argued that phenomena breed sciences, that there is a phenomenon called computers, and that computer science is the study of computers and the phenomena surrounding them.

In the 1970s, one of the common views of computing as a field was that it is fundamentally a science of complexity. That is, computer science is about mastering the semantic properties of super- and subclasses, different sizes of aggregates, and connections between complexes and entities (Minsky 1979). Marvin Minsky, a pioneer of artificial intelligence, wrote that in many ways the theory of computation is fundamentally a science of the relations between parts and wholes and of the ways in which local properties of things and processes interact to create global structures and behaviors (Minsky 1979). Dijkstra (1974) argued that due to the hierarchical nature of computer systems, programmers become especially agile in switching between levels of scope and semantics. At the same time, Simon (1969) wrote the seminal book The Sciences of the Artificial on complexity and complex systems.

Many computing professionals underscored the novelty of the field (e.g. Gal-Ezer and Harel 1998). For instance, Minsky (1979) wrote that “computer science is an almost entirely new subject, which may grow as large as physics and mathematics combined”. Also Hartmanis (1994) argued that the uniqueness of computer science warrants that it should be viewed as a new science altogether. Thus, there was an effort to strengthen the foothold of computer science among other sciences. In the famous Snowbird Report from 1981, computer science was characterized as the study of representation, transformation, nature, and philosophy of information, and it was elevated to the status of a core science (Denning et al. 1981). The report proposed that, like mathematics, computer science is an indispensable tool in other disciplines and, like natural sciences, computer science is both a theoretical and experimental science.

Similar to Newell et al.’s (1967) defense of computer science, Denning (2005) and Tichy (1998) have argued against the objection that computer science is not a science because it studies human-made objects and phenomena. Denning’s and Tichy’s argument was that computer science studies information processes both natural and artificial. In this sense, Denning and Tichy tried to hang on to the parallel with natural sciences.

Instead, one could argue that it is not the subject but the method of inquiry that is the defining feature of science: what defines science is not what is being studied but how the research is being done. In this view, those branches of computer science that rely on the scientific method are indeed science, regardless of the nature of their subject of inquiry. Computer science, no matter how artificial or synthetic its subject may be, can be done through empirical inquiry like any other science (Newell and Simon 1976; Simon 1981, 3).

Empirical or Experimental Science

The view of computer science as an empirical or experimental science has support among the ranks of computer scientists. The terms empirical and experimental, whether applied to science or to research, are sometimes treated as synonyms. But often the term empirical stands in contrast with terms such as analytical, mathematical, and theoretical, and refers to research that is based on observation-based data collection. The term experimental can be considered to go deeper, and to refer to the use of controlled experiments for testing hypotheses. There are natural sciences that are empirical but not experimental: astronomy, for instance, does not rely on controlled experiments on its subjects of study. Furthermore, not all experiments require initial hypotheses; some are based on trial and error.

The early 1980s saw a strong push towards what was called experimental computer science. Experimental computer science was considered to be undervalued at the time (McCracken et al. 1979). It was argued that no scientific discipline can be productive in the long term if its experimenters merely build components (Denning 1980b). Hence, a number of strategies were presented. In Denning’s (1980b) opinion, scientific work needs to be directed and systematic. Feldman and Sutherland (1979) advocated the kind of empirical research that is close to the falsificationist paradigm and the scientific method. And, of course, experimental science also relies on some theoretical foundations; for instance, Gelernter (1999, 44) wrote that “computer science is scientific insofar as it has a theoretical foundation that allows a person to make general statements and prove them”.

So, if it is the empirical or experimental nature of computing that makes it a science proper, one needs to ask what actual activities make computer science scientific. To start with, there are many activities that have led to interesting findings in computer science but that are not science; Denning (1980b) called those activities tinkering and hacking. It is not science to put things together, try them, and see what happens (no matter how much one might like that modus operandi; see Graham (2004, 18–25)). As to the specific activities of experimental computer science, participants of the 1980s debates mentioned measuring, testing, making hypotheses, observing, collecting data, classifying, and sustaining or refuting hypotheses (Denning 1981; Ousterhout 1981; McCracken et al. 1979). The cycle of observation, measurement, and analysis is portrayed in these arguments as a sign of good research (Freeman 2008).

Colburn (2000, 167) drew an analogy between the scientific method and the problem-solving approach in computer science. Colburn noted that what is being tested in the scientific method is not the experiment, but the hypothesis; the experiment is a tool for testing the hypothesis. Similarly, what is being tested in problem-solving in computer science is not the program, but the algorithm: the program is written in order to test the algorithm. In this analogy, writing a program is analogous to constructing a test situation. Khalil and Levy (1978) presented a similar analogy: they wrote, “programming is to computer science what the laboratory is to the physical sciences”. However, as noted by Duhem (1977, 183–188) and Quine (1980, 20–46), logically speaking, an experiment cannot falsify a single hypothesis in isolation; it can only call into question the whole technical–theoretical–empirical framework and the particular test situation.
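To make Colburn’s analogy concrete, consider a minimal sketch (an invented illustration, not an example from Colburn): the hypothesis concerns an algorithm, namely that merge sort returns a sorted permutation of any input, and the program below is merely the apparatus built to test that hypothesis.

```python
import random

def merge_sort(xs):
    """The algorithm whose properties the program below puts to the test."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

# The program as experiment: it tests the hypothesis that merge_sort
# returns a sorted permutation of any input, over many random inputs.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
    result = merge_sort(data)
    assert result == sorted(data), f"hypothesis refuted for input {data!r}"
print("1000 trials passed: the hypothesis is corroborated, not proven")
```

Tellingly, a failing assertion would not by itself show whether the fault lies in the algorithm, its implementation, or the test harness, which is exactly the Duhem–Quine point above.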

In their push towards making computer science scientific, however, the proponents of experimental computer science made some confusing arguments. One of the proponents argued that the only major difference between traditional sciences and computer science is that information is “neither energy nor matter” (Tichy 1998). Another wrote that “many of the engineering problems in computer science are not constrained by physical laws” (Hartmanis 1994). A third wrote that a hypothesis in experimental computer science may concern a law of nature, such as “whether a hashing algorithm’s average search time is a small constant independent of the table size”, and suggested working on that law by measuring a large number of retrievals (Denning 1980b).
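The kind of measurement suggested in the third claim is easy to sketch. The following minimal version uses Python’s built-in dict as the hash table, an assumption of this sketch rather than anything Denning specified:

```python
import random
import timeit

def average_lookup_time(n, lookups=10_000):
    """Build a hash table with n entries and time random successful lookups."""
    table = {random.getrandbits(64): i for i in range(n)}
    keys = random.choices(list(table), k=lookups)
    total = timeit.timeit(lambda: [table[k] for k in keys], number=1)
    return total / lookups

# If average search time were a constant independent of table size,
# the numbers below would stay roughly flat as n grows.
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}: {average_lookup_time(n) * 1e9:8.1f} ns per lookup")
```

In practice, memory-hierarchy effects alone make such numbers drift as n grows, which already hints at the objection below: whatever regularity the measurement reveals is a property of engineered artifacts.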

However, the first two claims would distance the authors from materialist monism, which is usually taken as the ontological basis for scientific realism. The third claim unnecessarily stretches the meaning of law of nature. That is, it is hard to see in which sense the average search time of a hashing algorithm A (which is a human construction), implemented on a computer brand B (which is a human construction), both A and B relying on the theoretical-technical framework of modern computing (which is a human construction), would be a law of nature in the sense that it would reveal much about naturally occurring things. Instead, it tells us how well previous computer scientists have done their job.

Opposition Towards the Scientific View

The effort to make computing an established scientific discipline has always faced criticism. The criticism usually comes in two flavors. Some opponents of the scientific view of computing have argued that computing is not a science at all, whereas others have argued that much of the activity that goes on in computing is bad science. This section presents some arguments from both flavors of the critique.

Computing is not Science

The scientific nature of computing was increasingly criticized in the 1990s. For instance, McKee (1995) wrote that the term science refers to “the set of intellectual and social activities devoted to the generation of new knowledge about the universe”. McKee (1995) argued that computer scientists, however, are not honest about their work: they are “just acting like scientists and not actually doing science”. Many critics of the scientific orientation based their viewpoint on the engineering aspects of computing practice. For instance, Brooks (1996) wrote that unlike the disciplines in the natural sciences, computer science is a synthetic, engineering discipline.

There has been plenty of criticism of the term science in computer science. For instance, Brooks (1996) argued that the misnaming of computing as a science hastens various unhappy trends among computing professionals. Firstly, it leads them to accept a pecking order in which theory is respected more than practice. Secondly, it leads them to regard the invention and publication of endless varieties of computers, algorithms, and languages as an end in itself. Thirdly, it leads them to forget the users and their real problems. Fourthly, it directs young and brilliant minds towards theoretical subjects. Many of those who criticized the scientific orientation of computing wanted to see the problem fixed by changing the name of the field.

To fix the naming problem, McKee (1995) argued that the term computer science should be replaced with computics. McKee argued that mathematicians have acknowledged the nonscientific nature of mathematics by choosing the name of the field to end with “-ics”. Note, however, that following McKee’s logic, the name computics would equate computing with physics—a natural science if there ever was one. Brooks, for his part, argued that a folk adage of the academic profession says, “Anything which has to call itself a science, isn’t” (Brooks 1996). By Brooks’s folk adage, physics, chemistry, history, and anthropology may be sciences or not; cognitive science, neuroscience, social science, and computer science definitely are not.

It is sometimes difficult to tell when the critique has been about the misnaming of computer science, and when it has been about the lack of scientific content, scientific aims, or the scientific method in computer science. Although some proponents of the theoretical tradition of computer science have argued that an empirical bent is detrimental to computer science, usually that critique is not directed towards empirical research as such. The critique is usually directed towards the centrality and prospects of empirical research in computer science, or towards the parallels drawn between natural sciences and computer science. Furthermore, some prominent computer scientists have regarded the name question as unimportant and a waste of time (Forsythe 1968; Knuth 1985).

Some of those computer scientists who have generally acknowledged the difference between natural sciences and computer science have presented various alternative characterizations of computing as a science. Computing has been called an unnatural science (Knuth 2001, 167), an artificial science (Simon 1981), an experimental science (McCracken et al. 1979), and even a completely new domain of science with its own paradigm (Denning and Freeman 2009; Denning and Rosenbloom 2009). Brooks (1996) argued that computer science is a synthetic discipline: the scientist builds in order to study, and the engineer studies in order to build—and in Brooks’s opinion, computer scientists study in order to build. Perhaps one could add one more viewpoint: it might be that in computer science to study is to build, and vice versa.

Hartmanis (1993) argued that the central issue about the nature of computing is that computer science concentrates more on the how than the what. He wrote that natural sciences concentrate more on questions of what, and that computer science, with its emphasis on how, reveals its engineering nature. Hartmanis continued that whereas the advancements of natural sciences are documented by experiments, in computer science the advancements are often documented by demonstrations. In some branches of computer science the scientists’ slogan “publish or perish” is indeed less influential than the engineers’ slogan “demo or die” (Hartmanis 1981). Hartmanis (1981) characterized computer science as the “engineering of mathematics”, and noted that whereas the physical scientists ask “what exists?”, computer scientists ask “what can exist?”.

A number of authors have argued that the problem with the name science is that it does not reflect what actually happens in computing fields. McKee (1995) wrote that people in computer science have different goals and methodology than people in traditional sciences. Hartmanis (1993), on the other hand, noted that there is a difference in the role of theories between computing and natural sciences. Hartmanis argued that unlike in the natural sciences, theories in computer science do not compete with each other in explaining the fundamental nature of information. In addition, in most fields of computing, new theories are not developed in order to reconcile theory with anomalies found in experimental results (Hartmanis 1993).

Brooks (1996) wrote that a new fact, a new law, is an accomplishment in science, and warned that computer scientists should not confuse any of their products with laws. According to Brooks, science is concerned with the discovery of facts and laws, whereas computer science is concerned with making things, be they computers, algorithms, or software systems. There is, indeed, a problem of what laws in computer science are, if such things even exist. There are a number of rules of thumb that computer scientists call “laws”—take, for instance, Moore’s Law, Rock’s Law, Machrone’s Law, Metcalfe’s Law, and Wirth’s Law (Ross 2003). Knuth (1974b) was of the opinion that the laws of computing are “human-made laws”, and Rombach and Seelisch (2008) argued that computer science deals with “cognitive laws”.

Since its birth, the discipline of computing has continuously expanded and diversified to new areas. Towards the turn of the millennium, a new understanding of computing as a diverse field took root, and even the previously adamant idealists gave some concessions to diversity in computer science. In 1997 Edsger Dijkstra wrote:

Another thing we can learn from the past is the failure of characterizations like “computing science is really nothing but X,” where for “X” you may substitute your favorite discipline, such as numerical analysis, electrical engineering, automata theory, queuing theory, lambda calculus, discrete mathematics, or proof theory. (Dijkstra 1997)

But while an acceptance of diversity in computing research kept on growing, an increasing sensitivity towards the methodological quality of research emerged. A large number of normative arguments about the methodology of computer science were presented through various kinds of comparative studies and meta-research. The claim no longer was that computing is not a science, but that computer scientists are not doing science right.

Computing is Bad Science

In the 1990s meta-analysis became fashionable in all fields of science, and computer science was not an exception. Researchers who analyzed computer science publications argued that computer scientists publish relatively few papers with experimentally validated results (Tichy et al. 1995). In addition, some complained that in contrast to natural and social sciences, research reports in computing disciplines rarely include an explanation of the research approach in the abstract, the keywords, or the report itself, which makes it difficult to analyze how computer scientists actually arrived at their results (Vessey et al. 2002).

The most common complaints about the quality of research in computer science revolved around software engineering. Holloway (1995) accused software engineers of basing their work on a combination of anecdotal evidence and human authority. In their study of over 600 published articles on software engineering, Zelkowitz and Wallace (1997, 1998) found that about one third of the articles in their sample failed to experimentally validate their results (see also Tichy (1998)).

The critics of computer science often argued that the inclusion of engineering components undermined computing as a science. That is, if computer science is a science, and if scientists adhere to the scientific method but engineers do not, then engineering cannot be a proper part of computer science. Those opponents argued that it is difficult to see the theoretical foundations of, for instance, software engineering, and that the engineering parts of computer science are based on rules of thumb.

At the same time, too strong an adherence to the experimental procedure was also criticized. Fletcher (1995) disagreed with Denning’s (1981), Glass’s (1995), and Tichy et al.’s (1995) preoccupation with experimentation, and noted that without the theoretical principle of Turing-equivalence of all computers, there would be no academic discipline of computing, but just eclectic knowledge about particular machines. He wrote that much of the research in computing is not of the experimental sort that “[seeks] the best solution to a previously specified problem” (Fletcher 1995). Instead, computer scientists work with problems that are poorly understood, and for which one major goal is to understand the problem and delimit it more precisely.

There were indeed quite a number of critics who wanted to see a strong theoretical foundation in computing fields. Dijkstra (1987) wrote that the “incoherent bunch of disciplines” that began computer science hardly appealed to the “intellectually discerning palate” of mathematicians. Dijkstra was not happy with the term computer science and advocated the term computing science instead. Dijkstra (1987) famously wrote that computing scientists should not care about “the specific technology that might be used to realize machines, be it electronics, optics, pneumatics, or magic”. He also disliked the view that programming should be like science or engineering, and argued that programming is an activity of a mathematical nature.

Dijkstra (1972) had earlier argued that computing scientists should not bother to make programs, but they should focus on designing classes of computations that display desired behaviors. Computing science, in Dijkstra’s (1987) opinion, is about what is common to the use of any computer in any application, and computing scientists should not be concerned with any technical details or any societal aspects of their discipline. In his argument against the technological bent of computer science, Dijkstra (1987) argued that computer science is an entirely wrong term: “Primarily in the U.S., the topic became prematurely known as ‘computer science’—which actually is like referring to surgery as ‘knife science’ ”.

Various methodological challenges, but also prospects, in computer science arise from the diversity of the computing field. There is hardly a one-size-fits-all methodological approach for a field as wide as computing. But the problem of proper methodology is not unique to computer science. What is considered to be proper research is a contested issue in the sciences in general. Often the suggested solution to methodological problems is to adopt a “scientific approach” (Bunge 1998b, 247), but that solution just shifts the burden to the definition of the scientific approach.

Discussion

The debates about the essence of computing as a scientific discipline usually stumble over the same issues. Firstly, there is no coherent, unified, generally accepted view of science that the debaters could appeal to. Secondly, there is no consensus on the defining characteristics of computing as a discipline. Thirdly, in the debates about computing as a science, scientific terminology is used inconsistently. This section analyzes those issues and suggests that, although they make the discussions somewhat ambiguous, those issues derive from science in general, not from computing in particular.

No Shared View of Science

One of the striking features of the science debate in computing is that there is no shared view of science among the debaters. Still, most of the discussants talk as if there were a monolithic Science, against which other things can be measured. That monolithic idea of Science usually entails both descriptive and normative ideas, which is where the problems start. But if one looks at the debates among, say, physicists, neurobiologists, or philosophers of science, it is difficult to find an agreement on how the term science should be operationalized in any field. The term can be understood in a number of ways, depending on the context (Kiikeri and Ylikoski 2004).

Science can refer to (1) a class of activities such as observation, description, and theoretical explanation; or (2) any methodical activity: “Justus has raised boxing to a science.” Science can also refer to (3) a sociocultural and historical phenomenon: “the rise of Islamic science.” Science can refer to (4) knowledge that is gained through experience, (5) knowledge that has been logically arranged in the form of general laws (Knuth 1974a), or (6) structured knowledge derived from “the facts” (Chalmers 1999, 1). At one extreme is the idea that science is (7) an assemblage of eternal, objective, ahistorical, socially neutral, external, and universal truths (Glashow 1992).

Science can be seen as (8) a branch of study concerned with some specific goals (Brooks 1996). Science can be seen as (9) a societal institution: “Humanity should be governed by science,” or (10) a world view—that is, “the scientific world view.” Science can be thought of as (11) a specific style of thinking and acting (Bunge 1998a, 3). Science can also be understood as (12) scientists’ profession. Someone can even be said to be “doing science” or being a “man of science”, whatever those phrases mean. To make things more complicated, many scientists consider pure and applied sciences to be separate things. Sometimes applied science is thought of as being intellectually inferior to pure science, and sometimes pure science is accused of being alienated from practical, everyday matters.

Naturally, the different views on science greatly impact any argument about the scientific nature of computing. For instance, if one considers science as Brooks (1996) does, as a “branch of study concerned with the observation and classification of facts, especially with the establishment and quantitative formulation of verifiable general laws”, then many areas of computer science are not science proper. Accordingly, many arguments about computing indeed assume the criteria of the natural sciences. For example, according to Stewart (1995), those traditions of computing that are different from those in the natural sciences inhibit the field’s development into a proper science.

However, not only computer scientists but also philosophers of science would cringe at Brooks’s definition above—for different reasons. Whereas computer scientists may dislike the narrowness of Brooks’s view, philosophers of science might disagree with the references to facts and verification in Brooks’s definition. Firstly, one rarely talks about establishing facts in an observation-based science. Secondly, although there surely can be law-like generalizations in the field of computing, it is not at all certain what the laws of computing should look like. In some computing fields, there definitely are generalizations similar to laws in various other sciences, whereas in other computing fields “laws” might look very different. Thirdly, verification—in the sense of proving claims correct—has not been a principle of serious scientific inquiry since the mid-1900s, when Popper’s (1934, 1959) falsificationism became popular.

Characteristics of Science

Many modern philosophers and sociologists of science argue that scientists do not actually work according to any rigid guidelines, and that there is no fixed set of rules suitable for all branches of science (much less all branches of intellectual inquiry) (Feyerabend 1993; Kuhn 1996; Pickering 1995). But science is not an arbitrary term either. A large number of philosophers of science have argued that there are some characteristics that all sciences bear or should bear. The suggested characteristics include, for example, testability and falsifiability (Popper 1959), objectivity (Couvalis 1997), explanatory power (Hempel 1965, 331–496), simplicity, clarity, and parsimony—“Occam’s Razor” (Nash 1963, 173), logical rigor (Carnap 1967), conformance to the dominant paradigm (Kuhn 1996), predictive power (Lakatos 1978; Popper 1959), correspondence between theories and the world (Russell 1912), reducibility (Dennett 1996), progress (Lakatos 1978), consistency (internal, external, and logical), and the ability to differentiate between superfluous data and relevant data. Each of those suggestions has, however, at some point been contested by some other philosopher of science. No science, and no branch of computing research, embodies all the above-mentioned characteristics all the time.

One of the less contested ideas connected with science is that the aim of science is to explore, describe, predict, and explain phenomena in the world we live in (von Wright 1971, 6). Exploration refers to developing an initial understanding of a yet uncharted phenomenon. Description refers to systematically recording and modeling a phenomenon and its connections to other phenomena. Prediction refers to using previous understanding to predict phenomena that have not yet come to pass. And explanation refers to clarifying, especially through theories, the causes, relationships, and consequences of the phenomena at hand. Those aims do not, however, constitute either necessary or sufficient conditions of being science unless very narrowly defined: there are intellectual fields, such as linguistics, archeology, mathematics, statistics, and history, that do not subscribe to all of those aims. Furthermore, there are activities, such as astrology and religion, that hold those aims but are not science. Nevertheless, those aims are usually considered to be characteristic of science. Most research studies in computing can be tagged with some of the labels above, but rarely all of them at the same time.

It is not at all clear which of those aims are acceptable or desirable for science. For instance, physicist and philosopher of science Pierre Duhem wrote that scientists should refrain from attempts to explain anything—Duhem thought that the only thing scientists should do is describe things. The purpose of science, according to Duhem, was to describe facts about the world—he thought that explanations of the world should be left to, say, philosophers (see Hitchcock 2004, 8). Generally speaking, those philosophers and scientists who advocate descriptivism or instrumentalism argue that the only things one can require of science are description and prediction (Bunge (1998a, 59–61), Deutsch (1997, 3)).

The four aims of science give rise to different kinds of questions in science. Philosopher of science Mario Bunge listed six elementary problem forms in natural sciences: which, where, why, whether, how, and what (Bunge 1998b, 196). Examples of such problems are, for instance, “Which processes have the property A?”, “Under which circumstances (where) is x true?”, “What causes p to happen (why does p happen)?”, “Is q true or false?”, “How does c happen?”, and “What properties does c have?” It is uncertain whether those questions really reach the soul of computing as a discipline.

Quite a few people in computing, as in science in general, agree that striving for objectivity is a sine qua non of true science (Couvalis 1997). But the term objective refers to different things in different contexts, and many quarrels about the objectivity of science arise from different ways of interpreting the term. Few computing researchers would disagree that objectivity is a characteristic of science. Quite a few of them might, however, face a problem if they were asked to define what objectivity, strictly speaking, means.

One can talk about, for example, objective statements, objective knowledge, objective reality, objective methods, objective measurement, and objective standards. For instance, in the context of scientific observations, objectivity can refer to reproducibility: to the idea that observations can be confirmed by anyone who carries out the same experimental procedure under the same conditions. In the context of scientific theories, objectivity can refer to testability: to the idea that all the logical consequences of those theories can be tested by anyone (Kemeny 1959, 96). For instance, any logical consequence of the atomic theory can be tested by anyone, and those tests can either support atomic theory or conflict with atomic theory.

Another claim about science is that it is a self-correcting enterprise. Modern scientists usually do not claim that their explanations of the world would be unfailing; hence they do not claim to verify scientific statements. Generally speaking, modern scientists usually claim that (i) their explanations are truer than any nonscientific model of the world, (ii) they are able to test their truth claims, (iii) through science, they can discover the shortcomings of science, and (iv) by working scientifically, they are able to correct the shortcomings of science (Bunge 1998b, 33).

The argument that science is self-correcting relies on corrective social processes that form a cycle of hypotheses and theories; predictions; experiments; corroborations, corrections, or refutations; and new hypotheses and theories (e.g., Kuhn 1996; Popper 1959; Lakatos 1976) (see Dodig-Crnkovic (2003) for an illustration of the iterative nature of the scientific method). Furthermore, many scientists hold the view that scientists should actively seek to find flaws in scientific theories, try to unearth sources of bias in research settings, question all presuppositions, and so forth. Often the idea of self-correcting science relies on the view that in the course of time science gets closer to a true understanding of the world. Both the scientists’ search for truth and the competition between scientists might indeed encourage some self-correcting processes. The social processes of computing were noted early in the debates over computing as a discipline (De Millo et al. 1979).

Most scientists believe that science should be pursued through the scientific method (Kemeny 1959, 174). However, contrary to what its name implies, the scientific method is not a single, well-defined “method.” The scientific method is a collection of techniques, procedures, and methods that are used for investigating the world. Essentially, the scientific method includes a cycle that involves examining and observing things, formulating hypotheses or explanations, and testing those hypotheses or explanations (Fig. 1).

Fig. 1 Cycle of scientific research

In the cycle of phases that belong to the scientific method (Fig. 1), a scientist begins by observing a phenomenon. When enough observational data have accumulated, the scientist proceeds to describe the phenomenon by forming hypotheses about it, possibly supported by some other theories. Scientific hypotheses have logical consequences that can be tested, which allows predictions of phenomena that have not yet come to pass. The next step in the cycle is to design experiments to test some predictions, and to carry out those experiments. If the experimental results support the hypothesis, the scientist goes back to design more experiments to test the hypothesis. If the experimental results conflict with the hypothesis, the scientist goes back to rethink the hypothesis (or to consider whether the anomalous results might have been caused by other things, such as flaws in the instruments or the test setting). In Fig. 1, the applied, empirical parts of the scientific method are on the left side of the picture and the theoretical parts are on the right side.
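As a toy rendering of that cycle, the following sketch (entirely invented for illustration; the hidden “world”, the linear hypothesis, and the tolerance are all artificial) walks through observation, hypothesis formation, prediction, testing, and revision:

```python
import random

def world(x):
    """The phenomenon under study: a hidden regularity plus observation noise."""
    return 3.0 * x + random.gauss(0.0, 0.1)

def form_hypothesis(data):
    """Describe the observations: estimate k in the hypothesis y = k * x."""
    return sum(y for _, y in data) / sum(x for x, _ in data)

# Observe the phenomenon until enough data have accumulated.
data = [(x, world(x)) for x in (random.uniform(0.1, 10.0) for _ in range(20))]
hypothesis_k = form_hypothesis(data)

# Predict, experiment, and either corroborate or revise: the cycle of Fig. 1.
for _ in range(10):
    x_new = random.uniform(0.1, 10.0)
    predicted = hypothesis_k * x_new       # theoretical side: a prediction
    measured = world(x_new)                # empirical side: an experiment
    if abs(predicted - measured) < 0.5:
        continue                           # supported: design further tests
    data.append((x_new, measured))         # conflict: rethink the hypothesis
    hypothesis_k = form_hypothesis(data)

print(f"provisional hypothesis: y = {hypothesis_k:.2f} * x")
```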

The scientific method is often used as the central idea that differentiates science from mathematics and engineering. Firstly, mathematics is different from science because mathematicians do not observe phenomena, formulate hypotheses, and test those hypotheses. Mathematical knowledge is generally considered to be really proven instead of observed and tested (Shapiro 2000, 4). Scientists, on the other hand, do not aim at proving their theories to be necessarily true, beyond any doubt. In physics, for instance, the general theory of relativity is not proven, although it has stood up to an extensive range of tests and is able to predict phenomena well. Secondly, engineering is often considered to be different from science, too—engineers do not focus on describing or explaining the world; they focus on building things (Tedre 2009).

Computing and Science

Similar to the generic term science, the term computer science can also be understood—and has been understood—in a number of ways. For instance, it can be understood as specific classes of activities, such as modeling, developing, automating, and/or designing (Denning et al. 1989). It can be understood as a way of thinking (Arora and Chazelle 2005), especially one in which an individual is able to switch between abstraction levels and simultaneously consider microscopic and macroscopic concerns (Dijkstra 1974; Knuth 1974b). It can also be understood as an umbrella term for a large variety of topics such as data structures, robotics, e-commerce, visualization, and data mining (Denning 2003; Zadeh 1968). Computer science can be understood as a body of knowledge about topics such as computing, usability, or information processing. Some consider computer science to be a body of laws (Rombach and Seelisch 2008).

Furthermore, computer science can be understood as an institution run by cliques of scientists and dedicated to computing research. Or, somewhat more pessimistically, it can be understood as an institution run by the power elite and dedicated to the technocratic endeavor. Computer science can also be understood as having a temporal dimension, thus forming a historical continuum: “the 45-year path of modern computer science.” It can be understood as broadly as studies of phenomena surrounding computers or as narrowly as, say, computer science = programming. Computer science can be interpreted as incorporating subjective and objective issues as well as technē and epistēmē: the art and science of processing information. Computer science can be understood as a profession, too (CSAB 2006).

The subject matter of computing—“What is it a science of?”—also depends on one’s inclinations (see, e.g., Rapaport (2005) for an overview of different views of the subject matter of computer science). In the course of time, prominent computer scientists have argued that the subject matter of computer science is, for instance, computers (Hamming 1969); computers and phenomena surrounding them (such as algorithms) (Newell et al. 1967); algorithms and other phenomena surrounding computers (Knuth 1974b); information representations and processing, especially with digital computers (Forsythe 1967; Atchison et al. 1968; Hartmanis 1993); complexity (Minsky 1979; Simon 1981); classes of computations (Dijkstra 1972); theory and practice of programming (Khalil and Levy 1978); and processes of information flow and transformation (Denning et al. 1981, 1989). Different points of view on the subject matter of computer science naturally lead to different views of how computer scientists should work.

Activities in computer science vary greatly, too. The activities of computer scientists have been argued to include, for instance, designing (Dijkstra 1972; Denning et al. 1989), representing and processing (Forsythe 1967), mastering complexity (Dijkstra 1974), formulating (Dijkstra 1974), programming (Khalil and Levy 1978), empirical research (Wegner 1976), and modeling (Denning et al. 1989). Computer science has been conceptualized as a natural science (Denning 2007) as well as mathematics, engineering and design, an art, a science, a social science, and an interdisciplinary endeavor (Goldweber et al. 1997).

The number of research topics in computer science has increased dramatically since the establishment of the discipline. Zadeh’s (1968) 25-item list of computer science topics consists of mathematical and engineering topics, but Denning’s (2003) 30-item list of computer science topics (“core technologies”) consists of a variety of topics that have arisen from the cross-section of computer science and other fields, such as e-commerce (computing and business), workflow (computing and business), human-computer interaction (computing, psychology, cognitive science, and sociology), and computational science (computing and various fields).

Despite the efforts to characterize computing as a science, the status of methodological education in the official computing curricula is marginal; for instance, the ACM/IEEE curriculum recommendations do not include a proper methodology course, but only techniques and narrowly defined methods (Denning et al. 2001). It has been argued that instead of their general education, computer scientists draw their research skills from master-apprentice relationships with senior colleagues in a doctoral program, and from examining the writings of successful prior researchers (Glass 1995). Despite claims to the contrary, there is nothing fundamentally wrong with the kind of knowledge transfer described above; it is an excellent example of a scientific paradigm as exemplar: a set of puzzle-solutions that are used as models or examples and that replace the explicit rules of normal science (Kuhn 1996, 187–191).

However, although learning from exemplars might work in narrowly focused topics, exemplars cannot give the typical computer scientist the broad methodological knowledge that choosing and using methods and techniques requires. It is dubious whether one can use a method without knowing its limitations, pitfalls, methodological and epistemological linkages, and theoretical underpinnings. Introducing methods (techniques, procedures, or tools of inquiry) without methodology (principles and foundations of methods) is shallow at best, misleading at worst. It is notable that even if computer science is considered a scientific discipline, scientific methodology still does not play much of a role in computing curricula.

Perhaps due to the lack of methodological education in computing curricula, computer scientists do not quite work according to rigid methodological prescriptions. For instance, in their study of 612 software engineering articles, Zelkowitz and Wallace (1997, 1998) found that the terms experiment and effective were often used loosely or ambiguously. They wrote, “Researchers write papers that explain some new technology; then they perform ‘experiments’ to show how effective the technology is”. However, often the experiment was just a weak example that favored the authors’ solution over alternatives (Zelkowitz and Wallace 1997, 1998).

Take, for instance, research where one (1) explores different kinds of algorithms for, say, polygonal approximation and analyzes their strengths and weaknesses, (2) develops a new algorithm for the task, (3) formulates a hypothesis about how the new algorithm fares compared with the previous algorithms, (4) designs experiments for testing the hypothesis and collects data using those different algorithms, and (5) analyzes and interprets the results. Although such research does not deal with naturally occurring phenomena but with human-made phenomena, it can easily be argued to be empirical and experimental science. Glass (1995) indeed suggested that the rule “observe the world” in natural sciences be changed to the rule “observe the problem space” in computing.
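Steps (3) through (5) of such a study amount to a small benchmark harness. In the hypothetical sketch below, two sorting routines stand in for the polygonal-approximation algorithms of the example, and all names are invented:

```python
import random
import statistics
import timeit

def algorithm_old(points):
    """Stand-in for a previously published algorithm."""
    return sorted(points)

def algorithm_new(points):
    """Stand-in for the newly developed algorithm (step 2)."""
    out = list(points)
    out.sort()
    return out

def collect_timings(fn, trials=30, n=10_000):
    """Step 4: run the experiment repeatedly on randomly generated inputs."""
    results = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        results.append(timeit.timeit(lambda: fn(data), number=1))
    return results

# Step 3: the hypothesis is that the new algorithm is faster on such inputs.
old_times = collect_timings(algorithm_old)
new_times = collect_timings(algorithm_new)

# Step 5: analyze and interpret the measured distributions.
print(f"old algorithm, median: {statistics.median(old_times):.5f} s")
print(f"new algorithm, median: {statistics.median(new_times):.5f} s")
```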

It could be argued, though, that research such as that described above does not follow the normal experimental protocol. Computing researchers rarely take precautions against experimenter bias (Fletcher 1995). If a researcher argues that his or her algorithm A works faster than comparison algorithms B and C with data set d and parameters p, the research set-up sometimes does not follow the blinding principle familiar from psychology, medicine, and the social sciences. The blinding principle is necessary if one wants to get rid of experimenter bias: the researcher should not be able to choose B, C, d, and p favorably for his or her own algorithm A. Another variation of the argument is that in the traditional experimental protocol of the natural sciences a researcher should be an outsider to the phenomenon to be explained—but it is uncertain how much a computer scientist can be an outsider to a phenomenon he or she has created.
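One partial remedy, sketched below with invented names, is to commit to the experimental inputs before any results are seen, for instance by deriving data sets and parameters from a pre-registered seed; this approximates, but does not equal, the blinding used in medicine and psychology:

```python
import random

# A crude guard against experimenter bias: fix the test inputs from a
# pre-registered seed before any results are seen, rather than letting the
# experimenter hand-pick the data set d and parameters p that happen to
# flatter his or her own algorithm A.
PREREGISTERED_SEED = 20100401          # hypothetical: published in advance

rng = random.Random(PREREGISTERED_SEED)
data_sets = [
    [rng.random() for _ in range(rng.choice([100, 1_000, 10_000]))]
    for _ in range(10)
]
parameters = [{"tolerance": rng.uniform(0.01, 0.1)} for _ in range(10)]

# The comparison algorithms B and C and the evaluation script would likewise
# be committed to in advance; only then are A, B, and C run on these inputs.
print(len(data_sets), "pre-registered data sets generated")
```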

Some influential philosophers of science argue that a mature science works in accordance with a scientific paradigm (Kuhn 1996). It has been argued that computer science provides a unique scientific paradigm, separated from other academic fields by its focus on information processes (Denning and Freeman 2009)—although that argument ignores the fact that the concept of paradigm was never well defined (Masterman 1970). Hence, in some ways computing might have a paradigm, but in other ways not. Similarly vaguely referring to the paradigm concept, it has been suggested that the stored-program paradigm constitutes a technological and theoretical paradigm for computing (Tedre and Sutinen 2009).

The stored-program paradigm is one of the considerably stable parts of modern computer science in the sense that it has served as an uncontested basis for research for long periods of time. That paradigm was formed in the mid-twentieth century from innovations such as the Church-Turing thesis, and was epitomized in the First Draft of a Report on the EDVAC (Neumann 1975) and in the construction of the computers BINAC and EDSAC.

The term stored-program paradigm refers to the constellation of innovations that surround the stored-program computer architecture (Tedre and Sutinen 2009). Those innovations include a formalization of computable functions (i.e., the Church-Turing Thesis); Turing’s idea that instructions can be encoded as strings (the Universal Turing Machine); the idea that instructions and data reside in the same memory storage; random-access memory; and the separation between memory, the processing unit(s), control unit, and input/output unit(s) (von Neumann architecture).
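The core of that constellation, instructions and data residing in the same memory, can be illustrated with a toy fetch-execute machine. The instruction set below is invented for illustration and corresponds to no historical machine:

```python
# A toy machine in which instructions and data share one memory.
# Instruction format: (opcode, operand address).
memory = [
    ("LOAD", 5),    # 0: acc <- memory[5]
    ("ADD", 6),     # 1: acc <- acc + memory[6]
    ("STORE", 7),   # 2: memory[7] <- acc
    ("HALT", 0),    # 3: stop
    None,           # 4: (unused)
    20,             # 5: data
    22,             # 6: data
    0,              # 7: the result goes here
]

acc, pc = 0, 0                      # accumulator and program counter
while True:
    op, addr = memory[pc]           # fetch: instructions come from memory
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc          # a STORE could also overwrite code itself
    elif op == "HALT":
        break
print(memory[7])                    # prints 42
```

Because instructions and data share one store, a program can in principle read or even rewrite its own code, one of the possibilities that the stored-program design opened up.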

The conception of the stored-program paradigm was a definite shift to a technological and theoretical paradigm. But the stored-program paradigm entails only technological principles and a theoretical framework. It does not dictate forms of inference, logic of justification, modes of argumentation, practices of research, conventions for settling scientific disputes, or other aspects of a scientific paradigm (Kuhn 1996). Regarding inference, logic, argumentation, or other kinds of conventions and scientific practices, computer scientists hold various views.

Ambiguous Terminology

One of the stumbling blocks in discussions about the nature of computer science is the ambiguity of terminology. It was noted earlier in this article that there is a problem with the term law in computing. The problem with laws in computer science is that, on the one hand, a lot of computer science deals with theoretical constructions and aims to establish relations within and between formal systems (formal systems do not exist without people creating them); but, on the other hand, a lot of computer science attempts to explain processes that happen regardless of people.

For instance, Knuth (1974b) stated that computer science deals with human-made laws, “which can be proved, instead of natural laws which are never known with certainty”. In Denning’s (1980b) opinion, computer science deals with laws of nature. And some argue that computer science deals with “cognitive” laws (Rombach and Seelisch 2008). Perhaps instead of laws one should talk about constructions, theories, theorems, or hypotheses. Yet, similar inconsistencies concerning laws can be found in other fields, too; consider, for example, the Law of Conservation of Energy, the Law of the Excluded Middle, and the Law of Averages, which all are very different things.

In the philosophy of science, two of the most common meanings of the term law are (1) regular patterns of (causal) events and (2) descriptions of those regular patterns. Note the difference: in the former meaning, laws are independent of whatever people may think about them; they obtain because that is how the world works. In the latter meaning, laws are constructions that people have made to describe how the world works, and those constructions can be wrong. An example of the former is the phenomenon that any two masses attract each other; an example of the latter is Newton’s law of universal gravitation: \(F=G \frac{m_{1}m_{2}}{r^{2}}\). The former type is called an objective law, whereas the latter is called a law formula; and although the two are sometimes conflated, scientific laws are usually of the latter kind (Bunge 1998b, 392). Furthermore, one need not know the fundamental causes of a phenomenon in order to call a proposition describing it a law. For instance, although we do not know what causes gravity, we still call Newton’s law of universal gravitation a law.

Theorems, too, are understood in various ways. Take, for instance, the discussion about folk theorems and rules of thumb in computer science. In 1980, Harel (1980) and Denning (1980a) discussed a number of “folk theorems” or “folk myths” in computer science: theorems that are simple, intuitive, widely believed, of obscure origin, and in some cases false. These are, Denning wrote, usually referred to as “well-known” theorems. Denning (1980a) gave five examples, including the sorting theorem: “Optimal sorting algorithms take time proportional to n log n to sort n items in the worst case”. As stated, that theorem is false, for it omits the restriction to comparison-based sorting; without that restriction, linear-time sorting methods provide counterexamples. The lax attitude towards folk theorems, rules of thumb, and so-called laws has not, oddly enough, been widely objected to in computing (Harel 1980; Denning 1980a; Ross 2003).
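The caveat can be illustrated with a short example (mine, not Harel’s or Denning’s): the n log n bound applies only to sorting by pairwise comparisons, so a non-comparison method such as counting sort, sketched below in Python, sorts n integers drawn from a bounded key range in time proportional to n plus the range size.

    # Counting sort: no two items are ever compared, so the comparison-model
    # lower bound of n log n does not apply. Runs in O(n + key_range) time.
    def counting_sort(items, key_range):
        """Sort non-negative integers smaller than key_range."""
        counts = [0] * key_range
        for x in items:
            counts[x] += 1          # tally each key
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)  # emit each key in sorted order
        return result

    print(counting_sort([4, 1, 3, 1, 0, 4, 2], key_range=5))
    # -> [0, 1, 1, 2, 3, 4, 4]

The trade-off, of course, is that counting sort assumes a known, bounded key range, which is exactly the kind of unstated assumption that turns a theorem into a folk theorem.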

Another source of confusion in computer scientists’ parlance, noted by the philosopher of computing James H. Moor (1978), is the occasional sliding between programs, models, and theories as if there were no distinction between them. In science in general, the term theory is sometimes used for relating laws with principles and ideas different from those used in the law itself (Casti 1989, 23). For instance, the Kinetic Theory of Gases is a framework that explains Boyle’s Law by invoking an idea of the atomic nature of gas as “billiard balls” flying and bouncing about (Casti 1989, 23). Another meaning of the term is more generic: a set of laws used to explain and predict a set of events. In the latter sense, one can have a theory without having a model; for instance, one can have quantum theory without having a model of quantum mechanics.

Also, one can have a model of a subject matter without having a theory about that subject matter (Moor 1978). A soap film on irregularly shaped wires can be regarded as a model that gives solutions to minimization problems, yet it does not provide a theory about minimization (Moor 1978). In computer science, the “model = theory myth” states that constructing a computer model of a phenomenon is the same as constructing a theory about that phenomenon (Moor 1978).

Furthermore, there is also common confusion between programs and theories (Moor 1978). Take, for instance, a quotation from the artificial intelligence literature: “Occasionally, after seeing what a program can do, someone will ask for a specification of the theory behind it. Often the correct response is that the program is the theory” (Winston 1977, 258). Newell and Simon (1961, 6) equated a “theory of thinking” with the ability to simulate human problem-solving behavior. This tendency to equate simulations, programs, and theories has been strongly argued against by, for example, the philosopher John R. Searle (e.g., 1980, 1990a, b).

If one considers examples of programs, it is dubious whether the ELIZA program is a theory of human conversation (or of thinking, or of the mind), or whether Kids’ Club’s football-playing robots run the theory of playing football. And surely the ability to model a phenomenon does not necessarily mean that one understands the phenomenon. A critic of Wolfram’s (2002) book A New Kind of Science asked: “Just because Wolfram can cook up a cellular automaton that seems to produce the spot pattern on a leopard, may we safely conclude that he understands the mechanisms by which the spots are produced on the leopard, or why the spots are there, or what function (evolutionary or mating or camouflage or other) they perform?” (Krantz 2002).
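The point can be made concrete with a toy example (mine, not Krantz’s or Wolfram’s). The one-dimensional cellular automaton known as Rule 30, sketched below in Python, produces an intricate pattern from a trivial update rule; running it demonstrates pattern generation, but it explains nothing about any natural pattern the output happens to resemble.

    # Rule 30: each cell's next state is read from bit (4*left + 2*center +
    # 1*right) of the number 30. A trivial rule with an intricate output.
    RULE = 30

    def step(cells):
        """Advance one generation, wrapping around at the edges."""
        n = len(cells)
        return [(RULE >> (4 * cells[(i - 1) % n]
                          + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # a single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)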

Although many arguments that computer science is an empirical science have been descriptive, there is also a strong normative movement to increase the amount of experimentation in computer science. Numerous studies have compared research reports in computer science with research reports in other fields and found that computer scientists experiment significantly less than, or employ different research designs from, researchers in many other disciplines. This finding has led many authors to advise computer scientists to experiment more (Tichy et al. 1995; Denning 2005). In an early article, Glass (1995) argued that even theoretical computer science needs experimental or observational evaluation. However, many of those authors have failed to justify why computer scientists should aspire to work like scientists in other fields do. One might justly ask why, if the subject matter of computer science differs from that of the other sciences, its methods should be the same.

For example, in practice, computer scientists do not work like physicists do (Hartmanis 1993; Tichy et al. 1995; Tichy 1998). The work that wins Nobel Prizes is very different from the work that wins Turing Awards (Hartmanis 1993). Computer science may simply be different from the natural sciences, and many authors who make normative arguments about computer science downplay that difference.

Meta-researchers of computer science have often compared publications in computing with publications in some other discipline d and found that computer scientists utilize some specific methodology M less than researchers in d do. Based on that finding, a number of meta-researchers have argued that computer scientists should rectify the situation by increasing the proportion of methodology M in computer science research. For example, Tichy et al. (1995) called some of the current research traditions “unacceptable, even alarming” because they do not necessitate validation through empirical studies. However, although meta-research studies are usually conducted with rigorous methods, they often fail to address one crucial issue: they rarely justify the assumption that computer science should be done like discipline d. And that assumption is not necessarily correct.

Another source of conceptual confusion is the difference between research on naturally occurring phenomena and research on artificial (human-made) phenomena. For instance, observations about algorithm behavior, about the usability of machinery and software, or about information retrieval are observations of things that computer scientists have constructed. Such observations do not necessarily tell anything new about the world; rather, they tell how well previous computer scientists have done their job. That, some critics argue, is not the task of science, and therefore, the thinking goes, computer science should not be called a science. Another view is that computer science is a natural science that studies procedures (Shapiro 2001), but that view faces problems with the ontologically (and, to some extent, epistemologically) subjective nature of procedures (see Searle (1996) for a discussion of those objective-subjective distinctions).

Overall, insofar as the phenomena that computer science studies (be they information, computers, computations, processes, procedures, usability, programs, or algorithms) are human-made, studies of those phenomena differ from studies of naturally occurring phenomena. If the term science refers strictly to studies of naturally occurring phenomena, then, like mathematics, computer science might not be a science.

In the end, insofar as terminology is concerned, opinions remain divided. Concerning the naming of computing as a discipline, Knuth (1985) wrote that “the name of our discipline isn’t of vital importance, since we will go on doing what we are doing no matter what it is called; after all, other disciplines like Mathematics and Chemistry are no longer related very strongly to the etymology of their names”. Brooks (1996), on the other hand, considered names to be very important. Forsythe (1968, 455) aptly pointed out that when the discipline was young, names were important for creating a disciplinary identity, but that in a purely intellectual sense such jurisdictional questions are sterile and a waste of time.

Despite the frustration with terminological wrangling, even terms that resist strict definition can still be useful. For example, Wittgenstein (1958, \(\S 64-\S 67\)) noted that it is nigh impossible to come up with an all-inclusive definition of the term game: such a definition would have to cover, under the same umbrella, games as different as chess, solitaire, football, the Olympic Games, Quake III, and catch. Yet even if game cannot be strictly defined, the term is still very useful (Wittgenstein 1958). Perhaps, like game, the terms science and computer science can function as useful umbrella terms that are just not easily definable.

Conclusions

Throughout its history, computing as a discipline has been overshadowed by an identity crisis. During that history there have been three overlapping stages in the debates about the soul of computing as a discipline. Firstly, in the early days of the field, it was important to detach computing as a discipline from the fields that gave birth to it, especially mathematics and electrical engineering. Hence, there were a great number of arguments that computing is a novel discipline and not a subset of some other discipline. Secondly, after the field gained independence, efforts turned towards formulating an overarching understanding of computing as a discipline, either by describing what computing researchers actually do or by prescribing what they should do. Thirdly, lately a great number of articles have argued for internal expansion: they have argued for extending the discipline to topics, aims, methods, and research approaches that have not previously been considered part of the discipline.

At the heart of many of the debates over the disciplinary identity of computing lies the subjective-objective division in its epistemological and ontological forms (see Searle 1996). One of the most common framings of that division is the tension between science and art, a dichotomy that contrasts the logico-rational side of computing with its creative, constructive side. Variations of the same juxtaposition arise as divisions between abstract and pragmatic, theory and practice, form and content, academy and industry, computing and the computer, algorithms and programs, science and technology, and so forth ad nauseam. The debates about the scientific merits of computing have often drawn on that juxtaposition.

The tensions about the soul of computing as a scientific discipline are manifest in a plethora of viewpoints concerning the legitimate research subjects of computing. In a similar vein, there is an abundance of viewpoints about valid activities, methods, and research approaches in computing fields. The current gamut of topics, subjects, and approaches in computing fields does not fit under any single epistemological or methodological system. That variety is a source of strength and progress (Tedre and Sutinen 2009), but it also exposes the field to critique from all directions, both external and internal.

The heated debates about the soul of computing have not ceased. Among all the debates about the definition or characterization of computing as a scientific discipline, no argument has yet brought the discussion to closure. Perhaps that is because there is nothing wrong either in considering computing to be a science or in considering it to be something else; each of those arguments merely assumes a specific viewpoint on both science and computing. After all, there is no strict definition of science, and there is no shared understanding of most aspects of computer science. When working with concepts and phenomena as complex as science and computer science, one has to have some tolerance for uncertainty. Nevertheless, if computing is considered to be a science, but the field’s international curricula recommendations lack training in scientific methodology, alarm bells should go off.

One might say that the name of the discipline is just a label. But characterizations of the discipline, discussions of its intellectual identity, and even labels are important for various reasons. On the one hand, funding, academic status, professional identity, political leverage, societal influence, and many other things in our societies depend to some degree on how a discipline looks from the outside. On the other hand, disciplinary self-understanding and conceptual clarity are important parts of a mature field. In addition, if computing as an academic discipline (or computer science, for that matter) needs to be distinguishable from other things, such as knitting, forklifts, or chemistry, there has to be some common understanding of what we mean by computer science.

But in many of the debates about the disciplinary identity of computing, debates that prima facie may seem like fruitless parochialism, there is a much deeper concern: the future of computing as a discipline. There is a feeling that although Newell et al. (1967) claimed that the computer is different from tools like the microscope and the spectrometer, computing as an academic field may, after all, face the same fate as the science of microscopy: it may in the future lose its status as an academic field, partly becoming subsumed under other disciplines and partly continuing to serve as a tool for them (Smith 1998). Is the computing researchers’ obsession with the discipline’s academic image actually about a deep worry, or hope, about the future of the field?