1 Introduction

Facial recognition is arguably one of the iconic technologies of the current wave of AI ascendance that characterizes today’s world. What makes it so powerful is that the algorithm promises to find patterns among a very large number of human faces, and these patterns can be used to predict traits with a high degree of accuracy. What is startling is that these patterns and traits can sometimes hardly be discerned even by trained human eyes, which means that AI is now better than human beings in a number of areas. It is not surprising, then, that some of the current uses of the technology are in security and healthcare. Instead of relying on a PIN, one can let a camera scan one’s face to establish one’s identity for a variety of purposes. Furthermore, the technology also promises to distinguish between a real human face and a photograph, making it harder for someone to fool the system by showing it a photo instead (Sennaar 2019). In healthcare, facial recognition can be used to manage a patient’s adherence to a medication routine or to gauge the patient’s pain level (Sennaar 2019). In fact, one of the most frequent uses of the technology is to identify individuals among a large number of people in public places, linking their faces in the crowd to databases containing personal histories.

It is clear by now that facial recognition technology raises many ethical problems, such as violations of privacy rights and illicit targeting. Indeed, there are already many studies focusing on the ethical aspects of facial recognition technology (Introna 2005; Bowyer 2004; Brey 2004). In this paper, however, our focus is rather different. Instead of delineating and analyzing the ethical ramifications of the technology, the purpose of this paper is to present a preliminary phenomenological and hermeneutical analysis of facial recognition technology, following the pioneering work of Don Ihde. In a series of books and articles, Ihde introduces a series of arguments detailing how the discipline of hermeneutics should be expanded to include not only analyses of texts, but also of things in the world (Ihde 1997, 1998, 2010, 2012). Furthermore, he also proposes a new form of analysis in philosophy of technology, postphenomenology, which emphasizes the dynamic nature of technology as well as its embeddedness in socio-historical contexts. Thus, what I would like to do in this paper is to present an Ihdean postphenomenological analysis of facial recognition technology. As Ihde’s main idea is that of material hermeneutics, I propose that we might understand the whole phenomenon of facial recognition, and the machine learning and artificial intelligence underlying it, through a concept of ‘machine hermeneutics.’ This reflects the emerging reality that the agent of hermeneutics today is not only us humans; increasingly, machines are doing their own hermeneutic work too. With facial recognition technology, this means that we humans are subject to two layers of interpretation. The first layer occurs, as Ihde suggests, at the level of human perception through scientific instruments; the second layer is present when the algorithmic machine itself interprets its data input and outputs something that is already processed and interpreted in a way not unlike that of human beings.

This possibility gives rise to roughly two kinds of philosophical problems. The first kind is ethical: Is it ethical to design machines that have the capability of interpreting reality on their own, without input from humans? And when machines do have such a capability, how can we be assured that they exercise it in an ethical manner? Most of the recent outburst of academic activity related to artificial intelligence is of this type. The second kind, however, is epistemological, and it pertains more closely to the problem discussed here. When machines do their own interpretation, their own hermeneutic activity, how can we know that they are doing the right thing, not only in the sense of being ethical, but in the sense of outputting something that is correct? Perhaps our criterion here would not be unlike that of ordinary hermeneutics, where the standard of correctness of interpretation comes from a variety of sources. In any case, the prevalence of machine hermeneutic devices means that we are doubly estranged from reality. That might not be a bad thing in itself, but it means that reality and what is interpreted are hopelessly mixed up, and there is perhaps no way of disentangling the two. The human element now reaches deep down into nature itself.

2 Expanded hermeneutics, material hermeneutics, and digital hermeneutics

As previously discussed, Don Ihde has proposed a new way of doing hermeneutics in which the object of analysis is not only the text, but the material world. His objective is primarily epistemological. He wants to criticize the distinction between positivism and hermeneutics, where the former is founded on the belief that science can deliver an accurate description of the world through rigorous methodology, while hermeneutics is an activity that seeks to understand what texts say, but without such a rigorous basis. According to Ihde, this hard and fast distinction leaves us ignorant of the elements of one lying deep inside the other; thus he appears to maintain that the distinction does not reflect the reality of the work of either the scientist or the hermeneuticist. This is because, as he argues, there are hermeneutic elements inside each and every scientific activity, and vice versa. A scientific investigation of nature, for Ihde, has hermeneutic elements because the scientist has to interpret the data, which implies that the interpretation will intervene, in one way or another, in the attempt to gain access to external reality. In a way, this compromises the overall aim of science to gain an accurate picture of the world, but there is no way around it, as the data are already the results of an activity that necessarily mediates reality. Furthermore, the use of technological devices such as the microscope or the telescope further distorts the idealized picture of a perfectly accurate description; such devices insert their own intervention into the process that runs from the reflection of light off the observed object to the eyes of the researcher. The intervention can be very clear and transparent, such as the very clear mirror of the Hubble Space Telescope, but the resulting colorful pictures that we see from Hubble do not have to correspond literally to what the stars really look like. In fact, it is quite incoherent to imagine what a star really looks like, as it is in itself (in the way required by the traditional picture of purely objective observation), because it is too big and too distant, and thus there is no vantage point from which one can view it and claim that one observes the star as it really is. The same reasoning applies to the very small, observed through the microscope.

On the other side, there are also some “positivistic” or “scientific” elements in hermeneutics. Ihde says that one of the consequences of the traditional bifurcation is that hermeneutics, the central activity of the humanistic disciplines, is marginalized and cut off from the so-called hard disciplines of science and mathematics. The humanistic disciplines came to be regarded as unable to attain truth, and what is left of them is only a matter of “interpretation” or “opinion.” However, as any scholar in the humanities can attest, there are indeed elements of objectivity and truth in their disciplines. Even though philosophy has been contentious ever since Plato’s time, it has been widely regarded as a means to truth, and certainty has long been part of philosophers’ concern, as have statements considered to be the paragon of certainty, such as logical statements. This interpenetration of one end of the spectrum into the other prompts Ihde to doubt that the bifurcation really works, and the upshot is the notion of material hermeneutics, in which hermeneutics is seen as part and parcel of the scientific enterprise, the object of analysis is no longer only texts but also nature itself, and the instrument used is not merely the eyes reading the text, but all kinds of technological devices such as the telescope or the microscope.

The project of expanding hermeneutics goes hand in hand with another of Ihde’s projects, postphenomenology. Traditionally, philosophers of technology employ phenomenological analysis as a means toward comprehending the nature of technology as it makes itself appear in the lifeworld of human beings. Philosophers ponder a phenomenon, such as one in which technological artifacts play a central role, and then try to find out how the phenomenon appears as something that is imbued, or not, with meanings and values. Martin Heidegger, for example, is well known for his analysis of the new technology of his time, such as the hydroelectric dam, seeking to find meaning in the dam by linking it with examples of arts and crafts from the ancient past (Heidegger 1977). His analysis of the dam as an exemplification of the notion of Bestand, or standing-reserve, presumably arises from Heidegger’s reflection on the use and purpose for which the dam was built. The river Rhine used to flow freely; its water was part of the natural landscape. The dam that was built to block the river and generate electricity thus symbolizes a mindset that looks at the river not as a goddess, as the ancients did, but as a natural resource, standing ready to be tapped whenever needs arise. For Ihde, this way of analyzing phenomena still retains a vestige of the old distinction between positivism and hermeneutics; in Heidegger, there is no mention of any scientific element entering into his phenomenological analysis of the dam. It is as if the objective or scientific elements are consigned to the “positivist” side and nothing is left on this side except his own mode of analysis, which needs no science or technology at all.

In contrast to this way of thinking, Ihde proposes a postphenomenological approach in which the analysis is done by observing how a technological artifact actually operates in concrete settings and by using findings from that observation as a necessary ingredient in thinking about problems in philosophy of technology (Ihde 2010; Hongladarom 2013). Instead of asking what abstract meaning the dam poses in the mind of the philosopher when she performs a phenomenological analysis, Ihde’s question would be what kind of impact the artifact has on human knowledge or on other areas of life, and how we can understand the role the artifact plays in the lifeworld of the human community as people engage with one another in their daily living. Since knowledge always plays a very important part in daily living, the role the artifact plays in knowledge is highly significant. For example, the typewriter played a very important role in creating, or at least disseminating, knowledge, in the same way that the notebook or the desktop computer does nowadays. Reflecting postphenomenologically implies that the typewriter has a mediating role in the activity of creating and disseminating knowledge, and at least some positive aspects as a result. This is in contrast with Heidegger’s denigration of the typewriter as a device that divorces humans from the naturalness exhibited in longhand writing with pen and paper. (One can imagine quite clearly what Heidegger would say of today’s notebooks or iPads.) Instead of essentializing technology in the way Heidegger does, Ihde proposes to look at technology in a more diffused manner. Technology always plays a role within the web of activities performed by us human beings, and it takes on meanings along the way. This way of looking has a very significant implication for our examination of the new facial recognition technology; an expanded hermeneutics and postphenomenology of facial recognition and artificial intelligence is especially promising. What seems to set facial recognition apart from older forms of technology is that it does its own form of perception, thus complicating and radicalizing Ihde’s analysis of hermeneutics a great deal.

Ihde calls this new kind of hermeneutics “expanded hermeneutics” and, later, “material hermeneutics.” The difference between the two seems to be that the former refers more to the role that hermeneutics plays in science and in dismantling the distinction between positivism and hermeneutics described above, while the latter refers more to the application of the hermeneutic method within the social sciences and the humanities themselves. Ihde talks about letting things themselves speak, i.e., “a technique whereby things—materialities—are given a voice” (Ihde 2005: 342; Capurro 2010: 37; Verbeek 2005; Friis et al. 2012). Instead of focusing only on the interpretation of texts, Ihde argues that the enterprise should be expanded to do essentially the same thing to material realities; in fact, his argument is that the disciplines that profess to study the latter, the sciences, have been using the hermeneutic method from the beginning. When the hermeneutic method is applied to digital phenomena, the process becomes known as “digital hermeneutics” (Capurro 2010; Tripathi 2016). In addition to applying the hermeneutic method to the traditional areas of study of the social sciences and humanities, digital hermeneutics seeks to understand how the method operates in the digital field. Digital hermeneutics is thus both a field in which technology itself plays a constitutive role (the digital technology being the environment in which everything operates, which obviously includes how the study itself is conducted) and a field in which the technology is an object of analysis. This implies that the technology is both the object and the instrument by which it is studied and analyzed. Hermeneutics can thus be seen in how the unit is analyzed, which is the traditional way hermeneutics is understood, as well as in the environment and the means by which the analysis is done. The latter does not merely mean that the hermeneutic method is being applied to the unit of analysis (this is present in any type of hermeneutic study); it means that the digital technology itself comprises the means of analysis from the beginning, which reminds one of using a mirror to study another mirror, creating endless reflections. Here the hermeneutic method, the act of trying to understand the text and find its meanings, is performed through the help and use of digital technology, which is itself the object being studied. This tendency toward self-reflection will become much more pronounced when I discuss what I term “machine hermeneutics” later in the paper. Before we do that, however, it is appropriate to discuss in some detail an application of today’s very advanced digital technology: the use of algorithmic deep learning methods to identify human faces and to predict traits based on analyses of the acquired facial images. We will see that hermeneutics is in operation at various levels in this new type of technology.

3 Facial recognition software and machine hermeneutics

Facial recognition technology has perhaps now attained the status of the iconic use of artificial intelligence. The technology is being used widely, and the most notorious use of all appears to take place in China, where millions of ethnic Muslims are under constant surveillance by devices that use the technology. In the West, its use appears to be more limited, but it is still being developed for a variety of potential applications. The website Sytoss.com lists five current uses of the technology in the West: (1) facial recognition for access control, (2) class attendance tracking and control, (3) marketing, (4) authorization in banking, and (5) public security (5 Popular Uses 2019). We can see that these uses are not very different from the uses in China. There is a news report that the technology is being used in a real situation at a school in China to track and control students, exactly as mentioned on the Sytoss.com website. In fact, the applications listed on the website are among the more basic ones; that is, facial recognition is used to identify particular individuals whose faces are remembered by the machine and identified when the machine finds a match in its database. What concerns us in this paper, however, is the more advanced use of the technology, namely using the data obtained through it for categorization and prediction of certain traits. Data obtained from scanning a very large number of faces can be processed in such a way that recurring patterns emerge and become visible to the “eyes” of the machine. The machine can then match the patterns with certain traits or properties of the population in a way that can scarcely be noticed by ordinary humans. For example, it might be possible in the near future to predict criminal behavior merely by looking at someone’s face and comparing certain features in that face with what the machine has already detected in the faces of known criminals. The machine could be seeing something in the face of someone whom it predicts to possess criminal traits while we human beings cannot see anything at all. This tendency naturally gives rise to grave ethical concerns; however, ethical concerns are not the topic of this particular paper. We are interested only in the underlying meaning behind this kind of power exhibited by the machine. One example of how the predictive power of the machine learning algorithm is used is in screening job interviewees (Harwell 2019); another is using the software to detect the onset of a rare genetic disease (Mjoseth 2017). The face of a job applicant is scanned by the machine, which then analyzes it for traits such as how excited the candidate is about the job being offered, and so on. The analysis of the candidate’s face is compared with a huge database at the algorithm’s disposal. The machine analyzes the database for traits that predict employability and success in the organization, and then tries to match the faces of the candidates with those traits of known successful employees. If a candidate’s face matches, his or her chance of getting hired rises dramatically.
In the case of the genetic disease, the machine analyzes photos of the faces of potential patients and alerts medical personnel if it finds that the faces show patterns linked with the disease; the rate of accuracy of the diagnosis is more than 96 percent (Mjoseth 2017). What is noticeable in these cases is that it is the machine that does the calculation and the decision-making all by itself. It scans a relatively large number of facial data, compares them with the database, and then comes up with its own prediction as to who will benefit the company the most in the long run or who is most likely to have a certain disease.
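To make the kind of processing just described more concrete, the following is a minimal sketch of the matching step: a face image is reduced to a numeric vector and compared against a labeled reference database. Everything here is an illustrative assumption; the embedding function in particular is a crude stand-in for the deep neural networks that deployed systems actually use, and none of the names come from the systems cited above.

```python
import numpy as np

def embed_face(image):
    # Crude stand-in: flatten pixel values into one vector. A deployed
    # system would use a deep network trained to produce discriminative
    # face embeddings; this placeholder only illustrates the idea of
    # turning a face into "a large set of numbers."
    return np.asarray(image, dtype=float).ravel()

def predict_trait(candidate_image, reference_embeddings, labels, k=5):
    """Return the majority label among the k reference faces most
    similar to the candidate, using cosine similarity."""
    q = embed_face(candidate_image)
    q = q / np.linalg.norm(q)
    refs = reference_embeddings / np.linalg.norm(
        reference_embeddings, axis=1, keepdims=True)
    nearest = np.argsort(refs @ q)[-k:]      # indices of the best matches
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)  # the machine's "reading"
```

The point of the sketch is only that prediction here is pattern matching against a database, which is what licenses the hermeneutic vocabulary developed below.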

An application of facial recognition technology that foregrounds Ihde’s view on material hermeneutics is the use of the technology by the police (Collins 2019). We can imagine police officers wearing technologically sophisticated glasses hooked up to a computer and AI facial recognition software. When they look at a group of people, the program puts the faces of the people in rectangular frames accompanied by numbers. These frames then blink and get highlighted when a sought-after suspect appears in the visual field, making it easy for the police to identify the suspect and take appropriate action. Here the glasses function in the way described by Ihde: they are the intermediaries between the police officer and her visual field consisting of a group of people. Hermeneutics comes into play when the glasses, being a technological device, mediate the officer’s perception in the same way that the telescope mediates our perception of the distant stars. Peter-Paul Verbeek, in What Things Do, presents a number of diagrams designed to capture the gist of the relations among the subject, the world, and technology as follows:

unmediated perception: I–world

mediated perception: I–technology–world (Verbeek 2005: 125)

In the first case, the police look at the people with unaided eyes. According to Ihde, this is an example of a simple, direct relation between the subject and the world. However, when the police officers put on their glasses, the glasses function as a kind of technological mediation. This is the case whether the glasses are ordinary ones or highly sophisticated ones equipped with the latest software. In the second case, the glasses function as technological mediation, inserting hermeneutic elements into the relation of perception. In an ordinary setting, the glasses function as an aid to the subject’s vision. For those who suffer from nearsightedness, normal living would not be possible without eyeglasses. So instead of functioning, as some may think, as something that distorts the “pure” vision of unaided eyes, the glasses correct the blurred vision of the nearsighted person and help them function in the world; here there is a strong case that the glasses help the subject see things as they are.

Now, with the facial recognition software, the situation is more complicated. What is presented to the eyes of the police is not only what is mediated by the glasses: the data from the glasses are sent to the software for processing, and the output is sent back to the glasses, which then present the result as an image that the police can see in and through the glasses. For example, the image that the police see is typically faces of people, each now framed by a rectangle accompanied by a series of numbers and alphabetic codes. It is obvious that the rectangular frames do not belong in nature; they are imposed on the images presented to the officer’s eyes. However, there is a sense in which these frames and alphanumeric codes belong to external nature, because they describe the faces of the people being observed externally by the police. We can look at this as another extension of the capabilities of the glasses themselves. Earlier we discussed how eyeglasses correct the eyesight of the wearer, making the wearer’s representation of reality more accurate. Here, with the imposed information, the reality being perceived is augmented by the software, and since the information is supposed to be that of the perceived faces, it belongs to the faces in the sense that it describes, through the work of the algorithmic software, what each particular face in the glasses is like and what information can be gleaned from it. Another way of putting this is that the inside and the outside are merged together. The inside is what is supplied by the software, and the outside is the information from the perceived faces coming to the glasses and the eyes. A parallel situation, perhaps, is our normal perception, where the inner processing work of the eyes and the brain results in the images that we see and understand. We put labels on what we see, such as “That before me is a human face, and it belongs to someone I know;” the difference is that in the AI case the information is literally labeled in the images presented by the glasses themselves. The officer thus both sees in and through her AI-equipped eyeglasses.
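The overlay the officer sees can be illustrated with a short sketch using OpenCV’s classical face detector. This is a simplification: deployed systems use deep detection networks, and the identifier strings below are purely illustrative.

```python
import cv2

# Classical Haar-cascade detector shipped with OpenCV; a simplification
# of the deep networks used in deployed systems.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_frame(frame):
    """Frame each detected face with a rectangle and an alphanumeric
    code, mimicking the augmented image seen through the glasses."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "ID-%04d" % i, (x, max(y - 8, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame  # the augmented image: reality plus imposed labels
```

Even at this toy level, the output image mixes what comes from the world with what the software supplies, which is precisely the merging of inside and outside described above.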

I would like to call this new phenomenon “machine hermeneutics.” Hitherto, hermeneutic analysis has been performed by humans. Ihde’s material hermeneutics, where things speak for themselves, is limited to situations where things become objects of interpretive analysis by humans; there, things merely replace texts. However, in the new situation made possible by machine learning algorithms, machines share the task of interpreting and analyzing the data. In analyzing the data obtained through the glasses worn by the police, the software performs its own interpretive tasks, categorizing the scanned faces and offering its own predictions as to who among them is likely to have criminal tendencies, in addition to identifying known criminals whose faces are already in the database. The task of the machine appears to parallel that of humans very closely: first, it obtains the raw data, in this case the images fed into the glasses; then it singles out human faces from the raw data and performs a series of complex calculations, turning each face into a large set of numbers and coordinates; then it analyzes these sets through intricate steps of calculation to come up with its predictions. The predictions are the result of the interpretation, and the process resembles a normal interpretive process, in which a human being takes up a set of raw data, such as a piece of ancient text, and tries to make sense of it by comparing it with her “database” (in this case her memory), finding patterns that match. The end result is her own interpretation of what the text means. That the machine learning algorithm is an example of today’s sophisticated artificial intelligence means that the algorithm is able to do its own calculations, its own thinking and, I would say, interpreting, to come up with results that purport to make sense of the data, be they raw images or ancient texts.
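The steps just enumerated (raw image, singling out faces, numerical reduction, comparison, prediction) can be composed into a single pipeline. The sketch below reuses the hypothetical helpers from the two earlier sketches (the detector and predict_trait) and is, again, purely illustrative.

```python
import cv2  # assumes the detector and predict_trait sketched above

def machine_interpret(raw_frame, detector, reference_embeddings, labels):
    """Raw image in, machine 'interpretation' out: each detected face
    is reduced to numbers, compared with the database, and returned
    with a predicted label, i.e., the machine's reading of the scene."""
    gray = cv2.cvtColor(raw_frame, cv2.COLOR_BGR2GRAY)
    readings = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face = raw_frame[y:y + h, x:x + w]        # single out one face
        label = predict_trait(face, reference_embeddings, labels)
        readings.append(((x, y, w, h), label))    # prediction as interpretation
    return readings
```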

In this case, the diagram originally proposed by Verbeek can be modified. The intervention of machine hermeneutics results in the following diagram:

I–technology–World2–AI–World1

In this relation, the subject (the “I”) experiences the world through an AI-equipped device (such as the glasses worn by the police officers described above), but the world is already interpreted by the AI algorithm; thus, there are actually two worlds in the picture. (In a sense, though, there is only one world, because the officers are looking at only one scene; this one scene is presented in two layers.) World1 is the world where the raw image originates; the image is then processed by the algorithm, which imposes frames and codes and presents the processed output to the glasses the police officer is wearing, resulting in the image she sees in her glasses. The image she sees represents World2, i.e., the world already interpreted by the algorithm. The officer looks at the crowd scene outside through her glasses, so here the glasses, qua ordinary eyeglasses, function in the way described by Ihde, that is, as a normal kind of technological mediation. Machine hermeneutics thus functions alongside the more mundane expanded hermeneutics; the former is represented by the AI algorithm connected with the glasses, the latter by the glasses themselves. Moreover, World2 is perceived by the subject, the human individual, even though she perceives it through some form of technology, whereas World1 is perceived by the machine. This is another important difference.

The difference between machine and expanded (or material) hermeneutics can also be described more generally as follows. In the latter case, the function of the technology involved is an inert one, in the sense that the technology does not insert itself, so to speak, into the interpretive process. Ihde’s favorite examples, the telescope and the microscope, though themselves marvelous technological achievements, function only to bring distant objects closer to sight or to magnify very small objects. In either case, the goal is to represent reality as it really is (or is supposed to be). Certainly there is a lot of science and technology involved in these devices, but the science and technology here are not intended or designed to process or manipulate the information input to the devices. The overriding function of the microscope or the telescope is to be transparent. It is as if we were taken to a distance from the stars from which we could observe what they are like with the naked eye, and the same goes for the microscope. These devices are not designed to project their own take or their own understanding (metaphorically speaking; more on this very soon) onto reality. But this is precisely what the machine learning algorithm is designed to do. In addition to processing images of a crowd to be observed by the police, AI is working in many other areas. In navigating and driving driverless cars, the AI has to make its own very fast and very numerous interpretations as it navigates the maze of today’s roads. The point is that it is supposed to do all this autonomously. Here we might be able to say that the driverless car “knows” or “understands” the rules of the road and how to drive without breaking any law or causing any accidents. Nowadays, self-driving cars are still at a developmental stage and can legally operate only in more controlled environments, but many believe that it is only a matter of time before they can function as well as any human being (or better) on normal roads. This, of course, depends on local or national regulations, a matter being widely debated and discussed. When we say that the car “knows” how to drive, we do not mean that the car has consciousness and is aware that it knows how to drive, as we believe human beings are. In fact, the autonomous car does not seem to have any consciousness at all, since it is only a machine, though a very sophisticated one. Here I would like to argue that terms such as “knowing” and “understanding” that many may colloquially ascribe to self-driving cars are only metaphorical. We use the same linguistic mechanism when we say that we “fly” to this or that place, for example. Of course we are not birds, but we “fly” nonetheless.

This implies that when the AI algorithm interprets the raw data for the police or for the self-driving mechanism, what it does is interpreting, which clearly implies that it is doing hermeneutics. However, when I say that AI is doing hermeneutics, I am speaking only metaphorically, in the sense described above. So far, no AI program is conscious, but that does not prevent it from performing tasks that only a few years ago many ascribed exclusively to human beings, tasks that many believed to be possible only for conscious human beings. Going back to the self-driving cars, World1 in the diagram above is the reality of the roads with their traffic signs, pedestrians, other vehicles, and so on. The AI interprets these data and comes up with World2, a model that it has created, a map of the whole environment relevant to its driving; this map can be presented in visual form to the human inside the car, and when he perceives it through his normal device, such as his glasses, the glasses function in the normal way of Ihde’s expanded hermeneutics. We can also imagine the World2 images processed by the machine being transmitted over a long distance to humans who happen to be very far away from the car. In this case, the whole process of transmission functions as “Technology” bridging the “I” and “World2” in the diagram. This is the case as long as the transmission process does not insert its own interpretation, but faithfully transmits the data to the eyes of the human observer.
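The World1-to-World2 structure just described can also be put schematically in code. The sketch below is a bare abstraction under obviously simplified assumptions; a real driving stack involves many sensing, tracking, and planning modules, and all names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """World2: the machine's interpreted model of the scene."""
    obstacles: list = field(default_factory=list)
    lane_offset: float = 0.0   # how far the car sits from lane center
    speed_limit: float = 50.0

def interpret(raw_frame) -> WorldModel:
    # Hypothetical interpretation step: reduce the raw sensor image
    # (World1) to the features relevant to driving (World2). A real
    # system would run detection and tracking networks here.
    return WorldModel()

def drive_step(raw_frame):
    world2 = interpret(raw_frame)          # the machine's hermeneutic act
    steering = -0.1 * world2.lane_offset   # the car acts on World2,
    return world2, steering                # not on raw reality itself
```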

4 Does this mean we lose access to reality?

The introduction of machine hermeneutics gives rise to the question whether the interpretive process performed by the machine means that the machine is giving its own view, its own version, of reality, thereby cutting us off from the reality we are perceiving. The answer, as I understand it, is that it depends. Unlike the microscope or the telescope, which aim at representing reality as it supposedly is, the AI algorithm modifies reality in a significant way; but modifying reality need not imply keeping us further away from it. On the one hand, the algorithm helps us understand reality better; it is as if somebody were looking at the reality we are looking at and explaining it for us, and the explanation could be so integrated with the looking that it becomes part of the things being looked at themselves. On the other hand, the work of the algorithm could also lead to a situation where the subject is misled or deceived by the processed image. In the case of the police officers, this would mean that the analysis presented on their glasses is false, which could lead to disastrous results. Techniques such as Deep Fake, where the computer produces images of something that does not exist at all but that are so similar to reality that no one can see the difference, are particularly worrying. Here we are faced with a situation where photographic images can be totally made up, as if they were paintings. People have a natural tendency to believe that photographs are accurate representations, but with Deep Fake this belief can be thrown into utter confusion. This could well happen with the facial recognition software used by the police. If Deep Fake or misleading algorithms found their way into the AI used by the police, it would be an easy matter for the software to mislead the police into believing that an innocent bystander is a criminal. That would create real chaos.

Thus, an effective program that assures everybody that the algorithm functions the way it should is absolutely essential if we are to depend on AI in our daily lives. This can be done, for example, by designing and programming the algorithm itself to behave in the way that we expect. After all, if the machine is to do its own hermeneutics, it has to learn how to do it the right way; this is just part of knowing how to do hermeneutics in the first place. In other words, there has to be a match between the technical excellence of the AI program and its ethical excellence. The technical excellence of the program means that the program is able to do the technical work it was designed for well, but this always needs to be coupled with ethical excellence, in such a way that the two cannot be considered apart from each other. The Deep Fake program is a glaring example of the uncoupling of technical and ethical excellence: it achieves a kind of technical excellence when it does its job well, but the potential for misuse and violations of norms is so great that it cannot be a good program in any sense. This idea, which I am developing elsewhere (Hongladarom forthcoming), is part of an ethics of artificial intelligence, a topic not directly related to the present paper. Thus, machine hermeneutics has its own standard of correctness, which is in outline not very different from that of the more traditional kind of hermeneutics. In the latter, we know that an interpretation is correct, or acceptable, when it elucidates the text in a satisfactory manner or, in other words, when it fits well with the other interpretations accepted by the community. The same goes for machine hermeneutics.

5 Conclusion

What I have tried to show in this paper is that there is an added dimension to Ihde’s material hermeneutics, which occurs when machines are capable of doing their own interpretation with the help of advanced AI algorithms. This results in a double mediation: one by the usual technology that Ihde talks about, and the other by the AI algorithm itself. I term this phenomenon of double mediation “machine hermeneutics.” It is increasingly the case that what we perceive, what we take to be external, objective reality, is an outcome of machine hermeneutics. What this means, phenomenologically speaking, is that we find ourselves more and more integrated into the machine world. This points to the importance of programming algorithms that are ethical from the beginning, so that the values and goals we cherish are already there in the goals of the algorithms themselves. This attempt operates at various levels. The police who wear AI-equipped glasses also have their own values and goals, which might not chime with those of the rest of the population. What we need, then, is ongoing rational debate and discussion, as well as further philosophical reflection, to understand the phenomenon more deeply.