Some thoughts on Christian Doeller's
»How do machines see the world? In his artistic research project CYTTER, Christian Doeller deals with the question of how sensors, data filters and digital production processes change the way we look at ourselves and our environment.«
(From the invitation card to the project)
At the end of February 2021, I receive an invitation email to participate in the project and go to the website. You can register and then send an item by post to the data lab. This object then goes through a series of translation steps in the lab, producing various digital and analogue intermediate results. The core of the data lab is a network of machines (scanners, milling machines, drawing machines, programmes and also manual reproduction processes) through which an object can circulate for any length of time, receiving new digital and analogue realisations again and again. In practice, after a series of steps, the translation process is stopped and the object sent in receives a new analogue form, which is sent back to the participants by post. Today, 1 April 2021, I received my transformed version.
The following thoughts are not a review or even a critical appraisal of Christian Doeller's project, but rather a further translation. Part of the nature of digitisation is that it does not follow the rules of the object to be digitised, but its own. No digitisation process can capture all the properties of an object; in a sense it does not engage with its objects at all, and on principle it cannot. It therefore doesn't matter which object we digitise: the surrogate we create primarily reveals the structure of the digital, not that of the analogue object world. This became clear to me when I was thinking about which object to send to Doeller's data lab. Even the specifications for the object (1 pc, max. 30 cm × 30 cm × 30 cm, max. 5 kg) are indebted to the digitisation process and the associated machines, perhaps still to the postal service's conditions of carriage, but certainly not to our universe of objects. The machines and the rules and processes inscribed in them automatically generate digital existences of the object. What is at issue here is the process of objectification itself, rather than the concrete object. The concrete object is indeed the cause and reason for the initiation of the process, but the process itself does not ask whether it does justice to the object. In the same way, I do not ask whether I am doing justice to Christian Doeller's project with my text. His project is the trigger for my thinking, but I follow my own interests and do not trace the intentions of his artistic research project. I translate his project into my own thinking space, so to speak. Just as the transformed object I received back today is a translation that no longer bears any apparent resemblance to the noodle I sent on its way weeks ago, my text makes no attempt to establish superficial correspondences to Christian Doeller's project. It is a translation, not a reflection, precise description or mathematical mapping.
Between our two objects of exchange - the noodle and the piece of linoleum floor - there are hidden but clear, reconstructible relationships; the translation of Christian Doeller's project into my text is less stringent, in that it is non-digital. But there are undoubtedly connections.
In the common sense, »object« is a term for everything that confronts the subject (the cognising I) in the external world. It is therefore about things in our environment that we can perceive, that trigger sensory stimuli, that we can look at, touch, sniff and identify as an entity. For now, then, we disregard objects that exist only in the imagination. By the time we come to digitisation, at the latest, we will realise that it simply doesn't work that way with objects. The objects that we manufacture industrially today or measure and produce scientifically in laboratories must be defined differently. In addition to the concrete properties, there are a number of abstract ones that are implemented primarily in the network of machines. With handcrafted objects, the manufacturing process could still be described as an interplay of tools and materials with the knowledge and skill of the craftsman. Craft skills have evolved slowly and have often been changed only gradually over long periods of time. The more automation advances, the more sign processes and algorithms come to the fore. The human being recedes to the same extent. Objects of the digital world are above all a network of relationships that exist between sign systems in computers. This goes so far that algorithms on microchips, for example, develop the next generation of microchips without engineers being able to understand the individual decisions in detail.
Even if it is machine-made, our initial noodle is still a real object that we can easily identify and perceive in our environment. An object that is even small enough that we can put it in an envelope and send it by post. On the website I see other objects that have been sent to the lab. So far there are 26, which have been translated in a total of 182 steps. I see mostly everyday objects there: a pen, a bicycle helmet, a pen holder, a coin, the leaf of a houseplant, a table mirror in plastic foil, toys, a bicycle inner tube, and so on. All things with which we surround ourselves, directly tangible and already within reach of our hands when we are asked to send »an object« to a laboratory. Not an everyday request! There is a story to some of the objects that went to the lab. From the short descriptions on the registration form, we can see that they are connected to something for their owners, that they carry personal meaning. Whether this history had an influence on the lab process they were subjected to, we don't know. My guess is no. Upon arrival in the lab, the objects lose their previous context and enter a new one. They lose their origin and their meaning for their former owners, becoming scientific objects for which only the physically measurable manifestation matters.
So I ask myself: what happens to my noodle when it ends up not in my stomach but in the lab? Normally we eat noodles. Through the preceding cooking, the noodle absorbs liquid, becomes bigger and soft. We chew it up and in the stomach it is broken down into components that can be used by the body; the rest is excreted. Through the digestion process, the noodle disappears. Its physical components do not disappear, the atoms and molecules become part of other processes, but the noodle as a perceptual unit does. Our laboratory noodle, however, is now not cooked and eaten, but digitised. Digitising is therefore something different from digesting. The noodle is not dissolved into its components, but serves to produce a second thing. After digitisation, we not only have our original noodle, but also a digital image. This poses a problem: What is the relationship between the digital image and the original noodle? What actually happens in this process of digitisation? That something is happening is already evident in the objects of exchange. I send away a food item and get back a piece of linoleum flooring that has been processed with a milling machine. How are these two objects connected?
Objects are entities of the material world. In other words, things in the outside world to which our perception, our cognition and also our actions can be directed, but which we assume exist independently of our contents of consciousness. As soon as we begin not only to perceive an object or perform actions on it, but to write about it, a strange doubling already comes into play. There is the object and at the same time a text, signs that are not part of the object but nevertheless connected to it. Semiotics deals with the structure of such connections, with the relations of signs and their meanings. Signs themselves, however, especially digital signs, are not objects but schemata of action! The PHYSIOLAR, the supplementary sheet to the object, which had to be filled out and sent to CYTTER.datalab together with the object, also brings a duplication into play: it demands a description in order to define the object in more detail. With such descriptions, the detachment from the object and its duplication begins. A further step in determining objects more closely on the basis of signs is to assign or deny them quantitative and qualitative properties. The objects are classified on a scale from 1 to 10, at the ends of which there are two opposing terms. Is the object rather flat or three-dimensional, rather hard or soft, rather reduced or complex, and so on; properties, in other words, that we can determine by visual or haptic inspection of the object, even if only subjectively, yet in a roughly comparable way. Quantifications are the essential mechanism of digitisation. The supplement itself is also digitised and thus acquires a dual existence. How the contents of the PHYSIOLAR, i.e. the descriptions and quantitative characteristics created by the participant, enter the further digitisation process remains hidden. Like most things that cross a laboratory threshold.
Many characteristics that we measure in the laboratory are no longer accessible to direct perception anyway; they are quantities that can only be determined with the help of technical devices and complex machines and discussed in technical terms. On a laboratory analysis of my blood that I have just been given, I read that the platelets in healthy human blood should lie between 140 and 160 per nl, the haemoglobin between 13.7 and 17.5 g/dl and the leucocytes in a range of 4.20 to 10.10 per nl. I see that my values are in the desired range, but what do these values mean? How did this division into normal and abnormal come about, and how can such values be determined? These qualities of blood, which are not directly accessible to human perception, point to the fact that empirical knowledge today develops primarily in the confrontation with objects in complex technical environments.
According to Hans-Jörg Rheinberger, the »epistemic things« of science are first formed in complex experimental arrangements in the laboratory. This process is neither inevitable nor conclusive. Therefore, not only planning and control determine everyday laboratory and research life, but also improvisation and chance. In a second step, what is recognised in the laboratory can itself become a technical object with which further epistemic things can be investigated. Laboratories are therefore always places of objectification, i.e. they first produce the objects they claim only to measure. The cultural meaning of the word blood is accordingly completely different from the meaning that the modern medical practitioner attaches to it. In most cultures, blood is a symbol of life and sacrifice, whereas in modern medicine it is a complex system for identifying diseases and planning medical interventions. Today, determining the blood count is a standard process in medical laboratories. The individual blood values, once themselves epistemic things and the result of research processes, are condensed into medical vocabulary and now realised in technical equipment. The blood values are inseparable from the laboratory equipment that measures them. Empirical findings coagulate into methods and practices that are held together by new terms and implemented in technical equipment. These technical settings can then be used to investigate further research questions, a process that knows no end.
The computer laboratory
Until the 1980s, the places of computing, the computer centres, already resembled medical, chemical or biological laboratories on the outside. At the University of Erlangen-Nuremberg, where I wrote my first small programmes for calculating machines as a freshman at the Institute for Mathematical Machines and Data Processing, the computer centre staff sat behind glass, in air-conditioned rooms, shielded from the users. Only the operators, a very skilled profession at the time, had direct access to the machines. The user programmes on punch cards were packed into boxes suitable for the machines and pushed through slots into the machine room. From then on, they entered the laboratory processes and were removed from the user's control. One never knew when the operator would take the box out of the slot, feed the contents to the punch card reader and start the process. The results printed on continuous paper were then pushed back into the anteroom through other slots. In my first attempts, all I got back were error lists, most of which were longer than the programmes themselves. These data centres have disappeared with the spread of PCs. Today it's mostly cloud computing that still takes place in air-conditioned data centres. We only get to see these places in photos and usually don't even know where they are. Communication with them is entirely through networks. When we talk about computer labs today, we usually mean something different from these high-performance computer centres: rooms with a manageable number of networked computers. In addition, there are usually a number of other machines that serve as interfaces between the digital and analogue worlds. Or we see apparatus in which processes take place that are controlled by the computers. While computer technology has become smaller and smaller over the years, interface processes remain stuck in the analogue.
The digital is pure structure and thus not bound to a specific material or physical realisation, whereas analogue material processes cannot be reduced in size at will. The interface technologies, the sensors and actuators, i.e. the devices that realise the reciprocal transition between the analogue and digital worlds, remain in size and structure half anchored in the analogue world and half in the digital.
What makes laboratories special, however, is not what is apparent, their equipment and architecture, but what takes place in them. What takes place is not self-explanatory, nor can it be decoded and understood by mere observation. By their very nature, laboratories are black boxes. Part of the nature of the black box is that it cannot be decoded by its input/output behaviour. I send in a noodle and get back a plastic engraving. This process does not in itself allow conclusions to be drawn about the internal structure of the laboratory. I could try to test whether it is a trivial black box. In that case, every time I send a fusilli noodle to the lab, the same object would have to come back. But as long as I don't have an understanding of the laboratory processes, I can never be sure that I won't suddenly get something completely different back the (x + 1)th time, even though I have received the same thing x times. Christian Doeller tries to counter this inaccessibility of laboratory processes in his CYTTER installations through »«. Here, visitors can enter the installation (in times of Corona via livestream) to take a close look at everything and ask questions. Lab assistants are on hand to answer questions about the CYTTER system. How is it decided which path an object takes through the lab? When is it »finished translating«? What influence do the specifications in the PHYSOLAR have? On the basis of which parameters do the machines work on the translations? Is there a »back«? How do the algorithms involved work? What role do humans play in the data lab? This is an attempt to dissolve the distance between visitors/participants and the lab, to take them into the world of the lab and bring them a little closer to the processes that are perceived as inscrutable. But even this has its limits. 
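The test for triviality mentioned above goes back to Heinz von Foerster's distinction between trivial and non-trivial machines: a trivial machine couples input to output through a fixed rule, while a non-trivial machine carries a hidden internal state, so that observing inputs and outputs alone can never settle what it will do next. A minimal sketch in Python (the machines and their rules are invented for illustration, not taken from the CYTTER lab):

```python
def trivial_machine(x):
    """A trivial machine: the same input always yields the same output."""
    return x.upper()

class NonTrivialMachine:
    """A non-trivial machine: the output depends on an internal state
    that every input changes. Input/output observation alone cannot
    reveal the rule."""
    def __init__(self):
        self.state = 0

    def process(self, x):
        self.state += 1  # hidden state shifts with every input
        return x.upper() if self.state % 2 else x[::-1]

lab = NonTrivialMachine()
print(trivial_machine("noodle"), trivial_machine("noodle"))  # NOODLE NOODLE
print(lab.process("noodle"), lab.process("noodle"))          # NOODLE eldoon
```

The x-th repetition tells the observer nothing about the (x + 1)-th: exactly the predicament of the participant facing the lab.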
Although individual processes are symbolised in the CYTTER.datalab, slowed down in time and rescaled spatially (for example: the CYTTER.datalab as a walk-in circuit board) in order to make the processes easier to understand, residues always remain, which Doeller calls »invisible aspects of the visible«. Why is it that these blind spots, which are important for the translation processes, cannot be eliminated through explanation and walk-throughs? What is actually the basis of the laboratory assistant's conviction that he knows more than the visitor? And where does this advantage end, and where do his own ignorance and blind spots begin? These questions have to do not least with the nature of human understanding. This nature includes the blind spot, a central figure in second-order cybernetics, which shows our fundamental inability to attain complete knowledge.
What we call understanding is always a mixture of what is visible, measurable, tangible in the external world on the one hand and our abstract concepts, ideas and terms that exist only in our thinking on the other. Gaps in our explanatory models - theories - have their roots in the difference between doing and understanding. Being able to do something does not necessarily mean that we have understood it. A prehistoric man could learn very quickly, by copying, the hand movements necessary to make fire. Yet he had no deeper understanding of combustion processes, no theory of fire. In the same way, a digital layman could be taught a series of actions in the computer lab with which he could, for example, transform objects. Anyone who performs the actions with sufficient precision and in the correct order will arrive at the same result, regardless of whether they understand what they are doing or not. Here we also find the origin of algorithms and automation: the unreflective repetition of processes once understood and functionalised. The programmer tries to penetrate a process to such an extent that he can reduce it to a scheme of action. The action scheme, i.e. the algorithm, itself knows nothing about its actions and has no understanding of their context. In laboratories, such action schemata are typically distributed across technology and humans; parts are implemented in technology, others are executed by humans. These interlocking processes only make sense when we build explanatory models of the overall system and its context and do not just string actions together. Theories must not only be able to explain the perceptible and measurable phenomena in the laboratory; together with practical knowledge and experience, they also span the possibility space for action in the laboratory. Essential elements of our theories are abstract concepts, quantitative measures and fictitious images that allow us to plan our actions and predict results.
It is not decisive whether the ideas used are »correct« in an absolute sense. Final certainty is not achievable here anyway; what succeeds are always only improvements. Previous ideas are replaced by new ones that better explain phenomena and with which we can subsequently open up new spaces for action. We know, for example, that the images we have of atoms and molecular structures are auxiliary constructions. Nevertheless, on the basis of these defective ideas, we have been able to develop a variety of new technologies and medical therapies that work well in practice. Our explanatory principles inevitably have gaps, and we invent causal chains, concepts and images, Heinz von Foerster would say »particles«, to bridge these gaps. For him, »particles« are always solutions to problems that we cannot solve otherwise, i.e. inventions that allow us to explain certain problems at all. These elements of our theories and explanatory systems that are not understood and not questioned further cannot be eliminated, only shifted. Our many different explanatory systems are not buildings on eternal foundations, but rather floating platforms that must support themselves. The laboratory specialist's understanding is also a fabric that contains misunderstood elements. He has better explanatory models than the uninformed visitor and can therefore do more; his scope of action expands on the basis of his explanatory models. In order to have the same understanding of the processes as the laboratory specialist, the visitor would have to acquire the knowledge of the laboratory assistant, i.e. implement his imaginative model in his own head. It is clear that this is usually a long learning process that cannot be shortened at will. Every technical term conceals abstractions, complex references and experiences that have often been honed over many years of practice.
Practice labs, in which best practices are applied on an assembly line, are to be distinguished from research labs, which investigate open questions. Research labs follow the principle of objectification described above.
Epistemic things, i.e. explanatory models, theories and knowledge that is only in the process of becoming, exist not only in the natural sciences, but also in technology and the arts. Technology, which is not concerned with knowledge but with poiesis, i.e. making and producing something new, struggles just like science to find terms that are gradually sharpened and eventually become the standard vocabulary of the next generation of engineers. At the beginning, there are often imprecise questions in a still open space of possibilities for action. Signs (formalisms, terms, methods, algorithms) are always used in this context to capture repeatable actions and, conversely, to enable them. Vague ideas and fragile processes must first be stabilised in the laboratory. Only in this way do best practices slowly emerge, i.e. the entire network of concepts, methods, procedures, devices, theories and descriptions solidifies. Epistemic things, however, exist just as much in art. In this sense, the artist's studio has always been a laboratory. Aesthetic phenomena, however, can be stabilised even without exact scientific causal knowledge. Even if the artistic tools and apparatuses are not necessarily designed to deliver precise measurements or to serve the production of scientific knowledge, but rather the production of aesthetic artefacts, here too it is a matter of stabilising and elaborating still imprecise ideas, hunches and unstable processes. An individual artistic style, where one forms, is always the result of an intensive engagement with the artistic material. At the moment, the concept of artistic research is opening up the methods and objectives of certain artistic practices in the direction of science. Traditionally, technology and science form an extremely successful team. Technical development is dependent on scientific knowledge; conversely, scientific research without technology is no longer conceivable today.
While the exchange between scientific research and the technical sciences - i.e. between knowing and doing - is firmly established and they both promote and challenge each other, artistic research is still largely excluded from this. In science and technology, artistic research is often still viewed sceptically, although art and technology have at least as strong a connection through poiesis as science and technology do through the goal of knowledge. Thus, at the moment, artistic research is above all an open field of experimentation, borne by enthusiasm and the conviction of the protagonists that engaging with the processes of research will pay off aesthetically, i.e. above all for art.
Digital data and algorithms not only fundamentally intervene in the social practices of technoscience, but are now shaping social action as a whole. Not least because of this, their methods and contexts are also becoming interesting for art. Computers are first and foremost semiotic machines, i.e. technology for processing information that is in turn encoded in signs. These signs serve above all to enable repetitions of actions. The fact that events can be repeated at all is not only the prerequisite for science, but also for every form of prediction and planning. Without repetition, orientation and goal-oriented human action are not possible. Every method, as well as every algorithm, tries to capture the essence of a repeating process without it having to be exactly the same every time. What is repeated can be captured in very abstract terms and does not have to show itself on the surface of the processes. One of the first human techniques, for example, was making fire. Through long periods of trial and error, of repeated failure and success, successful methods evolved that allow anyone who knows the sequence and execution of the action steps, and applies them, to make fire. The entire context, such as place, occasion, performer, even fuel and kindling, can, within limits, be different each time, but in the end a fire still burns. Our ideas of what fire is have changed fundamentally over time and become more and more abstract. A formerly mysterious process of mystical power is now explained as a chemical oxidation reaction in which abstractly defined concepts such as oxygen, fuel and heat must be brought into the right relationship. The multitude of ways to make a fire is held together by a unified theory that presupposes an understanding of abstract concepts and can explain why and under what conditions the different methods work.
Here, then, we find the essence of the digital: in the naked skeletons of methods and theories. Algorithms are sequences of signs, and they also realise nothing more than pure manipulations of signs. They are maximally abstracted forms of life-world processes that only gain meaning through their context, i.e. their embedding in a real environment. This immediately leads to a central question: how can sign processes that are empty of content maintain a connection to the world at all? »In many ways« is the first, unsatisfactory answer. The process begins with what is called digitisation, i.e. the transfer of sections of reality into the world of signs. Digital data are always decontextualised and reduced images of real conditions. If the thing to be digitised is not itself a sign (a number, a letter, a word, etc.), a reduction inevitably takes place. A non-reductive digitisation would have to contain all the information necessary to produce an exact copy of the noodle at the atomic level. Since this only succeeds in science fiction, the common methods of digitisation are always reductions. For example, a noodle, which has colour, shape, taste, ingredients, etc., is reduced to its outer form. And even this can only be sampled incompletely, whereby differences from the original and artefacts come into play. Data is always decontextualised because although signs can stand for something in the external world, they do not encode their context. It is not possible to read from the digital data who made the noodle, who sent it in, whether it had any meaning for me, and so on. Contextual information can be encoded in other character strings, but these can neither be complete nor provide information about their own context of origin. If one wanted to encode the contextual information, the problem would simply reappear at the next level, in an unending process of generations of signs.
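The reduction described above can be sketched in a few lines of Python. The helix standing in for the noodle's outer form, the number of sample points and the grid resolution are all assumptions of this illustration, not of the CYTTER lab; colour, taste and provenance simply never enter the data:

```python
import math

def fusilli_surface(t):
    """An idealised continuous helix standing in for the noodle's shape;
    t runs from 0 to 1 along its length."""
    return (math.cos(6 * t), math.sin(6 * t), t)

def digitise(n_samples=32, grid=0.1):
    """Reduce the continuous form to finitely many grid-snapped points.
    Both the sample count and the grid are choices of the digitisation
    process, not properties of the noodle."""
    points = []
    for i in range(n_samples):
        t = i / (n_samples - 1)
        x, y, z = fusilli_surface(t)
        points.append((round(x / grid) * grid,
                       round(y / grid) * grid,
                       round(z / grid) * grid))
    return points

data = digitise()
print(len(data), "points; everything else about the noodle is gone")
```

Whatever falls between two sample points, or below the grid resolution, exists for the digital surrogate as little as the noodle's taste does.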
At the other end of digital processes, data is used to produce analogue objects or to control processes in the outside world. Here the reverse process takes place, a recontextualisation. The naked data, the digital surrogates of objects or processes, are phenomenologically charged again and embedded in a real-world context. The object becomes visible again, palpable, smellable etc. and now enters into relationships with other objects once more. Not only does a new object emerge, but also a completely new context. I had taken the fusilli noodle out of a package whose remaining noodles have since ended up in the cooking pot. The exchange object, on the other hand, which has no proper name apart from the identification number PHYSO_011 assigned to it in the lab, is on the shelf, where it is trying to relate to the books. Yesterday it was spontaneously used as a coaster for the teapot; the noodle would hardly have been any good for that. Between the digitisation at the entrance and the concretisation at the exit, digital data can undergo any number of edits, i.e. further sign manipulations. These manipulations follow the laws of algorithms, but their meaning can only be set from the outside. And, of course, input and output can be short-circuited so that an object and its digital representation alternately undergo a series of successive transformations. In the case of purely digital transformations, we speak of a digital chain. For example, data can first represent the construction drawing of a house, from which data on static requirements can be calculated more or less automatically in a second step, i.e. in a further sign process, and these in turn can form the basis of detailed material and construction plans, from which control data for the production machines can then be calculated automatically.
At the beginning there is an idea, at the end there are the individual parts for the finished house, and in between there is a sequence of symbol-controlled transformations.
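The digital chain just described can be sketched as a composition of pure sign manipulations, each step feeding the next. The stages, data formats and numbers here are invented for illustration; only the principle, a pipeline of transformations whose meaning lies outside the data, is taken from the text:

```python
def design_to_statics(design):
    """Derive (toy) static requirements from a construction drawing."""
    return {"load_kN": design["floors"] * 50}

def statics_to_materials(statics):
    """Derive a (toy) material plan from the static requirements."""
    return {"beams": statics["load_kN"] // 10}

def materials_to_machine_code(materials):
    """Derive (toy) control data for the production machines."""
    return [f"CUT_BEAM {i}" for i in range(materials["beams"])]

def digital_chain(design, steps):
    """Run the data through each transformation in turn. The ordering
    carries the meaning; each step knows nothing of the whole."""
    data = design
    for step in steps:
        data = step(data)
    return data

commands = digital_chain({"floors": 2},
                         [design_to_statics,
                          statics_to_materials,
                          materials_to_machine_code])
print(commands[:2])  # the first machine instructions
```

No step in the chain knows that it is building a house; that meaning exists only in the context into which the final control data is fed.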
Let us return to the question posed earlier about how the digital signs in the machine maintain their connection to the world. It should be clear by now that digital data and algorithms are always part of a network of relationships that give them their meaning. The same number can be an account balance, a distance, a temperature, a time and much more. And even if this is fixed, i.e. the number stands for an amount of money, for example, even then it only acquires meaning through its further use and can be judged completely differently depending on the context. The network of relationships that generates the meaning of the signs is, generally speaking, distributed across people, digital machines and the environments in which the respective processes are embedded. Such networks of meaning can be realised in very different ways. At present, it can be observed that ever larger areas of meaning generation are being delegated to algorithms and thus removed from humans. But it makes a difference whether the account balance of a customer applying for a loan is assessed by a bank employee or by an algorithm. In the same way, it makes a difference whether the decision on the likelihood of a criminal's recidivism is made by algorithms on the basis of statistical evaluations of thousands of files or by a judge on the basis of the files, coupled with knowledge of human nature and experience.
Perspectives and side effects
We humans can obviously only perceive and judge situations and events from certain perspectives. Once a perspective has been chosen, conclusions are inevitably drawn that are consistent with that perspective, i.e. rationally justifiable and purposeful for concrete situations. The choice of a perspective necessarily entails that all aspects that are not apparent from the chosen perspective are neglected. Other perspectives therefore lead to other results. We can adopt different perspectives one after the other and also combine them in a further step, but adopting one perspective always means reducing, abstracting and decontextualising. We call side effects everything that is not part of a perspective of observation: that which shows itself unintentionally and disturbs the beautiful model or the flawless world view. Digital processes are machine embodiments of this principle. Algorithmic processes that are executed embedded in a certain context always realise a certain perspective. If the context changes or situations arise that were not foreseen during development, side effects become apparent. The entire history of technology could also be told as a history of side effects, as a history of consequences that its developers did not have in mind. Algorithms necessarily only care about tiny sections of the world. For them, only that exists for which data from the environment or the network is made available and/or what is implicitly realised in the routines of the software. Laboratories also implement perspectives. The network of apparatus, procedures, the knowledge of the employees, the orders of the customers, etc. in a medical laboratory realises its own web of meaning that must seem mysterious to the outside layperson. And even the complex predictive models for the development of corona infections that currently accompany us daily are analyses from certain angles. Observations of corona events from other perspectives lead to different evaluations.
That is why we currently see completely different proposals for action for the same initial situation, depending on whether they come from virologists, economists, educationalists or sociologists. Perspectives, seen in this way, always manifest themselves in professions.
What can this mean for the »non-profession« of the artist and artistic research? Artistic research, like art in general, is not committed to one perspective; it can keep the play of perspectives and the handling of knowledge and its processes of creation open, and thematise and question them with its means. Undoubtedly, this will produce different results than those presented by science and technology. For example, artefacts, disturbances and side effects, i.e. what engineers and scientists consider undesirable and try to prevent, can be elevated to a guiding principle in an artistic process. All human practices and all areas of social action, including technology and science, contain open, as yet unused possibilities for action and thus aesthetic potential. The potential of these voids can be revealed by questioning, deconstructing and expanding established approaches.