Noisy Media Theory
I am currently writing an article with my colleague Johan Jarlbrink, in which we try to make (at least) some sense of a small selection of the ten million newspaper pages that have so far been digitised by the National Library of Sweden. We are working with the resulting XML files from Aftonbladet between 1830 and 1862, and these contain extreme amounts of noise: millions of words misinterpreted by the OCR software, and millions of texts chopped off at random by the auto-segmentation tool. Noisy media theory hence seems applicable – and below is a draft of the more theoretical parts of our forthcoming article:
In classic information theory, the signal is usually defined as useful information and noise as a disturbance. Noise has generally been understood as distorting the signal, making it unintelligible or impossible to reconstruct. Eliminating noise was, in short, paramount within information theory—but this also meant that noise per se became an analytical category. Information theory was, in other words, always interested in noise. As is well known, Claude Shannon’s article, “A Mathematical Theory of Communication” (1948)—which envisioned a new way to enhance the general theory of communication—basically concentrated on noise. Already in his first paragraph, Shannon stated that he wanted “to include a number of new factors, in particular the effect of noise in the channel”, where the fundamental problem of communication, to his mind, was that of “reproducing at one point either exactly or approximately a message selected at another point”. As a consequence, Shannon’s article featured frequent discussions of both “noiseless systems” and channels “with noise”. As is evident, contemporary digitisation activities display a number of resemblances and affinities to these remarks and arguments.
As has often been remarked, Shannon was not interested in messages with “meaning”. All semantic aspects of communication were deemed “irrelevant to the engineering problem”—but noise was not. In part two of his article, Shannon, for example, wrote about a “discrete channel with noise”, where the signal was “perturbed by noise during transmission”. This meant that the received signal was “not necessarily the same as that sent out by the transmitter.” Furthermore, if a channel was too noisy it was not “in general possible to reconstruct the original message or the transmitted signal with certainty”. There were, however, “ways of transmitting the information which are optimal in combating noise” (Shannon 1948).
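Shannon’s remarks can be made concrete. The sketch below (a toy example in Python; the message length, error rate and the simple repetition code are illustrative choices of ours, not anything taken from Shannon’s article) simulates a channel in which the received signal is “not necessarily the same as that sent out by the transmitter”, and shows how even elementary redundancy lets a receiver combat the noise:

```python
import random

def noisy_channel(bits, flip_prob, rng):
    """Binary symmetric channel: each bit is flipped with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def encode_repetition(bits, n=3):
    """Repeat each bit n times -- redundancy as a crude way of combating noise."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(bits, n=3):
    """Majority vote over each group of n received bits."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(1000)]

# Without redundancy, roughly 10% of the bits arrive corrupted.
received = noisy_channel(message, 0.1, rng)
raw_errors = sum(m != r for m, r in zip(message, received))

# With a three-fold repetition code, most single flips are corrected.
decoded = decode_repetition(noisy_channel(encode_repetition(message), 0.1, rng))
coded_errors = sum(m != d for m, d in zip(message, decoded))

print(raw_errors, coded_errors)  # coded_errors should be far smaller
```

A repetition code is far from the optimal coding Shannon had in mind, but it illustrates the principle: redundancy trades channel capacity for reliability.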
Classical information theory became popular when Shannon and his co-author, Warren Weaver, published their book, The Mathematical Theory of Communication, in 1949. The same year, Weaver published an article analysing the ways in which a “human communication theory might be developed out of Shannon’s mathematical theorems” (Rogers & Valente 1993, 39). In it, Weaver stated that “information must not be confused with meaning”, but more importantly (for our article), he wrote a longer passage on “the general characteristics of noise”. How does noise “affect the accuracy of the message finally received at the destination? How can one minimise the undesirable effects of noise, and to what extent can they be eliminated?”, Weaver asked. If noise was introduced into a system—like a digitisation process—then the “received message contains certain distortions, certain errors, certain extraneous material, that would certainly lead one to say that the received message exhibits, because of the effects of the noise, an increased uncertainty.” Yet, as Weaver paradoxically noted, if uncertainty is increased, then information is also increased—”as though the noise were beneficial!” Weaver, however, described this type of uncertainty, arising from errors or “because of the influence of noise”, as an “undesirable uncertainty” (Weaver 1949).
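Weaver’s paradox, that noise formally increases information by increasing uncertainty, can be illustrated in a few lines of Python (the source bias and flip probability below are arbitrary values chosen for the example):

```python
from math import log2

def entropy(p):
    """Shannon entropy (in bits) of a binary source with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# A heavily biased source: fairly predictable, hence low uncertainty.
p_source = 0.9
h_source = entropy(p_source)

# Passing it through a channel that flips each symbol with probability 0.1
# pushes the output distribution towards 1/2, i.e. towards maximal entropy.
flip = 0.1
p_received = p_source * (1 - flip) + (1 - p_source) * flip
h_received = entropy(p_received)

# The received signal carries *more* entropy than the source --
# "as though the noise were beneficial!" -- but the surplus is
# Weaver's "undesirable uncertainty", not useful information.
print(round(h_source, 3), round(h_received, 3))
```

The numbers make the paradox tangible: the noisy output is measurably “richer” in the formal sense, while being poorer in every semantic one.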
Within classical information theory, noise could in other words also be described as beneficial. In general, however, noise was a dysfunctional factor; the task was combating noise. Consequently, Shannon and Weaver’s mechanistic model of communication mostly dealt with the signal-to-noise ratio within various technical systems. Obviously, their model was indifferent to the nature of the medium. It has, however, since been argued that the arrival of a new medium always changes the relation (or ratio) between noise and information. Digitisation processes are no exception. Within German media theory, noise has for example often been used as a productive analytical category. Friedrich Kittler’s recurrent claim that technical media record not only meaning, but always also noise, derives (to some extent) from Shannon. It should hence not come as a surprise that Kittler was the one who translated and introduced Shannon in Germany—with a book ingeniously entitled Ein/Aus (Kittler 2000).
Then again, critiquing the ways classical information theory morphed into cultural studies and content based readings within media studies, Wolfgang Ernst has polemically asked if, indeed, it makes sense at all “for media studies to apply the information theory of Claude Shannon and Warren Weaver to the analysis of TV series?” From a distinct media archaeological perspective, Ernst has claimed that the only message of television is its signal—“no semantics” (Ernst 2013, 103, 105). The archaeology of media searches the “depths of hardware for the laws of what can become a program”, Ernst has furthermore stated. In doing so, media archaeology concentrates on the “non-discursive elements” of the past: “not on speakers but rather on the agency of the machine” (Ernst 2013, 45). What looks like “hardware-fetishism”, Ernst once stubbornly postulated in his inaugural lecture in 2003, is only “media archaeological concreteness” (Ernst 2003).
Emphasis on media specificity is hence always to be found within this German media theoretical tradition, and perhaps foremost within the more digitally inclined media archaeology, which often tries to look under the hood of contemporary technology. In this sense, media archaeology is part of a gradual shift towards media specific readings of the computational base and the mathematical structures underlying actual hardware and software—a transition with analogies to Shannon, and one that resonates with an increased interest in technically rigorous ways of understanding both software and the operations of material technologies. Analysing accidents, errors and deviations has, for example, been one strategy for approaching systems and technologies that are hard to grasp as long as they function properly. As Jussi Parikka has written (in his English introduction to Ernst’s writings), “more than once, Ernst asks the question ‘Message or noise?’”—a question that, according to Parikka, is “about finding what in the semantically noisy is actually still analytically useful when investigated with the cold gaze of media archaeology” (Parikka 2013). Another German media theorist, Sibylle Krämer, has even stated that analysis under the hood is the only way to make the functions of media technologies visible: “only noise, dysfunction and disturbance make the medium itself noticeable” (Krämer 2015, 31).
One does not, however, have to accept these media theoreticians’ definitive claims to make noise beneficial in an analysis of the digitisation technologies that transform printed texts into digital files. Misinterpretations produced by the OCR software make explicit which graphical elements the software interprets as important and ‘meaningful’, and errors in the auto-segmentation show what the tool is programmed to recognise as a ‘text’. No more, no less. Our perspectives and analyses in the following are thus more profane and empirical—yet still informed by the noisy media theories described above.
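As a rough illustration of how such OCR noise might be quantified (the word list and the sample line below are invented for this sketch; a real analysis would match against a proper historical Swedish lexicon, not this tiny set), one can estimate the “signal” share of an OCR:ed text as the fraction of tokens found in a dictionary:

```python
import re

# A toy lexicon standing in for a real Swedish wordlist -- purely illustrative.
LEXICON = {"tidning", "stockholm", "och", "den", "som", "att", "det"}

def signal_share(ocr_text, lexicon):
    """Crude estimate: tokens found in the lexicon count as 'signal';
    the rest (OCR misreadings, segmentation debris) count as 'noise'."""
    tokens = re.findall(r"[a-zåäö]+", ocr_text.lower())
    if not tokens:
        return 0.0
    hits = sum(t in lexicon for t in tokens)
    return hits / len(tokens)

# A made-up line mimicking typical OCR output: some words survive intact,
# others ("tidnlng", "ntkommer") are garbled beyond recognition.
sample = "Den tidnlng som ntkommer i Stockholm och ..."
print(signal_share(sample, LEXICON))
```

Such a ratio says nothing about meaning, of course; it merely measures, in Shannon and Weaver’s spirit, how much of the transmitted page the digitisation channel has perturbed.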