The Routledge Companion to Cultural History

I am currently writing an article for the forthcoming Routledge Companion to Cultural History, edited by Alessandro Arcangeli, Jörg Rogge and Hannu Salmi, to be published in 2018. My chapter will revolve around “media and mediatisation”, predominantly during the 19th century. I have just started working on the chapter – and the introduction currently reads as follows:

“There are no realities any more, there is only apparatus”, lamented the Austrian cultural historian Egon Friedell in the early 1930s. Writing his cultural history during the interwar period, Friedell felt that media modernity had finally caught up with him, completing the disenchantment of all previous ages—the Entzauberung der Welt famously diagnosed by Max Weber—whereby traditional society and culture were replaced by secularisation, cultural rationalisation and modernised bureaucracy. For Friedell, however, even reality itself seemed to disintegrate into a mediated dimness, with film and radio as the main culprits in blurring the cultural hierarchies between high and low. “As long as the cinema was dumb, it had other than film possibilities: namely, spiritual ones. But the sound-film has unmasked it, and the fact is patent to all eyes and ears that we are dealing with a brutish dead machine. The bioscope kills the human gesture only, but the sound-film the human voice as well. Radio does the same. At the same time it frees us from the obligation to concentrate, and it is now possible to enjoy Mozart and sauerkraut, the Sunday sermon and bridge.”

This dreadful, mediated “world of automata” appeared in the epilogue—aptly entitled “The Collapse of Reality”—at the very end of Friedell’s majestic, three-volume Cultural History of the Modern Age (1927–31), a publication which became a huge commercial success, especially in the German-speaking world, but which was also translated into numerous other languages. Spanning some 600 years, “from the Black Death to the World War”, and with its focus put firmly on ‘great men’ and their achievements in art, science and culture, Friedell’s account of cultural history has been described as personal and even anecdotal. Yet his account is also playful and witty—a present-day blogger has called the book “obscenely readable”. With his somewhat odd background (for a cultural historian) as a cabaret performer and actor, Friedell simply knew how to please an audience.

However, given his personal experience of ‘low culture’ and the ways in which various forms of mass media increasingly seemed to alter reality at the time of his writing around 1930, it remains surprising how murky Friedell’s account of modern media appeared in his cultural historical overview—that is, when media were mentioned at all. Friedell did state that the “high-speed printing press” was the most important machine introduced during the early 19th century, he did devote a few sentences to “illustrated journals”, and his account of the 1840s firmly described the “characteristic inventions of the age” as “telegraphy and photography”. But apart from these brief notations, Friedell was not particularly interested in accounts, reports or descriptions of media in cultural history, and consequently did not write about them (until his epilogue). In terms of media historiography this remains somewhat peculiar, since Friedell had previously published on the ways in which perception and representation around 1900 had been transformed by the medium of film. In 1912, for example, he had stated that films are “short, quick, at the same time coded, and [the medium] does not stop for anything. … This is quite fitting for our time, which is a time of extracts.” Taken from his essay “Prolog vor dem Film”, these remarks (and others) in many ways foreshadow the cultural critic Walter Benjamin’s canonised account of the artwork in the age of mechanical reproduction (written during the 1930s). Yet where Benjamin took a positive stance towards mass media, especially film, Friedell’s characterisation was far gloomier. Still, given the accounts in the epilogue of Cultural History of the Modern Age, Friedell did seem to realise—and to some extent even anticipate—mass media’s increased importance. His final remarks were contemporary, but they could also have been historicised, had he paid more attention to the cultural history of media.

Taking as its point of departure Friedell’s paradoxical combination of acknowledging a “world of automata” while showing little interest in situating media within cultural history, this chapter will provide an overview of the cultural impact of different media forms and technologies from the early 19th century until the advent of sound film and radio (that is, approximately the time when Friedell was completing his cultural history). Taking my cue from novel ways of doing cultural historical media research, and equipped with a media archaeological perspective—which seeks to avoid telling mono-media histories of technologies from past to present—I will pay attention to new media as well as residual media formats (such as the panorama and the stereoscope), while trying to pin down how these were publicly perceived, usually at the intersection between commercial attraction and instructive entertainment. The chapter will also discuss different historiographical ways of understanding the cultural history of media, for example theories of increased mediatisation. In general, the chapter will focus on broader media systems—rather than particular media forms such as the daily press—and pay special attention to hybrid forms of media culture and various forms of intermediality, and to how these altered over a longer period of time. If the technical reproduction of texts and images, sounds and moving images, via fast printing presses, photography and phonographic recordings as well as later cinematography, was almost unimaginable in the early 1800s, a hundred years later it was all “treated as a matter of course”. How did this happen, what changes occurred, and what consequences did they have for the ways in which ordinary people perceived both themselves and their world?

Seg start för samskrivning (A slow start for collaborative writing)

In the latest issue of Språktidningen (June 2017), I have published an article on word processing and on writing together, “Seg start för samskrivning”. The standfirst hints at what it is all about: “Our word processors look almost the same today as they did in the 1980s. And it has proved hard to get writers to use the new possibilities for digital text collaboration. Media professor Pelle Snickars explains why.” A PDF of the article can also be downloaded here: snickars_spraktidningen.

Spotify Teardown – First Draft Manuscript Delivered to MIT Press

Together with my colleagues Patrick Vonderau, Rasmus Fleischer, Maria Eriksson and Anna Johansson, I have put together a first draft manuscript from our Spotify project – just sent off to the publisher MIT Press: Spotify Teardown: Inside the Black Box of Streaming Music. It is a substantial manuscript – some 80,000 words. So what is it all about? Well, first of all the book is co-written by us five scholars, and has subsequently been edited, commented on and revised by way of a ‘Google Docs approach’, in a collaborative and transparent fashion. Furthermore, the book contains two kinds of texts: original research chapters and what we call ‘interventions’. By interventions, we refer to shorter texts that in one way or another interfere with either Spotify and/or established research methods. The interventions are placed in between the main chapters. They can be read independently, but they are also thematically linked to discussions in previous chapters. Our four chapters each take up a simple question related to Spotify as a digital media company: (1) Where is Spotify? (2) When do files become music? (3) How is music attended to? (4) What is the value of free? Naturally, we hope that MIT Press will like our book – and we look forward to the finished product (after a number of rounds of revisions). Hopefully, the book will be published in early 2018.

Lecture on Interfaces, Forensics & Visual Methods at Stockholm University

Today I gave a lecture at Stockholm University for the course “Visual Sources”. According to the course description, “What is a visual source? Concepts like picture, image, medium, visual and visuality will be discussed [during the course]” – and my lecture focused on digital images in general, and on lossy compression and ‘poor images’ in particular. Slides can be downloaded here: snickars_SU_visual_sources_2017.

On scholarly use of social media in Nordicom Information

In the latest issue of Nordicom Information, Maarit Jaakkola has drawn together a number of Nordic media scholars who use social media in various ways. “More and more scholars are using Twitter, Facebook, Instagram and other social networking sites to communicate their research to the larger audiences but also to connect with each other. Nordicom Information asked some academic users in the Nordic countries about their strategies and experiences in the most popular platforms of social media.” I am one of the academics who provide answers. “You seem to have an ambivalent relationship with being online?”, Jaakkola for example asked me: “Yes, I don’t see myself as an engaged researcher in social media – I occasionally post, link or retweet. To be honest, it’s an activity that doesn’t take up my time in any considerable way. I am simply present in social media – restricted to Twitter and my personal site – and I find it important and will continue to do so.” Her article can be downloaded here: jaakkola_bloggers_2017.

Data-driven humanities research – workshop at KTH, 15 September

Together with Professor Patrik Svensson (currently a visiting professor at UCLA), I am organising a workshop at KTH on data-driven humanities research on Friday 15 September. From the programme:

“In recent years, digitally driven humanities research has shown the potential to interweave critical and technological perspectives, where it is neither a matter of merely handling data and building tools, nor of only studying digital phenomena from a distance. In this context, digital humanities can be seen as a humanistic project driven by a core of (more or less self-declared) digital humanists, but in which researchers from other humanities disciplines, various border fields, the cultural heritage institutions and data-driven disciplines outside the humanities are also absolutely central. […] We hope it will be a thought-provoking and productive workshop marked by engagement, curiosity and agility, as well as intellectual and institutional momentum. That humanities research, education and practice are being developed, challenged and made newly relevant by the digital in a broad sense is all but a given. This concerns materials, tools, research infrastructure, perspectives, research questions, modes of expression and objects of study. Here the field of digital humanities plays an important role, while digitally related research and experimental practice have increasingly become a concern for most humanities subjects. Digitised material at cultural heritage institutions is also central in this context. The digital society is an important national issue, and the government has, among other things, prioritised data-driven research in its research bill.”
The event is open to all (first come, first served) – the programme, a description and a registration link can be found in these two PDFs: Program workshop datadriven humanistisk forskning 15 sept 2017 and Inbjudan datadriven humanistisk forskning KTH 15 sept 2017.

Interviewed on transmedia storytelling in Resumé Insikt

A while ago I did an interview with the journalist Julia Lundin at Resumé Insikt about transmedia storytelling (a topic I wrote a book about many years ago, Berättande i olika medier). The piece is now available online, focusing on the Norwegian hit series Skam. “Transmedia storytelling is a story too big for a single medium – one that gains an extended life on other platforms. The concept is by no means new. Star Wars, The Matrix and Doctor Who are all examples of transmedia. Media professor Pelle Snickars goes as far back as 1678 and John Bunyan’s classic The Pilgrim’s Progress, a work with a fascinating media history.” Resumé Insikt is usually behind a paywall, but Lundin’s piece is openly available: (Skam)lig succé.

On Turing and Bots

In mid-May 1951, Alan Turing gave one of his few talks on the BBC’s Third Programme. The recorded lecture was entitled “Can Digital Computers Think?”. By the time of the broadcast, a year had passed since the publication of Turing’s now famous Mind article, “Computing Machinery and Intelligence”, with its thought-provoking imitation game (Turing 1950). The BBC programme—stored on acetate phonograph discs prior to transmission—was approximately 20 minutes long, and basically followed the arguments Turing had proposed in his earlier article. Computers of his day, in short, could not really think and could therefore not be called brains, he argued. But digital computers had the potential to think, and hence to be regarded as brains in the future. “I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine”, Turing said. He was imagining something like “a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated” (Turing 1951).

The irony is that Alan Turing’s own voice is lost to history; there are no known preserved recordings of him. The acetate phonograph discs from 1951 are all gone. The written manuscript of his BBC lecture, however, can be found in the collection of Turing papers held at King’s College, Cambridge—partly available online (Turing Digital Archive 2016). The BBC also made a broadcast transcript, taken from the recording shortly after the programme was aired. As Alan Jones has made clear, Turing’s radio lecture was part of a series the BBC had commissioned under the title “Automatic Calculating Machines”. In five broadcasts, an equal number of British pioneers of computing spoke about their work. The fact that these talks were given by the engineers themselves, rather than by journalists or commentators, was “typical of the approach used on the Third Programme”. Naturally, it is also “what makes them particularly interesting as historical sources” (Jones 2004). Then again, Jones was only able to examine the surviving texts of these broadcasts. Consequently, there is no way to scrutinise or explore Turing’s oral way of presenting his arguments. His intonation, pitch, modulation and so on are all lost, and we cannot know how Turing actually spoke. Perhaps he simply presented and talked about his ideas in an ordinary way. Yet, according to the renowned Turing biographer Andrew Hodges, the producer at the BBC had his doubts about Turing’s “talents as a media star”—and particularly so regarding his “hesitant voice” (Alan Turing Internet Scrapbook 2016). The point to be made is that audiovisual sources from the past often tend to be treated as textual accounts. By and large, audiovisual sources have also been used by historians to a far lesser degree than classical (textual) documents. Sometimes—as in the case of Turing—archival neglect is the reason, but more often humanistic research traditions stipulate what kind of source material to use. In many ways, however, the same goes for the digital humanities.

In theoretical physics, the concept of fine-tuning refers to circumstances when the parameters of a theory (or model) need to be adjusted in order to agree with observations. In essence, a substantial part of our ongoing Spotify project—where we are repeatedly working with bots—has been about fine-tuning the classical Turing test, at once highly influential and widely criticised. By focusing on the deceptive qualities of technology—particularly regarding the difference between man and machine—a number of the notions proposed in Turing’s essay “Computing Machinery and Intelligence” have never really lost their relevance. The imitation game, Turing stated in his 1950 essay, “is played with three people, a man (A), a woman (B), and an interrogator (C)”. The object of the game was for the interrogator to determine “which of the other two is the man and which is the woman”. Already at the beginning of his essay, however, Turing asked what would happen if “a machine takes the part of A in this game?” As N. Katherine Hayles famously put it, gender hence appeared at the “primal scene” of humans meeting their potential evolutionary successors, the machines (Hayles 1999). Still, following her interpretation of Turing, the ‘gender’, ‘human’ and ‘machine’ examples were basically meant to prove the same thing. Aware that one of the two participants (separated from one another) was a machine, the human evaluator would simply judge a natural language conversation (limited to a text-only channel) between a human and a machine designed to generate human-like responses. If the evaluator could not reliably tell the machine from the human, the machine was said to have passed the test. It might then be termed artificially intelligent.
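To make this pass criterion concrete, here is a minimal sketch in Python. It is my own toy illustration rather than anything from Turing's paper: the two parties, their canned replies and the interrogator's single heuristic are all invented for the example.

```python
# Toy, text-only imitation game: the interrogator gets typed replies
# from two hidden parties and must guess which one is the machine.
import random

def human(prompt: str) -> str:
    return random.choice(["Hmm, probably.", "Of course, all the time!"])

def machine(prompt: str) -> str:
    # An imperfect imitator: a telltale phrasing sometimes slips through.
    return random.choice(["Hmm, probably.", "ERROR: cannot comply."])

def interrogate(rounds: int = 10_000) -> float:
    correct = 0
    for _ in range(rounds):
        parties = [human, machine]
        random.shuffle(parties)  # hidden assignment: who is A, who is B?
        replies = [p("Will you ever make a mistake?") for p in parties]
        # Flag the reply that sounds least human; otherwise guess blindly.
        flagged = [i for i, r in enumerate(replies) if r.startswith("ERROR")]
        guess = flagged[0] if flagged else random.randrange(2)
        correct += parties[guess] is machine
    return correct / rounds

# Accuracy near 0.5 (pure chance) would mean the machine passes the
# test; here the giveaway pushes it to roughly 0.75, so it fails.
print(f"interrogator accuracy: {interrogate():.2f}")
```

The set-up mirrors Turing's "typewritten" channel: everything irrelevant (voice, appearance) is stripped away, and passing simply means driving the interrogator's accuracy down to chance.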

Towards the end of Turing’s article, the initial question, “Can machines think?”, was consequently replaced by another: “Are there imaginable digital computers which would do well in the imitation game?” (Turing 1950). Naturally, Turing thought so—and some 15 years later, the computer scientist Joseph Weizenbaum programmed what is often regarded as the first bot, ELIZA. She (the bot) had two distinguishing features that still characterise bots: intended functions built in by the programmer, and responses generated algorithmically from user input. In his article “ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine”, Weizenbaum described how the program emulated a psychotherapist when responding to written statements and questions posed by a user; it appeared capable of understanding what was said to it and of responding intelligently, but in truth it simply followed a pattern matching routine that relied on recognising a few keywords in each sentence (Weizenbaum 1966). ELIZA was hence a mock psychotherapist—and online it is still possible to interact with her today. In 2005, Norbert Landsteiner reconstructed ELIZA in the implementation elizabot.js: “Is something troubling you?”, the bot always starts by asking. Later, Landsteiner added graphics, real-time text and even speech integration: “E.L.I.Z.A. talking”—complete with both American and British intonation (Landsteiner 2013).
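Weizenbaum's program (written in MAD-SLIP) worked from a script of ranked keywords and reassembly rules; the Python fragment below is only a loose, minimal sketch of that pattern matching principle, with rules invented for illustration. A real ELIZA also reflects pronouns ("I" becomes "you"), ranks keywords and remembers earlier input, none of which is modelled here.

```python
# Minimal ELIZA-style responder: scan the input for a known keyword
# pattern and transform it with a canned template. The three rules
# are illustrative, not Weizenbaum's original DOCTOR script.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
]

def eliza_respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    # No keyword matched: a content-free prompt keeps the illusion
    # of understanding alive.
    return "Please go on."

print(eliza_respond("I am troubled by machines"))
# -> How long have you been troubled by machines?
```

The trick is exactly the one Weizenbaum described: a handful of keyword rules suffice to appear to understand, as long as the conversational genre (the non-directive therapist) excuses vague answers.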

The element of artifice programmed into ELIZA again testifies to the deceptive qualities of technology (which the Turing test underlined). In fact, ever since, fraudulence (in one form or another) seems to have been a distinctive part of an evolving bot culture constantly capitalising on advances in artificial intelligence. When Weizenbaum decided to name his bot ELIZA, he did so with explicit and ingenious reference to the flower girl Eliza Doolittle in George Bernard Shaw’s play Pygmalion (1913), as well as—one might assume—to the more recent Hollywood musical adaptation, My Fair Lady (1964). The ancient Pygmalion myth—in Ovid’s Metamorphoses, Pygmalion was the sculptor who fell in love with his own statue—has often been artistically deployed to examine humans’ ability to ‘breathe life’ into, for example, a man-made object. In other words, the myth belongs to the domain of artificial humanity: a copy of something natural, with Shaw’s play acting as an ironic comment on class society. His play revolved around a bet in which a phonetics professor claimed that he could train a flower girl (Eliza Doolittle) to pass for a duchess at a garden party, by teaching her impeccable speech and cultivated behaviour without real understanding (much like a bot). Or, as Weizenbaum declared: “Like the Eliza of Pygmalion fame, [ELIZA] can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright” (Weizenbaum 1966).

Bots appear to be human—which is why they are interesting. Bots give the impression of being able to act as a normal user and/or person. If they could (almost) pass for humans half a century ago (like ELIZA), the capabilities of such intelligent machines (or rather software robots) have naturally increased since. Today, the most sophisticated bots react instantly to public information, like the advanced algorithmic bots on the stock option market. They seem almost like disembodied cyborgs, part human and part automaton. Nevertheless, bot culture, artificial intelligence and ultimately the Turing test have naturally also been criticised. The latter has, for example, been deemed not particularly useful for determining whether a machine can think—most famously so by John Searle in his 1980 article, “Minds, Brains, and Programs”. In his “thought experiment”, Searle imagined himself “locked in a room and given a large batch of Chinese writing”; such text was to him just “so many meaningless squiggles”. As is well known, Searle’s experiment also involved him being given “a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules”, he stated. Via these rules, but without understanding a word (or sign) of Chinese, it would theoretically be possible, Searle argued, to appear fluent in Chinese. Searle, in short, thought of himself as a bot. In fact, at the beginning of his article he made an explicit reference to ELIZA, stating that it could pass the Turing test simply by manipulating symbols of which ‘she’ had no understanding. “My desk adding machine has calculating capacities, but no intentionality”, he summed up. Searle hence tried to show that a computational language system could have “input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed”. Famously, he concluded: “The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic” (Searle 1980).
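Searle's scenario translates almost directly into code. The sketch below is my own toy illustration with an invented two-entry "rule book": the program returns apt Chinese answers by pure lookup, while neither the operator following the rules nor the program itself grasps what the symbols mean.

```python
# Toy 'Chinese room': correlate one batch of symbols with another.
# The rules are invented; to the program the characters are just
# meaningless squiggles, manipulated by form rather than meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "Fine, thanks."
    "你懂中文吗？": "当然懂。",    # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: look up the input, hand back the output.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你懂中文吗？"))  # fluent output, zero understanding
```

However large the rule book grows, the lookup never acquires what Searle calls intentionality, which is precisely his objection to treating the Turing test as a test of thinking.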

Hayles 1999. Hayles, N. K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Jones 2004. Jones, A. Five 1951 BBC Broadcasts on Automatic Calculating Machines. IEEE Annals of the History of Computing 26(2), 3–15.

Landsteiner 2013. Landsteiner, N. E.L.I.Z.A. Talking. Available at: http://www.masswerk.at/eliza/

Searle 1980. Searle, J. Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 417–424.

Turing 1950. Turing, A. Computing Machinery and Intelligence. Mind 59, 433–460. Available at: http://www.csee.umbc.edu/courses/471/papers/turing.pdf

Turing 1951. Turing, A. Can Digital Computers Think? Annotated typescript of a talk broadcast on the BBC Third Programme, 15 May. Available at: http://www.turingarchive.org/browse.php/B/5

Turing Digital Archive 2016. Available at: http://www.turingarchive.org/

Weizenbaum 1966. Weizenbaum, J. ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM 9(1), 36–45. Available at: http://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf