Radio Looping

Together with my colleague Rasmus Fleischer, I am editing a special thematic section on “Spotify and digital methods” for a forthcoming issue of the journal Culture Unbound. In all, six to seven articles will (we hope) be included; Rasmus is currently finishing a piece on the commodification of online music, another article will deal with streaming music and gender, and a third will look at the #backaspotify campaign. The thematic issue will in all likelihood be out next year. In addition, together we are currently writing another article for the issue on so-called radio looping, that is, a critical discussion of Spotify Radio, its history and functionality, and of ‘looping’ as a media theoretical category. The tentative introduction to our article currently reads as follows, and gives a hint of what we aim to address.

Sometime during the winter of 2014, someone posted the following question on Quora: why does “my Spotify radio sounds so repetitive? I feel I am getting a few artists repeated in my Spotify radio.” The Finnish ‘infojunkie’ Heikki Hietala swiftly replied (some time later, in March) that this is because “the radio functionality in Spotify is very crude.” At the time, Spotify Radio had been around for more than two years, yet users seemed somewhat disappointed. Maybe Spotify will “come up with something soon”, Hietala remarked, but for now “it’s very annoying”. Hietala had apparently had the same experience of songs repeating on Spotify Radio, and instead recommended the music streaming service Pandora. According to him, the latter had more successfully “chopped the music up into tiny pieces of metadata, and [Pandora] are able to deliver a truly mesmerising radion function due to the vast amount of information they have on the music” (Hietala 2014).

Quora is a so-called ‘question-and-answer’ site. Questions are posted—and subsequently answered, edited and organized by the community of users on the same site. Quite a number of questions on Quora revolve around tech—which is hardly surprising since the company was co-founded by two former Facebook employees and is based in Mountain View, California (home to Google’s headquarters). Quora also seems to be a site frequented by tech employees themselves. Tech Lead at Spotify, Erik Bernhardsson, has for example published almost 30 posts, some adjacent to the discussions around Spotify Radio. A couple of months after Hietala’s post, a similar disapproval reappeared on Quora. In fact, almost identical questions around the dodgy functionality of Spotify Radio kept being posted: “How do I get Spotify to stop playing the same few songs for every artists?”; “How do I teach a Spotify radio station to play a wider array of songs?”; “Why does my Spotify Radio play the same artists over and over for me?” (our italics).

The last of these questions was asked by web designer Bas Leijder Havenstroom, who in a follow-up specified in more detail what he was puzzled about: “I re-asked this one because this frustrates me as well. Even if I start a radio station based on a playlist with many many artists, I find that some (specific) artists keep coming back. I have the feeling that this all has to do with commercial reasons. I believe record labels pay Spotify to have their artists to show up in radio stations and random functions more often” (Leijder Havenstroom 2015). Apparently, the algorithms running Spotify Radio are identical regardless of whether one uses the Free or the Premium service (the only difference being that the Free version plays advertisements and cannot stream at higher audio quality). Whether track repetition on Spotify Radio has commercial causes, however, remains obscure. Basically, the same uncertainty and unpredictability applies to researching the different algorithms regulating music recommendations on Spotify Radio. In addition, these vary and have naturally been altered and improved since the release of the radio functionality in 2012, the year when Spotify began updating its desktop software with several new features, including a Pandora-like radio station. “Spotify to Take On Pandora With Radio service”, it was subsequently announced online (Hachman 2012). The commercial whiff was hard to hide; an online radio offering “would advance Spotify’s strategy of attracting users with free, ad-supported services who can be converted later into paying subscribers”, Bloomberg reported (Fixmer 2012).

Today, Spotify Radio is one of the standard features of the music service, available on all platforms. Spotify Radio “lets you sit back and listen to music you love. The more you personalize the stations to match your tastes the better they get”, the company announces online (Spotify Radio 2016). Spotify Radio is arguably a popular service. The functionality allows people (or rather various algorithms) to discover new music within the vast back-catalogue of Spotify, offering a potentially infinite avenue of discovery. Then again, judging from the various conversations on Quora and elsewhere, for example on the Spotify community Web, the service has also been disliked. In fact, it has repeatedly aroused disappointment, and even substantial criticism. This was definitely the case when the service was launched in 2012, as one user stated on the community Web: “better radio algorithms … there are too many repetitions” (lehwark 2012). Yet, as is evident from the later discussions on Quora and the Spotify community Web, the sometimes quite devastating critique has remained: “the terrible radio algorithm repeats the same songs over and over (see [the linked] thread, which has been going for 2+ years)”, user ‘tellure’ groaned in late 2015. “Need to update the algorithims for Radio”, the user ‘zaliad’ lamented a couple of months later, “the repetitions are SAD at this point within 1 hour I can easily hear the same song three times” (zaliad 2016).

Within our research project on Spotify we therefore decided to set up an experiment that would explore Spotify Radio, and ultimately the limitations and restraints found within the ‘infinite archives’ of music streaming services. Our hypothesis was that many streaming services’ radio functions (like Spotify’s) appear to consist of a series of tracks that are played over and over again. Spotify Radio claims to be never-ending, yet there seems to be a loop pattern. If our hypothesis proved to be true, what would such loop patterns look like? Are Spotify Radio’s music loops finite—or infinite (given that ‘the algorithm’ can after all choose between 30 million songs)? How many tracks (or steps) does a ‘normal’ loop consist of? Importantly, how is the size of a ‘music loop’ on Spotify Radio affected by user interaction in the form of skips, likes and dislikes? Do, for example, n ‘likes’ expand the music loop in terms of novel songs and artists? In short, how much preprogrammed imagination do streaming platforms really display?
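Operationalising ‘loop’ is the crux here. One simple option (our own illustration in Python, not necessarily how the final analysis will be done) is to treat the loop as the shortest period after which the sequence of played track IDs starts repeating, tolerating the occasional substitution:

```python
def shortest_period(tracks, tolerance=0.9):
    """Smallest p such that track i and track i+p almost always coincide;
    returns None if the sequence never settles into a cycle."""
    n = len(tracks)
    for p in range(1, n // 2 + 1):
        matches = sum(tracks[i] == tracks[i + p] for i in range(n - p))
        if matches / (n - p) >= tolerance:
            return p
    return None

# A logged session that cycles through three tracks:
played = ["t1", "t2", "t3"] * 5
print(shortest_period(played))   # -> 3 (steps per loop)
print(len(set(played)))          # -> 3 (unique tracks in the loop)
```

Measured this way, a ‘like’ that genuinely expands the loop should either lengthen the period or break it altogether.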

In order to answer these research questions, we set up an experiment at the digital humanities hub Humlab (Umeå University), in the form of a reverse engineered radio loop with 40 bot ‘listeners’. Essentially, reverse engineering starts with the final and implemented product—in our case the Spotify Radio application within the streaming service desktop client—and takes it apart, “seeking clues as to why it was put together in the way it was and how it fits into an overall architecture” (Gehl 2014:10). Our bots were Spotify Free users—with literally no track record; they had ‘heard’ no music before they were put into action. They were programmed to start Spotify Radio based on Abba’s “Dancing Queen”, document all subsequent tracks played in the loop, and (inter)act within the Spotify Web client as an ‘obedient’ listener, a ‘skipper’, a ‘liker’ and a ‘disliker’. On the one hand, this article recounts, analyses and discusses the intervention we set up. On the other hand, the article also describes the background and establishment of the ‘radio functionality’ at streaming services, and tries to reflect on the media theoretical concept of looping per se.
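Schematically, the four listener types can be thought of as trivial policies over the stream of recommended tracks. The sketch below shows only the logging logic; get_next_track and act are placeholders standing in for the actual client automation, and none of the names correspond to the real Humlab implementation:

```python
import csv

# One fixed reaction per listener type (None means: just listen).
POLICIES = {
    "obedient": lambda track: None,
    "skipper":  lambda track: "skip",
    "liker":    lambda track: "like",
    "disliker": lambda track: "dislike",
}

def run_bot(bot_id, behaviour, get_next_track, act, n_tracks=100):
    """Start from the seed station and log every track the radio serves."""
    log = []
    for step in range(n_tracks):
        track = get_next_track()            # next track played by the client
        reaction = POLICIES[behaviour](track)
        if reaction:
            act(reaction)                   # e.g. click skip/like/dislike
        log.append((bot_id, behaviour, step, track, reaction or "listen"))
    with open("bot_%s.csv" % bot_id, "w", newline="") as f:
        csv.writer(f).writerows(log)
```

The point of holding each policy constant is that any difference between the resulting track logs can then be attributed to the interaction type alone.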

Digital humanities – a status report published in Respons

Today I have published a status report on digital humanities in the journal Respons. The standfirst indicates what the article is about: “Digital humanities is a vital research field that presupposes technical know-how and dialogue with programmers. There is no lack of critics of digital humanities, but from a methodological perspective the development is indispensable, and it is not necessary to pit classical humanities against the digital. So argues Pelle Snickars, professor of media and communication studies at Umeå University, who visited the DH field’s largest conference to take the temperature of a research field very much in vogue.” The text can also be downloaded as a PDF: PS Reportage DH.

Meta-Academic Reflections from Inside the Swedish Public Service Broadcasting Commission

I am currently writing an article for the upcoming Ripe@2016 conference, Public Service Media in a Networked Society?, held in Antwerp in a couple of weeks. The article needs to be finished before the conference, and my take is a kind of meta-academic reflection from inside the Swedish Public Service Broadcasting Commission, which I worked with during the last year. In short, the article will describe the work we did in the Broadcasting Commission, as well as the debates surrounding it. Yet I will also argue that academics acting as lobbyists cannot be avoided when it comes to issues regarding the role that public service should play in a digitized and globally networked media landscape. The topic is timely – in fact, the start of my article uses the “open letter” to Alice Bah Kuhnke (Swedish Minister of Culture and Democracy) that the board chair of Mittmedia, Jan Friedman, published in Dagens Nyheter’s print edition today (available online already yesterday). Thus, my article starts in the following way:

During August 2016, the major news story dominating all Swedish media—about other Swedish media—was inside information from a recent board meeting of the media group Mittmedia at Arlanda airport in Stockholm. According to the rumours, Mittmedia—one of Sweden’s largest media groups, with almost 30 local newspapers in the middle of the country—would in the coming year conceivably cut 75 percent of its editorial services, and as a consequence leave almost 500 local journalists unemployed. In a small country like Sweden, the news was outrageous, potentially affecting almost a million local readers. The scoop was published by the leading daily newspaper, Dagens Nyheter; instantly, appalled and critical comments filled social media as well as other news outlets (Delin 2017). As was to be expected, the Swedish Minister of Culture and Democracy, Alice Bah Kuhnke (from the Green Party), was interviewed. Ultimately, she bears the utmost responsibility for national media politics. The Minister stated that Swedish media, and especially local newspapers, seemed to be moving towards “the edge of a cliff”. She did “not sleep well at all”, she asserted when the reporter talked to her the day after the Mittmedia news broke. In fact, she was utterly concerned, arguing that these dreadful revelations did not “only mean something for individual journalists or media in general. It meant something unheard of for Swedish democracy” (Jones 2016). Due to the effects of digitisation and globally networked media, the present (and mostly grim) media situation in Sweden naturally resembles that of other Western European countries. Yet what made the news of Mittmedia’s downfall so dismaying to many was that this particular media enterprise had been a digital pioneer among (at least) local media groups in Sweden. Then again, money was being lost at a fast rate, and the board hence put forward this quite bleak scenario—all in order to steer Mittmedia’s strategies (and finances) away from a fruitless digital business model.

Leading the board of Mittmedia is Jan Friedman, a businessman and professional chairman of a number of Swedish company boards. After reading about the sleepless Swedish Minister of Culture and Democracy, and her concern about Mittmedia, he decided to send her an “open letter”. It too was published a few days later in Dagens Nyheter. “Dear Alice”, Friedman began—within the private media sector most curves have been pointing in the wrong direction for years. “Subscribers are diminishing and aging, while advertisers prefer other channels [than local newspapers], especially foreign owned ones as Google and Facebook. Overall, most local media companies display diminished margins”, he lamented. Nevertheless, he continued by explicitly addressing the Minister: you could “help us in different ways by forging an imprint that makes it easier for us and others to succeed.” You are the Minister responsible for our national media after all, and you “own yourself a toolbox”. In it, Friedman stated, are “four tools that would really make a difference.” He then divided his letter into four sections—one for each tool—the last three of which basically had to do with press subsidies and taxes. Friedman, for example, urged the Minister to reconstruct the contemporary Swedish press subsidy system, and sincerely hoped this would happen when the ongoing Swedish Media Inquiry—initiated by the government in May 2015—presented its long-awaited proposal in November 2016. Friedman furthermore argued for the abolition of the so-called Swedish advertisement tax (a truly peculiar tax levied only on print media advertising revenues), as well as the strikingly similar printing VAT, both causing a lot of financial trouble for local newspapers.

The point I want to make with this introduction, however, is that Friedman’s major concern was Swedish public service—and particularly so at the regional and local level. It was, in fact, the number one tool for the Minister to reconsider. Traditionally, both Swedish Television (SVT) and Swedish Radio (SR)—which are linked entities, but remain two separate institutions—have been present across Sweden at various regional locations. Local public service, Friedman hence told the Minister, has sometimes triggered stimulating competition, but more often it has proven “counterproductive”. Above all, Friedman stated, it is indeed “commercially challenging to meet part of SVT’s and SR’s free online services, while trying to make editorial investments and create tempting conditions for reader paid news activities in a digital world.” If even local news is free on svt.se or sr.se—why should users in the middle of Sweden pay Mittmedia for the same news? Naturally, Friedman told the Minister, he was aware that the present Media Inquiry was investigating this hotly debated question. Yet he urged the Minister to really pay attention to the matter, and not let go of the issue of “public service increased negative market impact.” One concrete way to deal with the issue at the local level, he suggested, “could be to increase collaboration between private and public service companies … and even consider to privately outsource the kind of local media production that regional and local public service companies carry out today” (Friedman 2016).

Jan Friedman’s open letter to Minister Alice Bah Kuhnke—both friendly and sarcastic in tone—is but the latest item in a seemingly never-ending media discussion around Swedish public service. The issue has naturally been debated before. But during the last decade the discussion has intensified considerably, as testified by a rapidly increasing number of articles in the Swedish Media Retriever database. In 2005, a search for “public service” generated 1,100 articles; ten years later the number was 2,700 (Media Retriever 2016).

Spotify, academic interventions & music methods

After seventeen years I am back at my favourite library – the now rebuilt Stabi Ost in Berlin. The new (well, fairly new) reading room is German-proper, and very pleasant to work in. I am currently writing an article for a book on digital methods that Christopher Kullenberg at the University of Gothenburg is putting together. It deals partly with general questions on the subject (which I have debated before), and partly with the project on Spotify that I am heading. The book will probably be ready at the beginning of next year – and parts of the text currently read as follows:

The media philosopher John Durham Peters has argued that we now seem predetermined – indeed, almost predestined – by the data we produce, consume and, not least, constantly share with one another. Digital media, he points out in his book The Marvelous Clouds, ”point to a fundamental task of order and maintenance, the ways in which data ground our being, and the techniques that lie at the heart of human dwelling on earth” (Durham Peters 2015:8). Today’s code-based media forms therefore differ from the mass media of the twentieth century, Durham Peters argues. To put it pointedly, media today – in the form of a kind of integrated data flows – are less and less about content, programmes or opinions, and more and more about organisation and positioning, metadata and hypermediality, collection and measurement, calculation and analysis. It is a characterisation that fits well with what a company like Spotify is engaged in.

Durham Peters is a humanistically oriented media scholar; that his perspective differs from the more social-science oriented media research (which I referred to above) is therefore not surprising. In the research discipline where I myself am mainly active, digital humanities, this kind of outlook is entirely standard. Digital humanities is partly about developing new digital methods for humanistic research, methods that presuppose technical know-how, interdisciplinary collaboration, and that programmers enter the research process early and drive it forward in recurrent dialogue (Snickars 2016). The debate over which methods should be used is intense, but the use of the term digital methods is fully established (Clement 2016; Moretti 2013; Jockers 2013). Within the more social-science oriented media research, the question of a kind of methodological shift has of course also been discussed with regard to media measurement and data collection – in Sweden too. My colleague, the media scholar Jonas Andersson Schwarz, has for example worked with collected Twitter data in several publications, carrying out empirical studies of Swedish politics on Twitter (Andersson Schwarz et al. 2015a; Andersson Schwarz 2015b). This type of data collection opens entirely new research perspectives, but Andersson Schwarz has also warned that when tools and methods allow the analyst to close-read data flows ”in real time and on a rolling basis, [the] risks of exaggeration and misinterpretation are obvious” (Andersson Schwarz et al. 2016).

Another media scholar with a similar perspective, Klaus Bruhn Jensen, has in several articles discussed the methodological difference that arises in a network context between scientifically working with found and made data – that is, for example, collecting Twitter data via the hashtag #svpol compared with asking people in a survey how they use social media. That the latter data is made (and at times even fabricated) is obvious if one briefly considers how a typical question in the SOM Institute surveys is posed: ”How often have you used the internet during the past 12 months?” On the one hand, there are obvious methodological problems here in that the question itself regulates the content of the answer, in line with the proverb that the answers you get depend on the questions you ask. On the other hand, this kind of subjective self-assessment (which the question triggers) always involves a kind of introspective trickiness: how honest should (or can) one be? Made data thus rests on a doubly dubious epistemological ground. It is this discrepancy that Bruhn Jensen argues characterises the difference between made and found data, a difference especially striking in a network context. ”The issue of data of, about, and around the Internet highlights the common distinction between research evidence that is either ’found’ or ’made’”, he writes – adding that all the evidence the internet researcher needs is already at hand: ”all the evidence needed for Internet studies is already there, documented in and of the system, with a little help from network administrators and service providers.” In this way, Bruhn Jensen argues that the system itself – that is, the internet – becomes a kind of method per se: ”the system is the method” (Bruhn Jensen 2011:52). It is a view worth keeping in mind in what follows. In a later article, Bruhn Jensen has also emphasised the difference (compared with earlier media forms) that is now always present in digital network contexts, since all network use leaves traces: ”digital networks, in and of their operation, document aspects of technologically mediated communication that were mostly lost in previous media – meta-data that can be found and that invite further research” (Bruhn Jensen 2012:436).

Getting a grip on the kind of metadata that Bruhn Jensen writes about is in many ways the focus of the research project on Spotify that I am heading, ”Strömmande kulturarv. Filförföljelse i digital musikdistribution” (Streaming Heritage: Following Files in Digital Music Distribution). With the help of digital methods and digital ethnography, the ambition of my research group and myself has been to observe the journey of music files through the digital ecosystem that constitutes the black box of streaming media culture. Within the project we have often used a postman metaphor to describe what we are trying to accomplish – that is, following a package (a music file) on its way from creation to recipient. Admittedly, in strict technical terms music files do not ’move’ through Spotify’s music ecosystem in any such way at all. The technology is far more complicated, and includes among other things a stand-alone Spotify client (a program of its own), which also exists in a web-based version as well as for a number of different mobile platforms. The music streaming service itself, moreover, rests on file-sharing technology, where the connections of different users are employed to send (and cache) music between them in order to minimise bandwidth. To this one should add the so-called ”musical brain” that the company EchoNest has built up out of ”1.2 billion small logged pieces of information that tell us something about the world’s music” – user data stored in a database that Spotify paid SEK 800 million for in 2014 (Larsson 2016).

The concept of file-following is, in other words, strictly speaking a metaphor. Still, the basic idea of our research project is that the digitisation of media objects has changed how they ought to be conceptualised, analysed and understood – precisely on the basis of the traces of information and the substantial amounts of metadata that (music) files incessantly leave behind in various networks; as noted, the system is the method. From an academic perspective this means shifting focus entirely: from the study of static music artefacts towards an increased scholarly focus on dynamically active files with a kind of inherent information about, for example, broadband infrastructure, file distribution and so-called aggregation, user practices, click frequency, social playlists, sharing and repetition – ”all the evidence needed for Internet studies is already there, documented in and of the system”, as Bruhn Jensen has emphasised. It is, in other words, not a research project about music listening – rather one about how streaming services like Spotify regulate and package listening, and about what data can be extracted as music ’travels’ through Spotify’s networks.

Metamodeling—3D-(re)designing Polhem’s Laboratorium mechanicum

The other day I got a book chapter accepted for a forthcoming joint German and English book publication – with the working title, The Virtue of Models 2.0. The book is part of work carried out within the German digital humanities working group, Digitale Rekonstruktion, and the title of my chapter is tentatively: ”Metamodeling—3D-(re)designing Polhem’s Laboratorium mechanicum.” In many ways, the text will be a kind of first output of the project, Digitala Modeller, that I am heading. What I envision (or at least have promised) to write is the following:

Christopher Polhem (1661-1751) was a Swedish scientist and pre-industrial inventor—sometimes described as “the Father of Swedish Technology” (Lindroth 1951; Johnson 1963; Lindgren 2011). His so-called Laboratorium mechanicum (or the Royal Model Chamber) was a collection of several hundred educational wooden models of contemporary equipment, machines and building structures, water gates, hoisting devices and locks, (mostly) invented by himself. Basically, the Laboratorium mechanicum was a facility for training Swedish engineers, as well as a laboratory for testing and exhibiting Polhem’s models and designs. It was set up by him in the late 1690s, became a Swedish state-funded institution for information and dissemination of technology and architecture in 1756, and was during the 19th century used at the KTH Royal Institute of Technology in Stockholm.

Around 1930, Polhem’s model collection was transferred to the Swedish National Museum of Science and Technology. Ever since, it has served—and been frequently exhibited—as a kind of meta-museological artifact, since Polhem’s designs proved to be pedagogical museological objects avant la lettre. Within the new interdisciplinary research project, “Digital Models. Techno-historical collections, digital humanities & narratives of industrialisation” (funded with approximately one million euros by the Royal Swedish Academy of Letters, History and Antiquities, 2016-19), parts of this collection will be 3D scanned and 3D reconstructed with different software. In short, the project setup is part of a trend where heritage institutions are exploring how 3D technologies can broaden access to their collections (Urban 2016; Ioannides 2014). More specifically, we are interested in Polhem’s so-called “mechanical alphabet”. Initially, it consisted of 80 wooden models of basic machine elements like the lever, the wheel and the screw. Since a writer naturally had to know the alphabet in order to create words and sentences, Polhem argued that a contemporary mechanicus had to grasp his mechanical alphabet to be able to construct and understand machines.

3D modelling the mechanical alphabet, however, can be done in various ways. Within our research group, we have for example started co-operating with the animator Rolf Lindberg; on YouTube he has uploaded a number of videos of Polhem’s models (Lindberg 2016). Lindberg, however, did not 3D scan these mechanical models—he computer-animated them. Hence, from a museological perspective, rather than 3D scanning heritage items, it seems easier, and perhaps also more pedagogical and visually enticing, to simulate them—that is, to build and construct a brand new virtual object. The original item collected in the museum then becomes a model (rather than vice versa). The London Charter on computer-based visualisation of heritage promotes “intellectual and technical rigour in digital heritage visualisation” among its objectives—yet is a 3D scan (in our case) more rigorous than a simulation? (London Charter 2009). Furthermore, in the case of Polhem’s models, the theme of (digital) reconstruction also has a profound historical dimension, since he sincerely believed (as a pre-industrial inventor) that physical models were always superior to drawings and abstract representations. Then again, metamodeling as a scholarly and museological practice might not agree that the same holds true for digital representations—or does it?

Forthcoming report on digital humanities in Respons

Over the past week I have been writing a report on digital humanities for the journal Respons; the text should be published sometime this autumn. The idea was to sketch a kind of status report on this research field, taking as its starting point the large DH conference held last week in Kraków. But my report begins with a classic Swedish seventies novel!

It appears as something of an irony that the almost unsurpassed caricature of the digital humanist turns up even before this research field actually existed. “I have coded both books now, with optical reading”, says the computer operator Chris in Lars Gustafsson’s Tennisspelarna (The Tennis Players) from 1977. In a concrete bunker, far below ground in Fort Worth, Texas, he sits feeding Strindberg’s Inferno, among other things, into the “computor”. He is bored with watching for “small white dots” on the “luminous map of the Southern Air Defence Region”:


It took all night and my fingers are sore from all the page-turning. Now, this morning, the machine is busy Gödel-numbering the whole thing.
– Sorry, what do you mean by Gödel-numbering?
– What? Oh. Don’t you know? You number all the letters of the alphabet, all the punctuation marks, and assign a special number to the word space. Then you take the product of that for each sentence. And then you take the prime numbers and raise them to those products, so that they become powers of the primes. Since every integer is uniquely determined by its prime factors, you have marked each sentence in that way, so that it can never be lost again. Then you just multiply the Gödel numbers of the sentences with one another, and you get the Gödel number of the book.
– That must make for rather large numbers?
– It does …
– Fun. And what comes out?
– I have no idea. Well, a third book, of course. The third book about Inferno.


Gustafsson was spot on here – and avant la lettre at that. Many still perceive research in digital humanities roughly in this way. Admittedly, digital humanities (often abbreviated DH) very rarely deals with the “Gödel numbers of books” – but counting is done in abundance. That was the case in the seventies, under the name “humanities computing” (which Gustafsson surely came into contact with during his time in the US), and it is still the case today. Without flinching, the literary scholar Andrew Piper recently proclaimed that “there will be numbers” – which was also the title of his programmatic opening text for the newly founded journal, Journal of Cultural Analytics.
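Chris’s recipe is, in fact, a paraphrase of standard Gödel numbering. For the curious, here is a minimal Python sketch of the textbook construction (raise the i-th prime to the code of the i-th symbol and multiply), with a toy alphabet of our own choosing; like Chris, we then multiply the sentence numbers into one number for the whole book:

```python
from itertools import count

def primes():
    """Yield 2, 3, 5, ... by trial division (fine at toy scale)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# Symbol codes: letters, punctuation, and a dedicated number for the word space.
ALPHABET = "abcdefghijklmnopqrstuvwxyzåäö .,!?"
CODE = {ch: i for i, ch in enumerate(ALPHABET, start=1)}

def goedel_number(sentence):
    """Unique prime factorisation makes the sentence recoverable
    from the single integer returned here."""
    g = 1
    for p, ch in zip(primes(), sentence.lower()):
        g *= p ** CODE[ch]
    return g

sentences = ["ett drömspel", "inferno"]
book_number = 1
for s in sentences:
    book_number *= goedel_number(s)
print(book_number)   # "rather large numbers", indeed
```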

Presenting at DH16 in Kraków

This week, the major digital humanities conference takes place in Kraków; apparently some 900 digital humanists will partake. The program looks really interesting – it can be found here – and I will do a presentation on Thursday entitled “SpotiBot–Turing testing Spotify”. Basically, my take will be based on an article (written by me and Humlab developer Roger Mähler) that was submitted to the journal Digital Humanities Quarterly a month ago (or so). Slides and the PDF of the lecture (as it stands at the moment) can be downloaded here: snickars_slides_DH16 and snickars_talk_DH16. I do look forward to the conference.

Oscar II and the media

For a forthcoming book on kings and exhibitions I am writing a text on Oscar II and the media – in relation to the Stockholm Exhibition of 1897. Almost ten years ago we made a book about this exhibition, and I have also devoted some work to Prince Wilhelm’s media legacy – so I know the subject. Oscar II is highly interesting because he was king at a time of media transition. At present, the opening of the article reads as follows:

One of the stranger memoirs in Swedish political history was published no less than 53 years after its author had passed away: Mina memoarer (My Memoirs) by Oscar II. Norstedts published three volumes in 1960, but in their officious presentation they appear rather wooden today. Admittedly, during his lifetime Oscar II was spoken of “as Europe’s most learned monarch”. He could undoubtedly express himself incisively in writing, was well read in historical and literary subjects, and had a lively associative ability. Nevertheless, Svenskt biografiskt lexikon holds that the king’s memoirs hardly made “him a great monarch in the eyes of posterity.”

No, what is special about Mina memoarer is that Oscar II wrote them in connection with events he had just lived through. The memoirs appear as a kind of photographic excerpts, snapshots or journalistic impressions, in which the king reflected on what had just occurred. “King Oscar stood in the midst of his work at the centre of events when he prepared his memoirs in stages”, noted the editor Nils F. Holm in his “Preface”. Texts and fragments were therefore generally “written down only a short time after the events related.” That it took more than half a century before the memoirs were published was due to the king’s own instructions; they were to be kept in “soldered tin boxes”, and made available for research at the very earliest after 1950. For that reason, Oscar II’s memoirs are also at times slightly confusing. The contemporary context was often left aside, and the background to various events was more or less taken for granted. The editor Holm therefore felt obliged to preface each chapter with “a brief summary of the political situation and the course of events the chapter covers.”

This also holds for the ways in which Oscar II describes the General Art and Industrial Exposition on Djurgården in Stockholm in the summer of 1897. In his reminiscences from the exhibition – written down as early as the first of December that same year – the king held that it was not necessary to write anything more detailed about the exhibition itself. “The newspapers of this summer are full of descriptions of all the details, and I can refer to them”. The passage is typical of his memoirs. But Oscar II probably also relied on the scrapbooks from the exhibition compiled by the court. The clippings from the first month of the Stockholm Exhibition are numerous; they fill page after page in one of the scrapbooks in Oscar II’s personal archive.

It therefore appears symptomatic that in his memoirs the king wanted to comment on only one thing in connection with the Stockholm Exhibition of 1897: the daily press. “Only about one congress, the journalist congress, do I wish to say a few words. Like everyone else, the press too wanted to have its meetings”, the king wrote. He noted that he had at first hesitated about “playing host”, since it might “give rise to ‘interview fantasies’ [with] unpleasant consequences.” But the king had at the same time taken part in the diplomatic game that preceded the appointment of Stockholm as the venue for the international journalist congress. “All hesitation vanished”, he writes, “at the thought of the undeniable benefit that my fatherland could reap from a visit by the world’s most prominent newspapermen.” The king, in other words, let himself be used – indeed, even exploited by the press corps – which in turn naturally made use of him and the royalist radiance associated with him. “Nor did I regret it”, Oscar II wrote in his memoirs, “for both Sweden and I myself must be said to have reaped advantages from the journalist congress. … The world press long resounded with praise, very flattering and gratifying to Swedish ears.”

As the press historian Patrik Lundell has emphasised, the Stockholm Exhibition of 1897 appears as a moment “when, for the first time, the press, the monarchy and capital in earnest drank to one another, to mutual benefit.” The symbiosis that emerged there between media and monarchy – with the exhibition as a kind of interface – appears both prophetically forward-looking and nostalgically backward-looking. Forward-looking in the sense we all know today, namely the link between media, monarchy and celebrity culture that now belongs to the traditional image of royalty, filtered and popularised as it has become through the mass media of the twentieth century.

But also backward-looking in the sense that Oscar II appears as the last Swedish monarch to cling desperately to the royal ideals of former times. He was not only king at a time of media transition. He was also the last Swedish monarch to be crowned, and therefore stubbornly persisted with ceremonies and functions, receptions and suppers, all of which were meant to underline the king’s dignity and renown. Here he could make use of the media of the day. Ever since the Instrument of Government of 1809, which gave the Riksdag greatly extended powers (while the king retained executive power), Oscar II witnessed during his long life – from 1829 to 1907 – how the king’s personal power diminished as the positions of the government and the Riksdag grew ever stronger. In several ways, Mina memoarer appears as a swan song over this change.

But Oscar II was not alone in his despair; nor was the royalist change of course merely a Swedish phenomenon. In his magisterial book on the nineteenth century, The Transformation of the World, Jürgen Osterhammel describes how constitutional monarchy, in which the king (or queen) became a head of state with strictly limited powers, characterised most royal houses in the world during the nineteenth century. Osterhammel points out, however, that the histories of the royal houses’ realpolitical decline not infrequently gave rise to a kind of recoil effect on the more symbolic-political level – often in symbiosis with emerging media modernity. The German Emperor Wilhelm II – whose reign (1888-1918) partly overlaps with Oscar II’s – is today often singled out by scholars as a veritable “Medienkaiser”. The same kind of media symbiosis that appears in Oscar II’s description of the journalist congress of 1897 characterised Wilhelm II’s relation to the media forms of his time in general. The media emperor Wilhelm made use of, “and was in turn exploited by, the daily press, photography and film”, Osterhammel writes. In this way he appears as Germany’s “first (and last) royal media star through his frequent public appearances.”

Noisy Media Theory

I am currently writing an article with my colleague Johan Jarlbrink, in which we are trying to make (at least) some sense of a small selection of the ten million newspaper pages that have so far been digitised by the National Library of Sweden. We are working with the resulting XML files from Aftonbladet between 1830 and 1862, and these contain extreme amounts of noise: millions of words misinterpreted by the OCR software, and millions of texts chopped off randomly by the auto-segmentation tool. Noisy media theory hence seems applicable – and below is a draft of the more theoretical parts of our forthcoming article:

In classic information theory, the signal is usually defined as useful information and noise as a disturbance. Noise has generally been understood as distorting the signal, making it unintelligible and/or impossible to understand. Eliminating noise was, in short, paramount within information theory—but this also meant that noise per se became an analytical category. Information theory was thus always interested in noise. As is well known, Claude Shannon’s article “A Mathematical Theory of Communication” (1948)—which envisioned a new way to enhance the general theory of communication—basically concentrated on noise. Already in his first paragraph, Shannon stated that he wanted “to include a number of new factors, in particular the effect of noise in the channel”, where the fundamental problem of communication, to his mind, was that of “reproducing at one point either exactly or approximately a message selected at another point”. As a consequence, Shannon’s article featured frequent discussions of both “noiseless systems” and channels “with noise”. As is evident, contemporary digitisation activities display a number of resemblances and affinities to these remarks and arguments.

As has often been remarked, Shannon was not interested in messages with “meaning”. All semantic aspects of communication were deemed “irrelevant to the engineering problem”—but noise was not. In part two of his article, Shannon for example wrote about a “discrete channel with noise”, where the signal was “perturbed by noise during transmission”. This meant that the received signal was “not necessarily the same as that sent out by the transmitter.” Furthermore, if a channel was too noisy it was not “in general possible to reconstruct the original message or the transmitted signal with certainty”. There were, however, “ways of transmitting the information which are optimal in combating noise” (Shannon 1948).

Classical information theory became popular when Shannon and his co-author, Warren Weaver, published their book The Mathematical Theory of Communication in 1949. The same year, Weaver had published an article analysing the ways in which a “human communication theory might be developed out of Shannon’s mathematical theorems” (Rogers & Valente 1993, 39). In it Weaver stated that “information must not be confused with meaning”, but more importantly (for our article), he wrote a longer passage on “the general characteristics of noise”. How does noise “affect the accuracy of the message finally received at the destination? How can one minimise the undesirable effects of noise, and to what extent can they be eliminated?”, Weaver asked. If noise was introduced into a system—like a digitisation process—then the “received message contains certain distortions, certain errors, certain extraneous material, that would certainly lead one to say that the received message exhibits, because of the effects of the noise, an increased uncertainty.” Yet, as Weaver paradoxically stated, if uncertainty is increased, then information is also increased—”as though the noise were beneficial!” This type of uncertainty, arising through errors or “because of the influence of noise”, Weaver nevertheless described as an “undesirable uncertainty” (Weaver 1949).
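Weaver’s paradox can be stated compactly in Shannon’s own terms (our notation, though standard in information theory): noise inflates the entropy of the received signal, but it equally inflates the equivocation, the uncertainty due to noise alone, and only the difference counts as transmitted information.

```latex
% Entropy of a source X, and the information actually transmitted
% over a noisy channel with received signal Y:
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
I(X;Y) = H(Y) - H(Y \mid X).
```

The first term on the right, H(Y), grows when noise is added (“as though the noise were beneficial!”), while the subtracted conditional entropy H(Y|X) is precisely Weaver’s “undesirable uncertainty”.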

Within classical information theory, noise could in other words also be described as beneficial. In general, however, noise was a dysfunctional factor; the task was combating noise. Consequently, Shannon and Weaver’s mechanistic model of communication mostly dealt with the signal-to-noise ratio within various technical systems. Obviously, their model was indifferent to the nature of the medium. It has, however, since been argued that the arrival of a new medium always changes the relation (or ratio) between noise and information. Digitisation processes are no exception. Within German media theory, noise has for example often been used as a productive analytical category. Friedrich Kittler’s recurrent claim that technical media record not only meaning but always also noise derives (to some extent) from Shannon. It should hence not come as a surprise that Kittler was the one who translated and introduced Shannon in Germany—with a book ingeniously entitled Ein/Aus (Kittler 2000).

Then again, critiquing the ways classical information theory morphed into cultural studies and content-based readings within media studies, Wolfgang Ernst has polemically asked if, indeed, it makes sense at all “for media studies to apply the information theory of Claude Shannon and Warren Weaver to the analysis of TV series?” From a distinct media archaeological perspective, Ernst has claimed that the only message of television is its signal—“no semantics” (Ernst 2013, 103, 105). The archaeology of media searches the “depths of hardware for the laws of what can become a program”, Ernst has furthermore stated. In doing so, media archaeology concentrates on the “non discursive elements” of the past: “not on speakers but rather on the agency of the machine” (Ernst 2013, 45). What looks like “hardware-fetishism”, Ernst once stubbornly postulated in his inaugural lecture in 2003, is only “media archaeological concreteness” (Ernst 2003).

Emphasis on media specificity is hence always to be found within this German media theoretical tradition, and perhaps foremost within the more digitally inclined media archaeology, which often tries to look under the hood of contemporary technology. In this sense, media archaeology is part of a gradually shifting emphasis towards media specific readings of the computational base and the mathematical structures underlying actual hardware and software—a transition with analogies to Shannon that also resonates with an increased interest in technically rigorous ways of understanding both software and the operations of material technologies. Analysing accidents, errors and deviations has, for example, been one strategy to approach systems and technologies that are hard to grasp as long as they function properly. As Jussi Parikka has written (in his English introduction to Ernst’s writings), “more than once, Ernst asks the question ‘Message or noise?’”—a question that, according to Parikka, is “about finding what in the semantically noisy is actually still analytically useful when investigated with the cold gaze of media archaeology” (Parikka 2013). Another German media theorist, Sibylle Krämer, has even stated that analysis under the hood is the only way to make the functions of media technologies visible: “only noise, dysfunction and disturbance make the medium itself noticeable” (Krämer 2015, 31).

One does not, however, have to accept these media theoreticians’ definitive claims to make noise beneficial in an analysis of the digitisation technologies that transform printed texts into digital files. Misinterpretations produced by the OCR software make explicit what graphical elements the software interprets as important and ‘meaningful’, and errors in the auto-segmentation show what the tool is programmed to recognise as a ‘text’. No more, no less. Our perspectives and analyses in the following are thus more profane and empirical—yet still informed by the noisy media theories described above.
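By way of illustration, such a profane measure of OCR noise can be computed in a few lines. The sketch below is a crude dictionary based proxy of our own devising, not a ground-truth error rate, and the toy lexicon obviously stands in for a proper historical wordlist:

```python
import re

def noise_ratio(text, lexicon):
    """Share of alphabetic tokens not found in the wordlist: a rough
    proxy for OCR misinterpretation, not a true character error rate."""
    tokens = re.findall(r"[^\W\d_]+", text.lower())
    if not tokens:
        return 0.0
    unknown = sum(token not in lexicon for token in tokens)
    return unknown / len(tokens)

lexicon = {"och", "att", "det", "som", "tidningen"}       # toy Swedish lexicon
print(noise_ratio("Tldnlngen och att d3t som", lexicon))  # -> 0.5, i.e. noisy
```

Run over the Aftonbladet files year by year, a measure of this kind would at least make the noise level itself an empirical variable rather than an anecdote.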