Noisy Media Theory

I am currently writing an article with my colleague Johan Jarlbrink, in which we are trying to make (at least) some sense of a small selection of the ten million newspaper pages that have so far been digitised by the National Library of Sweden. The XML files resulting from Aftonbladet between 1830 and 1862 contain extreme amounts of noise: millions of misinterpreted words generated by OCR, and millions of texts chopped off randomly by the auto-segmentation tool. Noisy media theory hence seems applicable – and below is a draft of the more theoretical parts of our forthcoming article:

In classic information theory, the signal is usually defined as useful information and noise as a disturbance. Noise has generally been understood as distorting the signal, making it unintelligible or impossible to understand. Eliminating noise was, in short, paramount within information theory—but this also meant that noise per se became an analytical category. Information theory was, in other words, always interested in noise. As is well known, Claude Shannon’s article, “A Mathematical Theory of Communication” (1948)—which envisioned a new way to enhance the general theory of communication—basically concentrated on noise. Already in his first paragraph, Shannon stated that he wanted “to include a number of new factors, in particular the effect of noise in the channel”, where the fundamental problem of communication, to his mind, was that of “reproducing at one point either exactly or approximately a message selected at another point”. As a consequence, Shannon’s article featured frequent discussions of both “noiseless systems” and channels “with noise”. As is evident, contemporary digitisation activities display a number of resemblances and affinities to these remarks and arguments.

As has often been remarked, Shannon was not interested in messages with “meaning”. All semantic aspects of communication were deemed “irrelevant to the engineering problem”—but noise was not. In part two of his article, Shannon for example wrote about a “discrete channel with noise”, where the signal was “perturbed by noise during transmission”. This meant that the received signal was “not necessarily the same as that sent out by the transmitter.” Furthermore, if a channel was too noisy it was not “in general possible to reconstruct the original message or the transmitted signal with certainty”. There were, however, “ways of transmitting the information which are optimal in combating noise” (Shannon 1948).
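Shannon’s “discrete channel with noise” can be illustrated with a small sketch of our own (not part of the article itself): a binary symmetric channel that flips each transmitted bit with probability p, whose capacity is C = 1 − H(p) bits per symbol, where H is the binary entropy function (Shannon 1948). At p = 0 the channel is noiseless; at p = 0.5 the received signal carries no information at all.

```python
import math
import random

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p); zero at the endpoints."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def channel_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits/symbol."""
    return 1.0 - binary_entropy(p)

def transmit(bits, p, rng):
    """Perturb the signal during transmission: flip each bit with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(10_000)]
received = transmit(message, 0.1, rng)
errors = sum(m != r for m, r in zip(message, received))

print(f"capacity at p=0.0: {channel_capacity(0.0):.2f} bits/symbol")  # noiseless: 1.00
print(f"capacity at p=0.1: {channel_capacity(0.1):.2f} bits/symbol")  # ≈ 0.53
print(f"capacity at p=0.5: {channel_capacity(0.5):.2f} bits/symbol")  # pure noise: 0.00
print(f"observed error rate: {errors / len(message):.3f}")
```

The received signal is, exactly as Shannon put it, “not necessarily the same as that sent out by the transmitter”: roughly ten per cent of the bits arrive corrupted, and no receiver can tell which ones without redundancy.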

Classical information theory became popular when Shannon and his co-author, Warren Weaver, published their book, The Mathematical Theory of Communication, in 1949. The same year, Weaver had published an article analysing the ways in which a “human communication theory might be developed out of Shannon’s mathematical theorems” (Rogers & Valente 1993, 39). In it Weaver stated that “information must not be confused with meaning”, but more importantly (for our article), he wrote a longer passage on “the general characteristics of noise”. How does noise “affect the accuracy of the message finally received at the destination? How can one minimise the undesirable effects of noise, and to what extent can they be eliminated?”, Weaver asked. If noise was introduced into a system—like a digitisation process—then the “received message contains certain distortions, certain errors, certain extraneous material, that would certainly lead one to say that the received message exhibits, because of the effects of the noise, an increased uncertainty.” Yet, as Weaver paradoxically stated, if uncertainty is increased, then information is also increased—“as though the noise were beneficial!” This type of uncertainty, arising from errors or “because of the influence of noise”, Weaver nevertheless described as an “undesirable uncertainty” (Weaver 1949).

Within classical information theory, noise could in other words also be described as beneficial. In general, however, noise was a dysfunctional factor; the task was to combat noise. Consequently, Shannon and Weaver’s mechanistic model of communication mostly dealt with the signal-to-noise ratio within various technical systems. Their model was, obviously, indifferent to the nature of the medium. It has since been argued, however, that the arrival of a new medium always changes the relation (or ratio) between noise and information. Digitisation processes are no exception. Within German media theory, noise has, for example, often been used as a productive analytical category. Friedrich Kittler’s recurrent claim that technical media record not only meaning but always also noise derives (to some extent) from Shannon. It should hence come as no surprise that it was Kittler who translated and introduced Shannon in Germany—with a book ingeniously entitled Ein/Aus (Kittler 2000).

Then again, critiquing the ways classical information theory morphed into cultural studies and content-based readings within media studies, Wolfgang Ernst has polemically asked if, indeed, it makes sense at all “for media studies to apply the information theory of Claude Shannon and Warren Weaver to the analysis of TV series?” From a distinctly media-archaeological perspective, Ernst has claimed that the only message of television is its signal—“no semantics” (Ernst 2013, 103, 105). The archaeology of media searches the “depths of hardware for the laws of what can become a program”, Ernst has furthermore stated. In doing so, media archaeology concentrates on the “non discursive elements” of the past: “not on speakers but rather on the agency of the machine” (Ernst 2013, 45). What looks like “hardware-fetishism”, Ernst once stubbornly postulated in his inaugural lecture in 2003, is only “media archaeological concreteness” (Ernst 2003).

An emphasis on media specificity is hence always to be found within this German media-theoretical tradition, perhaps foremost within the more digitally inclined media archaeology that often tries to look under the hood of contemporary technology. In this sense, media archaeology is part of a gradual shift towards media-specific readings of the computational base and the mathematical structures underlying actual hardware and software—a shift with obvious analogies to Shannon, which also resonates with an increased interest in technically rigorous ways of understanding both software and the operations of material technologies. Analysing accidents, errors and deviations has, for example, been one strategy for approaching systems and technologies that are hard to grasp as long as they function properly. As Jussi Parikka has written (in his English introduction to Ernst’s writings), “more than once, Ernst asks the question ‘Message or noise?’”—a question that, according to Parikka, is “about finding what in the semantically noisy is actually still analytically useful when investigated with the cold gaze of media archaeology” (Parikka 2013). Another German media theorist, Sibylle Krämer, has even stated that such analyses under the hood are the only way to make the functions of media technologies visible: “only noise, dysfunction and disturbance make the medium itself noticeable” (Krämer 2015, 31).

One does not, however, have to accept these media theoreticians’ definitive claims to make noise beneficial in an analysis of the digitisation technologies that transform printed texts into digital files. Misinterpretations produced by the OCR make explicit which graphical elements the software interprets as important and ‘meaningful’, and errors in the auto-segmentation show what the tool is programmed to recognise as a ‘text’. No more, no less. Our perspectives and analyses in the following are thus more profane and empirical—yet still informed by the noisy media theories described above.

Data och Goliat – from a forthcoming review

I have just finished writing a review of Bruce Schneier’s long book, Data och Goliat. Dold datainsamling och makten över världen [the Swedish edition of Data and Goliath]. My review is as long as it is critical – and includes, among other things, the following: “The great merit of Schneier’s book is that it weaves together commercial and governmental data collection and surveillance. In several respects the book is a kind of post-Snowden study. Not only did Edward Snowden reveal that Prism was a global data surveillance programme that the American National Security Agency (NSA) had run for many years; it also turned out that several internet giants in Silicon Valley had cooperated with the American state and handed over user data – and had even planted so-called malicious code in products and applications. Schneier commendably draws together state surveillance and various commercial business models built on data collection, and points to recurring similarities and patterns. … Still, I read his book with mounting irritation. The book is chatty; it piles example upon example and repeats itself; it deals only with the United States; Schneier boasts about his previous books (one has sold no fewer than 180,000 copies in ‘two editions’); Schneier uses few (if any) references other than his own observations. That the book lacks a bibliography hardly comes as a surprise. An author who points out that he ‘works on the whole book simultaneously every minute of the writing process’ is difficult to take seriously. In short, something grates badly while reading. I google the author and realise that he himself, personally, works in the very business area, ‘security and technology’, that the book is about. The website of the company Resilient Systems states that Bruce Schneier is their ‘Chief Technology Officer’, and that The Economist has called him a ‘security guru’. The company’s goal is evidently to sell security solutions: ‘to empower organizations to thrive in the face of cyberattacks or business crisis.’ In other words, Schneier has a commercial self-interest in writing up a kind of fear of surveillance, cyberattacks and general digital crisis awareness.”

The text was written for Respons – and will appear in one of its upcoming issues.

Fine-tuning Turing

I have finally been able to find some time to start working on a long-overdue article, together with system developer Roger Mähler at HUMlab. The working title is “Turing Testing Spotify”, and we will basically describe some of the different experiments we have conducted within our Spotify project, as well as say a few words in general about working analytically with audio sources. Even if audiovisual material today amounts to a steadily increasing body of data to work with and research, such media modalities are still relatively poorly represented in the field of the digital humanities. The purpose of the article is hence to provide some findings from our ongoing audio (and music) research project – framed via a meta-commentary around Alan Turing. At the moment, the piece starts like this:

In mid-May 1951, Alan Turing gave one of his few talks on the BBC’s Third Programme. The recorded lecture was entitled “Can Digital Computers Think?”—and by the time of the broadcast, basically a year had passed since the publication of Turing’s (now) famous Mind article: “Computing Machinery and Intelligence” [Turing 1950]. The BBC programme—stored on acetate phonograph discs prior to transmission—was approximately 20 minutes long, and basically followed arguments Turing had proposed earlier. Computers of his day, in short, could not really think and therefore could not be called brains, he argued. But digital computers had the potential to think and hence, in the future, to be regarded as brains. “I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine. I am imagining something like a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated.” [Turing 1951]

The irony is that Alan Turing’s own voice is lost to history; there are no known preserved recordings of him. The phonograph discs from 1951 are all gone—but the written manuscript of his BBC lecture can be found in the collection of Turing papers held in the Archive Centre at King’s College, Cambridge—partly available online [Turing Digital Archive]. The BBC also made a broadcast transcript, taken from the recording shortly after it was transmitted. As Alan Jones has made clear, Turing’s radio lecture was part of a series the BBC had commissioned under the title “Automatic Calculating Machines”. In five broadcasts, an equal number of British pioneers of computing spoke about their work. The fact that these talks were given by the engineers themselves, rather than by journalists or commentators, was “typical of the approach used on the Third Programme”. Naturally, it is also “what makes them particularly interesting as historical sources” [Jones 2004].

Then again, Jones was only able to examine the surviving texts of these broadcasts. Consequently, there is no way to scrutinize or explore Turing’s oral way of presenting his arguments—his intonation, pitch, modulation et cetera—in short, no way of analyzing how Turing spoke through, for example, speech recognition. Perhaps he was simply presenting his ideas in an ordinary manner, yet according to the renowned Turing biographer Andrew Hodges, the producer at the BBC had his doubts about Turing’s “talents as a media star”—and particularly so regarding his “hesitant voice” [Alan Turing Internet Scrapbook]. The point to be made is that audiovisual sources from the past have by and large been used by historians to a far lesser degree than text. Sometimes—as in the case of Turing—archival neglect is the reason, but more often academic research traditions stipulate what kind of source material to use. In many ways, the same goes for the digital humanities. Even if audiovisual material today amounts to a steadily increasing body of data to work with and research, it is still relatively poorly represented in the field of DH.

Project site for Digitala modeller

Next Monday (25/4) the new collaborative project between Tekniska museet and HUMlab at my university in Umeå gets under way: “Digitala modeller. Teknikhistoriens samlingar, digital humaniora & industrialismens berättelser” [Digital models: the collections of the history of technology, digital humanities & the narratives of industrialism]. Our project site is now online in a beta version – http://digitalamodeller.se/. We will of course gradually add material; it is a long project – running until the end of December 2019. Stay tuned, in other words.

Draft introduction to an article on the transformation of the book medium

The research and conversation project that I and literary scholar Alexandra Borg have run for a couple of years, “Kod(ex). Bokmediets omvandling” [Cod(ex): the transformation of the book medium], is now being reworked into article form. The idea is partly to present some of the themes and questions we have addressed in a number of workshops, and partly to summarise part of the discussion around the book industry’s digital transition. A first draft (of the introduction) of our article reads as follows:

A simple search for the term “e-book” in Retriever’s article database gives a hint of how electronic books have been discussed in Sweden over the past twenty years. During the 1990s, for example, only a handful of articles appear. “Imagine being able to carry with you all the books you want without their weighing more than a kilo altogether”, one article in Göteborgs-Posten put it in 1999. “That becomes possible when e-books break through.” The book industry, however, argued that electronic books seemed best suited to manuals and textbooks. E-books would not work at all “for the books people read for pleasure. Those readers are extremely conservative”, one publishing executive claimed. At the same time others, above all IT experts, were keen to present the e-book as the new medium that would take book readers by storm. “Six or seven years ago there was almost a Klondike mood around the e-book”, the then CEO of E-lib remarked in an article a few years later, in 2007 – under the apt headline “The e-book advances at its own pace”. But, she continued, “those expectations have not been fulfilled.” If the e-book had earlier been regarded as the publishing industry’s great hope for the future, during the 2000s it had come to be flanked by both the CD book and the MP3 book. The book was simply becoming one digital media form among others – and the question many publishers began to ask themselves as the digital transformation of the book industry gathered pace was what a book actually is.

In the years after 2010 the national discourse on e-books gradually shifted again. Now the question was (once more) not whether the e-book would take over – but when it would happen. In 2012, for example, more than 130,000 e-books were sold, more than twice as many as the year before. “The new trend is growing like an avalanche”, as a big Christmas feature in Aftonbladet put it, in which various e-readers were also rated (none got more than three pluses). “Sales of e-books are increasing all the time: in the US they account for 50 per cent in some genres, and I am quite convinced that we will end up there in Sweden too”, the CEO of Svensk Bokhandel argued, among others. The e-book hype of these years – chiefly between 2011 and 2013, judging by the steeply rising number of articles in Retriever – naturally also produced a series of backlashes. Some publishers, for instance, were publicly active opponents of e-books (Svante Weyler), the government worried that computers threatened reading (the 2013 Literature Inquiry, Litteraturutredningen), and the American author Jonathan Franzen even likened Amazon’s CEO, Jeff Bezos, to the Antichrist. Ahead of the 2013 Gothenburg Book Fair he graced the cover of DN Kultur, asking whether the apocalypse would arrive with the big online booksellers.

In other words, the reception of the e-book over the past fifteen years has moved from a fascinated wait-and-see attitude towards enthusiastic celebration (with certain exceptions), only to level out over the past two years into a more sober recognition that the analogue, physical book is putting up serious resistance and continues to sell well. The fact is that we now engage with, read and listen to books more and more – but in highly different ways. For the first time in years, publishers’ sales curves have begun to point upwards; during the first months of 2016 the total number of books sold increased by more than five per cent. Add the rise of the audiobook, with ingenious platforms such as Storytel – where publishers’ audiobook revenues increased by 95 per cent between 2013 and 2014 – and one quickly realises that the digital transformation of the book industry, with which the e-book has become more or less synonymous, is anything but easy to survey.

Trying is worthwhile, though, and for two years we, the authors, have run a research and conversation project (supported by Riksbankens jubileumsfond) devoted precisely to the transformation of the book medium. In a number of workshops we have gathered some thirty representatives from the book industry and academia, the latter drawn from a wide range of scholarly disciplines. In this article we want to present some of the themes and questions we have discussed (loosely grouped under the headings “production”, “distribution” and “consumption”). The starting point of our project has been that the contemporary book – in various electronic formats – should be regarded as a new media form. We have then discussed the implications this has had for book production (in both an analogue and a digital sense), for the distribution of books as data (as well as in physical form through online booksellers), and for the consumption of books (in various modalities) on shifting platforms. One consequence of the book industry’s digitisation is that people other than literary scholars have begun to take a serious interest in the book medium – another is that literary scholars have become aware that books are now in fact one media form among many.

The perhaps most important result of our conversation project, however, is that even though the e-book did not become the saviour (or destroyer) that publishers and experts predicted – as becomes obvious if one clicks through the thousands of articles on the subject that Retriever lists – it has operated on an entirely different level. The e-book has made us researchers, as well as of course the book industry, explore the media specificity of the book – in digital as well as analogue guise. Through the e-book it has, for example, become obvious that the paper book also has an interface; by departing from the classic codex format, in short, other forms of expression can be explored. Moreover, the digitally coded, media-modal differences between book, music, photo and video are in fact no longer especially large, which among other things has meant that it is no longer only the publishing industry that sets the agenda for the future of the book, but equally the globally powerful technology companies.

The Signal & The Noise – presenting at conference in Lincoln, Nebraska

I am currently in Lincoln, Nebraska, attending the conference The New and the Novel in 19th-Century Studies. Together with my colleague Johan Jarlbrink I will today deliver a presentation entitled “The Signal & The Noise—Digitizing 19th Century Newspapers at the National Library of Sweden”. It relates to work we are doing in the research project Digitala lägg. My slides can be downloaded as a PDF here: snickars_nebraska.

PSK report finished

The report that I, as a member of the media industry’s public service commission, have been working on for more than six months is now finished – and has just been presented at MEG16 in Gothenburg. From the lead paragraph: “More media outlets should be able to produce public service content, the state broadcasting companies should be given a clearer mandate, and the radio and TV licence fee should be abolished. These are three central proposals in the report that the Public Service Commission presents today at a seminar at Mediedagarna in Gothenburg.” The report can be downloaded here: PSK_2016.

Keynote at Digikult 2016

Tomorrow morning I will give a keynote at the conference Digikult, a Nordic meeting place for digital cultural heritage in practice, focusing on accessibility, participation and development. My ambition is partly to undertake a kind of examination of “the digital”, taking some of my digital heritage projects as a starting point – above all the new project “Digitala modeller”. In a longer concluding section I also want to highlight a couple of digital heritage paradoxes – or perhaps pairs of opposites, or even figures of thought – which likewise examine “the digital”, but on a more general level concern the relation between research and cultural heritage from a digital humanities point of view.

I have put together exactly 100 slides – which can be downloaded as a PDF here (39 MB): snickars_digikult_16_digitala_modeller

The Formation of Swedish Media Studies 1960-1980

Together with my two colleagues Mats Hyvönen and Per Vesterlund, I am currently writing an article about the formation of Swedish media studies during the 1960s and 1970s. The piece is a continuation of the book we published last year, Mass Media Problems: The Formation of Media Studies [Massmedieproblem. Mediestudiets formering], in which we argued that Swedish media studies departed from, and emerged within, a rather diffuse borderland between the media industry, national cultural politics and academia. Our idea is to have it published in the journal Media History – at present, the article starts like this:

In spring 1962, a study was published that attracted a great deal of attention in the Swedish daily press. It bore the title Swedish Popular Press 1931-1961 [Svensk Populärpress 1931-1961] and was written by Göran Albinsson. His book does not come across as a particularly unusual press-historical study today. Through measurements of press and magazine circulation, size and content, and by analysing prices and the financial results of publishers—related to the socio-economic status of Swedish readership—Albinsson was able to show how the reading of weekly press and magazines had gradually increased over the thirty-year period studied. The last two years (1960 and 1961), however, showed a marked downturn, which Albinsson attributed to the expansion of Swedish television. Studies using a similar methodology had been done before, primarily in the U.S.; Albinsson referred, for example, to NBC’s published television measurements. He also alluded in more general terms to “American surveys of mass media’s ability to influence opinions, values and behaviour”. J. T. Klapper’s recently published The Effects of Mass Communications (1960) was, in addition, singled out as a “highly exhaustive summary”.

Today, it might seem somewhat odd that Albinsson’s study was commissioned by Åhlén & Åkerlund, the biggest publisher of weekly magazines in Sweden at the time. But just as it appeared increasingly important to study mass media, it seemed natural that media companies themselves would be responsible for studies of their own business. Hence, the circumstance was not subjected to any special criticism in reviews of Albinsson’s book—and there were, indeed, many of them. The lavish attention paid to his study can be perceived as part of an increasing interest in Sweden during the early 1960s in mass media issues. Debates raged more or less constantly within the public sphere. Journalists, writers and academics attacked or (occasionally) praised the content of media and regularly commented on matters of form. The significance of television, especially, was intensely discussed in Sweden at the time, and mostly in negative terms: falling cinema attendance, declining newspaper circulation and fewer books being read—everything could be blamed on TV. In one such opinion piece about the media, the author Arnold Rörling stated in 1961: “The weekly press is one of the links in a dangerous chain—the shackle of the mass media. But the phrase ‘mass medium’ is already so overused that it means nothing to us. We hear it spoken, but it produces no associations in us—least of all any warnings of danger.”

The interesting point about Rörling’s article does not concern the author himself—a relatively well-known writer in Sweden at the time, who published a fine essay on objectionable mass culture. This was a common journalistic theme. What is really striking about Rörling’s argumentation is, on the one hand, that the term mass medium was by 1961 so widely used that an ordinary Swedish cultural commentator (such as Rörling) could express genuine ennui about it. On the other hand, he also described different media’s symbiotic dependence on each other—a kind of media convergence—as an entirely reasonable idea to hold in mind already in 1961. Indeed, the various mass media in Sweden were so closely linked that they were best captured in the most striking of agrarian metaphors: the shackle that tethered cattle (i.e. society, citizens or audiences) in a manner as unbreakable as it was painful.

In Sweden the term ‘mass media’ had by 1960 established itself as a buzzword in public discussions. Looking back, the reasons seem obvious: never before had so many platforms, based on so many different media technologies, competed for people’s time and attention. There was consequently a need for a common language that described the shifting media landscape of the archetypal Social Democratic welfare state, as well as for novel methods by which media phenomena could be studied. Around 1960, the politics of the emerging media society in Sweden tended to fixate on the formative functions of communication. If old media such as art, film or literature could function as instruments for changing the opinions and attitudes of individuals and groups, it was the job of the new mass media (especially television) to convey these instruments (on a national scale) in a fair and effective way. According to the discourse at the time, mass media had the task of providing citizens with the knowledge they needed. But the mass media could also provoke and create opinion, and influence the whole of society. The monopoly of public service broadcast media, press subsidies and film policy were some of the issues around which uncertainty about the new media landscape prevailed.

By and large, however, the history of media research remains to be written. Naturally, over the years in Sweden—as in other countries—a few national (more or less nostalgic) retrospectives have been published, almost always written from an intra-academic, media and communication studies perspective. The anthology The History of Media and Communication Research: Contested Memories (2008) is one example. Compared to other humanistic and social science disciplines, earlier ways of studying media have gained scarce attention. This is somewhat surprising, since the history of media research has a number of socio-political implications. Understanding media, in short, meant understanding society. The consolidation of media research in Sweden during the 1960s and 70s, for example, was thus far from an intra-academic endeavour. On the contrary, to reduce the formation of media studies to a question of the emergence of media and communication studies (or film studies, for that matter) is to miss the point. The central question regarding the formation of Swedish media studies between 1960 and 1980 is not how university research disciplines were established, but rather how a changing media landscape prompted a broad social and discursive activity, within government and politics, the media industry and the public sphere—as well as at universities.

In fact, from a media historical perspective, it is as relevant to consider research (and debates) around the media as part of media’s national history as it is to perceive such studies as solely belonging to academic history. This article, hence, has a meta perspective regarding the formation of media studies in Sweden. With its focus on tangential and overlapping fields, institutions and players, it seeks to complicate and problematize the development of media research, as well as to situate the study of media within a broader Swedish media history. The fact that media inquiries were often commissioned by a number of institutions (outside of universities)—ministries, archives, defence forces, media companies and opinion pollsters—falls in line with such an approach. The rapid development of mass media, and research about those same media, were simply different sides of the same coin. In addition, since the media was constantly discussed within the public sphere by the nation’s intelligentsia, what emerges is a truly complex media history, in which academia was sometimes even marginalised. Then again, the question remains what media research at the time defined as its objects of study—that is, as media—and what was defined as research. In Sweden during the 1960s and 1970s, there were many different interpretations. And almost as many answers.