More Media, More People

During the autumn I have been working as a guest professor at Södertörn University, within a project run by my colleague Lars Degerstedt at the School of Natural Sciences, Technology and Environmental Studies. Together we are now writing an article on new forms of media intelligence and the various challenges facing the competitive intelligence business. The idea is to have a finished article in mid-January and to submit it to Nordicom Review. At present the article has the title: “More Media, More People—Conceptual Challenges for Social and Multimodal Data Driven Competitive Intelligence”. The introduction gives some hints of what we are trying to do, and the text starts like this:

Today, the amount of data produced in a single minute is mind-numbing. Streams—if not floods—of social and multimodal data consequently pose a pivotal challenge for companies within the competitive intelligence business. One of these, the computer software company Domo, has marketed itself as a service designed to provide direct, simplified, real-time access to business data without IT involvement. According to Domo, the contemporary data deluge shows no sign of slowing down. “Data Never Sleeps” has hence been the apt title of a series of infographics the company has released; the latest version, 3.0, was presented in August 2015. Much of what we do every day happens in the digital realm, Domo states. These activities leave an ever increasing digital trail “that can be measured and analysed”. Correspondingly, the infographic “Data Never Sleeps 3.0” revealed that every minute users liked a staggering 4,166,667 posts on Facebook, 347,222 tweets were sent on Twitter, 77,160 hours of video were streamed on Netflix, and 300 hours of video were uploaded to YouTube. Furthermore, 284,722 images were shared on Snapchat, and at Apple 51,000 apps were downloaded. Notably, these social data transactions occurred every minute, around the clock (Domo, 2015).

Sleepless data hence seems the perfect description of today’s global information landscape. Crowd- or community-based social media, in short, produce data flows that are both a blessing and a curse for competitive intelligence businesses. Handling new forms of social and multimodal data requires new skills—conceptually as well as technologically. Moreover, no data is error-free. On the contrary: a number of myths flourish within the contemporary hype of Big Data. So-called data cleansing, for example, always has to be performed before, say, the data depicted in Domo’s infographic can be analysed. The same data also has to be interpreted. All forms of information and media management within the competitive intelligence business basically follow the same pattern: data needs to be collected, entered, compiled, stored, processed, mined, and interpreted. And, importantly: “the final term in this sequence—interpretation—haunts its predecessors”, as Lisa Gitelman has stressed in the aptly titled book, Raw Data is an Oxymoron (Gitelman, 2013, 3).

With “each click, share and like, the world’s data pool is expanding faster than we comprehend”, the Domo infographic informs potential customers. At a Domo event prior to the launch of the infographic 3.0, the data artist—yes, that is how he describes himself—Jer Thorp stated that “not only are we doing more with data, data is doing more with us”. For consumers and business users alike, “improving our lives” thus requires a better understanding of what contemporary “interactions with data” actually mean, according to both Thorp and Domo. And naturally this is exactly what is being marketed: only Domo can help a business make sense of the “endless stream of data”. The company even has a business intelligence tool with the enticing name “Magic”, which lets customers “cleanse, combine and transform” their data. Data combinatorics provides greater insights, Domo asserts, and thus enables customers to see the whole picture. “Magic provides several intuitive tools to help you prepare your data”—especially if Magic is combined with the company’s presentational tool kit that “quickly interprets the data for you, and suggests how to visualize it for maximum impact and clarity” (Domo, 2015). In other words, Domo’s infographic is aesthetically pleasing for a reason. Within today’s competitive intelligence business, maximum impact simply requires Beautiful Data—which happens to be the title of a fascinating book by Orit Halpern. According to her, all data “must be crafted and mined in order to [become] valuable and beautiful” (Halpern, 2014, 5).

Domo is in many ways a successful American start-up, currently funded by venture capital, but also with a crystal-clear business plan. In a video demo, Domo states that its core idea revolves around “the future of business management”. The demo gives viewers an “exclusive look at Domo”, ending with the invocation: “what you need is a platform that brings your people and all the data they rely on together in one place.” In short, Domo is all about business intelligence as social data. Taking Domo’s video demo, beautiful infographic and sleepless data as a point of departure, the purpose of this article is to address similar challenges facing competitive intelligence in a gradually modified information landscape. When data structures information—what should be collected and analysed? If Domo promises its customers that the platform makes it “easy to see the information you care about”, how is data perceived and conceptualised (Domo, 2015)? In this article, we argue that data driven competitive intelligence—which is basically what companies like Domo do—particularly needs to pay attention to new forms of (A.) crowd-orientated and (B.) media-saturated information. If business intelligence has traditionally referred to a set of techniques and tools that transform textual data into useful information for business analysis, such techniques need to take into account that the media landscape has been altered in both a social and a non-textual direction.

If more data is better data (as some would have it), then more people creating more media should be understood in a similar way. This article will consequently start with some introductory remarks on the broader concept of “media intelligence”, and the ways in which competitive intelligence businesses have adapted to a transformed media environment—turned datascape. In the subsequent sections, the notions of “social competitive intelligence” and “media analytics” are used as two further concepts around which media intelligence evolves. Firstly, social competitive intelligence tries to understand how a changing information environment will impact organisations and companies by monitoring events, actors and trends. Information today doesn’t only want to be free—information wants to be social. If general usage of technology was once described with terms like social engineering, the linchpin of today’s culture of connectivity is social software. By presenting some findings from the so-called CIBAS project, we thus describe how organisations and companies increasingly rely on (more or less) (in)formal social networking structures and individual decision making as a means to increase rapid response and agile creativity. Secondly, if business analytics focuses on developing insights primarily from textual data and statistical methods, media analytics basically does the same—yet gives priority to audiovisual media streams, often with a slant of sociality; so-called social video, for example, is perceived as an increasingly important part of how businesses will use social media in years ahead. In our article we use “fashion analytics” as an example, gleaned from a commercial sector where audiovisual big data is currently in vogue. Finally, some concluding remarks are presented.

On digital Bildung in DN Kultur

Today I have published a text in DN Kultur on the different computational directions of “the archive”: I den digitala bildningen kan framtiden ersätta historien. It is an attempt to think through digital Bildung; the lead paragraph gives a rough idea of the article: “With digitisation and the collection of information, knowledge banks have emerged as a new kind of cultural base and cultural heritage. Pelle Snickars shows how big data will affect our view of Bildung in the future.”

On censorship in DN

Today I have published a comment in DN Kultur, Medieutredningens censur av min artikel är besynnerlig, occasioned by the removal of a text I wrote for the forthcoming research anthology of the Swedish Media Inquiry (Medieutredningen), on the grounds that I sit on the media industry’s public service commission. Censorship is never pleasant.

Addendum: during the day my post generated quite a bit of discussion. As a result, both Mittmedia and SR wanted to publish the censored article in full. SR, for example, writes about the matter here: Därför publicerar Medieormen Pelle Snickars artikel som stoppades av Medieutredningen. My article is entitled “Personifierad data, informationskonsumism och datahandlare. Inför en grön datahushållning” and runs to 9,000 words. It has now been published by Mittmedia here, Inför en grön datahushållning, and on Sveriges Radio’s Medieormen site here, Pelle Snickars om personifierad data, informationskonsumism och datahandlare. In the latter case, then, the situation is this: an article is censored by a state media inquiry because its author sits on a supposedly anti-public-service commission, whereupon the text is published by public service itself. A media irony if ever there was one.

Book number 17 finished – Massmedieproblem. Mediestudiets formering

Today we are launching the book Massmedieproblem. Mediestudiets formering (Mass Media Problems: The Formation of Media Studies). I have edited it together with Mats Hyvönen and Per Vesterlund. It is my seventeenth book publication. According to the back cover, it deals with the following:

During the 1960s, the concept of mass media became a new buzzword in Swedish public life. With the arrival of television, the rise of popular culture, and heated debates around film policy, the need for knowledge about the media came to seem acute. The so-called mass media problems were constantly rehearsed in public discussion.

The book Massmedieproblem – mediestudiets formering is about how Swedish media research was established. To a large extent this happened in a rather diffuse borderland between industry, politics and academia – one of the book’s central ideas. The expanding mass media seemed to offer promises as well as constant problems. The period between 1960 and 1980 was marked by a strong faith in political solutions to various social problems, and the nascent discipline of media studies and the public inquiry system therefore became intertwined early on. In the book, researchers from several generations and different disciplines examine Swedish knowledge production about the media during the 1960s and 1970s. History, however, casts long shadows forward in time; the past of Swedish media research continues to shape the formation of media studies. If the media research of the 1960s was driven by a zeitgeist – articulated in public debate and in the governance of media in collaboration with academia – the question is how things actually stand today. Is today’s media research in step with its time?

The book is CC-licensed and can be downloaded here.

Our “Spotify project” – an update

The research project “Streaming Heritage: ‘Following Files’ in Digital Music Distribution”, funded by the Swedish Research Council, is now in its second year. The project team consists of Pelle Snickars (project leader), Rasmus Fleischer, Anna Johansson, Patrick Vonderau and Maria Eriksson. The project is located at HUMlab, where developers Roger Mähler and Fredrik Palm do the actual coding. In short, the project studies emerging streaming media cultures in general, and the music service Spotify in particular (with a bearing on the digital challenges posed by direct access to musical heritage). Building on the tradition of ‘breaching experiments’ in ethnomethodology, the research group seeks to break into the hidden infrastructures of digital music distribution in order to study its underlying norms and structures. The key idea is to ‘follow files’ (rather than the people making or using them) on their distributive journey through the streaming ecosystem.

So far, research has focused on basically four broader areas: the history and evolution of streaming music in general and Spotify in particular (Fleischer), the politics of streaming aggregation and its effects on value and cultural production (Vonderau), the historical development of music metadata management and its ties to the kind of knowledge production and management that falls under the headline of ‘big data’ (Eriksson), and various forms of bot culture in relation to automated music aggregation (Snickars). One article has been published, and more preliminary results are to be presented in a number of upcoming articles and conferences during 2016. Eriksson, for example, recently submitted an article on how digital music distribution is increasingly powered by automated mechanisms that capture, sort and analyse large amounts of web-based data. The article traces the historical development of music metadata management and its ties to the field of ‘big data’ knowledge production. In particular, it explores the data catching mechanisms enabled by the Spotify-owned company The Echo Nest, and provides a close reading of parts of the company’s collection and analysis of data regarding musicians. In a similar manner, Johansson and Eriksson are exploring how music recommendations are entangled with fantasies of, for example, age, gender, and geography. By capturing and analysing the music recommendations Spotify delivers to a selected number of pre-designed Spotify users, the experiment sets out to explore how the Spotify client, and its algorithms, are performative of user identities and taste constellations. Results will be presented at various conferences next year. In addition, Snickars has continued working with the HUMlab programmers on various forms of “bot experiments”. One forthcoming article focuses on the streaming notion of “more music”, and an abstract for the upcoming DH conference in Kraków (during the summer of 2016) is entitled: “SpotiBot—Turing testing Spotify”.
It reads as follows, and gives an indication of the ways in which the project is being conducted:

Under the computational hood of streaming services all streams are equal, and every stream thus means (potentially) increased revenue from advertisers. Spotify is hence likely to include—rather than reject—various forms of (semi-)automated music, sounds and (audio) bots. At HUMlab we therefore set up an experiment—SpotiBot—with the purpose of determining whether it was possible to provoke, or even to some extent undermine, the Spotify business model (based on the 30-second royalty rule). Royalties from Spotify are only disbursed once a song is registered as a play, which happens after 30 seconds. The SpotiBot engine was used to play a single track repeatedly (both self-produced music and Abba’s “Dancing Queen”), for both less and more than 30 seconds, with a fixed repetition scheme running from 10 to n times, simultaneously from different Spotify accounts. Based on a set of tools provided by Selenium, the SpotiBot engine automated the Spotify web client by simulating user interaction within the web interface. From a computational perspective the Spotify web client appeared as a black box; the logics governing the Spotify application were, for example, not known in advance, and the web page structure (in HTML) and client-side scripting were quite complex. It was not feasible within the experiment to gain a fuller understanding of the dialogue between the client and the server. As a consequence, the development of the SpotiBot experiment was (to some extent) based on trial and error regarding how the client behaved, and what kind of data was sent from the server for different user actions. Using a single virtual machine—hidden behind only one proxy IP—the results nevertheless indicate that it is possible to automatically play tracks for thousands of repetitions that exceed the royalty rule. Even if we encountered a number of problems and deviations that interrupted the client execution, the Spotify business model can, in short, be tampered with.
In other words, one might ask what happens when—not if—streaming bots approximate human listener behaviour in such a way that it becomes impossible to distinguish between a human and a machine. Streaming fraud, as it has been labelled, then runs the risk of undermining the economic revenue models of streaming services such as Spotify.
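The economics the abstract probes can be reduced to a simple threshold model. The sketch below is our own illustration, not Spotify's actual accounting code: the 30-second cut-off is the only detail taken from the text, while the function name and repetition counts are hypothetical.

```python
# Illustrative model (not Spotify's actual accounting): a play only
# counts toward royalties once at least 30 seconds have streamed.
ROYALTY_THRESHOLD_SECONDS = 30

def registered_plays(play_durations):
    """Count how many plays cross the 30-second royalty threshold."""
    return sum(1 for d in play_durations if d >= ROYALTY_THRESHOLD_SECONDS)

# A bot that stops each repetition at 29 seconds generates no royalty
# events, while one that plays 31 seconds per repetition registers
# every single play.
bot_short = [29] * 1000   # 1000 repetitions just under the threshold
bot_long = [31] * 1000    # 1000 repetitions just over the threshold

print(registered_plays(bot_short))  # 0
print(registered_plays(bot_long))   # 1000
```

The asymmetry is the point of the experiment: from the service's side both bots consume comparable bandwidth, yet only the second one triggers payouts, which is why automated plays just beyond the threshold can tamper with the business model.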

Finally, during the following weeks the project group will give presentations in the U.S. The first one is called “Spotify Teardown”, and consists of a project presentation and roundtable at the Center for Information Technology and Society at the University of California, Santa Barbara. On the one hand, the presentation will focus on methodology, background research and preliminary findings; on the other, it will try to initiate a discussion around three focused areas: (1.) “Ethical and Legal Limitations”: What are the ethical/legal issues that arise in relation to activist projects, and how to tackle them? (2.) “Metaphors for Research”: What metaphors are useful, or more useful than conventional metaphors such as “platform” or “platform responsibility”? and (3.) “New Qualitative Methods and Old Disciplinary Frameworks”: What are the key challenges of working with qualitative, inter- and transdisciplinary methods in institutional environments? In addition, Pelle Snickars will give another project presentation in New York at CUNY (The City University of New York) at the conference “Digging Deep: Ecosystems, Institutions and Processes for Critical Making”.

Robots take over advertising

Today I am interviewed in Göteborgs-Posten about what I somewhat loosely call “the automated public sphere”. “That you receive less, and new kinds of, spam does not mean that digital advertising messages are decreasing. The senders increasingly operate under cover,” the lead paragraph reads. The interview is fairly short – and can be read here: Robotar tar över reklamen.

Who will tame the internet?

Today I review Andrew Keen’s new book, Internet är inte svaret (The Internet Is Not the Answer), in DN; it opens like this: “In the infancy of the web, the science fiction author William Gibson predicted that the future is already here – ‘it’s just not very evenly distributed’. It is a stylish one-liner, and it serves as a kind of motto for Andrew Keen’s new book. As an author and IT entrepreneur, he has over the past decade made an international career as perhaps the most unforgiving net sceptic of all.” The review can be read here: Vem ska tämja internet?.

The death of the figure of publishing

Today I lectured at the art academy (Konsthögskolan) on what I, somewhat speculatively, have called the death of the figure of publishing. The idea was to use the term “publishing” as a plastic and dynamic concept that can very much be used to understand “the digital”. In the lecture I therefore work through the concept of “digital publishing” – understood in a very broad sense – from three different media perspectives, all connected to media research I have pursued or am currently working on: 1. Multimedia publishing & remix, 2. Illegal publishing as preservation, and 3. Publishing as an explorative and investigative method. The lecture can be downloaded as a PDF here: snickars_konsthögskolan_2015.

New book finally finished – Universitetet som medium

The research anthology Universitetet som medium (The University as a Medium) – edited by Matts Lindström and Adam Wickberg Månsson, both at Stockholm University – is, after an all too long production period, finally finished. The book arrived from the printer the other day and, in line with the policy of the book series Mediehistoriskt arkiv, is freely available for download. The back cover gives a hint of what it is all about:

Who is the humanist of the future if empirical material is being replaced by Big Data, and interpretation complemented by algorithms? As a consequence of the digitisation of society, the tradition-laden institutions of the university seem to be changing. For some time now there has been ever more talk of digital humanities laboratories and of a digital humanities taking its place alongside the old. If the university is a medium built on other media – from parchment, lecture halls and the printing press to today’s internet and computers – do new media also mean a new university? Taking questions like these as its point of departure, this book gathers a number of texts that seek to illuminate the university’s media-material complex – through historical case studies as well as contemporary reflection. In four thematic sections, researchers from different disciplines discuss topics such as the physical place of academia in relation to new digital practices, the university’s historical and contemporary media systems, and the role of words and language, while also introducing new critical perspectives on the university’s long history. Together, the twelve contributions sketch an outline of the university’s long and complex past – as well as its rich present and multifaceted future.

I was the one who, way back when, initiated this book project – and persuaded Matts Lindström and Adam Wickberg Månsson, despite intensive doctoral studies, to devote time and energy to it. It is a relief that the book is finally finished. I have myself contributed an article to the book, entitled “Publikationshack”.

The book can be downloaded here: universitetet_som_medium

PITHON – Hera Research Proposal

Today – together with a number of European colleagues (notably Eggo Müller and Andreas Fickers) – we finalised our HERA project proposal (which previously made it to the second round). PITHON is the acronym: Pirated Television Heritage: Online Video, Counter-narratives and European Identity. The summary reads as follows:

PITHON is steered by the following research question: How do online video remix cultures engage with Europe’s past, and how do these practices contribute to the popular history and memory of Europe? In the past decade, a massive body of audiovisual heritage has become accessible online on the websites of archives and on video sharing sites such as YouTube and Vimeo. PITHON is the first project to investigate the re-use of television heritage from a European perspective. So far, research has focused on institutionally legitimised forms of re-use, as in historical television documentaries, museum exhibitions or on the websites of European initiatives. However, re-use today includes new popular forms such as playlists, memes, spoofs, remixes and mashups, creating parodies and counter-narratives to established forms of European history. PITHON investigates the cultural dynamics and political power of pirated and re-circulated television heritage, combining novel computational methods such as video fingerprinting and digital historiography with discourse analysis and the (virtual) ethnography of remix practices. Whereas the content industry seeks to control piracy, PITHON will use digital methods to analyse the contribution of remix cultures to European history and memory. PITHON’s research team is formed by researchers affiliated with the Centre for Television in Transition (Utrecht University), the Institute for Contemporary History (Prague), DigHumLab (Umeå University) and the Digital History Laboratory (University of Luxembourg). The project cooperates with major European initiatives in the archival and educational sector: the portal for European audiovisual heritage EUscreen.eu, the European digital library Europeana.eu and the European Association of History Educators EUROCLIO.eu. The outcomes of the project will contribute to a deeper understanding of the dynamics of participatory engagement in European history, memory and identity, and to new methods in the Digital Humanities. It will also provide customised tools for online video recognition and digital source critique for professionals in the archival and educational field.

Umeå university and HUMlab will be in charge of work package 1 – Tracking & Tracing Pirated TV Heritage. According to the description, we are going to do the following:

The aim of WP1 is to accomplish the PITHON objectives by delivering a technical solution through software set-up and a designated video comparison platform. By tracking and tracing the flow of video content online and the circulation of reused audiovisual heritage, researchers in the humanities gain a better understanding of the socio-cultural ‘life of data’ in general, and of the different ways in which footage from the past is reused on video sharing sites in particular. WP1 is geared towards studying the cultural significance of code, using ideas from the emerging fields of digital artifact and critical code studies (Chun 2011; Montfort et al. 2012; Manovich 2013; Casemajor 2014). The underlying rationale of WP1 is to provide the technical solution that allows various research issues to be explored and executed in the subsequent WPs. Methodologically, WP1 strives to follow the medium’s evolving digital methods (Rogers 2014), by repurposing technical solutions and building upon already existing applications and software. The digital humanities centre HUMlab (UMU) is responsible for platform design, whereas tools for video tracking, tracing and matching will be supplied by open-source software (project Squid by Kennisland) and third-party software (INA Signature). Technically, the work will consist of several steps. Firstly, a set of video files, footage and collections of interest in relation to the PITHON objectives is identified, in collaboration with the online collections of EUscreen partners. Since video processing is data-intensive and requires adequate hardware, supplied by HUMlab, the identification process for the content is decisive. Scholarly input and knowledge about participatory video cultures is essential; there are thus a number of fundamental overlaps between the work of WP1, WP2 (UU) and WP3 (UL). Secondly, the selected video files will be downloaded from a video repository (such as EUscreen) for local fingerprinting.
Thirdly, these files are processed using (a) the open source tool set Squid and (b) the third-party commercial software Signature, which create a fingerprint for each file. These highly compressed fingerprints are then stored locally at HUMlab. Fourthly, a specific but major collection of videos is identified online at the YouTube channels of EUscreen partners, which contain previously fingerprinted footage. WP1 is also responsible for capturing, processing and fingerprinting these videos and/or collections. The final phase of the work in WP1 consists of a comparison using open source software such as Squid, or licensed access to an INA cloud service, which does the actual matching between video fingerprints. A positive match indicates that the online content contains the sequences of reused old footage which we are seeking. These will then be further analysed and researched in consecutive WPs.
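The final matching step can be illustrated with a toy comparison. Production tools such as Squid or INA Signature compute far more robust perceptual fingerprints; the sketch below only assumes that a fingerprint is a compact bit string and that a match is declared when two signatures differ by at most a small number of bits. The function names and the `max_bit_errors` threshold are our own illustrative choices, not part of any of these tools' APIs.

```python
# Toy illustration of fingerprint matching: two video fingerprints are
# compared bit by bit, and a small Hamming distance indicates that the
# online clip likely reuses the archived footage.
def hamming_distance(fp_a: bytes, fp_b: bytes) -> int:
    """Number of differing bits between two equal-length fingerprints."""
    return sum(bin(a ^ b).count("1") for a, b in zip(fp_a, fp_b))

def is_match(fp_a: bytes, fp_b: bytes, max_bit_errors: int = 8) -> bool:
    # A positive match suggests the online video contains reused footage.
    return hamming_distance(fp_a, fp_b) <= max_bit_errors

archived = bytes([0b10110010, 0b01011100, 0b11110000])
reupload = bytes([0b10110010, 0b01011101, 0b11110000])  # 1 bit differs
print(is_match(archived, reupload))  # True
```

The tolerance threshold matters because re-uploaded footage is rarely bit-identical: re-encoding, cropping and watermarking perturb the signal, so matching has to be approximate rather than exact.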