Data-driven humanities research – workshop at KTH on 15/9

Together with Professor Patrik Svensson (currently a visiting professor at UCLA), I am organising a workshop at KTH on Friday 15 September on data-driven humanities research. From the programme:

“In recent years, digitally driven humanities research has shown the potential to interweave critical and technological perspectives, where the point is neither merely to handle data and build tools, nor merely to study digital phenomena from a distance. In this context, digital humanities can be seen as a humanistic project driven by a core of (more or less self-declared) digital humanists, but one in which researchers from other humanities disciplines, various border areas, the cultural heritage institutions and data-driven disciplines outside the humanities are also absolutely central. […] We hope this will be a thought-provoking and productive workshop marked by engagement, curiosity and vigour, as well as intellectual and institutional momentum. That humanities research, education and practice are developed, challenged and made newly relevant by the digital in a broad sense is almost a given. This concerns materials, tools, research infrastructure, perspectives, research questions, modes of expression and objects of study. Here the field of digital humanities plays an important role, while digitally related research and experimental practice have increasingly become a concern for most humanities disciplines. Digitised material at cultural heritage institutions is also central in this context. The digital society is an important national issue, and the government has, among other things, prioritised data-driven research in its research bill.”
The event is open to everyone (first come, first served) – programme, description and registration link can be found in these two PDFs: Program workshop datadriven humanistisk forskning 15 sept 2017 and Inbjudan datadriven humanistisk forskning KTH 15 sept 2017

Interviewed about transmedia storytelling in Resumé Insikt

A while ago I did an interview with the journalist Julia Lundin at Resumé Insikt about transmedia storytelling (a subject I wrote a book about many years ago, Berättande i olika medier). The feature is now available online, focusing on the Norwegian hit series Skam. “Transmedia storytelling is a story too big for a single medium – one that gains an extended life across other platforms. The concept is by no means new. Star Wars, The Matrix and Dr Who are all examples of transmedia. Media professor Pelle Snickars goes as far back as 1678 and John Bunyan’s classic The Pilgrim’s Progress (Kristens resa), which has a fascinating media history.” Resumé Insikt usually sits behind a paywall, but Lundin’s feature is openly available: (Skam)lig succé.

On Turing and Bots

In mid-May 1951, Alan Turing gave one of his few talks on the BBC’s Third Programme. The recorded lecture was entitled “Can Digital Computers Think?”. By the time of the broadcast, a year had passed since the publication of Turing’s now famous Mind article, “Computing Machinery and Intelligence”, with its thought-provoking imitation game (Turing 1950). The BBC programme—stored on acetate phonograph discs prior to transmission—was approximately 20 minutes long and basically followed the arguments Turing had proposed in his earlier article. Computers of his day, in short, could not really think and therefore could not be called brains, he argued. But digital computers had the potential to think, and hence might in the future be regarded as brains. “I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine”, Turing said. He was imagining something like “a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated” (Turing 1951).

The irony is that Alan Turing’s own voice is lost to history; there are no known preserved recordings of him. The acetate phonograph discs from 1951 are all gone. The written manuscript of his BBC lecture, however, can be found in the collection of Turing papers held at King’s College, Cambridge—partly available online (Turing Digital Archive 2016). The BBC also made a broadcast transcript, taken from the recording shortly after the programme was aired. As Alan Jones has made clear, Turing’s radio lecture was part of a series the BBC had commissioned under the title “Automatic Calculating Machines”. In five broadcasts, an equal number of British pioneers of computing spoke about their work. The fact that these talks were given by the engineers themselves, rather than by journalists or commentators, was “typical of the approach used on the Third Programme”. Naturally, it is also “what makes them particularly interesting as historical sources” (Jones 2004). Then again, Jones was only able to examine surviving texts of these broadcasts. Consequently, there is no way to scrutinize or explore Turing’s oral way of presenting his arguments. His intonation, pitch, modulation et cetera are all lost, and we cannot know how Turing actually spoke. Perhaps he simply presented and talked about his ideas in an unremarkable way. Yet, according to the renowned Turing biographer Andrew Hodges, the producer at the BBC had his doubts about Turing’s “talents as a media star”—and particularly so regarding his “hesitant voice” (Alan Turing Internet Scrapbook 2016). The point to be made is that audiovisual sources from the past often tend to be regarded as textual accounts. By and large, audiovisual sources have also been used by historians to a far lesser degree than classical (textual) documents. Sometimes—as in the case of Turing—archival neglect is the reason, but more often humanistic research traditions stipulate what kind of source material to use. In many ways, the same holds within the digital humanities.

In theoretical physics, the concept of fine-tuning refers to circumstances in which the parameters of a theory (or model) need to be adjusted in order to agree with observations. In essence, a substantial part of our ongoing Spotify project—where we repeatedly work with bots—has been about fine-tuning the classical Turing test, a test both highly influential and widely criticized. With their focus on the deceptive qualities of technology—particularly regarding the difference between man and machine—a number of the notions proposed in Turing’s essay “Computing Machinery and Intelligence” have never really lost their relevance. The imitation game, Turing stated in his 1950 essay, “is played with three people, a man (A), a woman (B), and an interrogator (C)”. The object of the game was for the interrogator to determine “which of the other two is the man and which is the woman”. Already at the beginning of his essay, however, Turing asked what would happen if “a machine takes the part of A in this game?” As N. Katherine Hayles famously put it, gender hence appeared at the “primal scene” of humans meeting their potential evolutionary successors, the machines (Hayles 1999). Still, following her interpretation of Turing, the ‘gender’, ‘human’ and ‘machine’ examples were basically meant to prove the same thing. Aware that one of the two participants (separated from one another) was a machine, the human evaluator would simply judge a natural language conversation (limited to a text-only channel) between a human and a machine designed to generate human-like responses. If the evaluator could not reliably tell the machine from the human, the machine was said to have passed the test. It might then be termed artificially intelligent.
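To make this fine-tuning concrete, the test protocol can be paraphrased in a few lines of code. The sketch below is only illustrative: the Responder interface, the canned proxy answers and the random judge are assumptions of mine, not anything found in Turing (1950); the point is merely that the evaluation runs over a text-only channel and is scored on whether the interrogator can reliably identify the machine.

```python
# A minimal sketch of a Turing-style, text-only evaluation loop (illustrative only).
import random

class Responder:
    def reply(self, question: str) -> str:
        raise NotImplementedError

class HumanProxy(Responder):
    def reply(self, question: str) -> str:
        return "I would have to think about that for a moment."

class MachineProxy(Responder):
    def reply(self, question: str) -> str:
        # A machine meant to generate human-like, typewritten answers.
        return "I would have to think about that for a moment."

def imitation_game(judge, questions, rounds=20):
    """Return the share of rounds in which the judge identifies the machine."""
    correct = 0
    for _ in range(rounds):
        labels = ["A", "B"]
        random.shuffle(labels)                     # seat the machine as A or B at random
        players = {labels[0]: MachineProxy(), labels[1]: HumanProxy()}
        machine_label = labels[0]
        # The judge only ever sees typewritten answers: a text-only channel.
        transcript = {label: [p.reply(q) for q in questions]
                      for label, p in players.items()}
        guess = judge(transcript)
        correct += (guess == machine_label)
    return correct / rounds

if __name__ == "__main__":
    questions = ["Please write me a sonnet on the subject of the Forth Bridge.",
                 "Add 34957 to 70764."]
    # A judge who does no better than chance: the machine "passes" the test.
    score = imitation_game(lambda transcript: random.choice(["A", "B"]), questions)
    print(f"Judge identified the machine in {score:.0%} of rounds")
```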

Towards the end of Turing’s article, the initial question, “Can machines think?”, was consequently replaced by another: “Are there imaginable digital computers which would do well in the imitation game?” (Turing 1950). Naturally, Turing thought so—and only 15 years later the computer scientist Joseph Weizenbaum programmed what is often regarded as the first bot, ELIZA. She (the bot) had two distinguishing features that usually characterize bots: intended functions built in by the programmer, and a set of algorithmic and machine learning abilities responding to input. In his article, “ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine”, Weizenbaum stated that “the program emulated a psychotherapist when responding to written statements and questions posed by a user. It appeared capable of understanding what was said to it and responding intelligently, but in truth it simply followed a pattern matching routine that relied on only understanding a few keywords in each sentence.” (Weizenbaum 1966). ELIZA was hence a mock psychotherapist—and online today it is still possible to interact with her. In 2005, Norbert Landsteiner reconstructed ELIZA through the implementation elizabot.js: “Is something troubling you?”, the bot always starts by asking. Later, Landsteiner added graphics, real-time text and even speech integration: “E.L.I.Z.A. talking”—complete with both American and British intonation (Landsteiner 2013).
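The kind of pattern matching routine Weizenbaum describes can be gestured at in a handful of lines. What follows is a rough sketch in the spirit of ELIZA and of Landsteiner’s elizabot.js; the keyword rules and response templates are invented for illustration and are not Weizenbaum’s original script.

```python
# A toy ELIZA-style responder: match a few keywords, reflect the user's words
# back as a question, and fall back on stock phrases when nothing matches.
import random
import re

RULES = [
    (r"\bmother|father|family\b", ["Tell me more about your family."]),
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Do you often feel {0}?"]),
    (r"\bcomputer|machine\b", ["Do computers worry you?"]),
]
DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

def respond(statement: str) -> str:
    """Return a reply based on the first matching keyword rule."""
    for pattern, templates in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            fragment = match.groups()[0] if match.groups() else ""
            return random.choice(templates).format(fragment.rstrip(".!?"))
    return random.choice(DEFAULTS)      # no keyword "understood" at all

if __name__ == "__main__":
    print("Is something troubling you?")
    for line in ["I am unhappy.", "My mother takes care of me.", "Goodbye."]:
        print(">", line)
        print(respond(line))
```

The appearance of understanding rests entirely on a few regular expressions and some canned templates, which is precisely the artifice Weizenbaum pointed to.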

The element of artifice programmed into ELIZA again testifies to the deceptive qualities of technology (which the Turing test underlined). In fact, ever since, fraudulence (in one form or another) seems to have been a distinct part of an evolving bot culture that constantly capitalises on advancements in artificial intelligence. When Weizenbaum decided to name his bot ELIZA, he did so with explicit and ingenious reference to the flower girl Eliza Doolittle in George Bernard Shaw’s play Pygmalion (1913), as well as—one might assume—to the more recent Hollywood musical adaptation, My Fair Lady (1964). The ancient Pygmalion myth—in Ovid’s poem Metamorphoses, Pygmalion was the sculptor who fell in love with his own statue—has often been artistically deployed to examine humans’ ability to ‘breathe life’ into, for example, a man-made object. In other words, the myth belongs to the domain of artificial humanity: a copy of something natural, with Shaw’s play acting as an ironic comment on class society. His play was about a bet in which a phonetics professor claimed that he could train a flower girl (Eliza Doolittle) to pass for a duchess at a garden party, learning impeccable speech and cultivated behaviour without real understanding (like a bot). Or as Weizenbaum declared: “Like the Eliza of Pygmalion fame, [ELIZA] can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright” (Weizenbaum 1966).

Bots appear to be human—which is why they are interesting. Bots give the impression of being able to act as a normal user and/or person. If they could (almost) pass for humans half a century ago (like ELIZA), the possibilities of such intelligent machines (or rather software robots) have since naturally increased. Today, the most sophisticated bots react instantly to public information, like the advanced algorithmic bots on the stock option market. They seem almost like disembodied cyborgs, part human and part automaton. Nevertheless, bot culture, artificial intelligence and ultimately the Turing test have naturally also been criticized. The latter has, for example, been deemed not particularly useful for determining whether a machine can think (or not)—most famously so by John Searle in his 1980 article, “Minds, Brains, and Programs”. Via his “thought experiment” with himself “locked in a room and given a large batch of Chinese writing”, such text was to Searle just “so many meaningless squiggles”. As is well known, Searle’s experiment also involved him being given “a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules”, he stated. Via these rules, but without understanding a word (or sign) of Chinese, it would theoretically be possible, Searle argued, to appear fluent in Chinese. Searle, in short, thought of himself as a bot. In fact, at the beginning of his article he made an explicit reference to ELIZA, stating that it could pass the Turing test simply by manipulating symbols of which ‘she’ had no understanding. “My desk adding machine has calculating capacities, but no intentionality”, he summed up. Searle hence tried to show that a computational language system could, in fact, have “input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed.” Famously, he concluded: “The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic” (Searle 1980).
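Searle’s rule-following scenario can likewise be caricatured as a simple lookup. The toy rule book in the sketch below is entirely my own invention (and absurdly small), but it captures the behaviourist point: the output may look fluent to an outside observer while the system merely correlates one batch of symbols with another, understanding none of them.

```python
# A toy "Chinese Room": the person in the room follows rules written in English
# (here, Python) and produces plausible Chinese output without understanding it.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def person_in_room(squiggles: str) -> str:
    """Correlate incoming symbols with outgoing ones by rule, nothing more."""
    return RULE_BOOK.get(squiggles, "对不起，我不明白。")   # "Sorry, I do not understand."

if __name__ == "__main__":
    for note in ["你好吗？", "你会说中文吗？"]:
        print(note, "->", person_in_room(note))
```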

Hayles 1999. Hayles, N. K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Jones 2004. Jones, A. Five 1951 BBC Broadcasts on Automatic Calculating Machines. IEEE Annals of the History of Computing 26(2), 3–15.

Landsteiner 2013. Landsteiner, N. E.L.I.Z.A. Talking. Available at: http://www.masswerk.at/eliza/

Searle 1980. Searle, J. Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 417–424.

Turing 1950. Turing, A. Computing Machinery and Intelligence. Mind 59, 433–460. Available at: http://www.csee.umbc.edu/courses/471/papers/turing.pdf

Turing 1951. Turing, A. Can Digital Computers Think? Annotated typescript of a talk broadcast on the BBC Third Programme, 15 May. Available at: http://www.turingarchive.org/browse.php/B/5

Turing Digital Archive 2016. Available at: http://www.turingarchive.org/

Weizenbaum 1966. Weizenbaum, J. ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM 9(1), 36–45. Available at: http://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf

MIT Press manuscript slowly emerging

Within our research project on Spotify, we are currently writing a collaborative book manuscript to be delivered to MIT Press before the summer. Things are proceeding, albeit somewhat slowly. I am presently in charge of Chapter 2, which takes a closer look at how files become music on Spotify. The text will be thoroughly edited, but the first pages currently read as follows:

Spotify paints it black. This short message was announced on the Spotify company blog in January 2015—with the promise to bring “Windows Phone users the best-looking Spotify ever.” With the introduction of a darker theme, including refreshed typography and rounded iconography, playing your favourite music had “never looked so good”, the blog post argued. With its “refined interface”, the dark theme “lets the content come forward and ‘pop’, just like in a cinema when you dim the lights.”

Interfaces indeed pop forward—and in doing so hide all the infrastructure behind them. Consequently, the story of music services (like Spotify), or basically of any platform or service, typically accentuates and gives prominence to shiny, touchable surfaces—which constantly seem to be updated with fancy features. Still, graphical user interfaces (GUIs) are not only designed to look (and haptically feel) good; they are also, somewhat paradoxically, made to disappear from our perception. Just like cinema-goers looking at the screen, viewers/users should ideally forget about the mediating mechanisms—in this case, how files become music—and instead willingly enter into a frictionless, coded diegesis of smooth and endless sounds.

Of course, listeners are usually aware (in one way or another) of the technology and infrastructural framework behind the interface, whether they use a smartphone, a tablet or a computer (with their different semblances). After all, experiencing music as software differs from listening to a CD or an LP—not least since the ‘lean-back experience’ is far less prominent. Online input is always needed. Active listeners are accordingly familiar with the demands of the service and the ways in which Spotify summons its users: “Know what you want to listen to? Just search and hit play … Go get the music … Check out … discover new tracks, and build the perfect collection” [our italics]. As Jeremy Wade Morris has argued—and explicitly discussed at length towards the end of his book, Selling Digital Music, Formatting Culture (2015)—music as software has introduced a new “technological relationship” to processes of searching and discovering, listening and liking, exchanging or buying music. When music at streaming services is coded and redefined as a purely data-driven communication form—with, on the one hand, content (as audio files and metadata) being aggregated through various external intermediaries and, on the other hand, user-generated data being extracted from listening habits—the singularity of the music experience is transformed and blended into what Morris has termed “a multimediated computing experience.”

Today’s multimediated and exceedingly computational experience of listening to music takes on different, and sometimes personalised, forms. Nevertheless, in order to understand the logic and rationale of streaming music services such as Spotify, one should not shy away from, but rather ask, what exactly happens when data is turned into music—and vice versa. That is: what occurs and takes place beneath the black shiny surface of, say, the Spotify desktop client, with its green and greyish interface details and white fonts and textures? Research on the cultural implications of software—whether in the form of software studies, digital humanities, platform studies or media archaeology—has repeatedly stressed the need for in-depth investigations of how computing technologies work, combined with (more or less) meticulous descriptions of technical specificities. Our analyses of Spotify resemble such media-specific readings of the computational base—that is, the mathematical structures underlying various interfaces and surfaces—and hence resonate with media-scholarly interests in technically rigorous ways of understanding the operations of material technologies.

A first thing to note when going under the hood, however, is that the Spotify infrastructure is hardly a uniform platform. Rather, it is traversed by data flows, file transfers and information retrieval in all kinds of directions—be they metadata traffic identifying music, aggregation of audio content, playout of streaming audio formats (in different quality ratings), programmatic advertising (modelled on finance’s stock exchanges) or interactions with other services (notably social media platforms). Spearheading the new data economy of the 21st century, Spotify resembles a sprawling network of interaction that includes musicians and listeners alongside other actors and interests that have little to do with cultural commodities or media markets in a traditional sense. The constant data exchanges that occur—ranging from interactions with social media to car manufacturers—are all located elsewhere, outside the so-called platform of Spotify. We find that notion troublesome, and instead prefer to describe Spotify as an evolving and open-ended data infrastructure, even if, perhaps needless to say, Spotify does not provide an open infrastructure for music listening. But since media environments “increasingly essential to our daily lives (infrastructures) are dominated by corporate entities (platforms)”, there exists today a scholarly tension between these two concepts as modes (or models) of critical examination. Platform studies have repeatedly acknowledged the dual nature of commercial platforms: YouTube, Facebook, Twitter and the like support innovation and creativity—but they also regulate and curb participation, with the ultimate goal of producing profit for platform owners. In short, platform affordances simultaneously allow and constrain expression. Spotify, however, differs from traditional ‘web 2.0’ platforms. Content-wise, it is a service geared towards and catering to record labels and artists, one that seeks to provide a regulated and commercialized streaming service for professional music rather than a semi-open platform for user-generated content. There is, after all, a difference between Spotify and SoundCloud. As a consequence, we find the term platform both problematic and inadequate, and have hence refrained from using it in this book.

Lecture on libraries and digitisation at Regionsförening sydost’s annual meeting in Karlskrona

On Monday I will give a lecture in Karlskrona at the annual meeting of Regionsförening sydost, the south-east regional branch of the Swedish Library Association (Svensk biblioteksförening). I have called the whole thing “Vad är ett bibliotek?” (“What is a library?”) and plan to focus on three areas: (1) what and where is a library today?, (2) libraries and digitisation: how is it actually going?, and (3) libraries and Bildung – on an ongoing discussion. My slides are now more or less finished and can, for those interested, be downloaded as a PDF: snickars_karlskrona_2017.

Kunskap om medier 1960–2020 (“Knowledge about media 1960–2020”) – draft of a forthcoming application to the Swedish Research Council

Together with my colleagues Mats Hyvönen (Uppsala University) and Per Vesterlund (University of Gävle), I am currently working on a larger application to the Swedish Research Council (Vetenskapsrådet), directed at its educational sciences programme. “Kunskap om medier 1960–2020” is the working title, and the project application builds on the book we put together a year or so ago: Massmedieproblem. At the time of writing, the first pages of the application read as follows:


In November 2016, the Media Inquiry (Medieutredningen) delivered its final report to the Minister for Culture. At a time when digitisation and technological development had undermined earlier business models, the media industry was looking forward to progressive proposals that would hopefully rescue an increasingly vulnerable sector in crisis. The final report, En gränsöverskridande mediepolitik (SOU 2016:80), also contained an analysis of the Swedish media market, as well as a number of conclusions and proposals on the future direction of media policy. A new media subsidy was proposed – and in principle the report can be summarised as stating that, in the name of democracy, the state and society must protect their mass media. In the inquiry, media were defined mainly as public news reporting. A few months later it was time for other ministers (the Minister for Upper Secondary School and Adult Education and the Minister for Housing and Digital Development) to give their view on how the same technological development and digitisation had changed working life and society – but now from a diametrically opposite perspective. In March 2017, the government decided to clarify and strengthen the curricula for compulsory and upper secondary school with a focus on increased digital competence. “Social studies will include how digitisation affects society. We are also strengthening the passages on media and information literacy”, one minister pointed out. “Schools must become better at preparing pupils to function as citizens at a time when digitisation is transforming society”, said another (Regeringen 2017). Pupils – and by extension the country’s citizens, indeed society itself – appear increasingly vulnerable and exposed to, for example, fake news; they must therefore be protected from the media.

In a general sense, educational sciences concern learning and knowledge formation in society at large. How knowledge is created and formed is a complicated societal process. Using a broad educational-science lens, the question we pose in this project application is where knowledge about media actually arises. Media development and knowledge about it have always interacted, but knowledge about media is also implemented, developed further and circulated in a range of societal contexts and public spheres. It may seem paradoxical that our time is characterised by notions that society both needs to protect its mass media – and needs to be protected from them. In media-historical terms, the late twentieth century can be described as the emergence of pluralistic media as a kind of foundation for democratic society. But media have at the same time always been regarded as dangerous, not least for children and young people – from the early ‘cinema menace’ debate (biografeländet), via the commercially mendacious ideals of the illustrated weekly press, to the video violence of the 1980s. Various kinds of media education have therefore been included in school curricula. Now, in the age of Trumpism, it is no longer the distorted ideals of the media that appear most hazardous; rather, the danger lies in how media spread false news and facts. An updated digital school policy therefore stresses the need for knowledge about how Facebook’s and Google’s algorithms distort the news flow, while these American internet giants also take the national media industry’s advertising money. Our traditional mass media (press, radio and television) must therefore be safeguarded – hence the need for a far-sighted media policy.

One hypothesis in this deliberately broad project application is that the attitudes of various national actors and institutions towards media have often been formulated in terms of threat, support and protection. Media studies are triggered by perceived dangers, where society and its citizens should (repeatedly) be protected from the problems posed by a new, unsurveyable media landscape. This can be done through restrictions on certain media (such as film censorship) – or through support for others (such as press subsidies). Observing how the justifications for and against these restrictive and supportive practices have interacted over time – in terms of ideas as well as in political practice (where they have often been set against each other) – is a basic figure of thought in the application. On an educational-theoretical level, Mediestudiets formering i Sverige (the formation of media studies in Sweden) is therefore also about how knowledge arises in society over time.

Media-related fears circulated early in the twentieth century, but with the breakthrough of television as a new medium around 1960 the mass media problems accelerated (Hyvönen, Snickars & Vesterlund 2015). The compulsory school curriculum of 1962, for example, noted “the advance of the mass media” and how these had “gained increased importance”, which is why pupils should “be instilled with good listening and viewing habits” (Lgr 62, 47). With the arrival of television alongside press and radio, with the upswing of popular culture, and with the heated debates surrounding film policy, there was a need for a language that could describe the multifaceted media landscape of the modern Swedish welfare state. It is precisely during this period that the concept of massmedia became a new buzzword in Swedish public life. The mass media concept was of course not Swedish – nor was it new. But it filled an important function when new questions were to be formulated, in public debate, science, schooling, politics and the media industry alike. The politics of the media society tended to seize on the formative functions of communication. The media had the task of providing citizens with the knowledge and information they needed – as well as with entertainment and diversion. But the mass media could also arouse and shape public opinion, and by extension even influence the shape of society as a whole. It was precisely for this reason that they constituted a problem – for society, for culture and for the school. An illuminating example is the succession of Swedish school curricula, in which it is easy to note which mass media pupils were to be given knowledge about, and not least which of them entailed risks and dangers.

The aim of the application is partly to map knowledge formation about media in Sweden over a longer period, and partly to trace how media have always implied imagined risks and/or possibilities – which have resulted in various societal interventions. The perspective is deliberately national; media studies should not here be confused with the emergence of an (inter)national research field. The application aims to examine knowledge formation about media in society – which is why an international outlook would have made it far too broad. Media studies arise in a diffuse borderland within (and between) the state inquiry and education systems, academia, the media industry and media policy. In Sweden, the discipline of media and communication studies (MKV) was named as late as 1990. Academic research on media therefore first arose in other disciplinary contexts (business administration, education, psychology, or literary studies), in which questions of influence, communication, or press history were explored.

Interview in the magazine Aktum

Umeå University publishes a staff magazine, Aktum. The latest issue contains a longer interview with me, “Digital revolution ur fågelperspektiv” (“The digital revolution from a bird’s-eye view”), which also picks up the same theme visually (with photographs from Umeå Airport, given my commuting to Norrland). In any case, it turned out to be a rather good interview, and it can be downloaded as a PDF: snickars_aktum

DHN 2017 Gothenburg: ”Digitizing Industrial Heritage: Models and Methods in the Digital Humanities”

The second Digital Humanities in the Nordic Countries conference takes place this week in Gothenburg. I will participate in a panel, Digitizing Industrial Heritage: Models and Methods in the Digital Humanities, together with my colleagues Anna Foka and Finn Arne Jørgensen. Departing from the ongoing project Digital Models, the panel will interrogate the intersection between digitizing archives and visualizing history, with the ultimate goal of developing a methodology of high relevance for the cultural heritage sector. The DHN programme looks really promising, and I am truly looking forward to the conference.

Presentation at the German DH conference in Bern

I am currently in Bern to participate in the German DH conference, DHd2017 Bern: Digitale Nachhaltigkeit. I will talk tomorrow and have just finished my presentation and slides. The topic is the ways in which we have 3D digitized Christopher Polhem’s mechanical alphabet within the research project Digital Models. My talk has the title “3D-Metamodeling Polhem’s Laboratorium mechanicum” – slides and the written presentation can be downloaded here: snickars_presentation_metamodeling_bern_17 and snickars_bern_2017.

Bildning 2.0

The journalist Moa Larsson has made a fine radio programme about Bildung that is now available on URPlay: Skolministeriet – Bildning 2.0. I take part and talk about how the concept of Bildung might change in a digital world. The programme description reads as follows: “What place does classical Bildung have at a time when the labour market demands cutting-edge expertise and the consumption of culture and media is being individualised? Is the meaning of the concept constant, or does it need to be updated as society changes? We meet three people with thoughts on Bildung in our time: the philosophy professor Sharon Rider, the professor of media and communication studies Pelle Snickars, and Macarena de la Cerda from the organisation Megafonen.”