Posthuman Reading

La Maleta de Portbou is a Spanish magazine of humanities and economics, edited by the philosopher Josep Ramoneda and published bimonthly in print and digital formats. The magazine is named in memory of Walter Benjamin, who committed suicide in Portbou in 1940 while fleeing Nazi persecution. Benjamin had wanted to launch a journal devoted to literary and philosophical criticism, Angelus Novus, which never came into existence. He did write an inaugural editorial for it, in which he argued that ‘the vocation of a journal is to reflect the spirit of its age’.

The current issue of La Maleta de Portbou (#22 March/April) includes a special section on ‘Reading in the Digital Age’, which Santiago Zabala invited me to contribute an essay to. Other contributors to this section include Mark Kingwell, Hanna Kuusela, and Antonio Monegal.

My contribution (in Spanish) is available here: http://lamaletadeportbou.com/articulos/lectura-posthumana/

You can find the English version below.

Posthuman Reading

In the early 2000s the avant-garde Canadian poet Christian Bök embarked on an ambitious project called the ‘Xenotext Experiment’. Interested in creating what he calls ‘living poetry’, Bök proposed to embed a poem into the DNA of a non-human life-form, a bacterium, which would in reply write further poems as it mutated and evolved. Bök chose to implant the first line of his pastoral sonnet, translated into the genetic alphabet of DNA, into the genome of Deinococcus radiodurans, a virtually indestructible bacterium, a so-called extremophile: a life-form capable of surviving the most extreme conditions. Like a piece of code, Bök’s poem consists of a set of instructions that can ‘run’, that can be ‘expressed’ by the organism. These instructions cause the organism to produce a protein, which is itself yet another text. Once deciphered, the protein’s amino acid string becomes the next line of the poem (‘Any style of life/ is prim…’), a reply to Bök’s original insertion (‘The faery is rosy/ of glow…’). With this experiment Bök would, in theory, engineer a life-form into one of the most durable and persistent archives for storing texts and data. Yet he would also turn the bacterium into what he calls an ‘operant machine’ for writing a poem.
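
To give a rough sense of how a line of verse can be rendered in the genetic alphabet at all, the sketch below maps letters onto nucleotide triplets (codons) and back again. The letter-to-codon table here is an arbitrary assumption for illustration only; Bök’s actual Xenotext cipher is far more constrained and is not reproduced here.

```python
import itertools

# Hypothetical letter -> codon table built from the 64 possible triplets.
# This is NOT Bök's cipher, just an arbitrary mapping for illustration.
CODONS = [''.join(t) for t in itertools.product('ACGT', repeat=3)]
LETTERS = 'abcdefghijklmnopqrstuvwxyz '
TO_CODON = dict(zip(LETTERS, CODONS))
FROM_CODON = {codon: letter for letter, codon in TO_CODON.items()}

def encode(text: str) -> str:
    """Translate a line of text into a DNA-like sequence of A, C, G, T."""
    return ''.join(TO_CODON[c] for c in text.lower() if c in TO_CODON)

def decode(sequence: str) -> str:
    """Recover the text by reading the sequence back three bases at a time."""
    codons = (sequence[i:i + 3] for i in range(0, len(sequence), 3))
    return ''.join(FROM_CODON.get(c, '?') for c in codons)

line = 'the faery is rosy of glow'
dna = encode(line)
print(dna)          # a long string over the alphabet A, C, G, T
print(decode(dna))  # 'the faery is rosy of glow'
```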

Illustration by Alexander B. Kim

Machine Reading

Bök is not the first to experiment with biopoetry or DNA text encoding: the artist Eduardo Kac encoded a sentence from the book of Genesis into DNA, implanted it into a microbe, and manipulated it with ultraviolet light to cause mutations in its DNA and, with that, in the sentence; the data scientist Pak Chung Wong encoded the lyrics of one of the world’s most popular songs (‘It’s a Small World After All’) into Deinococcus radiodurans; and the molecular geneticist George Church translated his book Regenesis into ‘life’s language’.

These experiments exemplify the ongoing changes in how we store, mediate and access texts. Bök’s code poem, however, is represented or expressed not through the ‘media’ we most commonly use to read poetry, the printed codex—or, nowadays, the digital screen—but through a living, non-human organism. Even more striking, this organism can be perceived both as a storage device and, as Bök states, as a reading-writing ‘machine’ which, by mutating and replicating, creates other poems in response. Yet what exactly is a reading-writing machine? What does it do? Did the bacterium actually ‘read’ the poem and respond to it by writing another poem, or did it merely process and run it, based on the instructions it received?

The answer to these questions depends in part on what we perceive ‘reading’ to be in the first place. Any reflection on the future of reading therefore needs to consider how our conception of what reading is—in relation to who reads, how we read and what is read—is continuously changing. One thing that becomes apparent from these biopoetic experiments is that they invite us to re-examine our notion of reading as a quintessentially human endeavour. Texts have always been mediated by different technologies, the book being the technology we still most readily associate with reading. Yet even though the media through which we now access texts have moved beyond the book to include computers, tablets and phones, do we also need to move beyond the perspective of a human reader? When, on a day-to-day basis, algorithms crawl through billions of texts to discover and analyse patterns, sifting through anything from our abundant social media posts to masses of digitised heritage, how does this relate to our own reading experiences? When we use machines, or even bacteria, to aid us in our reading, or increasingly to do our reading for us, how does this change our conception of what reading is and does? One area of research that has explored these questions in depth is the digital humanities.

Franco Moretti

The Italian literary theorist Franco Moretti is well known within the digital humanities for coining the term ‘distant reading’. Moretti opposes distant reading to ‘close reading’, the hermeneutically based method of interpreting and analysing texts that scholars in the humanities commonly prefer and use. Moretti first introduced the term to explore conceptual issues around the study of world literature: how can we study the large canons of texts that this field’s comparative global analyses require, when our reading methodologies keep focusing on national or regional literatures? How can and do we actually read large canons of texts? Moretti’s answer to these questions does not, however, revolve around reading more texts; distant reading is not about scale in this respect, but about a change in strategy, about reading differently, at a greater ‘distance’ from the text. For Moretti this means focusing on alternative aspects or units of a text: on themes, tropes, genres and systems. His is a more formal interpretative method—more appropriate, he argues, for large canons of text.

Moretti’s strategy is emblematic of a movement within the humanities towards the increased use and popularity of computational methods to analyse different kinds of cultural ‘texts’. Here the focus is on techniques such as computational modelling, quantitative analysis, data mining and visualisation, and, more generally, on the application of scientific methods to sets of data, whether databases, archives or other large humanities corpora. Distant reading has therefore come to represent a specific understanding of literature, a method to read and analyse not by studying particular texts, but by aggregating and analysing large amounts of data to find certain patterns.

Moretti’s ideas are not new, nor are his methodologies necessarily tied to the use of digital technologies. Nonetheless, distant reading techniques have gained in popularity in a period that has seen massive digitisation efforts of cultural artefacts by commercial entities such as Google, but also by digital platforms such as Europeana, which has digitised and put online millions of items from European museums, libraries, archives and multimedia collections. Large volumes of text are now available digitally, in a form that makes them easy for us to peruse online, but, perhaps more importantly, in a format that is also machine-readable. Where openly available (which is unfortunately not always the case), these databases of digital materials can easily be mined and crawled with the aid of specifically designed algorithms. It is these developments that have profoundly fuelled the rise of machine and algorithmic reading within the humanities.


A popular tool among digital humanists applying this ‘new’ distant reading methodology is the Ngram Viewer, developed by Google to scan texts by word units (n-grams). The historians Dan Cohen and Fred Gibbs used it to text-mine the titles of all books published in English in Victorian Britain—at least as far as these are available in Google’s databases. They looked specifically at certain keywords within these titles, such as ‘faith’ and ‘hope’, in order to revisit a classic 1957 book, The Victorian Frame of Mind, by the historian Walter E. Houghton. In this book Houghton identifies certain key traits as emblematic of Victorian thought and attitudes, based on a ‘traditional’ close reading of a small canon of books. Cohen and Gibbs compared the traits Houghton identified with the data they extracted through their keyword searches, to explore whether they would find similar patterns.
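
To make the kind of counting involved concrete, here is a toy sketch of keyword frequencies tallied across dated book titles. The handful of records, the decade grouping and the keyword list are all invented for illustration; Cohen and Gibbs’s actual dataset and tooling are not reproduced here.

```python
from collections import Counter, defaultdict

# Invented sample records: (year of publication, book title).
titles = [
    (1840, 'Essays on Faith and Duty'),
    (1855, 'Hope and Progress in the Modern Age'),
    (1861, 'The Science of Religion'),
    (1872, 'Doubt, Faith and the Scientific Mind'),
    (1888, 'Hope for the Working Classes'),
]

keywords = {'faith', 'hope', 'science'}

# Tally keyword occurrences in titles, grouped by decade.
counts = defaultdict(Counter)
for year, title in titles:
    decade = (year // 10) * 10
    for word in title.lower().replace(',', '').split():
        if word in keywords:
            counts[decade][word] += 1

for decade in sorted(counts):
    print(decade, dict(counts[decade]))
# 1840 {'faith': 1}
# 1850 {'hope': 1}
# 1860 {'science': 1}
# 1870 {'faith': 1}
# 1880 {'hope': 1}
```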

Despite their growing uptake, distant and machine reading techniques have not had an easy time within the humanities. Their increased popularity has been accompanied by a growing critique of the kinds of computational methods they embody. This critique centres on the strong divisions distant reading enthusiasts tend to introduce—Moretti, for example, clearly positions it as a method superior to close reading—combined with a perceived failure to provide genuinely meaningful or interesting insights on the basis of their analyses. Computational humanists are also accused of advancing scientism in the humanities, given the premise of objectivity their data-led methods tend to carry with them—seen by some as superior to the more subjective interpretative analyses of ‘deep reading’. Similarly, computational humanists are seen as favouring ‘practice’ or methodology over theory. This, critics argue, tends to be accompanied by a rhetoric of cultural authority that presents these new methodologies as the solution to the crisis of legitimacy the humanities are seen to be facing.

Cohen, Gibbs and others have tried to counter this critique by arguing that it is important not to think about reading in terms of binaries between close and distant, or human and machine, reading. They explain that they used the Ngram tool as a ‘supplementary method’ to assist more traditional close reading. They criticise the presumed cleft between close reading and what they call computer-enhanced distant reading, and instead argue for moving between different methods of text analysis according to need, research interest and available evidence. For them, basic text mining procedures can very much complement existing research processes in fields such as literature and history.

Gary Hall

Yet, reflecting on this, in what sense does such a focus on interaction, complementary methods, supplementary strategies and the merging of different ways of reading end up brushing aside the question of what reading actually is? Instead of complicating this question, does a perhaps too easy acknowledgement of ‘a plurality of adaptable reading strategies’ mean we no longer need to ask difficult questions about what reading is in a digital context? This is one of the claims made by the British media theorist Gary Hall, who suggests that these different reading methodologies might actually be incommensurable. Following this argument, it is precisely in the productive tensions between different reading methodologies that we can reconsider what it means to read. Their incompatibility demands that we ask important questions about what reading is, and from their clashes new forms of reading can arise. Through their antagonism we might discern where issues of power and culture in the context of reading in a digital environment are being played out. We therefore need to keep the relationship between different methodologies open, and study the clash between different forms of reading as one that is always already being complicated.

Indeed, what perhaps partly underlies this felt unease among humanists is that these reading techniques are not only complicating human interpretation; they also disrupt our understanding of what it means to be human more generally. Distant reading is another reminder of the role that other, non-human actors play, and will continue to play, in reading, in mediating texts, in performing them, and in finding and analysing their meaning and patterns—whether the reader is a human, a computer or a bacterium. In this respect reading can be seen as a ‘more than human’, indeed even a posthuman, endeavour. Yet it is not the digital environment that has ‘triggered’ posthuman forms of reading; rather, new reading methodologies and techniques, from distant reading to algorithmic crawling, are part of an ongoing development in which humans adapt to new technological possibilities and affordances, from books to screens to encoded DNA. In this respect reading has always been posthuman, determined as it is by the specific agencies, materialities, technologies and contexts of its mediation.

Bernard Stiegler

New digital technologies have not only influenced the reading experience, they have also influenced us through our readerly interactions. French philosopher Bernard Stiegler explains this process through the concept of ‘originary technicity’, in which technology is not seen as being instrumental, but as originary or constitutive of the human. In other words, technology is not external, added to the human from the outside as a tool that brings about certain ends, but technology and the human mutually co-constitute each other. This means that humans and tools continuously modify each other; we come about out of our relationship to technology. Stiegler’s work is very useful in this context to explore how changes in technology influence the production of human subjectivity. The question of what reading is and will be, is therefore simultaneously caught up in the question of how new reading techniques, and the contexts in which they are developed and deployed, alter what it means to be human. It might therefore be more productive and interesting to keep the very question of the relationship between the (post)human, reading and technology open.

Machines Are (Reading) Us

What then are some of the implications of new machine reading and pattern recognition algorithms for the human subject itself, how do they destabilize who or what we perceive the human to be? To explore this question it might be helpful to focus on one example in particular: ‘algorithmic filtering’.

Machines, computers and software are becoming ever more prevalent in our methods of reading, in how we read. Yet they are also involved in determining what we read. In an age of information overload we increasingly use algorithms to aid us in our search for information. But these algorithms don’t only help us find what we are searching for; they also hierarchise that information for us. Algorithmic filtering is integral to our search engines, whilst Facebook and Twitter increasingly select which posts show up prominently in our feeds under the guise of ‘tailored’ or ‘personalised’ suggestions. In order to provide these personalised hierarchisations, companies such as Apple, Google and Amazon track the digital platforms, devices and apps we use to search for and read information online, and they record and store this ‘customer information’ on their servers.

Amazon, for example, collects information about its customers’ browsing, buying and reading habits in order to recommend similar books to them (‘Customers Who Bought This Item Also Bought…’). This is a process Amazon calls ‘collaborative filtering’, in which our habits are correlated with the ‘crowd’ preferences generated from its customer databases. Amazon even constructs detailed data profiles about us, storing data about which books we buy and how we read them: what we highlight, how long we take to finish a book, etcetera. This process is more apparent on reading devices such as the Kindle, Nook and Kobo, but it also takes place in the reading apps we use on our phones and tablets and on the social reading platforms we visit online. The rise of digital books has thus helped make reading into something quantifiable and measurable.
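
A minimal sketch of the basic idea behind such recommendations, under the assumption of a simple co-purchase count: books are ranked by how often they appear in the same purchase histories as the book being viewed. The customer data below is invented, and Amazon’s actual collaborative filtering system is proprietary and far more sophisticated.

```python
from collections import Counter

# Invented purchase histories: customer -> set of books bought.
purchases = {
    'ana':   {'1984', 'Animal Farm', 'Brave New World'},
    'boris': {'1984', 'Brave New World', 'Fahrenheit 451'},
    'carla': {'Animal Farm', 'Fahrenheit 451'},
    'dario': {'1984', 'Fahrenheit 451'},
}

def also_bought(item: str, n: int = 3) -> list:
    """Rank other books by how often they co-occur with `item` in purchase histories."""
    co_counts = Counter()
    for basket in purchases.values():
        if item in basket:
            co_counts.update(basket - {item})
    return [title for title, _ in co_counts.most_common(n)]

print(also_bought('1984'))
# e.g. ['Brave New World', 'Fahrenheit 451', 'Animal Farm']
```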

Yet even though these filter and recommendation algorithms are becoming ubiquitous in our day-to-day interactions with information, they also make people uncomfortable. Amazon received a barrage of criticism when it remotely deleted George Orwell’s 1984 and Animal Farm from Kindle devices, and when it filtered LGBT titles out of its search results after accidentally classifying them as ‘adult’ titles. On top of that, Facebook has already experimented with customising feeds in order to manipulate moods, and Amazon has, on the basis of its customers’ data, helped publishers and authors develop specific books that appeal directly to the market.

Ted Striphas

The American cultural theorist Ted Striphas is concerned in this respect about the rise of what he calls ‘algorithmic culture’. By determining what we read, companies and their algorithms are increasingly and actively producing our culture. Algorithmic recommendations feed back into culture, influencing the information that gets produced and how we sort our cultural artefacts. But Amazon and other technology companies are also altering our idea of culture. This has partly to do with issues of trust, Striphas explains. What we read is no longer determined by ‘elite culture’, by the experts who used to influence our reading and buying decisions. We increasingly prefer to trust algorithms instead, as they are perceived as more objective and more personalised—they embody a particular cultural authority or credibility. There is a clear bias here towards trusting this supposed objectivity, Striphas argues, while at the same time we have no access to what lies behind these algorithms. It is impossible to determine how they interpret the data they extract from us: they are patented, their code hidden; they are unintelligible to us, readable and understandable only by machines.

On top of that, algorithms have strong disciplining effects. They are constructed in such a way that they provide us with suggestions based on what they think we want and need. This continuous personalised filtering creates isolated online worlds and carved-out information niches, curated by non-transparent algorithms that feed our pre-existing beliefs. Are we still able to produce new habits of thought, conduct and expression when continuously confronted with these filtering processes? The internet activist Eli Pariser coined the concept of the ‘filter bubble’ to characterise this personalised web. We are increasingly exposed to information that, instead of challenging us, reinforces how we and others within our filter bubbles already think. This means we are less likely to encounter information that is different from what we already know, that complicates our worldview or challenges our established opinions. Filter bubbles are even seen to have contributed to phenomena such as Brexit and the ‘post-fact society’: facts no longer penetrate filter bubbles.
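
The narrowing dynamic Pariser describes can be simulated in a few lines. In the sketch below, which uses invented articles and a deliberately crude ranking rule, the feed is re-sorted each round by the topics the reader has already clicked on, so a single early click is enough to crowd every other topic out of view.

```python
import itertools
from collections import Counter

# A fixed pool of invented articles, cycling evenly through four topics.
TOPICS = ['politics', 'science', 'sport', 'culture']
articles = [(f'article-{i}', topic)
            for i, topic in enumerate(itertools.islice(itertools.cycle(TOPICS), 40))]

history = Counter()  # how often the reader has clicked each topic

def rank_feed(items, history, size=5):
    """Show first the articles whose topic the reader has clicked on most."""
    return sorted(items, key=lambda article: -history[article[1]])[:size]

for step in range(4):
    feed = rank_feed(articles, history)
    print(step, [topic for _, topic in feed])
    clicked = feed[0]           # the reader clicks the top item...
    history[clicked[1]] += 1    # ...which reinforces that topic further

# Step 0 shows a mix of topics; from step 1 onwards the feed contains
# only the topic that happened to be clicked first.
```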

Katherine Hayles

New reading techniques, supported by digital tools and platforms for reading, are not only changing the status and nature of culture and knowledge; they are also involved in the constitution of a different form of human subject. As Hall has argued, we need more time and care to think through the relationship between different forms of reading: how they change what reading is and, with that, what it means to be human. It is therefore important that we keep the question of reading open, and neither bracket off dichotomies between human and machine reading nor smooth out the antagonisms between different forms of reading. Instead we should challenge the conventional distinctions between them and the premises on which they are based. The American media theorist Katherine Hayles has argued that the claim that ‘computers cannot read’ is a form of ‘species chauvinism’. Neither reading nor the human is a fixed entity; both are fluid and entangled constructs, constantly changing what reading is and who reads. In our reading we cannot simply isolate ourselves from our technologies or our contexts. In this sense we are all reading machines.
