Text Comparison and Digital Creativity – Day I

Last Thursday and Friday the two-day colloquium 'Text Comparison and Digital Creativity' was held at the KNAW as part of its 200th anniversary year. The symposium was a joint initiative of the Virtual Knowledge Studio (VKS) and the Leiden-based Turgama project. For more information on both organizers, simply follow the links.

One of the main themes of the colloquium was the paradox of digital creativity. The digital stands for the objective, calculative, 'scientific' method, whereas creativity stands for subjectivity, interpretation and scholarship. The concept of digital creativity is coupled to the practice of text comparison, which leads to the question of what influence recent digital developments have had on the field of textual comparison. One of the main questions put to the speakers was whether ICT developments merely speed up the research process or whether they truly introduce new methods of investigation. Moreover, in what sense does the computer affect the creation and representation of knowledge and data?

 

Another conceptual theme used during the colloquium was the dichotomy of 'presence' (materiality, 'physics') and meaning (meta-physics, interpretation). The question is whether the hegemony of meaning has come under attack from a re-awakened interest in presence, as stated by Wido van Peursen and Ernst Thoutenhoofd in their introductory text to the colloquium. Focusing more closely on the topic of the colloquium, they raise the question:

 

How [does] the computational, analytical work done in digital scholarship relate to the subjective moods of interpretation and intuition that characterize traditional philology?¹

 

Isn't text comparison becoming more and more like an exact science with the advent of computation? And is it thereby being separated from text interpretation? Or does interpretation still play an important role in textual comparison?

 

One of the keynote speakers was David Crystal, who explored the changing nature of text in his lecture, focusing on the emergence of what he terms Digitally Mediated Communication (DMC). He compared DMC (looking at both continuities and discontinuities) with the traditional 'texts' of speech, writing and sign, examining the salient features of these media of communication. He concludes that although DMC has more properties linked to writing, it deploys properties of both writing and speech. More interesting, however, is the fact that, as Crystal argues, DMC has led to the rise of texts with properties that have no written or spoken equivalent (he mentions spam filters, search engine rankings and moderated/filtered texts), that are multi-authored (wikis) and that have no boundaries (texts are never finished). He argues that the salient features of DMC are still to a large extent unknown or uncertain, and he urges the study of its properties from within linguistics.

 

The session on texts as artefacts showed how artefacts can be represented or studied using digital technologies. Bruce Zuckerman gave a tour of the InscriptiFact website/database, which offers different ways of searching for (texts as) artefacts and inscriptions (using a wide range of indexing techniques) and of representing them, for example with images that can be moved around the screen and are themselves searchable. Zuckerman also talked about an experimental feature of the database in which artefacts can be viewed under different angles of lighting, using a light dome, thus greatly improving their 'presence'. These techniques, he argued, have opened up levels of interpretation that were previously impossible or nearly so. Roger Boyle introduced a technique that makes it possible to look 'inside' paper, which helps with the identification and location of watermarks. Watermarks are often unintentional marks of value and time, which can help establish the attribution and dating of texts. He argued that computer science can and does bring codicologists, palaeographers and papyrologists more than a bag of techniques for improving pictures.

During the session on texts as objects of transmission, David Parker gave a lecture on the virtual Codex Sinaiticus, which in its original form is scattered across different locations. Four partner institutions are now working together to create a virtual Codex Sinaiticus. Parker explored the similarities and differences between the ancient production and its electronic reproduction. Ulrich Schmid gave a broadly similar lecture about his endeavors to transmit the New Testament online, exploring specifically how the digital medium can help us face the challenges of text editing, while also noting the many challenges that still surround the creation of a fully interactive digital edition (from technical difficulties to platform, preservation and copyright issues).

 

Eep Talstra's lecture focused more on philosophical and methodological questions concerning Bible study in the digital age. Talstra asked whether we are merely speeding up classical techniques or developing a new domain of techniques for access to classical texts. Essentially, he asks how to use computer technology in the domain of Bible and philology, for in the study of classical texts three layers of text analysis come together: text as a literary composition, as a linguistic structure and as a source for the study of language. Can computer-assisted textual analysis help us do justice to these three layers present in the classical data, more than classical tools could?

 

Mats Dahlström explored the issue of scholarly editions and editing. Are they just recordings of matters of fact? Here he mentioned one of the biggest tensions in scholarly editing, namely the tension between different scholarly and scientific ideals: are scholarly editions representations of facts or of interpretations? He argued that this pattern of conflict is not medium-specific; it is rather a general trait of textual transmission. The new medium will not do away with these tensions; in some cases it will even heighten them. In this respect Dahlström sees similarities between the tensions in library digitization and in scholarly editing, which he went on to compare during the rest of his lecture.

Some of the main points made during the first day were that technology should serve as a tool for the scholar or scientist, not the other way around, and that humanists should be proactive in their demands regarding technological implementations. There should be a dialogue between implementation from above and humanities input from below.

 

More on Day II of the colloquium will follow shortly.

 

