Highlights of APE 2009 – Day 2

On the second and final day of APE, Sebastian Mislej of the Jozef Stefan Institute in Ljubljana talked about videolectures.net, a website that streams online video lectures free of charge. All content on videolectures.net is scientifically approved (it has been peer reviewed), so in its entirety it forms a scientific repository of free content from top conferences. The site is hosted and supported by well-known academic institutions, and its content partners and contributors include, for instance, MIT OpenCourseWare, the Mellon Foundation and the University of Cambridge. The site makes use of semantic web applications and additional functionalities such as streaming video with synchronized slides, and it adds links to other resources. As Mislej states, the website fosters a learning culture: the links necessary to understand a topic can and will be added. In this way videolectures.net can be seen as a kind of scientific YouTube, a portal to high-quality scientific video content on the web.

As Mislej explains, 99% of all people giving lectures are very interested in putting their videos online. Publishers of conference proceedings are also positive, since it is good promotion for their content. In the future videolectures.net will improve the web portal (redesign, better navigation, automatic knowledge-object linking), extract semantic information (speech indexing, text mining, video mining, automatic ontology construction, user tracking and profiling) and work on important issues regarding intellectual property as well as problems with video formats, mobile platforms and accessibility.

 

Hans Pfeiffenberger, from the Helmholtz Association, gave a lecture on publishing data, focusing specifically on “Earth System Science Data”, a data publishing journal. As Pfeiffenberger stated, in polar research the incentive is to preserve the data and their meaning for centuries to come, and this preservation is best done by publishing the data. The question is how publishing can help meet the requirement of quality assurance for research data. As Pfeiffenberger remarks, there are of course different kinds of data, which means we will also need different methods to take care of them. According to him, review guidelines for data should focus on originality, significance and data quality. Peer reviewers will have to look into the data itself and assess its quality and its connection to the article. Articles can then be seen as interpretations of the data.

Pfeiffenberger states that it is also important to have incentives for researchers to publish data. We need rewards for data publication: it needs to be citable and it needs to count towards the impact factor. And, as stated before, it needs to be quality-assured data. Preservation of and (open) access to data are also critical issues. The aim should be to make data reusable and reproducible. The data will be provided by the scientists, but who will provide the infrastructure? And what about licensing and long-term preservation? Pfeiffenberger concludes that these are issues we will need to consider in the future.


 

During the afternoon panel on Open Books, three panellists from the publishing world (Eelco Ferwerda of Amsterdam University Press, Frances Pinter of Bloomsbury Academic and Barbara Kalumenos of STM) were asked to discuss two questions.

The first question focused on the academic book in the digital age: what is it now, what will it be in five years, and what will users expect?

Eelco Ferwerda first said a few introductory words about the OAPEN project and then replied to the first question by stating that in the future users will expect to find, access and search within books online. He stated that it was Google that changed the whole idea of the book for us: because of Google, books have become an integral part of the Internet and have in this way gained a new future. Where is the book heading now? Ferwerda recalls Robert Darnton’s pyramid model, in which the book is seen as a pyramid of layers: the book itself, with comments, updates, e-learning materials, primary sources and datasets in other connected layers. Ferwerda gave the example of the Driver II project, which focuses on enhanced publications, where research data, extra materials and post-publication data are added to the primary publication. The moment scholars recognize the value of these kinds of additions, they will become the norm.

 

Frances Pinter from Bloomsbury Academic went on to compare the old publishing model with a new, future model. The old model is based on printed content, on publishers as gatekeepers who verify and brand the work, and on publishers as bankers. In this model costs can be a barrier to dissemination, as can the limited range of formats. In the new model, she states, there can be multiple versions and formats of content, in different locations and channels. In this new model there will be competition with free versions, and it is uncertain who will pay for the publishing process.

One existing online business model revolves around publishers charging for premium content and putting free content around it in order to generate traffic. Pinter asks what would happen if you inverted that model: what if you offered the premium content online for free (with a CC license) and charged for the activities around it, such as the print edition and a variety of other services?

According to Pinter, this is what academic authors will want, because they no longer need publishers. Publishers need to find new models that serve user needs while still upholding the added-value quality system and the rewards structure.


Barbara Kalumenos from STM states that STM has focused mostly on journals. The problem with forecasting the future of digital books, as she sees it, is that there is still far too little hard factual material available on digitized books. She also states that the term “books” is far too general: we need to differentiate between textbooks, monographs and so on, and then focus on each of these categories specifically. What Kalumenos especially regrets is the lack of numbers on how many books have already been digitized. As she states, we first need basic empirical research on what the status quo is at the moment; only then can we speculate about what will happen in five years, and this also depends heavily on the discipline. The users, however, are at the centre of this development. What does the user want? The user wants content easily, directly and with very few clicks; as Kalumenos remarks, user-interaction research has shown that you lose users after more than three clicks. This kind of research can also show how users search for and interact with the material. Kalumenos doubts that monographs in HSS will be used only in digital environments, which means that web 2.0 tools will be developed for books too, though perhaps not for the full catalogue of books. She concludes that we should look at user behavior and at what users expect of electronic books in the digital age.

 

During the discussion that followed the first question, remarks were made about the book format, the shift towards more article use and production in HSS, and the possible emergence of a middle category in between the article and the book.

In addition, the funding possibilities for monographs were discussed. As Eelco Ferwerda remarked, the book is different in this respect: the economic model of distributing books is becoming impossible, and academic publishers struggle to sell proper monographs. An Open Access motive for books might therefore be to come up with a new business model that keeps the book alive as a research format. Frances Pinter made the point that with books, whether they are expensive or not, we need to look at the actual costs of reading a book in print and online.

The second question focused on Open Access publishing models for books: how will they work, change scholarly communication and change the market?

Eelco Ferwerda starts by talking about the IMISCOE series. The basic Open Access model for this kind of series, he remarks, is a hybrid model that focuses on both online and print: the basic online edition is free, the printed edition is sold, and the author retains copyright. OAPEN wants to expand this model and develop a common approach to funding the Open Access edition, in collaboration with research councils. In their view a network is needed, and funders need to see this as a service. The funding model will revolve around a fee for direct costs plus revenues from additional services.

 

As Frances Pinter remarks, the situation for Bloomsbury Academic is rather different. Bloomsbury Academic is a commercial company, so it needs to cover its costs completely. Print editions appear simultaneously with the online edition, and they offer traditional publisher services along with free online access and added-value services.

As Pinter mentions, this is also a start-up: additional added-value services to sell around the content still need to be developed. What about licensing contracts? Authors no longer need publishers, so we will no longer have exclusive licenses between author and readers in either direction. The big question is whether people will really pay for the added services; according to Pinter, that is a risk for the publisher to take. But who is going to fund the added-value services that the publisher provides? Pinter asks what would happen if we did not see publishers as the ones taking the risk. What if they became more like the service arm of scholarly work? Pinter imagines an independent party that runs a tender among different publishers for a service contract to put in that added functionality. According to her, this would make for a more streamlined system.

Barbara Kalumenos, however, remarks that putting a layer between publishers, as Pinter suggests, might not work well in a system made up of many different national policies, and will probably only lead to extra bureaucracy. Kalumenos thinks more along the lines of Open Access as part of the research costs: Open Access should be paid for as part of the research process. Afterwards, remarks were made about the downgrading of the publisher in these kinds of new models from value-adder and risk-taker to a service-offering party. Important in this respect is the second review process, which assesses the commercial sustainability of a monograph. This too has added value, for it helps to bring out the better publications. What will happen to this when publishers become service providers: what about the commercial filter needed to decide whether a book is fit to be published? Finally, Eelco Ferwerda remarks that publishers will always be in competition for content, based on their reputation.
