The academic publisher in 2020

My friend and colleague Ronald Snijder has written a very interesting forecast related to academic publishing. He asked me to publish it here, which I am happy to do. I would also like to draw your attention to the article Ronald wrote, entitled ‘The profits of free books: an experiment to measure the impact of open access publishing’, published in Learned Publishing. This article is the culmination of the research on Open Access and books he did for Amsterdam University Press.

You can reach him at r.snijder@aup.nl

The academic publisher in 2020

In April 2020, Professor Snijder publishes a title on rubber ducks. It comprises a monograph-length discussion and a data file. After six months it becomes clear that the chapter ‘Metaphysical Ducks’ is read extensively at Californian universities, and that pictures of red rubber ducks are downloaded frequently in South East Asia. Based on this, Professor Snijder receives a grant for new research. Working title: ‘Religion and colour in bathing rituals’.

In 2020 all scholarly titles are published digitally. This has several implications. Firstly, publications must be formatted in such a way that they are readable on many different devices, ranging from phones to cinema screens. Secondly, it becomes much easier to attach research data, or the data becomes part of the publication itself. Whether these will still be called books is unclear. To enable this, publications cannot consist of one undifferentiated mass of words; the contents must be saved in a structured format, with separate formatting instructions for screen display. Whether this structure is called XML, ePub or RDF is not really relevant. At the very least, the structure must be understood by all devices; it must follow global standards.
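To make the separation of content and structure concrete, here is a minimal sketch using Python's standard XML tools. The element names, chapter title and data file reference are invented for illustration; a real publication would follow a standard schema, with display rules kept in separate per-device stylesheets.

```python
import xml.etree.ElementTree as ET

# A hypothetical structured publication: pure content, no formatting.
publication = """
<book>
  <title>Rubber Ducks</title>
  <chapter id="ch3">
    <heading>Metaphysical Ducks</heading>
    <para>Chapter text goes here.</para>
  </chapter>
  <data href="ducks-dataset.csv"/>
</book>
"""

root = ET.fromstring(publication)

# Because the structure is explicit, any device or service can address
# individual parts of the publication: a chapter, or the attached data.
chapter = root.find("chapter[@id='ch3']")
print(chapter.find("heading").text)   # Metaphysical Ducks
print(root.find("data").get("href"))  # ducks-dataset.csv
```

The point is not the particular vocabulary but the addressability: once parts of a book are machine-identifiable, they can be rendered anywhere and, as discussed below in the post, measured individually.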

Publishing all information digitally enables the publisher to make it globally available without much trouble. In my opinion, one of the main tasks of the publisher is to make sure that publications are used by their intended audiences. This is nothing new, but in 2020 usage can be measured in minute detail. At this moment, Google Books enables publishers to measure the number of pages read, and the countries in which those readers reside. This will only become more sophisticated, and that offers new opportunities for publishers. The publisher who can promote its publications to the right audiences better than the competition has an enormous advantage. This will matter not only to authors, but also to granting organisations.

Structured publications also lead to other possibilities: the use of smaller parts of a publication – such as a chapter – can be measured. This is usually referred to as ‘granularity’. Based on this, publishers can give detailed feedback to their authors; again an opportunity. As noted above, data are becoming part of the publications. Consequently, publishers also need to take the data into account when organising the peer review process. This is a different specialisation, and publishers may need to expand their pool of available reviewers.
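The granular feedback described above can be sketched in a few lines of Python. The usage records, chapter titles and country codes are hypothetical; a real analytics service would supply download logs in some comparable per-chapter form.

```python
from collections import Counter

# Hypothetical usage log: one record per download of a publication part.
usage_log = [
    {"chapter": "Metaphysical Ducks", "country": "US"},
    {"chapter": "Metaphysical Ducks", "country": "US"},
    {"chapter": "Red Ducks in Pictures", "country": "ID"},
    {"chapter": "Red Ducks in Pictures", "country": "MY"},
    {"chapter": "Red Ducks in Pictures", "country": "ID"},
]

# Granular reporting: count usage per (chapter, country) pair, so a
# publisher can tell an author where each chapter finds its readers.
report = Counter((r["chapter"], r["country"]) for r in usage_log)

for (chapter, country), n in report.most_common():
    print(f"{chapter} ({country}): {n}")
```

This is the kind of summary that lets a publisher report, for instance, that one chapter is read mostly at Californian universities while another finds its audience in South East Asia.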

A final remark on Open Access. Given its current rise, it will probably be normal practice in 2020. While some publishers have fully embraced OA publishing, once it becomes the norm it will no longer be a distinguishing feature. A publisher that wants to stand out should start building expertise now: on reviewing research data, but above all on usage statistics.

Comments

7 responses to “The academic publisher in 2020”

  1. […] This post was mentioned on Twitter by Jose Afonso Furtado, Joseph Esposito, Araman Consulting, Ronald Snijder, Janneke Adema and others. Janneke Adema said: New Guest Post: The academic publisher in 2020 – http://openreflections.wordpress.com/2011/01/22/the-academic-publisher-in-2020/ […]

  2. Jan Velterop

    In 2020, the likelihood is that the annual output of articles will be double what it is now. Keeping abreast of relevant knowledge developments will have become all but impossible if one relies on reading alone – even just abstracts. The primary activity of an academic researcher will – with the help of powerful computers – be the analysis of continually changing patterns in the available information, and then deciding which tiny fraction of the articles published need to be read in full, because the narrative and detailed argumentation needs to be understood to put the observed changes in the knowledge ‘map’ in context. This will indeed be a tiny fraction, because in 2020, even more than now, much of what is published will be confirmatory, repetitive, tacit knowledge, or incremental ‘polishing’, and very little, as now, will change the direction of our insight or knowledge development. ‘Publish-or-perish’ will still be there, because academia is an ‘acknowledge economy’, as Geoffrey Bilder calls it (“it’s the ego-system, stupid”, as I call it), but ‘read-or-rot’ will be even further from the conscious experience of a scientist than it is now. Furthermore, pre-publication peer review will have vanished, and all manner of informal knowledge exchange (the then incarnations of the twitters, blogs, etc. of today) will play a major role, and may dominate the field.

    In that world, usage statistics will be even more meaningless than they are today. Re-use will be the metric. In any form. And originality (“has this assertion been made before, and if so, how many times”). Granularity *will* be a feature. Knowledge will be available in the form of assertions and clusters of assertions, and one will be able to reason over those assertions. Millions, no, billions of them. The articles will just be there as the ‘minutes of science’. For the archive. And for officialdom. Like the forms you always have to fill in and that are kept in boxes and never looked at again. (This happens with copyright transfer forms now. Seriously.)

    The future of today is not what it used to be!

  3. Ronald Snijder

    @Jan Velterop. Thanks for your thoughtful comment.

    First of all, you discuss usage by scientists. While they are a very important part of the public, they are not the only ones. You can imagine cases where funders of research also want the outcomes to be known by policy makers (for instance: immigration and integration research), entrepreneurs (to create new products, etc.) or the ‘general public’. So, an academic publisher may need to prove usage by these groups.

    Secondly, you state that only a tiny fraction of articles will be read. Still, when a publisher is able to measure that (who reads this, and how many times), it is useful to the publisher.

  4. Jan Velterop

    @ Ronald Snijder. You are right, of course, in saying that scientists are not the only audience/lectience of science articles. The non-scientist (or non-specialist, e.g. scientists from other disciplines) portion of the readership will hopefully increase as a consequence of more academic literature becoming open access.

    But reading a larger fraction of individual articles, or of the number of articles published, will be nigh impossible for many scientists in many fields. The current output of the life sciences alone, measured in terms of abstracts added to PubMed, is on the order of two for every minute of every day and night, all year. Reading even a tiny fraction is a struggle. I admit that I’m mainly talking about the natural sciences, the area whose behaviour I know and understand best.

    I seriously question the notion that publishers can measure usage, unless ‘usage’ includes downloads or bookmarks ‘for future reference’ (in reality more likely for guilt relief than for actual future reference – bookmark folders and ‘to read’ folders are full of stuff that will never be looked at again). Actual reading, if it takes place at all beyond the title, is more often than not limited to the abstract, perhaps the methods section, or the conclusions, or even just the references. I once attended a meeting at the National Academy of Sciences in Washington DC where it was announced that a body of articles had been digitised and put online with open access, whereupon a distinguished member of the NAS stood up and declared that this was great news, as he could “now finally read the articles he’d been citing all the time.” I’m not sure he was joking. It certainly didn’t sound like he was.

    Measuring downloads and page views is useful for a publisher, certainly in the current system, and is mainly used to justify the argument that a given librarian should maintain the subscription or licence to the publisher’s material. But observing trends and underlying changes, I don’t think the current system can last until 2020, by which time data and information inundation will have taken on biblical (or even just Australian) proportions.

    Not reading may, however, not be a major problem if and when methods are developed for computer-aided extraction and connection of knowledge from the literature. 2020 may look very bright indeed!

  5. jannekeadema1979

    The question remains, of course, how you measure ‘reading’. As research has repeatedly shown, books and articles are scanned rather than read, and that is before taking into account the difference between paper-reading and screen-reading, the latter of which is expected to have risen dramatically by 2020. I foresee that different kinds of reading will proliferate and that a rise in media literacy will play an important role in this development. Publishers (in continued discussion with scientists/scholars) will need to find a way to incorporate all these new kinds of knowledge transfer – at the moment mostly seen as informal communication – within a certain kind of accreditation system. Video lectures, visualisations, podcasts, even animations (for instance the brilliant RSA animations) will become more accepted (and quicker) ways of knowledge accumulation, as will other forms of text-based informal communication through blogs, Twitter and ‘social networks’ such as Facebook and Zotero.
    Still, in the Humanities, a place will need to be preserved for thorough and critical reading of various kinds of media outputs, including books and other long-form narratives. The struggle for publishers will be to cater to the different needs of various scientific communities while at the same time trying to connect with their ‘intended public’ (inside and outside of academia). Openness, in my opinion, is the only logical route to achieve this, although it again creates its own kinds of problems.
    Accreditation and legitimation will, as I see it – depending also on the field one is in – be based on a very complex mix of formal methods (including peer review and statistics) and an acknowledge economy (from mentions on Twitter to blog followers and re-use of argumentation).

  6. […] a previous guest post where he developed an interesting forecast related to academic publishing, Ronald […]
