Re: Seed Questions on altmetrics business

by Leah LOCKHART -

I'm loving this module!

The challenges faced by NPG and PLoS seem to be largely around culture rather than around developing technology. For example, comments from interviewees in the research for 'The role of academic publishers in shaping the development of Web 2.0 services for scholarly communication' (Stewart, Procter, Williams, & Poschen, 2012) about publishers' use of blogs reflect scepticism, suspicion about a lack of integrity/quality, and doubts about the seriousness of using web spaces to present, rate, link to and from, and discuss articles. I found the lack of engagement seen by both publishers in areas with open commenting and rating systems available *very* interesting. One interviewee said, 'Things like citation rates that come out of a formal process can be tracked… but reader comments and ratings would be so open to abuse, it's hard to imagine that people would interpret it as valid of the paper's worth' (p. 422). This is interesting to me partly because it resonates so much with my own experience working in government and encouraging online collaborative working, but also because this kind of attitude assumes the existing rating and citation systems are *not* open to abuse. It also assumes that a bigger, more general population of commenters has nothing of value to say. It seems very counterintuitive to me that a finite number of organisations or people should have a stronghold over what is worthy of rating, *especially* when web-based communities are at our fingertips to surface stronger discussions and, with them, better comments and ratings.

I think we can learn a couple of things from the stories of NPG and PLoS. First, the individual needs and anxieties of a group of specialists need to be taken into consideration when looking at behaviour change like this. Introducing new technology and new ways of working and communicating all at once may have made the failures in uptake seem more prominent — though it's not a bad way to isolate the keen beans and then focus on them for further development. All of this is about testing the assumption that academics might value a more social way of communicating, but perhaps some research could have been done before rolling anything out: identify the areas that might respond really well, focus energy there, and then use those as a demonstrator for more apprehensive areas. The other thing we can learn is that this stuff takes time! The traditions of ranking a piece of work and measuring its influence are so enshrined that breaking them down will take a while. I'd be interested to see how this way of working grows, however, once 'digital natives' start moving into the academic scene.

In my field of work we are looking at social media monitoring and measurement in different, more meaningful ways, so I suppose that is our altmetrics. Not only are social media monitoring platforms getting more powerful (we use HootSuite, and I am trying to push for using SoDash in the Scottish Government — both are examples of businesses selling services based on social media altmetrics), but the type of interactions we hope to have more of with citizens will mean measuring things like the *quality* of an interaction, not just on a human level but also for intelligence purposes.

(Edited by Robin Williams - original submission Wednesday, 11 March 2015, 9:38 AM)

 

Thanks for these interesting points, Leah.

One of the things that came up with the comments function on the online journals was that users didn't feel able to post comments — perhaps because they didn't feel they had a warrant to post critical comments, or perhaps because they didn't want to place critical comments attributably in a public space in a way that might impinge on collegiality. So these systems depend upon communities having a shared sense of what behaviours are allowed/validated... cheers, Robin