My initial reaction is that the single-day webinar in which I presented on Tuesday (and in which I was the only humanist – !) was a success. Excepting some minor and very intermittent technical difficulties with audio and video, the presenters were clear, to the point, and situated at the frontiers of heterodox approaches to scholarship and research evaluation.
And, as is becoming increasingly common, I think, the Twitterverse was ablaze with live summaries and reactions throughout the webinar. Ernesto Priego, of Altmetric.com, has helpfully collected and Storified the tweets for #scholnet: see Part 1 here and Part 2 here. (See also Priego’s thoughts on the event here.)
I missed Jeremy Frey’s (physical chemist at the University of Southampton) and Gregg Gordon’s (CEO of the Social Science Research Network) presentations in Part 1, but caught most of William Gunn’s (from Mendeley). The gist seemed to be that scholarly communication is increasingly moving online, but the Internet has far more to offer academics than the venues they have already taken advantage of. What will really motivate a strong internet – which Gunn takes to be equivalent to public – presence for academic research is the development of communities that value sharing, openness, and collaboration with other interested individuals – people who do not need to be academics in order to engage. In other words, why push for academics to develop their own networks online – e.g. a ‘Facebook for scientists’ – when those networks – e.g. Facebook – already exist and can serve scholarly purposes just as well?
Gudmundur Thorisson’s (from ORCiD and the University of Iceland) presentation focused on the role of individuals in evaluation schemes, and what tools are available to navigate these ‘uncharted territories’ of internet-era scholarship, such as attribution and credibility. Associating with an organization such as ORCiD is simultaneously a means of delineating proper attribution transparently over many different and diverse venues of communication or dissemination, and also a means of securing credibility as an ‘expert’ – and not just another opinion-laden dweller in the Net.
The most interesting presentation for me was definitely Heather Piwowar’s (a postdoc at Duke University). There were some difficulties with the sound quality during her talk – and the PowerPoint did not progress as smoothly as she intended – but the basic idea was provocative: impact comes in all different ‘flavors’ (like ice cream), and so the point of altmetrics is to provide the richness of a smorgasbord of possible flavors to compare with the particular flavor we choose for a particular occasion. Providing context for altmetrics, in other words, is of vital importance if they are to mean anything (that is, if they are to indicate anything about the usefulness or visibility of scholarly work), and not just point to the fact that scholarly work is being transmitted online.
ImpactStory.org, of which Piwowar is a co-founder (along with Jason Priem), is attempting to do this by, for example, providing percentile rankings indicating the interest in a particular item relative to the interest given to other items in that same venue. Say an article is tweeted 40 times. This information raises the question: how many tweets count as a lot for scholarly work? Giving percentile rankings for that item indicates not just that it has garnered attention on Twitter, but that it is perhaps a lot of attention – 40 tweets may be in the 90th percentile in terms of how many tweets are garnered by other similar items. ImpactStory currently provides such rankings for bookmarks on Mendeley and Facebook likes, comparing bookmarks and likes received by your work to those received by items indexed in the Web of Science. This is ingenious on the part of ImpactStory because it allows researchers to make the argument that their work is attracting more attention than items indexed in the WoS, which are widely understood to be of generally high quality.
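To make the percentile idea concrete: given a reference sample of attention counts for comparable items, an item's percentile rank is simply the share of the sample falling below its own count. This is a minimal sketch of that arithmetic, not ImpactStory's actual algorithm; the reference numbers are entirely hypothetical.

```python
from bisect import bisect_left

def percentile_rank(value, reference_counts):
    """Percent of reference items whose count is strictly below `value`."""
    ordered = sorted(reference_counts)
    below = bisect_left(ordered, value)  # items strictly less than value
    return 100.0 * below / len(ordered)

# Hypothetical tweet counts for 15 comparable scholarly items
reference = [0, 0, 1, 1, 2, 3, 3, 5, 8, 12, 15, 22, 40, 55, 90]

# An article tweeted 40 times outranks 12 of the 15 reference items
print(percentile_rank(40, reference))  # 80.0 → roughly the 80th percentile
```

The point of the context is visible in the code: "40 tweets" means little on its own, but against a reference distribution it becomes a rank that can be compared across venues.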
I was also incredibly intrigued by Henk Moed’s comments in the discussion after Part 2. He bristled at the explicitly political bent that I took both in my presentation and in my comments. His concerns are something heard often from researchers, particularly scientists, in the US: that the politicization of science is something to be avoided because it impugns the credibility of scientists and the objective validity of scientific knowledge.
I understand both his resistance to political activity on the part of researchers and his disagreement with my conflation of public and scholarly impact. For the record, I wasn’t particularly clear about what I meant by saying that public impact is scholarly impact – the claim of mine with which Dr. Moed most disagreed. Britt Holbrook has argued elsewhere that NSF’s merit review criteria for grant proposals, Intellectual Merit and Broader Impacts, constitute a duck/rabbit scenario (see the image above): IM constitutes scholarly impact and BI public impact. So rather than asserting an equivalence between public impact and scholarly impact, I meant something more like that they are two sides of the same coin. Rather than keeping these two categories functionally distinct, we should be creating reward structures and establishing internal evaluative standards that reinforce the notion that quality scholarship ought to be publicly impactful and publicly impactful scholarship ought to be high quality.
This is not only an epistemological and political argument; it is also a moral one. Researchers, I think, have a fundamental ethical responsibility to the public who funds them, and my argument here is that making good on that responsibility entails that researchers pursue work that addresses the needs and interests of the public, but without sacrificing quality along the way. And this is entirely possible if we academics are willing to take ownership of the process of re-defining what ‘quality’ means and looks like in light of the changes to the system of knowledge production required by owning up to our societal responsibility.
I’ll leave you with my favorite tweet of the day:
#scholnet Focus on metrics forgets key question – what they can’t show. Just because no one tweets when a tree falls doesn’t mean no impact
— Simon Linacre (@EmeraldAcademic) January 22, 2013