RESEARCH TRENDS / ELSEVIER LABS VIRTUAL SEMINAR

The Individual and Scholarly Networks

Recordings, presentations and further information from the Elsevier Labs and Research Trends virtual seminar - January 22, 2013

The Individual and Scholarly Networks was a two-part seminar organized by Research Trends and Elsevier Labs on January 22, 2013. Webcast live from Oxford, Amsterdam and New York, the presentations were recorded and are available here, along with links to slides and additional questions and answers. The seminar was led by Michael Taylor, Research Specialist at Elsevier Labs, with additional contributions from Dr Henk Moed and Dr Gali Halevi of Research Trends.

Researchers are increasingly using social-network-style platforms to form relationships and ad hoc research reading groups.

Part 1 - Building Networks focused on the ways in which these relationships are formed and maintained, and how they are changing the nature of scholarly relationships.

Part 2 - Evaluating Network Relationships explored the related areas of altmetrics, contributorship and the culture of reference. Altmetrics is one of the fastest-growing areas of interest in bibliometric analysis and is increasing in importance.

Part 1 - Building Networks

New ways to collect, curate and share information, Professor Jeremy Frey



Professor Jeremy Frey is Head of Physical Chemistry at the University of Southampton

DOI for Jeremy's presentation: http://dx.doi.org/10.6084/m9.figshare.156596

The Importance of Yes and Value of No, Gregg Gordon, SSRN


Gregg Gordon is President and CEO of the Social Science Research Network (SSRN)

DOI for Gregg's presentation: http://dx.doi.org/10.6084/m9.figshare.156592

Evolving Networks of Expertise, Dr William Gunn, Mendeley


Dr William Gunn is Head of Academic Outreach at Mendeley

http://orcid.org/0000-0002-3555-2054

DOI for William's presentation: http://dx.doi.org/10.6084/m9.figshare.156591

Panel discussion and concluding remarks, chaired by Mike Taylor


Mike Taylor is a Technology Research Specialist at Elsevier Labs

http://orcid.org/0000-0002-8534-5985

DOI for the discussion: http://dx.doi.org/10.6084/m9.figshare.156818

Part 2 - Evaluating Network Relationships

Identification, contribution, attribution - Digital scholarship, identity on the Web and ORCID, Dr Gudmundur Thorisson


Dr Gudmundur Thorisson is a Research Associate at the University of Iceland and works with ORCID

http://orcid.org/0000-0001-5635-1860

DOI for Gudmundur's presentation: http://dx.doi.org/10.6084/m9.figshare.156594

Occupying Accountability: Evaluation by Whom and For What?, Kelli Barr


Kelli Barr is a Graduate Research Assistant at the Center for the Study of Interdisciplinarity, University of North Texas

http://orcid.org/0000-0001-7048-4977

DOI for Kelli's presentation: http://dx.doi.org/10.6084/m9.figshare.156593

Impact beyond the impact factor, Dr Heather Piwowar, ImpactStory

Dr Heather Piwowar is a postdoctoral researcher at Duke University and a co-founder of ImpactStory

http://orcid.org/0000-0003-1613-5981

DOI for Heather's presentation: http://dx.doi.org/10.6084/m9.figshare.156595

Panel discussion and concluding remarks, chaired by Mike Taylor



With contributions from Dr Gali Halevi and Dr Henk Moed of Research Trends

DOI for the discussion: http://dx.doi.org/10.6084/m9.figshare.156836

Further questions and answers

Many of your questions were answered during the two video discussions, but we didn't have time to cover them all. The speakers kindly addressed the outstanding questions, which we publish below.

Stefanie Haustein (Université de Montréal): William and Gregg, could you comment on documents which are not added by users but by the platforms (Mendeley and SSRN) themselves and especially the biases that this could cause in terms of impact?

WG: Stefanie's question and the question about bringing quality into the equation are both good ones, but I don't have a short answer for them right now. It's too early yet, but given enough interest and open data, we will continue our research into this important area. Likewise for the question about maintaining quality without taking on costs that necessitate putting research behind walls, although I'm optimistic that post-publication review can go a long way to addressing this issue. We have some very interesting plans in this space.

GG: SSRN does not create documents. All submissions are by authors, organizations, conferences, or publishers.

Q: Do you publish manuscripts similar to those of Social Science Research Network?

GG: The SSRN eLibrary includes a wide variety of scholarly works such as working and accepted research papers, conference proceedings, books, dissertations, etc., and other papers such as editorials, opinion pieces, book reviews, conference poster sessions, and presentations. We are testing audio and video files and data sets.

Q: Any specific examples yet of Reproducibility Index successes?

WG: The Reproducibility Initiative just launched in December, and so far we have had over 1000 researchers contribute their studies to the opt-in arm of the Initiative. We have recently gotten some industry partners to sign on, contributing materials or expertise (in addition to the 1000+ service providers already available via Science Exchange). We've been talking to a range of organizations who have expressed interest, so we've been pleased with the success so far, but of course the real success will come when the positive incentives provided by the Initiative begin to change the culture to favor more robust research.

Q: What kind of impact can Mendeley measure that citation cannot?

WG: As Heather showed in the second half of the program, impact comes in many flavors. Citation only captures intra-academic impact, and not even all of that. Mendeley can track the impact of publications between academics, but also the broader impacts on practitioners - doctors, small business, the public - as well as capture impact on a much finer temporal resolution than citation can. In addition, Mendeley's metrics look at impact from a publisher-neutral standpoint. We have an Open API, which means that anyone can access the readership of any item, from any publisher, without having to be a member of an academic institution or have a subscription to anything. Can you do that with citations? In the future, when we incorporate a broader range of research output beyond published papers, we'll be able to provide metrics on this expanded dimension of impact as well.
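To make the open-API point concrete, the sketch below shows how a reader count for a single DOI might be retrieved programmatically. It is only an illustration: the endpoint, query parameters and response field names are assumptions modelled on Mendeley's public catalogue API rather than anything specified in the seminar, and a real OAuth access token (here a placeholder) would be required.

# Hypothetical sketch: look up readership counts for a paper by DOI via an
# open catalogue API such as Mendeley's. Endpoint, parameters and field names
# are assumptions for illustration, not taken from the seminar.
import requests

API_BASE = "https://api.mendeley.com"   # assumed base URL
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"       # placeholder; obtained via the platform's OAuth flow

def readership_by_doi(doi: str) -> dict:
    """Return reader counts (total and by discipline) for one DOI, if found."""
    resp = requests.get(
        f"{API_BASE}/catalog",
        params={"doi": doi, "view": "stats"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    records = resp.json()        # assumed to be a list of matching catalogue records
    if not records:
        return {}
    doc = records[0]
    return {
        "reader_count": doc.get("reader_count", 0),
        "by_discipline": doc.get("reader_count_by_subdiscipline", {}),
    }

if __name__ == "__main__":
    # DOI of one of the seminar presentations, taken from this page
    print(readership_by_doi("10.6084/m9.figshare.156591"))

Because the hypothetical call needs no institutional subscription, only a free token, it illustrates the publisher-neutral, openly accessible metrics described in the answer above.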

Q: What's the business model of the Reproducibility Initiative?

WG: The Reproducibility Initiative is a non-profit association between Science Exchange, Mendeley and PLOS. Researchers can write the money for replications into their grants, and some funding organizations have indicated that they may require this as a condition of funding. There are also some disease-focused organizations that are interested in starting with this collection of opted-in research as a place to begin initial studies. Just a word about costs: because the replications are done by third-party professionals operating on a fee-for-service basis, the process is extremely cost-efficient. We estimate that replicating the average experiment will cost less than one-tenth of the cost of the original experiment.

Q: Can researchers upload manuscripts to their ORCID profiles?

GT: No, uploading full-text documents is not supported. ORCID is focused on enabling the creation of links between i) authors/contributors and ii) research activities and outputs already catalogued in external systems. Users can certainly add new works by entering metadata manually or by importing BibTeX. But the main emphasis is on integration with external resources such as CrossRef, either via tools on the ORCID website itself (the search-and-add feature) or via other sites (see e.g. http://orcid.scopusfeedback.com/ and the just-released http://search.crossref.org).
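As an illustration of the kind of external record ORCID links to rather than hosts, the sketch below pulls bibliographic metadata for a DOI from CrossRef's public REST API (api.crossref.org), the service behind the search.crossref.org interface mentioned above. The specific response fields shown follow CrossRef's published JSON format as I understand it, and are not part of the seminar itself.

# Minimal sketch: fetch bibliographic metadata for a DOI from CrossRef's
# public REST API - the sort of externally catalogued record an ORCID
# profile points to instead of storing full text.
import requests

def crossref_metadata(doi: str) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or [""])[0],
        "type": msg.get("type"),
        "container": (msg.get("container-title") or [""])[0],
        "doi": msg.get("DOI"),
    }

if __name__ == "__main__":
    # Placeholder: replace with any CrossRef-registered DOI
    print(crossref_metadata("10.1000/example"))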

Q: What is the scope of ORCID? Will it cover only affiliations, works, grants and patents? Or will it eventually replace initiatives like VIVO and provide more research information?

GT: No, ORCID does not aim to replace VIVO and other research information systems, but rather to link to and complement them. As mentioned above, ORCID is focused on creating *links* between persons and research works/activities. This means that ORCID does not aim to capture extensive information on, say, a faculty member's ongoing projects, courses, awards etc. Rather, the objective is to point to - and facilitate discovery of - this "deeper" information stored in other systems, including VIVO.

Q: Will ORCID serve Linked Data for the user profiles?

GT: Almost certainly. Linked Data support is on the feature request list, but has not yet been placed on the development roadmap. More information is available on the support/feedback site: http://support.orcid.org/forums/175591-orcid-ideas-forum/suggestions/3283848, http://support.orcid.org/forums/175591-orcid-ideas-forum/suggestions/3291844 and on the various Trello boards which are open for all to view: https://trello.com/orcid2
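For readers unfamiliar with the idea, "serving Linked Data" usually means that the same profile URI can return machine-readable RDF via HTTP content negotiation. The sketch below illustrates what that could look like for an ORCID iD; since the feature was only a request at the time of the seminar, the behaviour and the text/turtle media type are assumptions about a possible implementation, not a description of the live service.

# Illustration of HTTP content negotiation on a profile URI: ask for an RDF
# serialization (Turtle) instead of HTML. Hypothetical behaviour - at the time
# of this seminar, Linked Data support was only a feature request.
import requests

def fetch_profile_rdf(orcid_id: str, media_type: str = "text/turtle") -> str:
    url = f"https://orcid.org/{orcid_id}"
    resp = requests.get(url, headers={"Accept": media_type}, timeout=10)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # ORCID iD of one of the seminar speakers, taken from this page
    print(fetch_profile_rdf("0000-0002-3555-2054")[:500])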

Q: Will ORCID replace a traditional biosketch (including employment, honors, etc.) or will it focus on publications?

GT: No. The focus is on linking scientists and publications/activities/grants, and through this to help streamline the creation of biosketches and similar materials by other tools. More on this topic in Laure Haak's blog post on ORCID and NIH's ScienCV platform: http://about.orcid.org/blog/2013/01/16/nih-testing-orcid-ids-sciencv-platform

Q: What specific stakeholders have adopted ORCID?

GT: The full list of organizations that have joined as members can be found here: http://about.orcid.org/about/community/members. Of these, fewer than half have released ORCID-enabled versions of their tools or services. So adoption is happening slowly, but it is happening nonetheless.

Q: The product sounds very interesting - Have any funders adopted this for grantee performance monitoring and if so, are these organizations listed on the ORCID website?

GT: No, not yet, although the Wellcome Trust is already a member (see http://about.orcid.org/about/community/members). It's impossible to say at this time when they or other funders will be ready to deploy ORCID for grant submission, monitoring or other purposes.

Q: How accountable are school librarians? What are the different parameters on which we can be evaluated?

KB: Admittedly, I am not very familiar with how librarians are evaluated for the quality of their work, nor with what their overall goals are. My general recommendation is more like a guideline for conducting evaluations: the goals for one's institution, organization, department, or research unit ought to be determined at the outset in order to guide the overall evaluation. The specific criteria, and how they are applied, should then aim at highlighting practices, activities, and outcomes that contribute to the strategy of achieving those particular goals. So librarians can ensure they are accountable in whatever way they feel they need to be by establishing their goals among themselves and identifying the proper criteria by which they ought to be evaluated. It's a good thing if those goals and criteria are specific to the librarians who work in a particular library serving a particular community.

Q: Evaluation matters, but by whom? I think it varies from person to person and institution to institution. How will this issue be addressed? - Dr. Ramzan, Karakoram International University, Gilgit, Pakistan

KB: I'm not sure I quite understand the question, but here's my attempt at an answer. Addressing this issue, I think, amounts to "occupying" it. In other words, for scholars to address the issue of (for example) how their work ought to be evaluated, they must be involved in identifying the kinds of criteria that are and are not acceptable depending upon the kind of work they do. I emphasize case studies as a counterbalance to the overwhelming emphasis on standardization and generalization across fields and institutions that dominates rankings and quantitative methods for evaluation. Contextual specificity is only an issue if we privilege comparisons between fields and between institutions over giving fair evaluations that take into account the uniqueness of the person/people or work being evaluated. Currently, I see specificity and generality as being out of balance with one another.

Q: Jeremy talked about the importance of showing your work, especially in the face of challenges from beyond the scientific community (e.g. the climate change research kerfuffle). Both Gregg and William talked about accelerating the pace of making research available. How can prestige and authority be retained in this accelerated process? How might peer review adapt to keep pace with these faster timelines? (I worry that this discussion is conflating research and published research results...)

KB: Maintaining prestige and authority in an accelerated research environment is problematic only if the pace of research is the source of prestige and authority. It's important that research not be rushed; for example, having reproduced and validated results is key to the validity of much scientific work. But it's equally important to make research available to those who may find it useful in a reasonable amount of time so that it may actually be useful. Published research results do not encompass the whole of research activities, to be sure; they are, however, an important component in determining both the direction and the focus of research activities. I think the point is not to speed up the process of research so as to simply publish more (or, at least, I hope that isn't the point…), but rather to remove barriers that prevent the timely communication of results to others through credible venues. Online communication seems to be a promising medium for removing some of those barriers, and also for establishing alternative but equally credible venues.

Q: This isn't a question but a comment. I disagree with the statement made by Kelli Barr, though: while we have myriad contexts and audiences, there will always be papers and research at the bottom of the heap simply because they just aren't good. I'd hate to see this research try to find value in everything, because there isn't value in everything.

KB: I don't disagree that some things at the bottom of the heap aren't worth much. But I would challenge the impetus to rank research and draw some line after which everything else is labeled 'trash' – valueless. Why is it so important to know who is on top absolutely? Are those at the top of the heap there just because of the quality of their work? Or are there not other factors involved, such as the prestige of their institution, or the particular topic on which they work, or the resources they have amassed for disseminating their work? If it's the case that context matters (and I think you'd be hard-pressed to argue convincingly that it doesn't), then determinations of the quality of research ought to incorporate considerations of its context – criteria such as its intended audience, the societal relevance of the work, or whether or not the authors have connections with communities for whom the work may be useful, to name a few. Just because a scholarly community may not be interested in a particular kind of research and therefore may label it of "low quality" (and really, what does this mean?) does not preclude the possibility that it is extremely relevant or useful to other communities – potentially ones outside of the academy. One person's trash may well be another person's treasure.

Q: For case studies, what evaluations are necessary to ensure that the manuscripts are based on scholarly research?

KB: My short answer: by reading them! I understand that in many instances this is not a viable option, but in my mind, that it isn't a viable option begs the question: ought we to be conducting evaluations if they are not fair? My argument has been that simply relying on existing "quality" metrics is unfair to a large amount of research that these measures are used to evaluate because of the hidden assumptions the metrics bring to the table. I'm in favor of a more balanced approach: measures to inform the context of the research and expert judgment to determine, in concert with the research itself, what they mean. So yes, more "informal," if "informal" means based on judgment rather than measurement.
