Measuring Value: Societal Benefits of Research

By a Ph.D. student in the Department of Plant Pathology, Physiology, and Weed Science at Virginia Tech.

In recent years, there has been a notable policy shift toward measuring the value and benefit of university-based research. Rather than measuring inputs (e.g., human, physical, and financial resources), the emphasis has switched to outcomes (the level of performance or achievement, including the contribution research makes to the advancement of scientific-scholarly knowledge) and, ultimately, to requiring demonstrable impact and benefit (e.g., the contribution of research outcomes to society, culture, the environment, and/or the economy). This marks a move away from seeing higher education as a vehicle of human-capital development and toward treating it as an arm of economic policy.

Traditionally, the emphasis has been on measuring research income, bibliometrics, and citations. Simplistic application of this methodology has privileged the physical, life, and medical sciences, whose large research earnings are inflated by capital and equipment budgets and whose outputs are generated by large, multi-author teams. Bibliometric practices also tend to favor countries where English is the native language. Furthermore, the emphasis on global impact undermines the importance of regionally or culturally relevant research, which is more likely, albeit not exclusively, to be a feature of the arts, humanities, and social sciences. The key point, however, is that impact has been measured by peer accountability rather than social accountability.

In response to growing public concern about value for money, and to academic criticism of bibliometrics and rankings, a broader framework for research assessment is emerging. This includes identifying indicators that more fairly value all disciplines and that acknowledge research as a continuum.

Reflecting the spirit of Ernest L. Boyer’s famous 1990 treatise for the Carnegie Foundation, Scholarship Reconsidered, there is increasing recognition of the role of challenge-oriented, use-inspired, practice-led or practice-based, and translational research, rather than a crude hierarchy between fundamental and applied work. Many researchers have become active promoters of engaged scholarship on the grounds that research does not exist in isolation. Even Nature has gotten in on the act, reprimanding researchers who benefit from the cities in which they live for the quality of their lifestyle but choose to ignore those cities and their problems.

Once research is seen to have value and impact beyond academe, then what is measured, how, and by whom must change. While peer-reviewed publications may continue to dominate, a 2010 E.U. report, Assessing Europe’s University-based Research, acknowledged that research contributes through a diverse range of output formats, inter alia: audio-visual recordings; computer software and databases; technical drawings, designs, or working models; major works in production or exhibition and/or award-winning designs; patents or plant-breeding rights; major artworks; policy documents or briefs; research or technical reports; legal cases; maps; and translations or editions of major works within academic standards. These provide evidence for policy making and social improvement, or for the translation of research into cost-effective, practical, policy- and technology-based interventions that improve people’s lives.

Wider dissemination and adoption of research by society requires digital repositories and Web-based tools, in line with the movement for open science and open source. These help democratize knowledge production through greater public accessibility and transparency of scientific communication. As a consequence, peer review can no longer be the sole or primary method by which impact is assessed. End-user or stakeholder esteem becomes a vital component of assessing economic, social, environmental, and cultural benefits.

These developments challenge academe and others who have used peer review as a gatekeeper to implicitly confer status and reproduce an academic hierarchy. They also expose a contradiction in policies that encourage interdisciplinary solutions to global challenges and research-driven innovation while slavishly pursuing academic rankings.

Measuring the impact and benefit of research is an emerging methodology. Several countries and universities are beginning to experiment with case studies, end-user opinion, and relevant indicators. The latter may include expert tasks, popularized works, media visibility, external financing from research cooperation with non-academic institutions, cooperation with the public and private sectors outside academe, patents, start-up companies, etc.

Two prominent examples are from the U.K. and Australia. Not surprisingly, the research-intensive university groupings in each country (the Russell Group and the Group of Eight, respectively) opposed these efforts, preferring the heavily metrics-based systems in which they dominate.

•    The U.K. Research Assessment Exercise (RAE) has been undertaken approximately every five years since 1986. Beginning in 2014, its successor, the Research Excellence Framework (REF), will assign 20 percent of the score to “reach” and “significance,” verified by detailed case studies.

•    Australia began testing the Research Quality Framework (RQF) in 2005 to demonstrate research influence on a discipline or the wider community. The RQF was abandoned in 2007 on the grounds that the measure was “unverifiable and ill-defined.” A new initiative using case studies, this time involving the Group of Eight, is currently being trialed; a report is due in fall 2012.

Drawing conclusions about the value of research based on its social impact and economic performance is complex. Demonstrating economic impact can lead to a focus on short-term job creation and innovation, narrowly favoring science and technology disciplines and perversely affecting the choice of research topics and project design. The timelines over which “impact” and “influence” are assessed are problematic, and the evidence can be difficult to verify. All assessment processes can become blinded by indicators and statistics, measuring what is easy rather than what is important. The more complex the assessment process, the more resource-intensive and time-consuming it can become. Indeed, U.K. faculty were some of the strongest critics of the RAE, but were suitably chastened when a simple metrics-based system was proposed instead.

Unless researchers have access to no-strings-attached funds, there will always be questions about the value and relevance of their work. Although these approaches, including their implications for scientific-scholarly practice, need to be evaluated over time, they are progressive and could eventually overcome many of the biases and limitations inherent in current bibliometric practices. As with any other change process, it is better to be inside the tent helping to shape it than outside looking in.

[Creative Commons licensed Wikimedia image by John McCormick]
