Summary

In any sphere of social life it is not easy to assess how much influence particular people, ideas, products or organizations have on others in the same occupation or industry, or on other spheres of social life. We are forced to look for indicators or ways of measuring influence (‘metrics’), each of which, taken on its own, is likely to have limited usefulness and to be liable to various problems. In business fields the development of metrics is often most advanced, because there is a clear monetary value to many actors in knowing which advertising medium reaches most consumers, or which form of marketing elicits the greatest volume of eventual sales. Yet even the best-developed metrics of influence only go so far – they tell us how many people read print newspapers, but not how many read each article. Online, we can say more – for instance, we know precisely how many people clicked on an article and how long they spent on each item. But we cannot know how many readers agreed with what they read, or disagreed, or immediately forgot about the argument. In short, metrics or indicators can tell us about many aspects of potential occasions of influence, but not what the outcome of that influence was.

Within academia, there has long been a studied disparagement of these ‘bean counting’ exercises in trying to chase down or fix the influence of our work. The conventional wisdom has been that we do not know (and inherently we cannot ever know) much about the mechanisms or byways by which academic research influences other scholars or reaches external audiences. On principle, the argument goes, we should not want to know, lest we be led astray from the ‘pure’ and disinterested pursuit of academic knowledge for its own sake, and veer off instead into the perils of adjusting what we research, find or say so as to deliver more of what university colleagues or external audiences want to hear. Our job is just to put ideas and findings out there (via publications, conferences, lectures etc.), and then to sit passively by while they are, or are not, taken up by others.

We do not believe that this traditional approach is useful or valid in the modern, digital era. The responsibility of researchers and academics is to think their research through carefully from the outset, paying at least some attention to what ‘works’ in terms of influencing other researchers or external audiences. Researchers need to construct and maintain a portfolio of projects that help them make a difference to their discipline. They also need to try to ensure that the social sciences make some form of contribution to the wider social world and context in which the researcher is embedded. Whatever an academic or a researcher eventually decides to include in or leave out of their portfolio of projects, the only rational or responsible decisions to be made are those based on having good quality information about how their existing works have fared in terms of achieving academic impacts or external impacts.

We define a research impact as a recorded or otherwise auditable occasion of influence from academic research on another actor or organization. Impact is usually demonstrated by pointing to a record of the active consultation, consideration, citation, discussion, referencing or use of a piece of research. In the modern period this is most easily or widely captured as some form of ‘digital footprint’ (e.g. by looking at how often other people cite different pieces of research in their own work). But in principle there could be many different ways of demonstrating impact, including collecting the subjective views of a relevant audience or observing the objective behaviour of members of that audience.

Research has an academic impact when the influence is upon another researcher, university organization or academic author. Academic impacts are usually and most objectively demonstrated by citation indicators, the focus of the next four chapters. This is a ‘revealed preference’ approach to understanding academic influence, and an increasingly sophisticated one that now allows us to trace out flows of ideas and expertise very promptly and in great detail, down to the level of an individual researcher or her portfolio of works (Harzing, 2010, p. 2).

Sadly for the field, however, a range of crude and now-outdated methods are still deployed by academic departments, universities and governments when trying to assess the quality of academic work. A key example is the use of ‘journal impact factors’ (JIFs), which measure how often a journal’s papers are cited on average, or (even worse) subjective lists of ‘good’ and ‘bad’ journals (or ‘good’ and ‘bad’ book publishers) to evaluate the contribution made by researchers. As Harzing (2010, p. 3) points out, using JIFs or such lists is actually applying a ‘proxy indicator’ of quality, by assuming that an academic’s work is as good as the average article in the journal they publish it in, or that an academic’s book is as good as the average of all those put out by that publisher in that discipline. Yet in fact, all journals and publishers publish rather varied work, some of which proves influential and much of which does not. This is especially the case in the social sciences (and humanities), where even the most tip-top journals rarely achieve JIF scores above 2.0 – that is, an average of two other articles citing each paper within the first two years after its publication.
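For readers who want the arithmetic spelled out, the standard two-year JIF formula (given here as the conventional reference calculation, not as a quotation from Harzing) is:

```latex
% Standard two-year journal impact factor for a journal, in census year t:
\[
\mathrm{JIF}_{t} =
  \frac{\text{citations received in year } t \text{ to items published in years } t-1 \text{ and } t-2}
       {\text{number of citable items published in years } t-1 \text{ and } t-2}
\]
% A JIF of 2.0 thus means that the journal's recent papers attracted,
% on average, about two citations each across their first two years.
```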

In addition, academic influence may also be gauged in a ‘stated preference’ way by developing recordable subjective judgements or qualitative assessments, which are systematically conducted and use a non-biasing methodology. Useful approaches include surveys of professional groups, academics voting online for their influences in a controlled ‘market’, and newer forms of open-access online peer group evaluations. Perhaps we might also include here government-designed or officially mandated peer group review processes that seek to be comprehensive, such as the judgements of academic panels relied on in the UK’s Research Assessment Exercise (the ‘RAE’, which ran periodically until 2008, covering all academic disciplines) and the new Research Excellence Framework (REF, which seems broadly similar). These essentially use a committee and some set of rules to try and do the JIF/lists proxy categorization of publications and other academic outputs a bit more systematically. However, the jury is still out on whether such externally guided and bureaucratically driven exercises do anything more than crystallize certain priorities of officialdom, let alone whether they represent academically valid or worthwhile ways of assessing the impacts of research within higher education itself.

Research has an external impact when an auditable or recorded influence is achieved upon a non-academic organization or actor in a sector outside the university sector itself – for instance, by being used by a business corporation, a government agency, a civil society organization, or a media or specialist/professional outlet. As is the case with academic impacts, external impacts need to be demonstrated rather than assumed. Evidence of external impacts can take the form of references to, citations of or discussion of a person, a work or a meme (idea, concept or finding):

  • in a practitioner or commercial document;
  • in media or specialist media outlets;
  • in the records of meetings, conferences, seminars, working groups and other interchanges;
  • in the speeches or statements of authoritative actors;
  • via inclusions of, references to or web links to research documents in an external organization’s websites or intranets;
  • in the funding, commissioning or contracting of research or research-based consultancy from university teams or academics; or
  • in the direct involvement of academics in decision-making in government agencies, government or professional advisory committees, business corporations or interest groups, and trade unions, charities or other civil society organizations.

Just as with academic citations, we could mainly follow a ‘revealed preference’ approach to finding external impacts, looking for a residue or ‘footprint’ and assigning to each reported influence as much credibility as the available evidence allows. Thus, extensive citation or use of distinctive research findings, concepts or memes would justify assigning more influence than scattered or isolated references. Similarly, the commitment of more funding to commissioned research, or evidence that university academics were closely involved in external organizations’ decisions, could provide indications of a greater degree of achieved impact. Note that in our approach an external research impact, just like an academic citation, is an occasion of apparent influence only.

In addition, however, a ‘stated preference’ approach can be very useful, by asking external users of academic research how much contact they had with individuals, research teams, universities and bodies of literature, and how they rated their contribution. Of course, such judgements and assessments are subjective, and prone to the potential distortions familiar from all reactive measures (such as the ‘elicitation biases’ involved in how questions are put to respondents). Yet especially if the external people surveyed are expert in the utilization and contribution of university research, and the questionnaires or interview methods used are rigorously designed, this approach can powerfully counteract some of the many problems that occur in trying to trace academic contributions to economic, business or public policy change via electronic or other footprints.

Figure I.1: The primary and secondary impacts of academic research


In our terms, claiming an external impact from research does not say anything further about what follows from this influence. As Figure I.1 shows, we can draw out further possible changes that may or may not follow from an initial occasion of influence, the primary impacts on which we focus here. Academic work that influences other academics or external organizations forms part of a society-wide ‘dynamic knowledge inventory’, a constantly developing stock of knowledge and expertise of which universities are important but by no means sole guardians, nor even necessarily the most important custodians. The role of ‘caring for and attending to the whole intellectual capital which composes a civilization’ is one that the philosopher Michael Oakeshott (1962, p. 194) once assigned exclusively to universities. Yet now that role is in fact widely shared, and the dynamic knowledge inventory is constantly looked after, activated and recombined by many different institutions – think of Google or Wikipedia as much as (often perhaps far more than) individual universities.

For many conventional people in the business world, the concept of any kind of inventory, construed as ‘unsold goods’ or things unused, gathering dust on warehouse shelves, comes across exclusively as a bad thing. Contemporary businesses have invested a lot of time, money and energy in minimizing their inventories, paring down stocks to maximize efficiency, and bringing in ‘just in time’ delivery to transfer storage and inventory costs to other firms further up the supply chain. So in this view a dynamic knowledge inventory may sound no different in kind: an ‘odds and sods’ store of bric-a-brac knowledge without conceivable applications, expensively produced in the first place (often at taxpayers’ expense) and now kept in being in ways that must equally be costing someone to store, curate and maintain.

Yet there are fundamental differences between static inventories of physical goods, which are fixed in form once created, and the dynamic knowledge inventory. There are multiple forces at work that strongly reduce over time the costs of storing knowledge ready for use. Knowledge that is employed or applied is always routinized and simplified in use and over time. Partly this is because ‘practice makes perfect’ at an individual level, and experience counts even for the most esoteric or unformalized forms of tacit knowledge and skill, such as craftsmanship (Sennett, 2008). In intellectual life also, devoting a critical mass of time (perhaps 10,000 hours) to perfecting capabilities is often associated with exceptional individuals achieving radical innovations or breakthroughs in the perception of how to tackle problems (Gladwell, 2009).

But the same processes of re-simplifying the initially complex, routinizing the initially sui generis, and converting the initially unique solution into a more generic one are also implemented far more powerfully at the collective level, across groups, occupations, professions and communities of interest. We discuss below (in Chapter 5) the importance of ‘integration’ scholarship within the development of academic disciplines. The initial work here involves isolated and hard-to-fathom discoveries being recognized as related, re-conceptualized and then synthesized into more complete explanations. At a more macro level, many initially distinct-looking phenomena may be recombined and re-understood through new ‘paradigms’ that unify understanding of them in intellectually coherent ways. Later on, much of the detail of initial research advances becomes less relevant and is screened out by improved understandings. The final stage of integration scholarship is that new ideas or discoveries are filtered through many layers of the research literature and into authoritative core textbooks and the professional practices and teaching of academic disciplines. Through all these stages, and in all these ways, knowledge often becomes ‘easier’ to understand over time, and less costly to curate, store and maintain, as fragmentary or disorganized discovery knowledge moves further and further behind the research frontier and is re-processed and re-understood.

We also embody knowledge in multiple cultural artifacts that make the next round of knowledge acquisition and re-use far easier. At root we embody knowledge in new language and concepts, new intellectual equipment that makes possible the redeployment of old knowledge and the development of many new knowledge products (such as dictionaries, encyclopedias, textbooks, review articles and journals) that make accessing information more comprehensive, quicker and better-validated. Equally, knowledge is embodied in physical tools and equipment, from laboratory apparatus, through machine tools and process-manufacturing capabilities, to first analogue and now digitized information storage and retrieval machines.

The modern period is of critical significance in this respect because of the divergence between what Anderson (2009) terms:

  • the ‘world of atoms’, where storage and retrieval are still expensive, inventories must be limited or minimized, and because everything costs something to provide, everything has a price; and
  • the ‘world of bits’, where storage and retrieval are effectively free, information and inventories can expand (almost) without limit, and serving new marginal users of existing knowledge or information goods costs nothing. Hence companies like Google can build a business on ‘a radical price’, offering many services free.

Digitalization has already transformed private sector commerce and business, and has made feasible the ‘long tail’ in retailing, perhaps most notably for books (Anderson, 2004, 2006). The digitalization of the dynamic knowledge inventory is arguably the most important post-war step in human culture and development. And despite much premature scepticism, its implications have only just begun to work through academia, university research processes and the ways that they influence civil society.

Beyond the cumulation and sifting roles played by the knowledge inventory, it is possible that we can also disentangle and identify the secondary impacts of research in changing the activities, outputs or policies of firms, businesses, government agencies, policy-makers or civil society organizations. In at least some cases, we might be able to take this further, and to trace through the social outcomes that follow from such an influence. But we live in a complex social world where many different social forces contribute to the production of business or governmental activities, and to the evolution of social outcomes – as the blue oval box in Figure I.1 indicates. Any research impacts on outputs or outcomes in advanced industrial societies occur in an inherently multi-causal setting. Many influences are aggregated and cumulated by multiple institutions, so that dozens, hundreds or thousands of influences have some impact, either simultaneously or in a lagged and cumulated way over time. In these conditions, it is not realistically possible to track in detail the outcomes of particular external impacts from individual pieces of academic work. Even if we look only at the top set of influences, within academia or the university domain itself, environmental influences are so strong that tracing the effects of academic research just on university outcomes is a tricky endeavour.

The final part of Figure I.1 concerns the evaluation of those social outcomes that are influenced by academic research – as positive, negative, or indeterminate or contested for society. Even if we could track through the influence of any given piece of research amidst this welter of other influences, we cannot assume a priori that societal outcomes influenced by academic research are beneficial. Primary impacts are ‘brute facts’. There is no inherent evaluative colouring built into the concept of a research impact as ‘an auditable occasion of influence’. But once we consider secondary impacts mediated through changes in outputs and outcomes, this is rarely going to be a sustainable position. A scientific advance may help produce a cure for an illness, for example, or it may allow the construction of some new weapon or the manufacture of a severely addictive leisure drug. A social science paper may improve the efficiency of a business or governmental process, but it may also help to sway businesses or governments to make ill-advised choices that reduce social welfare. The moral colour of the outcomes from any research impact is normally determined in subsequent use by others, and it cannot usually be controlled or even shaped by the original researcher.

However, not being able to track an individual research work’s secondary impacts on outputs and outcomes, and not being able to impart normative evaluations of individual influence flows, does not mean that the cumulation of impacts across a whole academic field has no effect or cannot be assessed. ‘Bottom-up’ processes of assessment are infeasible at this stage, but ‘top-down’ and aggregate approaches are not. Indeed, at the level of primary impacts we can say a lot more in modern times by looking across researchers, research teams, institutions, and indeed disciplines and countries. We can quantify and compare primary impacts (as occasions of influence), charting the extent to which different academics have influence with their peers in their discipline. And equally, researchers themselves can make meaningful (if as yet only qualitative) analyses of how influential their different (large) strands of work have been, as we show below. Enhancing this capacity to understand academic influence can help all of us in the social sciences to become more effective as researchers. And for external actors, a better understanding of academic research can help organizations and governments to use it more intelligently and constructively to address contemporary social problems.

These warning words are unlikely to prove palatable to government officials and politicians, however. Governments worldwide demand that universities justify public funding of science and research efforts, effectively asking for an enumeration of outputs and outcomes linked to research, and for a systematic evaluation of these effects. In short, they demand an itemization not just of primary impacts, which is do-able, but also of extended secondary impacts, which is not (in the current state of knowledge and technology). Yet scientists and universities in turn are tempted not to rebut such ‘naïve customer’ demands but instead to play up to them by producing inflated or mainly un-evidenced claims about their extended effects on outputs and outcomes. These claims are then backed up using ‘case studies’ of research dividends, anecdotes and fairy tales of influence, and the organized lobbying of politicians and public opinion. The net effect is often to produce an unreal public discourse in which political and bureaucratic demands for unrealistic evidence co-exist with university claims to meet the actually unattainable criteria being set. The forthcoming Research Excellence Framework in England looks like becoming a classic example of this pattern, like its RAE (Research Assessment Exercise) predecessors.

This is not to say that no economic evaluation of the costs, benefits and values served by academic research is feasible – but only that what is currently achievable is likely to operate at a very aggregate level. We can look across countries, and perhaps within countries across disciplines, at how far investing in different kinds of university research is correlated with other social, economic or public policy changes that we value as positive. Standard cross-national regression analyses already provide some basic pointers to guide policy-makers here. Useful analytic techniques have been developed in environmental economics for imputing values to things not paid for, or assigning values to the continued existence of things even when they are not currently being directly used. They could potentially be extended to other areas, such as valuing cultural institutions (O’Brien, 2010), or valuing university education and research efforts, or unravelling the latent value of the dynamic knowledge inventory as a key factor separating advanced industrial states from those that are still developing and industrializing.
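To illustrate the shape of such an aggregate analysis, here is a minimal sketch in Python using wholly synthetic data – every variable name and number below is an illustrative assumption of ours, not a finding or a recommended specification:

```python
# Sketch of a cross-national regression of the kind mentioned above:
# is public university research spending correlated with a valued outcome,
# controlling for income levels? All data here are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 30  # hypothetical sample of countries

# Hypothetical predictor: university research spending (% of GDP).
research_spend = rng.uniform(0.2, 1.5, n)
# Hypothetical control: GDP per capita ($ thousands).
gdp_per_capita = rng.uniform(10.0, 60.0, n)

# Synthetic outcome (e.g. some innovation or welfare index), constructed
# here only so the example runs; real analyses would use observed data.
outcome = 2.0 * research_spend + 0.05 * gdp_per_capita + rng.normal(0.0, 0.5, n)

# Ordinary least squares: outcome ~ constant + research_spend + gdp_per_capita.
X = sm.add_constant(np.column_stack([research_spend, gdp_per_capita]))
model = sm.OLS(outcome, X).fit()
print(model.summary())  # the research_spend coefficient is the 'basic pointer'
```

In practice such regressions require careful treatment of confounders, lags and country effects; the sketch shows only the basic form of the analyses that currently give policy-makers pointers, not a substantive result.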

As we develop much better knowledge of the primary impacts of research (both on academia itself and externally), so we could expect the scope and detail of linkages between academic influences and output and outcome changes to increase. Generating better data on primary research impacts is also likely to greatly expand what it is feasible to accomplish in evaluating the mediated influence of academic work on social outcomes. But even with our current rapid advances in information technology and the pooling of information over the internet, these shifts are most likely to occur over a period of years, and certainly are not immediately possible. In this book we primarily seek to give a boost to the analysis of primary research impacts, from which we are confident that further major improvements in assessing secondary impacts should flow.

Key points

1. A research impact is a recorded or otherwise auditable occasion of influence from academic research on another actor or organization.

a. Academic impacts from research are influences upon actors in academia or universities, e.g. as measured by citations in other academic authors’ work.

b. External impacts are influences on actors outside higher education, that is, in business, government or civil society, e.g. as measured by references in the trade press or in government documents, or by coverage in mass media.

2. A research impact is an occasion of influence and hence it is not the same thing as a change in outputs or activities as a result of that influence, still less a change in social outcomes. Changes in organizational outputs and social outcomes are always attributable to multiple forces and influences. Consequently, verified causal links from one author or piece of work to output changes or to social outcomes cannot realistically be made or measured in the current state of knowledge.

3. A research impact is also emphatically not a claim for a clear-cut social welfare gain (i.e. it is not causally linked to a social outcome that has been positively evaluated or validated as beneficial to society in some way).

4. However, secondary impacts from research can sometimes be traced at a much more aggregate level, and some macro-evaluations of the economic net benefits of university research are feasible. Improving our knowledge of primary impacts as occasions of influence is the best route to expanding what can be achieved here.

 
