Thinking Epistemically about Experts and Publics: A Response to Selinger, Stephen Turner

SERRC — August 20, 2014

Author Information: Stephen Turner, University of South Florida, turner@usf.edu

Turner, Stephen. “Thinking Epistemically about Experts and Publics: A Response to Selinger.” Social Epistemology Review and Reply Collective 3, no. 9 (2014): 36-43.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1B3

Please refer to:

Selinger, Evan. “The Politics of Expertise, Patronage, and Public Engagement.” Social Epistemology Review and Reply Collective 3, no. 8 (2014): 13-18.

Evan Selinger’s review nicely captures the main concerns of my collection of essays, The Politics of Expertise. He raises an important question that is touched on in several essays but not fully developed: the problem of getting expert knowledge possessed by academics into something like public discussion or the public domain. This is of course only a part of the problem of expertise and the larger problem of knowledge in society. But it can be approached in more detail than was done in the book, in terms of the basic ideas of the book, and I will try to do that here. Much of what I will say deals with issues I have addressed in other places, so I will, rather tiresomely, cite myself, for those who wish more elaboration. 

Aggregation and Heuristics: Some Basic Background

One of the key ideas of the book is that knowledge needs to be aggregated in order to be used for most decisions, collective or individual. The aggregation of knowledge is organized, either formally or informally, and in ways that we normally pay little attention to. The truth to be found in social constructionism is mostly a result of this: when we focus on “facts” we leave out how they became facts. Most of what we take to be fact is highly aggregated fact: facts that are the product of multiple conditions, techniques, sources, and complex bodies of background information created in highly organized social and technical institutions. The conditions that made them facts, and the unfocused-on background that enables us to use them as facts, are subject to change in ways that change the facts or their significance. As a consequence, as I point out in the book, a kind of “knowledge risk,” or risk of error, attaches to the stuff that contributes to the fact—call it for convenience a “system.”
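
A stylized arithmetic sketch may help fix the idea (the numbers and the independence assumption are mine, purely for illustration, not anything from the book): if a highly aggregated fact depends on many individually reliable components, the risk attached to the aggregate compounds quickly.

```python
# Stylized illustration (my numbers and independence assumption, not
# Turner's): a highly aggregated "fact" depends on many components
# (instruments, techniques, sources, background assumptions), each of
# which is individually quite reliable. If all must hold, and they fail
# independently, the aggregate's risk grows quickly with their number.

def aggregate_reliability(component_reliability: float, n_components: int) -> float:
    """Probability the aggregated fact is sound, given independent components."""
    return component_reliability ** n_components

for n in (1, 5, 20, 100):
    print(f"{n:>3} components at 99% each -> fact sound with p = "
          f"{aggregate_reliability(0.99, n):.3f}")
# 1 -> 0.990, 5 -> 0.951, 20 -> 0.818, 100 -> 0.366
```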

These organizational and background phenomena are difficult to make sense of epistemically: they don’t look at all like the classical “Otto sees a red cube” claims beloved of Logical Positivism. Nor do they fit the model of testimony of analytic social epistemology. So, as Selinger notes, I try to find some other ways to think about them, largely through concrete examples. There is a bit of theory here as well. I use the idea of heuristics to apply both to the reasoning of individuals and to the epistemic products of these organized systems, including science itself. The basic thought is this: individuals necessarily rely on heuristics, which have biases; systems are made up of agents with these heuristics; and systems organize, and often attempt to correct for, individual biases, but necessarily rely on heuristics of their own, and so introduce biases of their own. This implies as well that there are risks in relying on any of these systems. Courts are systems of this kind. We rely on them to make decisions by following rules of evidence and procedural rules, but we also know that the process is biased by these very rules. They amount to a kind of collective heuristic that produces results—legal decisions. Science is another; so is public discussion: each is a collective process with biases, made up of the actions of individual biased users.
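
A minimal simulation sketch, entirely my own construction rather than an example from the book, of the point that a system can correct for individual biases while introducing a collective bias of its own (the admissibility cutoff below is a made-up stand-in for a rule of evidence):

```python
import random

random.seed(1)
TRUE_VALUE = 100.0

def agent_estimate():
    """One individual judgment: noisy, with an idiosyncratic heuristic bias."""
    personal_bias = random.uniform(-10, 10)  # each agent anchors differently
    noise = random.gauss(0, 5)
    return TRUE_VALUE + personal_bias + noise

def system_verdict(estimates):
    """A court-like collective heuristic: admit only claims that pass a
    credibility cutoff, then average. This damps individual extravagance
    but tilts every verdict downward, a bias belonging to the system itself."""
    admissible = [e for e in estimates if e <= 105.0]  # made-up 'rule of evidence'
    return sum(admissible) / len(admissible)

verdicts = [system_verdict([agent_estimate() for _ in range(25)])
            for _ in range(200)]
print(f"mean verdict: {sum(verdicts) / len(verdicts):.1f} (true value: {TRUE_VALUE})")
# Verdicts cluster tightly (individual noise is corrected) but fall below
# 100 (a bias introduced by the system's own heuristic).
```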

Issues of legitimation, or, to put it slightly differently, issues about what sort of heuristics a member of the “public” employs in assessing or deciding to attend to the claims of experts, and indeed in determining whether they are experts at all, are intertwined with processes of aggregation: even experts need to decide whom to take seriously, invest time in understanding, and so forth. They do so, necessarily, by using legitimating cues of various kinds—the status of the expert, what others believe or accept, what I call certification, and so forth. As I point out in one chapter of the book, these activities of certification, including such things as peer review, consume a large proportion of the time of scientists. These activities have consequences, though they are hard to conceptualize and track. Nor, it is important to add, are these processes free of interests, rewards, and penalties: they are embedded in organizational structures that operate like other institutions.

So how does all this apply to the problem of experts and the public? Not surprisingly, it complicates matters. But it does so in particular ways. In thinking about this topic, I have a simple model in mind: the diffusion of agricultural techniques. This is perhaps the most studied of all cases of the use of expert knowledge. The standard story involves early adopters and community influentials, that is to say people who are open to and use expert knowledge, who are then imitated by those who are not early adopters. There are also laggards, who are not in the networks through which this kind of personal influence travels. The model doesn’t work for everything, far from it. But it is a good start on the complexities of even the simplest organized system for transmitting knowledge from experts to the public.

Normally this is not described in epistemic terms. But there is a lot of epistemic content here: the secondary adopter needs to make judgments about the early adopter, test them against past experience, test the technique, and in general navigate a complex epistemic environment before taking the risk inherent in changing a method or planting a new kind of crop. The adopter also has to calculate risks and rewards, based on their own experience and situation. The early adopter cannot simply trust the experts, but must think through what is known about the soil, the water, and so forth, as well as the risks of the market, among many other matters. Laggards are epistemically risk averse: they trust only what they already know. Those in between have an eye on the facts of their situation, on their own degree of epistemic risk aversion, and, importantly, on the behavior and successes of the early adopters, which reassure them about the eventual choice to adopt.
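
The structure can be made concrete with a toy version of the classic diffusion model from this literature (a minimal sketch; the parameter values are my illustrative assumptions, not estimates from any study): direct responsiveness to expert influence and imitation of visible adopters jointly generate the familiar S-shaped adoption curve, with the laggards at its tail.

```python
# A minimal Bass-style diffusion sketch (the classic model behind the
# agricultural diffusion studies; the parameter values are my own
# illustrative assumptions). p captures adoption driven directly by
# expert influence; q captures imitation of the visible early adopters.
p, q = 0.03, 0.4
adopted = 0.0  # fraction of farmers who have adopted so far
for season in range(1, 16):
    adopted += (p + q * adopted) * (1.0 - adopted)
    print(f"season {season:>2}: {adopted:5.1%} adopters")
# The familiar S-curve: a few expert-driven early adopters, a rapid
# middle as imitation compounds, and a slow-converting tail of laggards.
```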

This is about technology, but a different kind of technology than the kind Selinger has in mind. Nevertheless, similar issues arise. Users have to make judgments about technologies they do not understand, have to accept risks that may be difficult to measure or anticipate, and are likely to rely on what the crowd is doing to make the relevant decisions. Even deciding to post something on a Facebook page is potentially fraught with risks, as companies use this data to make hiring decisions, and posts become the raw material of automated personality profiles. And the thing produced by the system of Facebook or an iPhone is a new kind of knowledge about the other people to whom one now has access, knowledge that must be aggregated, and to which we apply heuristics.

The only way to understand how people navigate these systems is through the proprietary data which, as Selinger points out, is held by the companies, which use it to target marketing and may also use it to manipulate users, particularly by, in effect, turning their customers’ heuristics, with their particular biases, against them. Selinger is concerned, appropriately, with the ethics of this. For me, the question this raises is how novel this is. In the early days of agricultural extension, extension agents sponsored 4-H clubs and used contests to get girls to grow tomatoes (Scott 1970, 248-52). Were they doing anything fundamentally different? They hypothesized about ways to manipulate the girls into doing what they wanted them to do, and what they believed—from their expert point of view—was in the girls’ own good. If there is a difference, it is in the motives of the experts and in scale. But the idea that there is a radical contrast between these kinds of manipulations, or between them and the ordinary manipulations found in social life, is not plausible.

In these cases it seems that the manipulation works with our heuristic bias toward taking the Facebook page or the tomato growing contest at face value. But this is not so different, if it is different at all, from taking “facts” at face value and ignoring the systems that produce them. And, if this is the case, it suggests that there is no real escape from the situation of “being manipulated” or being subject to unconsciously produced biases. We live in a world that other people are always manipulating, in ways that trade on our preferences and biases. But we can try to understand the systems that produce apparent truth, as well as the risks of relying on them.

Making Expertise Accessible to the Public

One of the largest and most difficult problems for science studies is making sense of the new regime of knowledge—call it post-normal, post-academic, technoscience, or something else. Compared to the extension agents and their tomatoes, these are regimes of great complexity. The difficulty is compounded in the case of the social sciences and humanities, which have their own histories of being part of public discussion. But the same kinds of issues arise. Knowledge risk—the risk involved in relying on the systems that underlie “facts”—is a place to start thinking about these problems in general, and perhaps the concept can shed some light on the issues Selinger raises. I would separate these into two mutually entangled issues: the problem of encouraging academics to engage the “public” and providing means for them to do so, and the problem of who owns and controls data, academic opinion, and the like.

Selinger discusses a chapter on sociology, and it is a good place to start. Once upon a time, academic sociologists did write a lot for newspapers. Franklin Giddings, one of the founders of American sociology, appeared routinely in Hamilton Holt’s Independent, which was itself an unusual paper that promoted public discussion and published many experts. But Giddings had made a living as a journalist, and was comfortable with its demands. Why did academics shy away from this sort of thing? There is a simple, if only partial, explanation: the great academic freedom scandals of the late nineteenth century, the Bemis case at Chicago and the E.A. Ross case at Stanford, involved professors being fired for speaking out in public on public topics related to their expertise. Giddings himself paid a price with those among the Columbia faculty who disagreed with his views on the Great War. The establishment of AAUP standards of academic freedom was an attempt to draw a line defining acceptable standards, and distinguishing academic speech and teaching from political expression. This was a trade-off: say what you want in class, as long as it is within the limits of professional talk, and you won’t be fired. It left the possibility of public discussion vague, but the implications were clear: stay within professional limits when speaking as a professional, and the professionals will defend you.

These trade-offs seem quaint today, in part for reasons Selinger alludes to. Academics are in the money machine, whether the money comes from funding agencies or corporations. They work within very narrow ranges as a result of the intense competition for money. Much of the money is attached to policy agendas, so the work is politicized in complex ways. At the same time, “political” speech is no longer separable from what is actually taught in many humanities and social science disciplines, and professors, such as Melissa Harris-Perry, can have national TV shows on which they say the same sorts of things they say in class and in their published work. But the ideological range of what can be said in universities is narrow. Philosophers have adapted to this new order. There are now philosophical defenses, for example by Philip Kitcher, of the suppression of research (and, implicitly, of the expert speech that would put this research into the domain of public discussion) that doesn’t serve the right political goals.

Needless to say, the many issues raised by this new situation, some of which I take up in a chapter on post-normal science, can’t be explored in detail in this reply.[1] But they do form part of the background for Selinger’s concerns. Selinger suggests several reforms designed to facilitate the flow of expert knowledge from the academy to the public. I take this to be an attempt at policy design, or, more accurately, at intervening in an existing system which he thinks does not work well. His suggestions are largely a matter of changing incentives for academics, which he thinks are skewed toward the production of unreadable professional texts.

So what is the existing system? The system closest to Selinger’s concerns involves the category of public intellectuals. We have all heard stories of the celebrity-like demands of some of the divas on this circuit, and the prices that some of them command. Here the incentives are enormous, and many academics aspire to this status. But only a limited number of academics can be celebrities. Why? If we go back to some basics, there are some answers. In the mundane case of adopting agricultural methods, most people had a risk-reducing strategy of waiting to see what the early adopters and the most successful farmers did. This means that many people pay attention to a small number of influentials, and to those whom other people respect as influentials. These are the analogue to celebrity intellectuals. The dual structure of listening to people directly and listening to what other people think (with the added consideration of other people’s opinions about whose judgments to respect) leads to hierarchy, simply because the judgments are no longer independent, and pile on to one another. The mechanisms are many, but this much seems obvious: if I do what seems safe because other people are doing it, I add to the “consensus.”
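
The piling-on mechanism is easy to exhibit in a toy simulation (my own construction, with invented parameters): even when every expert is equally competent, letting each new member of the public defer partly to existing followings concentrates attention on a few “influentials.”

```python
import random

random.seed(3)

# A rough sketch (my own construction, invented parameters) of how
# dependent judgments pile up: each member of the public picks one expert
# to follow, partly by independent judgment (uniformly at random) and
# partly by deferring to what others have already done (in proportion to
# existing followings). All experts here are equally competent; a steep
# hierarchy of "influentials" emerges anyway.
N_EXPERTS, N_PUBLIC, DEFERENCE = 50, 5000, 0.7
followers = [1] * N_EXPERTS  # seed each expert with one follower

for _ in range(N_PUBLIC):
    if random.random() < DEFERENCE:
        # defer to the crowd: choose in proportion to current followings
        choice = random.choices(range(N_EXPERTS), weights=followers)[0]
    else:
        # independent judgment: any expert is as good as any other
        choice = random.randrange(N_EXPERTS)
    followers[choice] += 1

followers.sort(reverse=True)
top_share = sum(followers[:5]) / sum(followers)
print(f"top 5 of {N_EXPERTS} equally competent experts hold {top_share:.0%} of the audience")
```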

From the point of view of the person trying to break into the status of public intellectual, this is a lottery (as Adam Smith famously described it, making specific reference to the rewards of philosophers ([1776] 1904, I.10.29)): the number of winners is necessarily small; controlling the secondary judgments is impossible. This means that there are big rewards for a few, and no rewards for many. From a career point of view, choosing to enter this lottery is choosing a lot of risk: better to play the low-risk game of writing safe but obscure journal articles. I take it that Selinger is, however indirectly, objecting to this system, and is suggesting a policy change: increasing the rewards for non-celebrity academic interventions in public discussion. This would have the result of reducing the career risks of doing so, recognizing the value of these interventions, and making them more common, or at least making the attempt more common. In short, it is a suggestion for making a process that is opaque, elitist, and undemocratic into one that is more open, by encouraging more entrants.
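
A back-of-envelope calculation (all payoffs invented purely for illustration) shows the shape of the career choice, and of the intervention Selinger proposes:

```python
# A back-of-envelope sketch (every payoff here is invented) of the career
# calculation behind entering the public-intellectual lottery, and of the
# proposed intervention: raising the baseline reward for non-celebrity
# public engagement rather than touching the jackpot.

def expected_payoff(p_win: float, jackpot: float, baseline: float) -> float:
    """Expected career payoff of attempting public engagement."""
    return p_win * jackpot + (1 - p_win) * baseline

SAFE_JOURNAL_PATH = 10.0  # steady payoff of obscure-but-safe articles

status_quo = expected_payoff(p_win=0.01, jackpot=500.0, baseline=0.0)
reformed = expected_payoff(p_win=0.01, jackpot=500.0, baseline=8.0)

print(f"lottery, status quo: {status_quo:.1f} vs. safe path: {SAFE_JOURNAL_PATH}")
print(f"lottery, with rewards for non-celebrity engagement: {reformed:.1f}")
# Even a risk-neutral academic prefers the safe path under the status quo
# (5.0 < 10.0); raising the baseline (12.9 > 10.0) changes the calculation
# without altering the lottery's winner-take-all character.
```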

In a sense, the problem of elitism is where I came into science studies, with two articles in the seventies written in response to the so-called Ortega hypothesis. That hypothesis held, basically, that only the top 10% or so of scientists actually contributed anything to science, which implied, for its Mertonian advocates, who were at the time devoted to defending the status hierarchy of science, that funding the others was a waste. Daryl Chubin and I argued two things: that the distribution of talent was such that the 90% were not that different from the 10%, which implied not that they were useless but that the system wasted their talent (Turner and Chubin 1976), and that much of eminence was a result of chance (Turner and Chubin 1979). Science studies has long since moved away from the study of stratification. But if what I have said here is close to being correct, the processes producing the lottery character both of normal academic life and of the careers of public intellectuals are deeply ingrained in basic processes of social epistemology that produce “influentials,” and which are especially difficult to reform because they arise from a basic epistemic situation.

Reducing the risks of entering the lottery for becoming a public intellectual would merely make placing a bet more attractive, and induce more people to play, without changing its character. This may be a good thing—I am not inclined to get too normative here. My point would be that “public discussion” also has its biases, including that of elevating divas, but also that of directing opinion in certain ways. But I do want to note that there is vociferous hostility on the part of some philosophers—Habermas especially—to the democratization of public discourse via the internet, which he understands as a bad development challenging the dominance of genuine expertise over the public (2006). This is not exactly the same issue as that of opening up public discussion by encouraging more academics to participate, but it is not unrelated. If you like the present top-down structure of expertise-public relations, you are not going to like its erosion.

A “liberal” view of public discussion would be this: let all the viewpoints out and let the members of the public choose whom to believe. Selinger’s proposal would facilitate this by putting out more expert opinions to choose from. The illiberal view—that of Habermas and Kitcher—is that the public is incompetent to decide, and that what they are fed should be controlled so that they are led in the right direction.[2] From my point of view, what is interesting about these viewpoints is that they make assumptions about the capacities and biases of the public, and also about expert opinion, that are (radically) at variance with one another. My thought is that we can begin at least to grapple with the question of who is empirically right about the differences.
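
One way to see that the dispute is empirical rather than purely normative is a Condorcet-style simulation (a deliberately crude sketch on my own assumptions, with independence and a single competence parameter doing a great deal of work): which regime serves the public better turns on a parameter the two camps implicitly set differently.

```python
import random

random.seed(11)

# A crude Condorcet-style sketch (my construction, not Turner's or
# Selinger's) of why the liberal/illiberal dispute is at bottom empirical.
# A public of n voters chooses between a true and a false claim; each
# voter is right with probability p. Majority competence depends sharply
# on the assumed p, the parameter the two camps implicitly disagree on.

def majority_accuracy(p: float, n_voters: int = 101, trials: int = 2000) -> float:
    """How often a simple majority of independent voters gets it right."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p for _ in range(n_voters))
        wins += correct > n_voters // 2
    return wins / trials

for p in (0.45, 0.5, 0.55, 0.6):
    print(f"individual competence {p:.2f} -> majority right "
          f"{majority_accuracy(p):.1%} of the time")
# Above p = 0.5, the open, aggregated public beats any single gatekeeper;
# below it, aggregation amplifies error. Which regime we are in is an
# empirical question, not one settled by normative theory alone.
```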

This brings me to Selinger’s other issue: the question of who owns data, the larger question, raised by it, of who owns science, and the related question of “technology.” I think the issues parallel the others. Certainly there are good reasons to be suspicious of such things as science that is paid for by people who have an interest in the outcome. There are also reasons to be suspicious of the uses of data that are kept secret from us. But it is an illusion to think that there is a pure system which avoids these problems. Ironically, this was a point made by the Left, notably by John Desmond Bernal, against Polanyi: to his objections to the “planning of science,” they replied that there was no such thing as unplanned science, since decisions to fund research were a form of planning, albeit a wasteful and undirected one. Their alternative: science understood as technology in service to man. Indeed, it was part of their Marxism to say that the real scientists in history, and the possessors of knowledge, were the craftsmen and technologists, on whom scientists were parasitic. And they thought—even more ironically, given the present role of corporate science—that a revolution ending the capitalist order was needed in order to bring about the full utilization of science for human ends (Turner 2008).

The deep interdependence of science and technology that Bernal and his followers on the Left promoted in the thirties is a given. Moreover, it is central to the legitimacy of science, something the liberal critics of Bernalism did not appreciate. Much of contemporary science is “platform science,” that is to say, the product of advanced technologies of observation and data collection, such as satellites and CERN. And of course most of the decision-making I discuss in the book involves, or is straightforwardly about, technology in this sense: the discussion of arid land policy is also a discussion of artesian wells and dams; the discussion of expertise in the EU is about policies for technologies such as telecommunications; the Columbia shuttle and the atom bomb were both highly aggregated technologies; the discussion of city planning in Aalborg is about the technologies of transportation; the Homestake mine experiment was a technological marvel made possible by the pre-existing technology of the mine itself; and the medical cases of cholera crucially involved the technology of pumping and supplying water and disposing of waste.

In the chapter on post-normal science I discuss the concerns of the physicists of the atomic bomb project that the technologization of atomic science would mean the end of the openness and competitiveness of the earlier period in physics, and note that they tried to preserve these through institutional means that enabled competition and divergent thinking. The system that we used to have was careful to use indirect means, with many layers, to distribute money for research; now most of it is given to people who can promise results, by people who want particular results. What this suggests today is that technoscience—the science that Bernal dreamed of and which we have in some sense now realized—needs to be understood on its own terms, as a system that aggregates, and thus generates, knowledge in its own way, with its own biases. This is my response to Selinger’s query about technology.

Science and knowledge are not free. What we know, or think we know, is the product of systems that are organized, have incentives, constraints, biases, and the rest of it. This, I am inclined to say, is an existential situation that we need first and foremost to face up to rather than escape. We can sometimes identify these biases, especially with a long historical record. As I note in the book, one can read Thomas Kuhn as having done this with normal science: showing what its conservative biases consist in, and how they work. But doing so is difficult, retrospective, and not helpful with new issues. The only way I know to begin to do this is to look at the historical episodes, with their known outcomes, that resemble the present situation in science, and that is a large part of what I have tried to do. I am painfully aware of how limited and imprecise this project is. But it is nevertheless important to ask these questions.

References

Habermas, Jürgen. “Political Communication in Media Society: Does Democracy Still Enjoy an Epistemic Dimension? The Impact of Normative Theory on Empirical Research.” Communication Theory 16 (2006): 411-426.

Scott, Roy V. The Reluctant Farmer: The Rise of Agricultural Extension to 1914. Urbana: University of Illinois Press, 1970.

Selinger, Evan. “The Politics of Expertise, Patronage, and Public Engagement.” Social Epistemology Review and Reply Collective 3, no. 8 (2014): 13-18.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. 5th edn. London: Methuen & Co. Ltd. ([1776] 1904). http://www.econlib.org/library/Smith/smWN.html (accessed 12 August 2014).

Turner, Stephen and Daryl Chubin. “Another Appraisal of Ortega, the Coles, and Science Policy: The Ecclesiastes Hypothesis.” Social Science Information 15 (1976): 657-662.

Turner, Stephen and Daryl Chubin. “Chance and Eminence in Science: Ecclesiastes II.” Social Science Information 18 (1979): 437-449.

Turner, Stephen. “The Third Science War.” Review of Who Rules in Science: An Opinionated Guide to the Wars, by James R. Brown, and Science, Truth, and Democracy, by Philip Kitcher. Social Studies of Science 33, no. 4 (2003): 581-611.

Turner, Stephen. “Public Sociology and Democratic Theory.” Sociology 41, no.5 (2007): 785-798.

Turner, Stephen. “The Social Study of Science before Kuhn.” The Handbook of Science and Technology Studies 3rd edn. Edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman. Cambridge, MA: MIT Press (2008), 33-62.

Turner, Stephen. “Deintellectualizing American Sociology: A History, of Sorts.” Journal of Sociology 48, no. 4 (2012): 346-63. doi: 10.1177/1440783312458226

Turner, Stephen. American Sociology: From Pre-Disciplinary to Post-Normal. Basingstoke, UK: Palgrave (2013a).

Turner, Stephen. “What Can We Say About the Future of Social Science?” Anthropological Theory 13, no. 3 (2013b): 187-200.

Turner, Stephen. “The Blogosphere and Its Enemies.” The Sociological Review 61, no. S2 (2013c): 160-179. DOI: 10.1111/1467-954X.12105

[1] I discuss Kitcher elsewhere (2003), as well as recent sociology (2012, 2013a).

[2] I contrast these positions in relation to democratic theory in “Public Sociology and Democratic Theory” (2007). A case study providing a counterexample to the idea that the kind of open public discussion facilitated by the internet has degraded rationality and illegitimately undermined respect for expert opinion is given in a paper on women’s responses to expert claims about the effects of hysterectomies and oophorectomies (2013c). I have also made some suggestions about the nature of communities of expertise and how they differ from those of science (2013b).

One response to Thinking Epistemically about Experts and Publics: A Response to Selinger, Stephen Turner

    For many years, I have been an appreciative reader of Stephen Turner’s books and essays, but I do not claim to know just what he meant when he wrote that Selinger “raises an important question that is touched on in several essays but not fully developed: the problem of getting expert knowledge possessed by academics into something like public discussion or the public domain.” I agree that the question is important, but do not agree with the way Turner formulates it here.

    To say that academics “possess” expert knowledge suggests the “knowledge” is a kind of mental object that academics share. My understanding of Turner’s The Social Theory of Practices, Brains/Practices/Relativism, Explaining the Normative, and Explaining the Tacit is that he rejects the proposition that categories or collectivities of persons can “share” common mental objects. Again, this is just my interpretation of Turner’s position, and my interpretation of the way he has stated the important question upon which he elaborates in his reply to Selinger.

    I am also uneasy with Turner’s confident assertion that there is a “problem of expertise” and a larger “problem of knowledge in society.” It seems to me he would have us imagine that “expertise” is a special kind of “knowledge,” that is, a special kind of mental object that is collectively “possessed” by a set of experts. “Knowledge in society” is a highly metaphorical expression, suggesting that society is a large “container” for a large, but undefined, mental object designated “knowledge.”

    The fact that I am uneasy with Turner’s initial formulation of the question he is trying to answer does not mean that I disagree with all he has written in his reply to Selinger. Perhaps I ought to have emphasized the points with which I agree, rather than my disagreement with the way he stated the question.

    For example, on p. 40 Turner says: “But if what I have said here is close to being correct, the processes producing the lottery character both of normal academic life and of the careers of public intellectuals are deeply ingrained in basic processes of social epistemology that produce ‘influentials,’ and which are especially difficult to reform because they arise from a basic epistemic situation.”

    This strikes me as being true, and also as being expressed very well. The “lottery character” to which he refers implies an assertion that the “systems” that produce elites in normal academic life and “celebrated” public intellectuals always include random or stochastic processes. In these “systems,” God does “throw dice.” This means that there are non-systematic processes inextricably embedded within the “systems” that sort the lottery players into a few “winners” and many “losers.”

    I argue that the “basic epistemic situation” that he posits as a kind of generative mechanism for the lottery-like aspect of intellectual stratification is one in which there are no “shared mental objects.” What are shared are the skin-out aspects of cultural symbols, the aspects of symbols that are accessible to our external senses. What we do not share are the skin-in meanings each person attributes to those shared things.
