Ahead of the Curve // John J. Reilly Center // University of Notre Dame

Ahead Of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies

April 22-23, 2014

University of Notre Dame

“Ahead of the Curve” will provide a forum to discuss the “action-oriented” chapters of the soon-to-be-released National Academy of Sciences report, “Emerging and Readily Available Technologies and National Security.” The report was commissioned by the Defense Advanced Research Projects Agency (DARPA) in order to begin a discussion about the conduct and applications of research on military technology, as well as their unforeseen and inadvertent consequences. Speakers will include members of the NAS committee that wrote the report, along with distinguished experts on the ethics, law, and social impacts of new weapons technologies and representatives of agencies and organizations that are home to cutting-edge weapons research. Presentations will address the ethical, legal, and societal issues that policy makers, researchers, and industries need to anticipate as new technologies arise, specifically in fields such as robotics, autonomous systems, prosthetics and human enhancement, cyber weapons, information warfare technologies, synthetic biology, and nanotechnology. Our primary goal is to help government agencies, institutions, and researchers grow the expertise necessary for early and continuing engagement with the ethical, legal, and societal implications of new weapons technologies as they are planned and developed. We also aim to generate a broad public audience for the NAS report, this being an area in which public education is necessary, as is elevating the level of factually well-informed public discourse.

via Ahead of the Curve // John J. Reilly Center // University of Notre Dame.

Lethal Autonomous Robots (“Killer Robots”) | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA

Lethal Autonomous Robots (“Killer Robots”)

Monday, 18 November 2013 05:00 pm to 07:00 pm EST

Location: Global Learning Center (in Tech Square), room 129

WATCH the simultaneously streamed WEBCAST at: http://proed.pe.gatech.edu/gtpe/pelive/tech_debate_111813/

Debate and Q&A for both

Lethal Autonomous Robots (LARs) are machines that can decide to kill. Such technology has the potential to revolutionize modern warfare and more. Understanding LARs is essential to deciding whether their development and possible deployment should be regulated or banned. Are LARs ethical?

via Lethal Autonomous Robots ("Killer Robots") | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA.

‘Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education

“The ‘big’ there is purely marketing,” Mr. Reed said. “This is all fear … This is about you buying big expensive servers and whatnot.”

via 'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education.

Also funny what he says about his own education ….

What does it mean to prepare for life in ‘Humanity 2.0’?

Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller’s book Preparing for Life in Humanity 2.0. We’ll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 that should be interesting to follow for the next few days.

Here are the abstracts for our talks:

Humanity 2.0, Synthetic Biology, and Risk Assessment

Francis Remedios, Social Epistemology Editorial Board member

As a follow-up to Fuller’s Humanity 2.0, which is concerned with the impact of the biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that risk taking is an essential part of the human condition and that the proactionary principle, which embraces risk taking, should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material; it has been dubbed life 2.0. Should one principle be favored over the other?

The Political Epistemology of Humanity 2.0

David Budtz Pedersen, Center for Semiotics, Aarhus University

In this paper I confront Fuller’s conception of Humanity 2.0 with the techno-democratic theories of Fukuyama (2003) and Rawls (1999). What happens to democratic values such as inclusion, rule of law, equality, and fairness in an age of technology-intensive, output-based policymaking? Traditional models of input democracy are based on the moral intuition that the unintended consequences of natural selection are undeserved and call for social redress and compensation. In Humanity 2.0, however, these unintended consequences become intended ones as an effect of bioengineering and biomedical intervention. This, I argue, leads to an erosion of the natural-luck paradigm on which standard theories of distributive justice rest. Hence, people can no longer be expected to recognize each other as natural equals. Now compare this claim to Fuller’s idea that the welfare state needs to ensure the collectivization of the burdens and benefits of radical scientific experimentation. Even if this might energize the welfare system and deliver a new momentum to the welfare state in an age of demographic change, it is not clear on what basis this political disposition for collectivizing such scientific benefits rests. In short, it seems implausible that the new techno-elites, who have translated the unintended consequences of natural selection into intended ones, will be convinced to distribute the benefits of scientific experiments to the wider society. If the biosubstrate of the political elite is radically different, in terms of intelligence, life expectancy, bodily performance, etc., from that of the disabled, it is no longer clear what the basis of redistribution and fairness should be. Hence, I argue that important elements of traditional democracy are still robust and necessary to vouch for the legitimacy of Humanity 2.0.
Fuller’s Categorical Imperative: The Will to Proaction

J. Britt Holbrook, Georgia Institute of Technology

Two 19th-century philosophers – William James and Friedrich Nietzsche – and one on the border of the 18th and 19th centuries – Immanuel Kant – underlie Fuller’s support for the proactionary imperative as a guide to life in ‘Humanity 2.0’. I make reference to the thought of these thinkers (James’s will to believe, Nietzsche’s will to power, and Kant’s categorical imperative) in my critique of Fuller’s will to proaction. First, I argue that, despite a superficial resemblance, James’s view about the risk of uncertainty does not map well onto the proactionary principle. Second, however, I argue that James’s notion that our epistemological preferences reveal something about our ‘passional nature’ connects with Nietzsche’s idea of the will to power in a way that allows us to diagnose Fuller’s ‘moral entrepreneur’ as revelatory of Fuller’s own ‘categorical imperative’. But my larger critique rests on the connection between Fuller’s thinking and that of Wilhelm von Humboldt. I argue that Fuller accepts not only Humboldt’s ideas about the integration of research and education, but also – and this is the main weakness of Fuller’s position – Humboldt’s lesser-recognized thesis about the relation between knowledge and society. Humboldt defends the pursuit of knowledge for its own sake on the grounds that it is necessary to benefit society. I criticize this view and argue that Fuller’s account of the public intellectual as an agent of distributive justice is inadequate to escape the critique of the pursuit of knowledge for its own sake.

Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are increasingly being used in ways that go beyond their design, and bibliometricians are increasingly asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the beginning of the development of bibliometrics. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are aware of this phenomenon, too, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics” is one indication of such awareness. The recent sessions at ISSI in Vienna and STI2013 in Berlin on which Wouters et al. report are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I don’t want to blame them for not having thought through possible other uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us in this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which bibliometrics of individual researchers maximize the power and creativity of those researchers. Once we standardize metrics, it becomes that much easier to institutionalize them.
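To make concrete what an individual-level metric looks like, and how much of a research record it discards, here is a minimal sketch of the familiar h-index: the largest h such that a researcher has h papers cited at least h times each. This is my own illustration, not anything proposed in the NAS report or by the bibliometricians mentioned above, and the citation counts in the example are hypothetical.

```python
# Minimal sketch of the h-index, one widely used individual-level metric.
# The h-index is the largest h such that a researcher has h papers,
# each cited at least h times.

def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical citation counts for one researcher's papers.
    print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3
```

A whole research record reduced to one small integer: that compression is exactly what makes such metrics so easy for administrators to command, and why I think we academics, not just the bibliometricians, should decide how (and whether) they get standardized.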

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Coming soon …


- Featuring nearly 200 entirely new entries

- All entries revised and updated

- Plus expanded coverage of engineering topics and global perspectives

- Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents