Annual Question

WHAT *SHOULD* WE BE WORRIED ABOUT?

Clematis, 2013, by Katinka Matson (www.katinkamatson.com)



"The World's Smartest Website"

 —John Naughton, The Observer

THE 2013 EDGE QUESTION . . .
___________________________________________________________

We worry because we are built to anticipate the future. Nothing can stop us from worrying, but science can teach us how to worry better, and when to stop worrying.

Conversation

THE NORMAL WELL-TEMPERED MIND

Daniel C. Dennett [1.8.13]

The vision of the brain as a computer, which I still champion, is changing so fast. The brain's a computer, but it's so different from any computer that you're used to. It's not like your desktop or your laptop at all, and it's not like your iPhone except in some ways. It's a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn't do any of this) is a way of thinking about, in a disciplined way, and taking seriously phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It's just mind-boggling.

DANIEL C. DENNETT is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University. His books include Consciousness Explained; Darwin's Dangerous Idea; Kinds of Minds; Freedom Evolves; and Breaking the Spell.

Daniel C. Dennett's Edge Bio Page


[50 minutes]


THE NORMAL WELL-TEMPERED MIND

I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub-persons that are basically agents. They're homunculi, and this looks like a regress, but it's only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that's a great way of thinking about cognitive science. It's what good old-fashioned AI tried to do and is still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn't realize how much, and more recently it's become clear to me that it's a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
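To make the over-simplification concrete, here is a minimal sketch of a McCulloch-Pitts style threshold unit, written in Python for illustration; the code is not from the conversation, and the weighted-sum formulation is one common textbook rendering of the model.

```python
# Minimal sketch of a McCulloch-Pitts style threshold unit (illustrative only).
# Inputs are 0/1 signals; excitatory connections get weight +1, inhibitory -1.
# The unit "fires" (returns 1) when net excitation reaches the threshold.

def mcp_unit(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Example: a two-input AND gate, one of the Boolean functions such units can
# implement; networks of them can in principle compute any Boolean function.
print(mcp_unit([1, 1], [1, 1], threshold=2))  # 1: both excitatory inputs active
print(mcp_unit([1, 0], [1, 1], threshold=2))  # 0: threshold not reached
```

The point of Dennett's correction is that a real neuron is nothing like this handful of lines: it is a little agent with an agenda, not a switch.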

Conversation

TALES FROM THE WORLD BEFORE YESTERDAY

Jared Diamond [12.31.12]
Introduction by John Brockman

INTRODUCTION

Over the years I've had the privilege to work with some of the more interesting thinkers of our time, individuals who, through their research in biology, physics, psychology, and computer science, provide us with the evidence-based results that form the basis of our most reliable knowledge about who and what we are. In this regard, nothing beats sitting down with Jared Diamond after one of his many (41 to date) trips to New Guinea to listen to this master story-teller hold forth on the windows to our past, whether the topic is rare birds, "primitive peoples", birth practices, the lives of the old, war, or the characteristics of all human societies until the rise of state societies with laws and government, beginning around 5,500 years ago.

A few months ago I visited him at his home in Bel Air, California, just a few doors up the road from the Bel Air Hotel. We sat for an hour while he recounted some of his experiences in New Guinea. Fortunately, I had my video camera with me. What follows are three videotaped stories, which I have taken the liberty of presenting with the following titles:

• "If You Camp Under Dead Trees, And Each Dead Tree Has A One In 1,000 Chance Of Falling On You And Killing You"

• "One Of The Stupider, More Dangerous Things That I Did In My Life"

• "How I Discovered The Long-Lost Bowerbird, Initially Without Realizing It"

— JB

JARED DIAMOND is Professor of Geography at the University of California, Los Angeles. His latest book, published today, is The World Until Yesterday: What Can We Learn from Traditional Societies? His other books include Collapse: How Societies Choose to Fail or Succeed and the Pulitzer Prize-winning Guns, Germs, and Steel: The Fates of Human Societies, which won Britain's 1998 Rhone-Poulenc Science Book Prize.

Jared Diamond's Edge Bio Page


The significance of the guy holding out his arm, dipping at the wrist, is that that's a gesture that magicians use to imitate the cassowary. The cassowary is New Guinea's biggest bird. It's flightless. It's like a small ostrich. Weighs up to 100 pounds. And it has razor-sharp legs that can disembowel a man. The sign of the cassowary, if you hold out your arm like this, that's the cassowary rolling its head and dipping its head when it's ready to charge. So magicians will imitate a cassowary in order to show their power. Because the cassowary's big and powerful. Magicians identify with the cassowary. They intimidate people.

 

Conversation

UNDERSTANDING IS A POOR SUBSTITUTE FOR CONVEXITY (ANTIFRAGILITY)

Nassim Nicholas Taleb [12.12.12]

The point we will be making here is that logically, neither trial and error nor "chance" and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.

NASSIM NICHOLAS TALEB, essayist and former mathematical trader, is Distinguished Professor of Risk Engineering at NYU’s Polytechnic Institute. He is the author of the international bestseller The Black Swan and the recently published Antifragile: Things That Gain from Disorder (US: Random House; UK: Penguin Press).

Nassim Nicholas Taleb's Edge Bio


UNDERSTANDING IS A POOR SUBSTITUTE FOR CONVEXITY (ANTIFRAGILITY)

Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, "aim"), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must be in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols.
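A toy simulation can make the convexity point concrete. The payoff function and numbers below are illustrative assumptions, not Taleb's; the only property that matters is that losses per trial are capped while gains are not, so the average payoff of many variable trials, E[f(X)], exceeds the payoff of the average trial, f(E[X]).

```python
# Illustrative sketch of a convex (antifragile) payoff: each trial has a small,
# bounded downside and an unbounded upside, so random trial and error gains
# from variability (Jensen's inequality). Numbers are assumptions for illustration.
import random

random.seed(0)

def payoff(outcome):
    # Losses capped at a unit research cost; gains kept in full.
    return max(outcome, -1.0)

trials = [random.gauss(0.0, 10.0) for _ in range(100_000)]

mean_payoff = sum(payoff(x) for x in trials) / len(trials)   # E[f(X)]
payoff_of_mean = payoff(sum(trials) / len(trials))           # f(E[X])

print(f"average payoff of the trials: {mean_payoff:.2f}")    # clearly positive
print(f"payoff of the average trial:  {payoff_of_mean:.2f}") # roughly zero
```

Widening the spread of the trials raises the first number while leaving the second essentially unchanged, which is the sense in which the asymmetry of payoffs, rather than luck or direction, is the driver.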

Conversation

HOW TO WIN AT FORECASTING

Philip Tetlock [12.6.12]
Introduction by Daniel Kahneman

The question becomes, is it possible to set up a system for learning from history that's not simply programmed to avoid the most recent mistake in a very simple, mechanistic fashion? Is it possible to set up a system for learning from history that actually learns in a more sophisticated way, one that manages to bring down both false positives and false negatives to some degree? That's a big question mark.

Nobody has really systematically addressed that question until IARPA, the Intelligence Advanced Research Projects Agency, sponsored this particular project, which is very, very ambitious in scale. It's an attempt to address the question of whether you can push political forecasting closer to what philosophers might call an optimal forecasting frontier. An optimal forecasting frontier is a frontier along which you just can't get any better.
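The trade-off Tetlock describes can be sketched with synthetic data; the simulation below is an illustration of the idea, not the IARPA project's method. Turning probabilistic forecasts into yes/no calls with a threshold trades false positives against false negatives, and a more accurate forecaster lowers both at once, which is what moving toward the optimal forecasting frontier means.

```python
# Illustrative sketch (synthetic data): the false positive / false negative
# trade-off in probabilistic forecasting, and how forecaster skill shifts it.
import random

random.seed(1)

def simulate(skill, n=10_000):
    """(forecast, outcome) pairs; skill=0 is pure noise, skill=1 tracks true odds."""
    pairs = []
    for _ in range(n):
        true_p = random.random()
        forecast = skill * true_p + (1 - skill) * random.random()
        outcome = 1 if random.random() < true_p else 0
        pairs.append((forecast, outcome))
    return pairs

def error_rates(pairs, threshold):
    fp = sum(1 for f, y in pairs if f >= threshold and y == 0)
    fn = sum(1 for f, y in pairs if f < threshold and y == 1)
    neg = sum(1 for _, y in pairs if y == 0)
    pos = sum(1 for _, y in pairs if y == 1)
    return fp / neg, fn / pos

for skill in (0.2, 0.8):
    print(f"forecaster skill = {skill}")
    pairs = simulate(skill)
    for t in (0.3, 0.5, 0.7):
        fp, fn = error_rates(pairs, t)
        print(f"  threshold {t}: false positives {fp:.2f}, false negatives {fn:.2f}")
```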

PHILIP E. TETLOCK is Annenberg University Professor at the University of Pennsylvania (School of Arts and Sciences and Wharton School). He is author of Expert Political Judgment: How Good Is It? How Can We Know? 

Philip Tetlock's Edge Bio Page 


[46:50 minutes]

INTRODUCTION
by Daniel Kahneman

Philip Tetlock’s 2005 book Expert Political Judgment: How Good Is It? How Can We Know? demonstrated that accurate long-term political forecasting is, to a good approximation, impossible. The work was a landmark in social science, and its importance was quickly recognized and rewarded in two academic disciplines—political science and psychology. Perhaps more significantly, the work was recognized in the intelligence community, which accepted the challenge of investing significant resources in a search for improved accuracy. The work is ongoing, important discoveries are being made, and Tetlock gives us a chance to peek at what is happening.

Tetlock’s current message is far more positive than his earlier dismantling of long-term political forecasting. He focuses on the near term, where accurate prediction is possible to some degree, and he takes on the task of making political predictions as accurate as they can be. He has successes to report. As he points out in his comments, these successes will be destabilizing to many institutions, in ways both multiple and profound. With some confidence, we can predict that another landmark of applied social science will soon be reached.

Daniel Kahneman, recipient of the Nobel Prize in Economics, 2002, is the Eugene Higgins Professor of Psychology Emeritus at Princeton University and author of Thinking, Fast and Slow.


HOW TO WIN AT FORECASTING
A Conversation with Philip Tetlock 

There's a question that I've been asking myself for nearly three decades now and trying to get a research handle on, and that is: why is the quality of public debate so low, and why does the quality often seem to deteriorate the more important the stakes get?

About 30 years ago I started my work on expert political judgment. It was the height of the Cold War. There was a ferocious debate about how to deal with the Soviet Union. There was a liberal view; there was a conservative view. Each position led to certain predictions about how the Soviets would be likely to react to various policy initiatives.

Conversation

COLLECTIVE INTELLIGENCE

Thomas W. Malone [11.21.12]

As all the people and computers on our planet get more and more closely connected, it's becoming increasingly useful to think of all the people and computers on the planet as a kind of global brain.


THOMAS W. MALONE is the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. He was also the founding director of the MIT Center for Coordination Science and one of the two founding co-directors of the MIT Initiative on "Inventing the Organizations of the 21st Century".

Thomas W. Malone's Edge Bio Page 


[31:45 minutes]


COLLECTIVE INTELLIGENCE

Pretty much everything I'm doing now falls under the broad umbrella that I'd call collective intelligence. What does collective intelligence mean? It's important to realize that intelligence is not just something that happens inside individual brains. It also arises with groups of individuals. In fact, I'd define collective intelligence as groups of individuals acting collectively in ways that seem intelligent. By that definition, of course, collective intelligence has been around for a very long time. Families, companies, countries, and armies: those are all examples of groups of people working together in ways that at least sometimes seem intelligent.

Conversation

WHAT DO ANIMALS WANT?

Marian Stamp Dawkins [10.31.12]

Whatever anybody says, I feel that the hard problem of consciousness is still very hard, and to try and rest your ethical case on proving something that has baffled people for years seems to me to be not good for animals. Much, much better to say let's go for something tangible, something we can measure. Are the animals healthy, do they have what they want? Then if you can show that, then that's a much, much better basis for making your decisions.


MARIAN STAMP DAWKINS is professor of animal behaviour at the University of Oxford, where she heads the Animal Behaviour Research Group. She is the author of Why Animals Matter.

Marian Stamp Dawkins's Edge Bio Page


[35 minutes]

The Reality Club: Nicholas Humphrey


WHAT DO ANIMALS WANT?

The questions I'm asking myself are really about how much we really know about animal consciousness. A lot of people think we do, or think that we don't need scientific evidence. It really began to worry me that people were basing their arguments on something that we really can't know about at all. One of the questions I asked myself was: how much do we really know? And is what we know the best basis for arguing for animal welfare? I've been thinking hard about that, and I came to the conclusion that the hard problem of consciousness is actually very hard. It's still there, and we kid ourselves if we think we've solved it.
