
The Sherman Kent Center for Intelligence Analysis

Occasional Papers: Volume 2, Number 2, Jan. ‘03

Tensions in Analyst-Policymaker Relations:
Opinions, Facts, and Evidence
Jack Davis
Sherman Kent Center

 

This memorandum on tensions in analyst-policymaker relations is occasioned by recent media accounts of DOD-Intelligence Community differences over the extent of Iraqi-al Qa’ida ties. Similar patterns of tension have existed over the decades. The following conclusions could have been crafted about Vietnam War issues in the 1960s, Soviet strategic intentions in the 1970s, or Central American insurgencies in the 1980s.

  1. Tension over policymaker criticism of intelligence performance on hot-button issues is normal. Policymakers believe criticism of what they see as inadequate analysis is part of their job description, especially when they conclude that Directorate of Intelligence assessments complicate their action agendas. For their part, analysts find it difficult to distinguish between bona fide tradecraft criticism and complaints generated by the politics of policymaking.
  2. The intensity and political content of policymaker criticism, and thus the analysts’ pain, can vary considerably. Since politics and policymaking are essentially inseparable and analysts have no alternative market for their wares, they must learn to live with and manage recurring tensions as best they can.
  3. One key to effective management is to take seriously the analytic elements of criticism. A tightening of tradecraft standards would help take off the table the issue of whether analysts have curbed their own cognitive and policy biases in assessing ambiguous evidence. For example, analysts could pay greater attention to critical evaluation of the gaps in information that often underlie disputed judgments. If extended tradecraft efforts reinforce previous judgments, analysts are professionally bound to stand by them.
  4. The analysts’ pain is magnified when colleagues levy charges of “unprofessional analysis” and “politicization” against attempts to address policymaker concerns through more deliberate tradecraft or Alternative Analysis. Effective institutional response on contentious issues requires teamwork—not turf warfare. In particular, analysts must use charges of politicization responsibly—against distortions of facts and judgments that result in a policy bias, whether politically motivated or generated by conspicuously poor tradecraft.
  5. Leadership can ease tensions with policy officials and among analysts by articulating robust corporate tradecraft responses for disputes over interpretation of ambiguous evidence. After 50 years of recurring clashes of policymaker-analyst values and egos, ample “best practices” are there to be codified.

 

Politics and Policymaking

Over the decades, influential policymakers, including those who have expressed general satisfaction with DI analytic support, have been critical of DI performance on individual issues central to their policy agendas. As a rule, the criticism reflects some mixture of the hardball politics of policymaking and pointed tradecraft issues.

Four assumptions about the political roots of analyst-policymaker tensions generally condition this paper’s recommendations to analysts for managing resulting tensions in the relationship.

  • In the American system, government and politics are all but synonymous, and politics are largely characterized by competing personalities and agendas.
  • Thus, the heavy presence of politics and personalities in the national security policymaking process is not only unavoidable but also as American as apple pie.
  • Joining in the policymaking process can be uncomfortable for analysts, who work to minimize, and even to deny the existence of, any impact of their own political preferences and personalities on intelligence assessments. And engagement can be painful when policy officials step up the criticism and make it public.
  • But engagement is nonetheless essential, since analysts are professionally charged not only with maintaining analytic integrity but also with ensuring the access and credibility necessary to provide distinctive value-added to policy clients.

In short, analysts gain little by decrying the legitimacy of the political dimension of policymaking. Their best course of action is to work to take complaints about tradecraft matters off the table and hope this exposes, and eases, any politically based pressures.

 

Defining The Tradecraft Dimension of the Problem

Former DCI James Schlesinger (among others) once observed: “Every American is entitled to his own opinion but not his own facts.” That is a sound enough framing of analyst-policymaker relations, as far as it goes.

On issues central to their agendas, well-informed policymakers often gain insights from intelligence analysts’ well-argued estimative judgments. As a rule, though, they insist on being the national security analysts of last resort when it comes to matters that are uncertain in the sense of being unknowable (what Saddam will do if…). Analysts are well advised to accept their clients’ insistence on rendering the final judgment on complex substantive issues as a proper responsibility at their pay grade.

  • Regarding opinions, on more than one occasion, an official has reminded an analyst that if an estimative judgment in a policy assessment proves wrong or otherwise unhelpful, the President will call the policymaker on the carpet, not the intelligence producer.

In contrast, only rarely does a policymaker claim the right, as national security analyst, to manipulate matters knowable and known (what Saddam said on television yesterday).

  • Regarding facts, a prominent official once observed that policymakers are like surgeons. They don’t last long if they ignore what they see once they cut the patient open.

But what about differences over the meaning and adequacy of the facts; that is, about the quality of the evidence? Analyst-policymaker disputes are usually most acute in interpreting the evidence about matters that are knowable but not fully known to either intelligence or policy professionals (what Saddam has or has not done regarding extending support to al Qa’ida).

  • Regarding quality of evidence, former DCI William Casey’s admonition to analysts about the Soviet role in international terrorism set forth his standard for keeping a policy-sensitive issue on the table: “Absence of evidence is not evidence of absence.” In effect, if a development or relationship is plausible, analysts cannot prove a negative to the satisfaction of an official with a mind and agenda of his own.
  • In disputes with analysts, policymakers can insist on raising as well as lowering the bar of proof regarding judgments that could have a negative impact on their agendas. Once when an analyst averred that reliable evidence indicated a development undermining an Administration policy initiative was almost certainly taking place, a policy critic retorted that the analyst “couldn’t get a murder-one conviction in an American court with [his] evidence.”

 

The Critics’ Analytic Doctrine and Challenges to DI Tradecraft

Part of the analyst-policymaker tension in evaluating evidence reflects a difference in professional attitude toward odds. To an analyst, the judgment that something is unlikely usually means the odds against an estimative interpretation of ongoing events or projection of future developments are roughly three to one. Given such odds, she is ready to move on to the next question.

In contrast, to a policymaker with an agenda to advance, the same starting estimate of a roughly one-in-four chance can make it worthwhile to stay on the case. Moreover, on hot-button issues the official will not overlook the prospect that the analysts’ judgment could be off base because they are insufficiently informed about the current state and fluidity of foreign forces at play, or because they do not appreciate the impact on developments of US carrots and sticks, if a policy initiative gathers backing.

The reluctance of policymaker critics to rely on what they see as unhelpful assessments on hot-button issues goes beyond professionally necessary “positive thinking” on their part. Critics also point out the following systemic weaknesses in the analysts’ tradecraft. [1]

  1. Since cognitive bias is pervasive, analysts, like all observers, tend to see more quickly and vividly what they expect to see; and, conversely, tend not to see and properly credit information that would undermine their prior judgments. Critics contend that analysts delude themselves if they think they are exempt because of their claims to “objectivity.”
    • Critics made these points in defending requests that analysts take another look at the rate of success of the strategic hamlet program in Vietnam (1960s), or Soviet plans for winning a nuclear war (1970s), or the extent of today’s Iraqi-al Qa’ida connections.
  2. The analysts’ phrase “we have no evidence that X exists” is judged particularly unhelpful by those officials dedicated to either blunting the threat or seizing the policy opportunity in question.
    • Deputy Secretary of Defense Paul Wolfowitz was recently quoted as saying policymaking is not a “court of law.” And other critics have noted that analysts rarely admit they have no evidence that X does not exist.
  3. In any case, the analysts’ judgment on developments when evidence is ambiguous hardly qualifies as “the truth.” Secretary of Defense Donald Rumsfeld, in a recent press briefing, went to great length to define the limits of the analysts’ opinions in such circumstances.
    • “If you think about it, what comes out of intelligence is not fixed, firm conclusions. What comes out are a speculation, an analysis, probabilities, possibilities, estimates. Best guesses.”
  4. Analyst training and incentives place too much emphasis on “straight line, single outcome” analysis on complex and uncertain issues. Critics say this “make the call” approach is both unhelpful to sound decisionmaking and prone to error.
    • Assertiveness in the face of uncertainty, according to Wolfowitz, can make estimative analysis into a weapon for one policymaking camp to use against another, whereas tabling alternative interpretations would provide a tool useful to all.
    • Some may prize the analyst who can come quickly to a crisp conclusion on issues surrounded by uncertainty. But this methodology, the critics note, helps explain the record of analytic failures from the Cuban Missile Crisis to the Iraqi invasion of Kuwait.
  5. The analysts’ main job, according to critics, is to provide deliverables to enable policy analysts to reach sound judgments despite the uncertainty that fogs complex world events. Focus should be on strengths and weaknesses of foreign players, tendencies and motivations, triggers of change and leverage points—not on what critics derisively call the analysts’ “opinions.”
    • Remember former DDI Doug MacEachin’s scout-coach analogy. The scout’s job is to gather and structure information on opponents to help the coach develop the best game plan, and not to predict the final score before the game is played.
  6. Especially when policy stakes are high, analysts should expend much more effort evaluating what they don’t know and why they don’t know it. For example, could gaps in analysts’ information on potentially harmful developments be caused by Denial and Deception (D&D) operations, or inadequate US collection, or flawed assumptions about which pathways and relationships deserve analytic focus?
    • The 1998 Missile Commission report charged intelligence analysts with managing collection and analysis to assess critically the alternative explanations for “particular gaps in a list of indicators.” In the Commission’s view, greater confidence about the non-existence of an indicator can be as important in reaching judgments as finding proof that an indicator does exist.
  7. Finally, according to the critics, it is the duty of the responsible policy officials to ask probing questions, to insist on critical review of the evidence, to send analysts back to the drawing board for another look.
    • Secretary Rumsfeld at a recent DOD briefing referred to the importance of engagement and criticism: “…to the extent there’s no feedback coming from…a user of intelligence, then one ought not expect that the level of competence…on the part of people supplying the intelligence will be as good…as if there’s an effective interaction.”

Granted, political overtones often color these criticisms. But in tradecraft terms they represent reasonable standards for policymaking officials to levy on analysts charged with providing distinctive value-added to US policymaking efforts.

Defining Professional and Unprofessional Analysis

The doctrinal basis for responding to policymaker criticism should reflect definitions that set boundaries for the proper role for intelligence analysis. Constructing agreed definitions of complex concepts such as professional and unprofessional analysis is no easy task. Not everything relevant can be included; choices and priorities are arguable and open the gate to the definer’s motivated and unconscious biases.

A practical and defensible definition of the professional mission of intelligence analysts posits both sound analytic practice and high potential utility for the policymaking process as equally important standards. Neither objectivity without impact, nor impact without objectivity, meets the standard of the following definition.

The mission of intelligence analysts is to apply in-depth substantive expertise, all-source information, and tough-minded tradecraft to produce assessments that provide distinctive value-added to policy clients’ efforts to protect and advance US security interests.

To fulfill this mission, analytic deliverables must be seen by policy officials to have utility as they envision their professional mission, which, in effect, is to posit and enact an Administration’s politically colored policy agenda. The analysts who would produce an assessment with high potential for utility to the policymaking process can no more ignore the political context in which their clients operate than they can ignore where the latter are on their learning curves and decision cycles.

To take account of the politics of policymaking is not a license for intelligence professionals, as analysts, to become policymakers, or their speechwriters or spear-carriers. But if an analyst is not close enough to the process to feel the competitive pressures of policymaking, he or she is probably not close enough to produce professionally crafted deliverables that provide distinctive value-added.

Thus, there will always be a danger that analysts, in constructing their written assessments and oral commentary, will introduce a political slant—either deliberately or through sloppy tradecraft. Analysts have done so in the past, and likely will do so from time to time in the future.

A politicized and therefore unprofessional assessment can be defined as an analytic deliverable that reflects either (1) the analyst’s motivated effort to distort facts and judgments to support, or oppose, a specific policy, political entity, or general ideology, or (2) a conspicuous, even if unmotivated, disregard for sound tradecraft standards that produces similarly distorted outputs that could affect the policymaking process.

From the policymakers’ agenda-oriented perspective it makes little difference whether what they see as analytic bias is motivated or unmotivated. One senior official, for example, complained that every assessment that indicated an Administration initiative was flawed constituted analytic policymaking, since it provided ammunition to Congress to oppose funding the initiative.

So long as criticism of analysis by policymakers reflects a legitimate tradecraft concern, they are not necessarily putting pressure on analysts to engage in unprofessional behavior. Policy officials have the license to change the intelligence question from the one the analysts preferred to address, to ask that assumptions and evidence be examined more thoroughly, and to request customized follow-on assessments. That is part of their job description, whether they are seeking fresh insights or analytic support for their established views.

Thus, it is not unprofessional behavior for analysts, on their own or when requested, to provide alternatives to their unit’s bottom-line interpretations of ambiguous evidence of ongoing developments and estimative projections of complex trends—so labeled and vested with appropriate tradecraft for dealing with substantive uncertainty.

Additionally, it is not unprofessional behavior for an analyst to address policy options for dealing with specific threats to and opportunities for an established general policy. The key to sound “action analysis” is for the analyst to identify plausible initiatives and evaluate them in cost-benefit terms, and for the policymakers to choose what course to pursue and bear responsibility for their decisions.

Finally, for a manager to tighten tradecraft standards on a sensitive policy issue before an analyst’s assessment goes forward under a corporate DI seal is rarely a signal of unprofessional behavior. Painful to the analyst, yes. Politicization of his assessment, no.

Analysts and managers must be vigilant in identifying, deterring, and decrying unprofessional assessments as herein defined; when engaged in analysis, they are and must remain intelligence professionals, not policy or political aides. But analysts must also take seriously the “cry wolf” danger of levying charges of politicization whenever their authority to control the bottom line of an assessment is abridged.

More to the point, if ever teamwork must prevail over turf warfare and the individual analyst’s sense of entitlement to determine what is in the best interests of the Directorate, it is when the analytic corps is dealing with a contentious policy issue. Over the decades, many analysts who have made reasonable tradecraft adjustments to clients’ criticism have felt the sting of colleagues’ unreasonable charges of politicization.

 

The Analysts’ Response to Policymaker Criticism: Best Practices

The DI has had 50 years of experience at trying simultaneously to maintain professional tradecraft standards and to provide customized analytic support to policymakers on politically sensitive policy issues. A good case can be made that the Directorate collectively has met this challenge well, especially over the past decade.

For the most part, though, the Directorate has tended to see and treat each challenge as one of a kind. For their part, individual analysts have tended to see the issue as a challenge affecting only those directly in the crossfire at any given time. Meanwhile, potentially instructive analyst responses over the years were usually closeted in limited-distribution memoranda or in scantily recorded oral exchanges. Remarkably, no advanced course for analysts on managing policy relations was offered over the past decade. (The Kent School offered a well-received pilot for such a course in December 2002.) Analysts and team leaders facing the challenge for the first time, then, have had to learn to deal with policymaker criticism of their professional deliverables by at times painful personal experience.

This paper has argued that policymaker criticism of DI analysis on hot-button issues is not an exceptional challenge but a largely normal clash of conflicting professional priorities between analysts and policymakers as two distinct national security tribes. Below is an attempt to provide general tradecraft guidance for analysts based mainly on personal experience and research. Because contentious issues usually generate an expansion in requests for analytic deliverables and a compression of deadlines, managers are advised to invest in incremental development of identified analyst skills before sustained policymaker criticism strikes.

The general message of these recommendations is that analysts should take the tradecraft elements of policymaker criticism seriously. Analysts should enhance, first, their understanding of the dynamics of national security policymaking; second, their awareness of their own vulnerability to misperception; and, third, their skills in remedial practices. The goal—by raising standards of practice—is to take tradecraft issues off the table, so to speak, in an effort to isolate and defuse any politically motivated elements.

 

Recommendations for DI Analysts

  1. Analysts should commit to learning as much about the US policymaking process and their key policymaking clients as, say, a national security correspondent for a major newspaper or journal is expected to command. Analysts, starting from year one, have to spend quality time analyzing how Washington works, even if this slows down the pace of grasping how Baghdad, Berlin, or Buenos Aires work.
    • By setting practical and measurable goals, both learning curves can be mounted in reasonable time. One approach is to hold biweekly “pizza lunches” on the unit’s policymaking environment. Invite junior policy aides, National Intelligence Officers, PDB briefers, and other informed people as guest speakers. Assign individual analysts to brief the unit on the policy-shaping background and current activities and statements of individual clients.
    • Another measure is for analysts to be trained to role-play potentially critical policy clients while reviewing their drafts, not to mortgage analytic integrity but to anticipate tradecraft and other challenges and beef up vulnerable elements of the assessment.
  2. Analysts should recognize that they reach judgments on complex issues not solely on the basis of facts that speak for themselves, but also by reliance on substantive biases that help them make sense of incomplete, contradictory, and otherwise ambiguous information. Judgments based on “professional mindset” are usually well founded in substantive expertise on the issue at hand, but as a rule are not demonstrable on the basis of available evidence to a skeptical analytic colleague or manager, much less a doubting policy official.
    • Regarding unprecedented events, at times the more the analyst knows about the general issue the more she has to unlearn to credit the onset of a new driving force and oncoming paradigm shift. Most chapter headings of post-war history feature developments that DI experts, along with nongovernmental counterparts, once thought of as unlikely—for example, the collapse of the Soviet Union and the reunification of Germany.
    • Recognition of the substantive perils of estimating is a prerequisite for taking the views of other informed observers seriously—in effect, to use criticism to advantage. This, again, is not an invitation to jettison hard earned professional judgments, but to avoid premature dismissal of alternative explanations and outcomes.
  3. Analysts should also take heed of the psychological perils of estimating. Inherent cognitive limitations compound the mindset challenge caused by the limitations of substantive expertise regarding “one of a kind” events. Whatever the analyst’s level of expertise, the “hard-wiring” of the mind tends to make confirming information seem more vivid and authoritative than information that would call an established bottom line judgment into question.
    • Authorities on perception and misperception in national security affairs have concluded that general recognition of cognitive vulnerability does not remove risk of unmotivated bias in evaluating information.
  4. Managing the twin perils of substantive and cognitive bias while interpreting complex developments and predicting uncertain outcomes requires engagement in analytic structuring as well as in group brainstorming and casual self-criticism.
    • Unstructured attempts to challenge the lead analyst’s preferred judgments often produce more heat than light. Especially on hot-button issues, analysts should commit to all the structuring steps associated with Linchpin Analysis or a similar method of organized analysis: spell out and critically assess assumptions, be specific about signposts that would make you change your bottom line, identify triggers of a shift in momentum or probabilities. [2] (A simple illustrative sketch of how these elements might be captured as a structured record follows these recommendations.)
  5. Analyze with care, and an open mind, the policy critics’ paradigm of a contentious issue, however colored by political considerations it may at first seem. Deconstruct it to identify the critic’s assumptions, evaluation of evidence, and calculations of likelihood.
    • If the prominence of the issue warrants, undertake a Devil’s Advocacy exercise, whereby the production unit’s judgments about a contested issue are put to one side, and one or more analysts use appropriate tradecraft to try to justify the critic’s judgments. Again, the goal of such activities is not necessarily to abandon or modify the DI’s bottom line estimative judgments, but to put them to the test of a well-vetted alternative analytic approach.
  6. Do not hesitate to change the question from the one analysts initially believed should be addressed to the ones policy critics call for. Often the shift is from what is the most likely interpretation of an event or relationship or the most likely future path of development, to what direct and indirect leverage the US has to reduce dangers and seize opportunities.
    • Analysts should have the skills at hand to adjust the mix and emphasis of deliverables to the preference of many well-informed policymakers for help in understanding and leveraging forces at play rather than for reporting events and estimating outcomes.
    • Action analysis requires assessments to go beyond “relevancy” to the policymakers’ concerns and to address with expert understanding power relationships abroad, as well as foreign leaders’ risk analysis and susceptibility to US carrots and sticks.
  7. More deliberate analyst attention to evaluating evidence on contentious policy issues, especially assessments of gaps in information, is a promising avenue for stripping tradecraft complaints from policymaker criticism of analytic performance.
    • Analysts should seek help from specialized DI units when needed but must ensure incremental growth in their own skills for evaluating the authenticity of information (combating D&D), the extent to which available information represents the total picture (understanding collection platforms), and the information’s diagnostic power (to sort signals from noise).
    • Analysts should take the extra steps needed to convince lead policymakers on sensitive issues that the production unit has done its homework in evaluating evidence. More robust tradecraft for evaluating absence of evidence is needed here. Analysts can adapt for a variety of analytic issues the illustrative example of “gaps in information” tradecraft outlined in the text box at the end of this paper.
  8. Careful use of estimative terminology, always important in intelligence-policy relations to avoid compounding substantive uncertainty with linguistic confusion, is essential on sensitive issues. Vague estimative phrases such as “real possibility” and “good chance” should be avoided, even at the risk of an exaggerated precision (e.g., “we judge the odds to be low—no more than 1 in 5”). On controversial issues also avoid non-falsifiable judgments such as “it is possible,” “suggests that,” and “according to reports.” Provide instead an evaluation of the authenticity, comprehensiveness of coverage, and significance of the evidence.
  9. As long as an analytic unit believes it has done its homework in evaluating evidence and in considering alternative explanations and projections, it should stand by its estimative judgments even if policymaker criticism persists. But the unit should also work to ensure continued access to and credibility with key clients by varying the emphasis and potential utility of its deliverables. Consider taking the following “1-3-1” approach to a hot-button issue on which the Team or Issue Group is engaged in producing nearly daily assessments.
    • Once a week, issue a net judgment assessment that features a credible accounting of the impact of recent developments and reports.
    • Several times a week, put the net judgment aside and employ action analysis to address tactical dangers and policy opportunities on which direct and indirect US leverage could be applied.
    • Once a week, change the question via Alternative Analysis tradecraft. For example: What-If Analysis (What we would see, if the likelihood of development X increased). Risk-Benefit Analysis (The adversary’s estimated calculations affecting its possible engagement in development X). If-Then Analysis (Implications of the advent of a high-impact, low-probability development).
  10. What of the danger that analysts’ efforts to curb their own substantive and cognitive biases will generate deliverables that provide unwarranted support to the clients’ biases and weaken respect for the production unit’s professional judgment?

There may be no win-win answer, but suppression of tradecraft initiatives and the countering of policymaker exaggeration of certitude with analyst exaggeration will help neither camp. Perhaps a blending of activities and deliverables that indicate an open mind toward alternative interpretations with regular affirmation of what analysts believe to be sound, if vulnerable, judgments will suit best. Finally, early engagement of the DI Ombudsman and other detached veteran practitioners will help a beleaguered production unit deal with the difficult challenge of identifying the best professional response to policymaker criticism.
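Drawing together Recommendations 4 and 8, the structuring steps of Linchpin Analysis lend themselves to a simple record that makes assumptions, signposts, triggers, and explicit odds visible for review. The sketch below, in Python, is illustrative only; the field names and sample content are hypothetical, not DI doctrine or an official template.

    # Illustrative sketch: capturing Linchpin Analysis structuring steps
    # (Recommendation 4) and explicit estimative odds (Recommendation 8)
    # as a simple record. Field names and content are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class StructuredAssessment:
        question: str        # the intelligence question as posed
        bottom_line: str     # the unit's estimative judgment
        probability: float   # explicit odds, not vague phrases
        linchpins: list[str] = field(default_factory=list)   # assumptions the argument rests on
        signposts: list[str] = field(default_factory=list)   # observables that would change the bottom line
        triggers: list[str] = field(default_factory=list)    # events that would shift momentum or probabilities

    assessment = StructuredAssessment(
        question="Will Country X restart program Y within 12 months?",
        bottom_line="We judge a restart unlikely over the next 12 months.",
        probability=0.2,  # "no more than 1 in 5", per Recommendation 8
        linchpins=["Country X's leadership still prioritizes sanctions relief"],
        signposts=["Renewed procurement attempts for dual-use equipment"],
        triggers=["A leadership change in Country X"],
    )

A record of this kind makes it straightforward for reviewers, and for skeptical policy clients, to attack the assumptions and signposts rather than the analyst.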

Recommendations for DI Leadership: Robust Corporate Tradecraft

Every intelligence issue, and every opportunity to craft a written assessment or present an oral brief, has one-of-a-kind elements. Good analysis is often driven by lead analyst and team examination of these particulars, specialized execution of analytic tradecraft (at times through a trial-and-error approach), and then a largely personalistic quality control and review process. Identification of best practices, as indicated, can be lost in concentration on the details.

When internal or external pressures have demanded it, the DDI has launched initiatives to identify, codify, and teach best practices based on how veteran analysts have dealt with the common and recurring elements of an analytic challenge. This process was essentially undertaken regarding Alternative Analysis in 1999-2000.

DI leadership should consider commissioning such an effort for the analytic management of policymaker criticism of the kind outlined in this paper. A relatively inexperienced analytic work force will be much in need of help if the prolonged tensions that surrounded, say, the Vietnam War or the Central American insurgencies return soon on Iraq or some other complex issue.

  • The Agency can tap a rich and useful trove of experience, including oral deliverables at interagency policy meetings, telephone and teleconference exchanges, and limited-distribution customized memoranda. Analysts and managers who have had tours with policymaking outfits will also have much insight to contribute.
  • Convening a conference to pool the experience of retired and practicing analysts who have worked the challenge, and then holding a follow-up exchange with policymaking counterparts could tap additional insights. A rather flat learning curve has to be made steeper.

The commissioning of a series of specialized tradecraft notes, one or more case studies, and a conference or two on the challenge—to all of which the DDI would lend guidance and sanction standards—would be a worthwhile and timely investment. The resultant DDI-sanctioned tradecraft approaches could be incorporated into CIA University training initiatives on policy relations (e.g., a Kent School seminar). Individual production offices may seek their own training and mentoring programs based on the codified DI tradecraft standards.

Transparent corporate tradecraft is not meant to drive out individualized approaches to the management of particular analytic assignments when circumstances require. But it will serve to shape, speed, and strengthen the overall body of Directorate responses to policymaker criticism.

 

Text Box

Illustrative Methodology to Assess an Information Gap

Hypothetical Intelligence Challenge

To test the analysts’ judgment that Country X probably does not have a robust nuclear weapons program by examining the absence of specific information needed to confirm existence of the program.

Analytic Team Activities

  1. Identify 3 to 5 essential factors on which analysts have little or no reliable information but that would have to be present if Country X has a robust nuclear weapons program. (For example, an ample foreign or domestic source of weapons-grade fissile material).
  2. Evaluate Country X’s assumed ability to use Denial and Deception (D&D) to block effective intelligence collection against each essential factor.
  3. List plausible non-D&D explanations, however unlikely, for all-source collection to miss each of the essential factors.
  4. Evaluate US ability to collect information on each essential factor, if the program existed, taking into account D&D and other barriers.
  5. Calculate the team’s collective intuitive estimate of the probability that all-source analysis would miss obtaining information on each essential factor, if the program existed (for example, greater than 90%, greater than 50%).
  6. Calculate the team’s collective intuitive estimate of the probability that all-source analysis would miss collecting hard information on all of the essential factors (that is, the likelihood that a robust weapons program could exist despite the absence of evidence; a brief arithmetic sketch follows this list).
  7. Commission individual team members or supporting contractors to list all anomalous information that has been collected but not credited as authentic or diagnostic and explain the reason for discrediting in each instance.
  8. Commission a similar group to prepare a Devil’s Advocacy assessment that seeks to justify analytically a greater likelihood that a robust nuclear weapons program exists.
  9. On the basis of Steps 1-8, reassess the Team’s estimative judgment of the likelihood that a robust nuclear weapons program exists.
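
The arithmetic implied by Steps 5, 6, and 9 can be sketched in a few lines of code. The numbers below are hypothetical placeholders (the methodology calls for the team's collective intuitive estimates), treating the essential factors as independent is itself an assumption the team would need to defend, and the closing Bayesian update is one possible way, not the only way, to fold the result into the Step 9 reassessment.

    # Hypothetical sketch of Steps 5-6, plus one way to reason about Step 9.
    # All probabilities below are illustrative placeholders, not real estimates.

    # Step 5: estimated probability that all-source collection would MISS each
    # essential factor, assuming the program exists (D&D, collection gaps).
    miss_probabilities = {
        "weapons-grade fissile material source": 0.9,
        "weaponization facility": 0.6,
        "delivery-system testing": 0.5,
    }

    # Step 6: probability of missing ALL essential factors at once, if the
    # program existed. Independence across factors is an assumed simplification.
    p_no_evidence_if_exists = 1.0
    for p in miss_probabilities.values():
        p_no_evidence_if_exists *= p
    print(f"P(no evidence | program exists) = {p_no_evidence_if_exists:.2f}")

    # Step 9 (one approach): a Bayesian update showing how much weight the
    # absence of evidence can bear against a prior belief in the program.
    prior = 0.3  # hypothetical prior that Country X has a robust program
    p_no_evidence_if_absent = 1.0  # assume no program leaves nothing to find
    posterior = (p_no_evidence_if_exists * prior) / (
        p_no_evidence_if_exists * prior + p_no_evidence_if_absent * (1 - prior)
    )
    print(f"P(program exists | no evidence) = {posterior:.2f}")

On these illustrative numbers, the absence of evidence lowers the estimated chance that the program exists from 3 in 10 to about 1 in 10. Had the per-factor miss probabilities been near 1, the posterior would barely move from the prior, which is the precise force of the “absence of evidence is not evidence of absence” admonition.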


[1] Over the past decade, policy critics have been both generous and largely consistent in explaining their criticism of certain aspects of analytic tradecraft. Major sources of commentary include: a presentation to DI managers in 1994 by three high-ranking policy officials of the first Bush Administration (Paul Wolfowitz, Stephen Hadley, Arnold Kanter); commentary by Paul Wolfowitz in 1995 on “managing uncertainty”; the report on missile analysis by a commission on which current DOD officials Donald Rumsfeld, Wolfowitz, and Stephen Cambone served (1998); and, since 2001, articles and public statements by, and media commentary attributed to, Secretary of Defense Rumsfeld and Deputy Secretary Wolfowitz.

[2] Linchpin analysis is an approach to structured argumentation advocated by former DDI Douglas MacEachin in the early 1990s, when he revised terms from standard academic nomenclature. Key variables became “drivers” of outcomes, and hypotheses about drivers became “linchpins”— assumptions underlying the argument that had to be spelled out explicitly.

     

    Disclaimer:
    All statements of fact, opinion, or analysis expressed in Occasional Papers are those of the authors. They do not necessarily reflect official positions or views of the Kent School, the Central Intelligence Agency, or any other US Government entity, past or present.

    Nothing in the contents should be construed as asserting or implying US Government endorsement of an article's factual statements and interpretations.

    These papers have been prepared with the support of Central Intelligence Agency funds and are published with the consent of the authors.

