Chapter Nine

Recommendations

The First Step: Recognizing A Fundamental Problem

It is far too early in the research process to determine whether any one organizational model for the Intelligence Community is more or less effective than another, but I believe a fundamental structural problem needs to be addressed at the outset: current intelligence reporting competes for time and resources with indications and warning (I&W) intelligence. The emphasis on current reporting is unlikely to change, for several reasons. First, current intelligence reporting results in significant “face-time” for the Intelligence Community with policy makers, who, in turn, provide the resources that fund and support community activities. This face-time is a significant contributor to the social capital that the Intelligence Community commands.

The second reason is that in-depth research of the kind that contributes to I&W intelligence is a long-term investment whose payoff is often an abstraction.  Not infrequently, successful warnings are taken for granted.  Those that fail, however, may well involve the community in public recriminations that cost the Intelligence Community significant social capital.  In this sense, the Intelligence Community’s focus on current reporting is understandable.  The problem is that producing current intelligence tends to become an all-consuming activity.  The majority of analysts who participated in this study said that their time was spent on current reporting.  Unfortunately, this does little to improve I&W intelligence, which requires long-term research, in-depth expertise, adoption of scientific methods, and continuous performance improvement.  The return for the Intelligence Community, in terms of social capital, may be quite limited and even, as noted above, negative.  Thus, the analytic area most in need of long-term investment often gets the least. 

Because the resources available to intelligence analysis are limited, it needs to be determined whether those resources are better spent on the reporting functions of the Intelligence Community or on its warning functions. It also needs to be determined whether these functions should be performed by the same analysts or pursued as two separate career tracks. To make this determination, the Intelligence Community will need to invest in what I call a Performance Improvement Infrastructure, as well as in basic and applied analytic research.

 

Performance Improvement Infrastructure

The first step in improving job- or task-specific performance is the establishment of a formal infrastructure designed explicitly to create an iterative performance improvement process. Such a process would include:

  • measuring actual analytic performance to create baseline data;

  • determining ideal analytic performance and standards;

  • comparing actual performance with ideal performance;

  • identifying performance gaps;

  • creating interventions to improve analytic performance;

  • measuring actual analytic performance to evaluate the effectiveness of interventions.

Several organizational, or infrastructure, assets should be developed to support this process.  These should include:

  • basic and applied research programs;

  • knowledge repositories;

  • communities of practice;

  • development of performance improvement techniques.

The performance improvement process would be repeated throughout the life cycle of an organization in order to encourage continuous improvement.  With the infrastructure and process in place, an organization would be capable of adapting to new or changing environmental conditions.
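
To make this loop concrete, the following is a minimal sketch, in Python, of how the cycle of baseline measurement, gap identification, intervention, and re-measurement might be recorded. The data structure, the ideal standards, and the function names (PerformanceSnapshot, performance_gap, improvement_cycle) are illustrative assumptions, not a description of any existing system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PerformanceSnapshot:
    """Aggregate analytic performance for one review period (illustrative)."""
    accuracy: float       # share of checkable judgments that proved correct
    surprise_rate: float  # share of tracked events that arrived unwarned
    output: int           # number of finished written products

# Hypothetical ideal standards against which actual performance is compared.
IDEAL = PerformanceSnapshot(accuracy=0.90, surprise_rate=0.05, output=12)

def performance_gap(actual: PerformanceSnapshot, ideal: PerformanceSnapshot) -> dict:
    """Compare actual with ideal performance and report the gaps."""
    return {
        "accuracy_gap": ideal.accuracy - actual.accuracy,
        "surprise_gap": actual.surprise_rate - ideal.surprise_rate,
        "output_gap": ideal.output - actual.output,
    }

def improvement_cycle(measure: Callable[[], PerformanceSnapshot],
                      intervene: Callable[[dict], None],
                      cycles: int = 3) -> None:
    """One pass per review period: measure, find gaps, intervene, re-measure."""
    for period in range(cycles):
        actual = measure()                     # baseline (or post-intervention) data
        gaps = performance_gap(actual, IDEAL)  # identify performance gaps
        intervene(gaps)                        # e.g., targeted training
        print(f"period {period}: {gaps}")
```

In practice, the measure and intervene steps would draw on the knowledge repositories, research programs, and training interventions described below; each pass through the loop corresponds to one review period.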

 

Infrastructure Requirements

Institutional changes, such as corporate reorganizations, are often enacted without a clear understanding of their potential or actual impact. What is most often missing in such changes is a basic research plan or a systems approach to determine and predict the effect on organizational performance. The same is true of the Intelligence Community. Although there have been numerous proposals to reorganize the Intelligence Community—including those that resulted from the hearings of the Kean 9/11 Commission—few have addressed the question of why one change would be any more effective than another. Merely asserting, on the basis of some a priori notion of effectiveness, that organizational scheme X is more effective than organizational scheme Y is not evidence. What is needed is a posteriori data, such as case studies, to support or refute the proposed change.[1]

Organizational Requirements.  Many large organizations distribute performance improvement responsibilities throughout the organization at the supervisory or middle-management level, but the group most often charged with collecting and analyzing performance data is the human resources department. This task generally involves developing task-specific performance standards and metrics based on expert performance models and in accordance with corporate policy.

The human resources department also becomes the central repository for pre-, periodic, and post-performance measurements. As this department generally has contact with employees throughout their careers, this is the most efficient way to manage, analyze, and inform senior leadership about aggregate changes in performance over time.  Although data are collected at the individual level, it is the aggregation of performance data that allows leadership to determine the effectiveness of any organizational change or job-related intervention. 

Baseline Data.  Measuring actual analytic performance is essential to the establishment of a data-driven performance infrastructure. The analysts in this study perceived their performance to be tied directly to the quantity of written products they produced during each review period. Counting the number of analytic publications is one metric, of course, but it is hardly indicative of analytic quality. Surgeons are a useful example of this problem. They may count the number of patients they treat, but this metric says more about system throughput and salesmanship than it does about surgical performance. Unlike the purely cognitive work of intelligence analysts, surgeons have the advantage of multiple physical outputs, which makes measurement an easier task. In particular, surgeons have patient outcomes, or morbidity and mortality ratios, which become a grounded end-state for all measurements.[2] Other things being equal, these data then ought to inform a prospective patient about where to take his or her business.

For intelligence analysts, the question may be put as, “What is an analytic morbidity and mortality ratio?”  The process of describing and identifying morbidity and mortality, or error and failure in analytical terms, is a necessary step in identifying mechanisms to develop, test, and implement performance improvement interventions. There was little consensus among the participants in this study about what constitutes failure, or even whether failure was possible. There was greater consensus regarding the nature of analytic error, which was generally thought to be a consequence of analytic inaccuracy.

Metrics.  One could reasonably conclude that compounded errors lead to analytic failure.  Conversely, one could conclude that failure is the result of analytic surprise, that its causes are different from the causes of error, and that it needs to be treated as a separate measurement.  This subject is open to debate and will require further research.  It is still possible, however, to use both accuracy and surprise as metrics in evaluating analytic performance on a case-by-case basis.

The advantage of an error and failure metric is that it is observable in a grounded state separate from the analytic process.  Any analytic product can be reviewed to determine levels of accuracy, and any unexpected event can be traced back through analytic products to determine if there was an instance of surprise. 

Once levels of error and failure are calculated, along with measures of output, it is possible to determine expert levels of performance and to derive performance models based on successful processes. In any organization, there will be individuals with the greatest output—in this case, the greatest number of written products. There will also be individuals with the highest levels of accuracy—in this case, factual consistency. And there will be individuals with the lowest incidence of surprise—in this case, those who generate the greatest number of potential scenarios and track and report probabilities most reliably. Using data-driven metrics means that expertise is not a function of tenure; rather, it is a function of performance.
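
As a minimal sketch, and assuming that finished products can be coded for checkable judgments and for later-identified surprises, the following Python fragment shows how output, accuracy, and incidence of surprise might be aggregated per analyst. The record format and the sample values are hypothetical.

```python
from collections import defaultdict

# Illustrative product records: (analyst, judgments_checked, judgments_correct, surprised)
# "surprised" marks products later linked to an unanticipated event.
products = [
    ("analyst_a", 20, 18, False),
    ("analyst_a", 15, 14, True),
    ("analyst_b", 30, 21, False),
    ("analyst_b", 10, 9, False),
]

stats = defaultdict(lambda: {"checked": 0, "correct": 0, "products": 0, "surprises": 0})
for analyst, checked, correct, surprised in products:
    s = stats[analyst]
    s["checked"] += checked
    s["correct"] += correct
    s["products"] += 1
    s["surprises"] += int(surprised)

for analyst, s in stats.items():
    accuracy = s["correct"] / s["checked"]          # factual consistency
    surprise_rate = s["surprises"] / s["products"]  # incidence of surprise
    print(f"{analyst}: output={s['products']}, accuracy={accuracy:.2f}, "
          f"surprise={surprise_rate:.2f}")
```

Aggregating these per-analyst figures over time is what would allow expert performers to be identified by performance rather than tenure.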

Once expert performers are identified, it is possible to capture their work processes and to develop performance models based on peak efficiency and effectiveness within the Intelligence Community.  Through the use of cognitive, behavioral, and linguistic task analyses, ethnography, and controlled experiments, it is possible to generate process metrics to identify analytic methods that are more effective for specific tasks than other methods.  This is not to say that there is one analytic method that is the most effective for intelligence analysis; rather, each type of task will have an analytic method that is best suited to accomplishing it in an efficient and effective manner.

Developing these metrics is no small task.  It is a job that will require numerous researchers and research programs within, or with access to, the Intelligence Community. These programs will need formal relationships with human resource departments, analytic divisions, organizational leadership, and developers of training and technology interventions in order to have a positive effect on analytic performance. 

 

Research Programs

The results of this research indicate that the Intelligence Community needs to commit itself to performance research that is rigorous, valid (in that it measures what it proposes to measure), and replicable (in that the method is sufficiently transparent that anyone can repeat it).  Within some intelligence organizations, this has been an ongoing process.  The problem is that most of the internal research has concentrated on historical case studies and the development of technological innovations.  What is missing is focused study of human performance within the analytic components of the Intelligence Community.  Questions about the psychology and basic cognitive aptitude of intelligence analysts, the effectiveness of any analytic method, the effectiveness of training interventions, group processes versus individual processes, environmental conditions, and cultural-organizational effects need to be addressed.

This effort will require commitment.  Researchers will have to be brought into the Intelligence Community, facilities will have to be dedicated to researching analytic performance, expert analysts will have to give some percentage of their time to participating in research studies, managers and supervisors will have to dedicate time and resources to tracking analytic performance within their departments, human resource staffs will have to dedicate time and resources to developing a performance repository, and there will have to be formal interaction between researchers and the community.   

Analytic Performance Research.  In the previous section, I discussed the need for analytic standards as part of the Performance Improvement Infrastructure.  In terms of a research program, this will require, as a first step, the collection of baseline analytic performance data and a clear and measurable description of ideal analytic behavior.  Next, there should be a determined effort by human performance researchers to develop, test, and validate analytic performance metrics and measurement systems.  This will be a lengthy process.  The accuracy and surprise measures suggested in this text require large historical and comparative data sets and are cumbersome and time consuming to perform.  Conducting behavioral, cognitive, and linguistic task analyses requires significant research expertise, ample time, and broad organizational access. 

In time, analytic performance research will become a highly specialized domain and will require continuous organizational access not normally available to outsiders.  It will become necessary for the Intelligence Community to establish internal or cooperative research centers in order to acquire the research expertise necessary to analyze and effect performance improvement.  There are numerous community outreach efforts on which these centers can be built.  Those efforts need to be expanded, however, and those programs need to include domains beyond the traditional relationship between the Intelligence Community and political or geographic area experts.[3]

Institutional Memory.  The results of this research indicate that there is a loss of corporate knowledge in the Intelligence Community due to employee attrition and the lack of a central knowledge repository for capturing “lessons learned.” A number of industries and government organizations, including the Departments of Defense and Energy and the National Aeronautics and Space Administration, already maintain lessons-learned centers as information hubs for their employees.[4]

These centers act as information repositories for successful and unsuccessful operations and interventions.  Their purpose is to reduce the amount of organizational redundancy and levels of error and failure by tracking, analyzing, and reporting on after-action reviews and analytic outcome data.[5]   The other primary function of these repositories is to establish networks for communities of practice within and among organizations.

Networked communities of practice allow professionals to interact, exchange methodological information, post and respond to individual case studies, and develop ad hoc teams of experts for specific problem-solving tasks. With simple search tools, basic database software, and a simple network visualization interface, any analyst in the Intelligence Community would be able to identify any other expert whose domain specialty was needed to answer a specific question or solve a specific problem. Another advantage of this model is the development of formal and informal mentoring within the network. Any novice would be able to find an expert within the Intelligence Community and establish a relationship that would be beneficial to both. With appropriate incentives, experts would be encouraged to contribute to the network and to make their time and expertise available for mentoring.
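
A minimal sketch of such an expert-finding service follows, assuming a simple directory of analysts, their domain specialties, and their willingness to mentor. The directory entries, field names, and the find_experts helper are hypothetical illustrations, not an existing tool.

```python
# Illustrative expert directory: in practice this would live in the knowledge
# repository; the names and domains here are invented for the sketch.
experts = [
    {"name": "Analyst One", "domains": {"proliferation", "missile systems"}, "mentors": True},
    {"name": "Analyst Two", "domains": {"terrorist financing", "banking"}, "mentors": False},
    {"name": "Analyst Three", "domains": {"missile systems", "imagery"}, "mentors": True},
]

def find_experts(domain: str, mentoring_only: bool = False) -> list[str]:
    """Return the names of experts whose specialty matches the needed domain."""
    return [e["name"] for e in experts
            if domain in e["domains"] and (e["mentors"] or not mentoring_only)]

# A novice looking for a mentor on missile systems:
print(find_experts("missile systems", mentoring_only=True))
```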

Intelligence analysis, like other fields of science, is a cognitive process. Although tools and technologies may be available to assist cognitive processes, much as measurement devices assist physical scientists, technology is ultimately a tool to be designed and developed using a human-centered approach. As such, any new technology needs to be a passive tool, employed by analysts to solve specific problems or answer specific questions, rather than a restrictive reinterpretation of cognition according to the rules of binary computation and the theories of artificial intelligence.[6]

Analytic Psychology and Cognition.  As evidenced by the work of Richards Heuer and others, there is significant research to be conducted into the cognitive mechanisms involved in intelligence analysis.[7] The heuristics used in performing intelligence analysis, cognitive-load thresholds, multitasking requirements, the mechanisms that generate cognitive biases, and the use of pattern-recognition strategies and anomaly-detection methods are all areas of study that will prove fundamental to improving analytic performance.

In addition to researching basic cognitive functions and intelligence analysis, this area of research will be valuable for understanding how external variables, such as time constraints and analytic production methods, affect the cognitive processing of individual analysts.  Another result will be the development of future employee screening and selection tools that will match the specific cognitive requirements of intelligence analysis with each applicant’s individual knowledge, skills, and abilities. 

Analysts employ cognitive strategies that are time efficient in order to cope with the demands of producing daily written products, but such strategies are not necessarily the most effective analytic methods for increasing analytic accuracy and decreasing the occurrence of analytic surprise.  In fact, improving analytic accuracy and avoiding surprise may require mutually exclusive analytic strategies.  This line of inquiry will require baseline performance data generated through the development of performance metrics and conducted in conjunction with research in analytic methodology effectiveness. The results would then be integrated into a knowledge repository.

These types of studies will require experimental psychologists and cognitive scientists working in controlled laboratory environments with consistent access to working professional analysts. 

Development and Validation of Analytic Methods.  The Intelligence Community routinely generates ad hoc methodologies to solve specific analytic problems or crises. Once the problem has been solved, or the crisis averted, however, the new analytic method may or may not become part of the institutional memory. Often these new methods are lost and need to be re-created to address the next problem or crisis. In addition, these methods are seldom tested against competing analytic methods for validity or reliability. As a result, it is difficult for an analyst to know which analytic method to employ for a given situation or requirement.

There are obvious inefficiencies in the current model.  First, there is the loss of corporate knowledge each time an innovative analytic method is generated and subsequently abandoned.  Second, there is no effectiveness testing center where analytic methods can be compared for specific cases.  Although there are hundreds of analytic strategies, there is no way to determine which strategy is the most effective for any particular problem set.

The lack of an analytic methodology research agenda leads analysts to choose methods with which they are most familiar or to choose those dictated by circumstance, such as deadlines. Moreover, instead of advancing the concept that intelligence analysis is a science and needs to be practiced like any other scientific discipline, the paucity of effectiveness data reinforces a deep-seated community bias that analytic methods are idiosyncratic and, therefore, akin to craft.

The development of a research agenda for analytic methodology that is focused on collecting effectiveness and validation data is the first step in moving intelligence analysis from a tradecraft model to a scientific model. This may be the most culturally difficult recommendation to implement: resistance to adopting a science-based model of intelligence analysis is rooted in the traditions, norms, and values of the Intelligence Community. Another difficult step will be to introduce effectiveness data and the corresponding analytic methods to the community at large and to incorporate them in future training programs.

Training Effectiveness.  Successful analysis demands group cohesion and the implementation of consistent, effective analytic methods within the Intelligence Community.  The best way to achieve this is through formal basic and advanced training programs.  As noted earlier, several agencies within the community have invested resources in formal training programs, but these programs are unique to each agency and are often missing evaluations of student performance.  Although most formal courses include a written subjective evaluation of the instructor, as well as the student’s perception of the value of the course, the evaluation of student performance has yet to be formalized.

Without evaluating preintervention, or precourse, performance and following that with a postintervention evaluation, it is difficult to determine the effect that any training intervention will have on employee performance.  In addition to formal measurements based on course objectives, it is important to collect performance data from managers and supervisors to evaluate the retention of training and the impact that training has had on actual day-to-day performance.

Developing performance metrics will inform and advance the training interventions currently employed in the Intelligence Community and will determine the gap between ideal performance and actual performance.  As such, the system is an iterative process of setting performance standards, measuring actual performance, designing training interventions to improve performance, and evaluating the effects of those interventions on actual performance.  The data derived from these interventions and measurements will then contribute to the growth of the knowledge repository and strengthen the ties created through the communities of practice.
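
As an illustration of the pre- and postintervention comparison described above, the short sketch below computes per-analyst gains and the remaining gap against a notional 0.90 accuracy standard. The scores and the standard are assumptions made for the example; real data would come from the performance repository.

```python
from statistics import mean

# Hypothetical accuracy scores before and after a training intervention.
pre  = {"analyst_a": 0.72, "analyst_b": 0.65, "analyst_c": 0.80}
post = {"analyst_a": 0.81, "analyst_b": 0.74, "analyst_c": 0.79}

# Gain per analyst, mean gain across the cohort, and remaining gap to the standard.
gains = {a: post[a] - pre[a] for a in pre}
print("mean gain:", round(mean(gains.values()), 3))
print("remaining gap to a 0.90 standard:",
      {a: round(0.90 - post[a], 2) for a in post})
```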

Organizational Culture and Effectiveness.  Identifying existing organizational norms and taboos is the first step to creating an internal dialogue about the future of an organization and its place in a competitive environment.  Culture drives the operations of an organization, determines the people who are hired, enculturates new employees, establishes standards of behavior and systems of rewards, shapes an organization’s products, and determines the social capital that any organization may possess.  In short, culture defines an organization’s identity to itself and to others.

Understanding the culture of the Intelligence Community and analyzing the effects of any performance intervention on that culture contributes to the evaluation of intervention effectiveness. Effective performance interventions will have a positive effect on the organization’s culture and will themselves become measurement instruments.

Developing cultural markers to track organizational change and performance improvement requires baseline ethnographic data and the identification of key cultural indicators.  Once identified, cultural indicators such as language use, norms, and taboos would be measured at regular intervals and would serve as grounded data to determine levels of change within the organization.  This would permit interventions to be modified before they became ritualized within the Intelligence Community.
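
A minimal sketch of this kind of interval tracking, with hypothetical indicator names and scores, might look like the following; the values stand in for coded ethnographic observations.

```python
# Hypothetical indicator scores (0-1) collected at regular intervals.
indicators = {
    "collaborative language use": [0.40, 0.45, 0.55],
    "willingness to record lessons learned": [0.30, 0.50, 0.65],
}

for name, series in indicators.items():
    change = series[-1] - series[0]  # net change since the baseline interval
    print(f"{name}: baseline={series[0]:.2f}, latest={series[-1]:.2f}, change={change:+.2f}")
```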

 

The Importance of Access

The improvement of human performance often requires an organization to change its culture, and organizational leaders seldom possess sufficient power to mandate cultural change by edict.  At best, management can introduce agents or agencies of change and manage their organization’s culture in the same way they manage physical and financial resources.  An organization’s culture shapes individual behavior by establishing norms and taboos and, ultimately, determines the quality and character of an organization’s products.  Culture and product are inseparable, and one cannot be changed without affecting the other.  The choice confronting any organization is to manage its institutional culture or to be managed by it.

There is no single path to carrying out the research recommended in this work. It could be performed at a single center or coordinated through several specific centers; it could be purely internal to the Intelligence Community; a cooperative effort among the community, academe, and national laboratories; or some combination of these. What is most important to effective implementation is that there be regular and open access between researchers and the Intelligence Community. This may appear simple enough, but access equals trust, and trust is difficult to establish in any domain. This is especially the case within the Intelligence Community. The Intelligence Community needs to increase its commitment to community outreach efforts. This study is one such effort.

During the course of my research, the value of access and the premium the community places on trust quickly became evident. At agency after agency, physical access restrictions, security clearances, forms, interviews, phone calls, questions, vetting, and more vetting were all signs of the value placed, not on secrecy per se, but on trust and access. Without this sort of cooperation, this research would have been impossible, and that is an important lesson that ought to inform future research programs.

 

Footnotes:

[1] William Nolte, a deputy assistant director of central intelligence for analysis and production, proposed such an idea in “Preserving Central Intelligence: Assessment and Evaluation in Support of the DCI,” Studies in Intelligence 48, no. 3 (2004): 21–25.

[2] Grounded Theory is the development of theoretical constructs that result from performing interpretive analysis on qualitative data rather than relying on a priori insights. The theory is then derived from some grounded data set. Barney Glaser and Anselm Strauss, Discovery of Grounded Theory; Barney Glaser, Theoretical Sensitivity; Barney Glaser, Basics of Grounded Theory Analysis.

[3] An example of researching the validity and reliability of metrics can be found in the Buros Mental Measurement Yearbook at the Buros Institute of Mental Measurements Web site.

[4] See the US Army Center for Army Lessons Learned (CALL) Web site, which has links to numerous other repositories.

[5] See Chapter Six for a more detailed explanation of the After Action Review process.

[6] See Chapter Five for a more detailed description of the limitations of technological solutions.

[7] Richards J. Heuer, Jr., Psychology of Intelligence Analysis; William Brei, Getting Intelligence Right; Isaac Ben-Israel, “Philosophy and Methodology of Intelligence:  The Logic of Estimate Process”; Klaus Knorr, Foreign Intelligence and the Social Sciences; Abraham Ben-Zvi, “The Study of Surprise Attacks.” See also Marjorie Cline, Carla Christiansen and Judith Fontaine, Scholar’s Guide to Intelligence Literature.

