Chapter Five

Integrating Methodologists into Teams of Experts[1]

Intelligence analysis, like other complex tasks, demands considerable expertise. It requires individuals who can recognize patterns in large data sets, solve complex problems, and make predictions about future behavior or events. To perform these tasks successfully, analysts must dedicate years to researching specific topics, processes, and geographic regions.

Paradoxically, it is the specificity of expertise that makes expert forecasts unreliable. While experts outperform novices and machines in pattern recognition and problem solving, expert predictions of future behavior or events are seldom as accurate as Bayesian probabilities.[2] This is due, in part, to cognitive biases and processing-time constraints and, in part, to the nature of expertise itself and the process by which one becomes an expert.

 

Becoming an Expert

Expertise is commitment coupled with creativity. By this, I mean the commitment of time, energy, and resources to a relatively narrow field of study and the creative energy necessary to generate new knowledge in that field. It takes a great deal of time and regular exposure to a large number of cases to become an expert.

Entering a field of study as a novice, an individual needs to learn the heuristics and constraints—that is, the guiding principles and rules—of a given task in order to perform that task. Concurrently, the novice needs to be exposed to specific cases that test the reliability of such heuristics. Generally, novices find mentors to guide them through the process of acquiring new knowledge. A fairly simple example would be someone learning to play chess. The novice chess player seeks a mentor who can explain the object of the game, the number of spaces, the names of the pieces, the function of each piece, how each piece is moved, and the necessary conditions for winning or losing a game.

In time, and with much practice, the novice begins to recognize patterns of behavior within cases and, thus, becomes a journeyman. With more practice and exposure to increasingly complex cases, the journeyman finds patterns not only within but also among cases and, more important, learns that these patterns often repeat themselves. Throughout, the journeyman still maintains regular contact with a mentor to solve specific problems and to learn more complex strategies. Returning to the example of the chess player, the individual begins to learn patterns of opening moves, offensive and defensive strategies, and patterns of victory and defeat.

The next stage begins when a journeyman makes and tests hypotheses about future behavior based on past experiences. Once he creatively generates knowledge, rather than simply matching patterns, he becomes an expert. At this point, he becomes responsible for his own knowledge and no longer needs a mentor. In the chess example, once a journeyman begins competing against experts, makes predictions based on patterns, and tests those predictions against actual behavior, he is generating new knowledge and a deeper understanding of the game. He is creating his own cases rather than relying on the cases of others.

The chess example in the preceding paragraphs is a concise description of an apprenticeship model. Apprenticeship may seem to many a restrictive, old-fashioned mode of education, but it remains a standard method of training for many complex tasks. In fact, academic doctoral programs are based on an apprenticeship model, as are such fields as law, music, engineering, and medicine. Graduate students enter fields of study, find mentors, and begin the long process of becoming independent experts and generating new knowledge in their respective domains.

To some, playing chess may appear rather trivial when compared, for example, with making medical diagnoses, but both are highly complex tasks. Chess heuristics are well-defined, whereas medical diagnoses seem more open-ended and variable. In both instances, however, there are tens of thousands of potential patterns. One research study found that chess masters had spent between 10,000 and 20,000 hours, or more than 10 years, studying and playing chess. On average, a chess master acquires 50,000 different chess patterns.[3]

Similarly, a diagnostic radiologist spends eight years in full-time medical training (four years of medical school and four years of residency) before being qualified to take a national board exam and begin independent practice.[4] According to a 1988 study, the average diagnostic radiology resident sees 40 cases per day, or around 12,000 cases per year.[5] At the end of a residency, a diagnostic radiologist has acquired an average of 48,000 cases.

Psychologists and cognitive scientists agree that the time it takes to become an expert depends on the complexity of the task and the number of cases, or patterns, to which an individual is exposed. The more complex the task, the longer it takes to build expertise, or, more accurately, the longer it takes to experience a large number of cases or patterns.

 

The Power of Expertise

Experts are individuals with specialized knowledge suited to perform the specific tasks for which they are trained, but that expertise does not necessarily transfer to other domains.[6] A master chess player cannot apply chess expertise in a game of poker; although both chess and poker are games, a chess master who has never played poker is a novice poker player. Similarly, a biochemist is not qualified to perform neurosurgery, even though both biochemists and neurosurgeons study human physiology. In other words, the more complex a task, the more specialized and exclusive is the knowledge required to perform that task.

Experts perceive meaningful patterns in their domains better than do non-experts. Where a novice perceives random or disconnected data points, an expert connects regular patterns within and among cases. This ability to identify patterns is not an innate perceptual skill; rather, it reflects the organization of knowledge after exposure to and experience with thousands of cases.[7]

Experts have a deeper understanding of their domains than do novices, and they utilize higher-order principles to solve problems.[8] A novice, for example, might group objects together by color or size, whereas an expert would group the same objects according to their function or utility. Experts comprehend the meaning of data better than novices do, and they are better at weighing variables within their domains against appropriate criteria. Experts recognize the variables that have the largest influence on a particular problem and focus their attention on those variables.

Experts have better domain-specific short-term and long-term memory than do novices.[9] Moreover, experts perform tasks in their domains faster than novices and commit fewer errors while solving problems.[10] Interestingly, experts also go about solving problems differently. At the beginning of a task, experts spend more time thinking about a problem than do novices, who immediately seek to find a solution.[11] Experts use their knowledge of previous cases as context for creating mental models to solve given problems.[12]

Because they are better at self-monitoring than novices, experts are more aware of instances where they have committed errors or failed to understand a problem.[13]  They check their solutions more often and recognize when they are missing information necessary for solving a problem.[14] Experts are aware of the limits of their knowledge and apply their domain’s heuristics to solve problems that fall outside of their experience base.

 

The Paradox of Expertise

The strengths of expertise can also be weaknesses.[15] Although one would expect experts to be good forecasters, they are not particularly good at it. Researchers have been testing the ability of experts to make forecasts since the 1930s.[16] The performance of experts has been tested against Bayesian probabilities to determine if they are better at making predictions than simple statistical models. Seventy years later, after more than 200 experiments in different domains, it is clear that the answer is no.[17] Supplied with an equal amount of data about a particular case, a Bayesian statistical model is as good as, or better than, an expert at making predictions about the future. In fact, the expert does not tend to outperform the actuarial table, even when given more specific case information than is available to the statistical model.[18]

There are few exceptions to these research findings, but they are informative. When experts are given the results of the Bayesian probabilities, for example, they tend to score as well as the statistical model if they use that statistical information in making their own predictions.[19] In addition, if experts have privileged information that is not reflected in the statistical table, they will actually perform better than the table does. A classic example is the case of the judge’s broken leg. Judge X has gone to the theater every Friday night for the past 10 years. Based on a Bayesian analysis, one would predict, with considerable certainty, that this Friday night would be no different. An expert knows, however, that the judge broke her leg Thursday afternoon and is expected to be in the hospital until Saturday. Knowing this key variable allows the expert to predict that the judge will not attend the theater this Friday.
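A minimal sketch of this reasoning follows; the specific counts and probabilities are illustrative assumptions, not figures from the studies cited. The base-rate model predicts attendance from the judge’s 10-year history, while an analyst who holds the privileged “broken leg” observation overrides that prediction:

```python
# Illustrative sketch of the "broken leg" exception (all numbers are assumed).

def base_rate_prediction(attended_fridays: int, total_fridays: int) -> float:
    """Actuarial estimate: P(attends this Friday) based solely on past attendance."""
    return attended_fridays / total_fridays

def expert_prediction(base_rate: float, knows_broken_leg: bool) -> float:
    """Expert adjusts the base rate only when privileged information applies.

    If the judge is known to be hospitalized, attendance is essentially ruled out;
    otherwise the expert should defer to the statistical base rate.
    """
    return 0.0 if knows_broken_leg else base_rate

if __name__ == "__main__":
    # Roughly 10 years of Fridays, nearly all of them attended.
    p_statistical = base_rate_prediction(attended_fridays=515, total_fridays=520)
    p_expert = expert_prediction(p_statistical, knows_broken_leg=True)
    print(f"Statistical model:            P(attend) = {p_statistical:.2f}")  # ~0.99
    print(f"Expert with privileged info:  P(attend) = {p_expert:.2f}")       # 0.00
```

The point of the exception is narrow: the expert wins only because he holds a decisive variable the statistical table does not contain; absent such a variable, he does best by deferring to the base rate.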

Although having a single variable as the determining factor makes this case easy to grasp, analysis is seldom, if ever, this simple. Forecasting is a complex, interdisciplinary, dynamic, and multivariate task in which many variables interact, weights and values change, and variables are added or dropped.

During the past 30 years, researchers have categorized, experimented, and theorized about the cognitive aspects of forecasting and have sought to explain why experts are less accurate forecasters than statistical models. Despite such efforts, the literature shows little consensus regarding the causes or manifestations of human bias. Some have argued that experts, like all humans, are inconsistent when using mental models to make predictions. That is, the model an expert uses for predicting X in one month is different from the model used for predicting X in a later month, although precisely the same case and same data set are used in both instances.[20] A number of researchers point to human biases to explain unreliable expert predictions.[21] There is general agreement that two types of bias exist:

  • Pattern bias: looking for evidence that confirms rather than rejects a hypothesis and/or filling in—perhaps inadvertently—missing data with data from previous experiences;

  • Heuristic bias: using inappropriate guidelines or rules to make predictions.

Paradoxically, the very method by which one becomes an expert explains why experts are much better than novices at describing, explaining, performing tasks, and solving problems within their domains but, with few exceptions, are worse at forecasting than are Bayesian probabilities based on historical, statistical models. A given domain has specific heuristics for performing tasks and solving problems, and these rules are a large part of what makes up expertise. In addition, experts need to acquire and store tens of thousands of cases in order to recognize patterns, generate and test hypotheses, and contribute to the collective knowledge within their fields. In other words, becoming an expert requires a significant number of years of viewing the world through the lens of one specific domain. This concentration gives the expert the power to recognize patterns, perform tasks, and solve problems, but it also focuses the expert’s attention on one domain to the exclusion of others. It should come as little surprise, then, that an expert would have difficulty identifying and weighing variables in an interdisciplinary task, such as forecasting an adversary’s intentions. Put differently, an expert may know his specific domain, such as economics or leadership analysis, quite thoroughly, but that may still not permit him to divine an adversary’s intention, which the adversary may not himself know.

 

The Burden on Intelligence Analysts

Intelligence analysis is an amalgam of a number of highly specialized domains. Within each, experts are tasked with assembling, analyzing, assigning meaning to, and reporting on data, the goals being to describe an event or observation, solve a problem, or make a forecast. Experts who encounter a case outside their field repeat the steps they initially used to acquire their expertise. Thus, they can try to make the new data fit a pattern previously acquired; recognize that the case falls outside their expertise and turn to their domain’s heuristics to try to give meaning to the data; acknowledge that the case still does not fit with their expertise and reject the data set as an anomaly; or consult other experts.

An item of information, in and of itself, is not domain specific. Imagine economic data that reveal that a country is investing in technological infrastructure, chemical supplies, and research and development. An economist might decide that the data fit an existing spending pattern and integrate these facts with prior knowledge about a country’s economy. The same economist might decide that this is a new pattern that needs to be stored in long-term memory for some future use, or he might decide that the data are outliers of no consequence and may be ignored. Finally, the economist might decide that the data would be meaningful to a chemist or biologist and, therefore, seek to collaborate with other specialists, who might reach different conclusions regarding the data than would the economist.

In this example, the economist is required to use his economic expertise in all but the final option of consulting other experts. In the decision to seek collaboration, the economist is expected to know that what appears to be new economic data may have value to a chemist or biologist, domains with which he may have no experience. In other words, the economist is expected to know that an expert in some other field might find meaning in data that appear to be economic.

Three disparate variables complicate the economist’s decisionmaking:

  • Time context. This does not refer to the amount of time necessary to accomplish a task but rather to the limitations that come from being close to an event. The economist cannot say a priori that the new data set is the critical data set for some future event. In “real time,” they are simply data to be manipulated. It is only in retrospect, or in long-term memory, that the economist can fit the data into a larger pattern, weigh their value, and assign them meaning.

  • Pattern bias. In this particular example, the data have to do with infrastructure investment, and the expert is an economist. Thus, it makes perfect sense to try to manipulate the new data within the context of economics, recognizing, however, that there may be other, more important angles.

  • Heuristic bias. The economist has spent a career becoming familiar with and using the guiding principles of economic analysis and, at best, has only a vague familiarity with other domains and their heuristics. An economist would not necessarily know that a chemist or biologist could identify what substance is being produced based on the types of equipment and supplies that are being purchased.

This example does not describe a complex problem; most people would recognize that the data from this case might be of value to other domains. It is one isolated case, viewed retrospectively, which could potentially affect two other domains. But, what if the economist had to deal with 100 data sets per day? Now, multiply those 100 data sets by the number of domains potentially interested in any given economic data set. Finally, put all of this in the context of “real time.” The economic expert is now expected to maintain expertise in economics, which is a full-time endeavor, while simultaneously acquiring some level of experience in every other domain. Based on these expectations, the knowledge requirements for effective collaboration quickly exceed the capabilities of the individual expert.
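A back-of-the-envelope calculation makes the scale concrete. In the sketch below, the figure of 100 data sets per day comes from the example above, while the number of interested domains is a hypothetical assumption added for illustration:

```python
# Rough scaling sketch; the domain count is an illustrative assumption, not a figure from the text.
data_sets_per_day = 100   # economic data sets the analyst must review each day (from the example)
other_domains = 10        # hypothetical number of domains that might find any given data set meaningful

cross_domain_checks = data_sets_per_day * other_domains
print(cross_domain_checks)  # 1,000 relevance judgments per day, for inbound economic data alone
```

Even under these conservative assumptions, the expert would face on the order of a thousand cross-domain relevance judgments per day, before doing any of his own substantive analysis.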

The expert is left dealing with all of these data through the lens of his own expertise. Let’s assume that he uses his domain heuristics to incorporate the data into an existing pattern, store the data in long-term memory as a new pattern, or reject the data set as an outlier. In each of these options, the data stop with the economist instead of being shared with an expert in some other domain. The fact that these data are not shared then becomes a potentially critical case of analytic error.[22]

In hindsight, critics will say that the implications were obvious—that the crisis could have been avoided if the data had been passed to one or another specific expert. In “real time,” however, an expert often does not know which particular data set would have value for an expert in another domain.

 

The Pros and Cons of Teams

One obvious solution to the paradox of expertise is to assemble an interdisciplinary team. Why not simply make all problem areas or country-specific data available to a team of experts from a variety of domains? This ought, at least, to reduce the pattern and heuristic biases inherent in relying on only one domain. Ignoring potential security issues, there are practical problems with this approach. First, each expert would have to sift through large data sets to find data specific to his expertise. This would be inordinately time-consuming and might not even be routinely possible, given the priority accorded gisting and current reporting.

Second, during the act of scanning large data sets, the expert inevitably would be looking for data that fit within his area of expertise. Imagine a chemist who comes across data that show that a country is investing in technological infrastructure, chemical supplies, and research and development (the same data that the economist analyzed in the previous example). The chemist recognizes that these are the ingredients necessary for a nation to produce a specific chemical agent, which could have a military application or could be benign. The chemist then meshes the data with an existing pattern, stores the data as a new pattern, or ignores the data as an anomaly.

The chemist, however, has no frame of reference regarding spending trends in the country of interest. He does not know if the investment in chemical supplies represents an increase, a decrease, or a static spending pattern—answers the economist could supply immediately. There is no reason for the chemist to know if a country’s ability to produce this chemical agent is a new phenomenon. Perhaps the country in question has been producing the chemical agent for years, and these data are part of some normal pattern of behavior.

If this analytic exercise is to begin to coalesce, neither expert can treat the data set as an anomaly, and both must report it as significant. In addition, each expert’s analysis of the data—an increase in spending and the identification of a specific chemical agent—must be brought together at some point. The problem is, at what point? Presumably, someone will get both of these reports somewhere along the intelligence chain. Of course, the individual who gets these reports will be subject to the same three complicating variables described earlier—time context, pattern bias, and heuristic bias—and may not be able to synthesize the information. Thus, the burden of putting the pieces together will merely have been shifted to someone else in the organization.

In order to avoid shifting the problem from one expert to another, an actual collaborative team could be built. Why not explicitly put the economist and the chemist together to work on analyzing data? The utilitarian problems with this strategy are obvious: not all economic problems are chemical, and not all chemical problems are economic. Each expert would waste an inordinate amount of time. Perhaps one case in 100 would be applicable to both experts, but, during the rest of the day, they would drift back to their individual domains, in part, because that is what they are best at and, in part, just to stay busy.

Closer to the real world, the same example may also have social, political, historical, and cultural aspects. Despite an increase in spending on a specific chemical agent, the country in question may not be inclined to use it in a threatening way. For example, there may be social data unavailable to the economist or the chemist indicating that the chemical agent will be used for a benign purpose. In order for collaboration to work, each team would have to have experts from many domains working together on the same data set.

Successful teams have very specific organizational and structural requirements. An effective team requires discrete and clearly stated goals that are shared by each team member.[23] Teams also require interdependence and accountability, that is, the success of each individual depends on the success of the team as a whole as well as on the individual success of every other team member.[24]

Effective teams require cohesion, formal and informal communication, cooperation, shared mental models, and similar knowledge structures.[25] Putting such combinations in place is not a trivial task. Creating shared mental models may be fairly easy within an air crew or a tank crew, where an individual’s role is clearly identifiable as part of a clearly defined, repetitive team effort, such as landing a plane or acquiring and firing on a target. It is more difficult within an intelligence team, given the vague nature of the goals, the enormity of the task, and the diversity of individual expertise. Moreover, the larger the number of team members, the more difficult it is to generate cohesion, communication, and cooperation. Heterogeneity can also be a challenge; it has a positive effect on generating diverse viewpoints within a team, but it requires more organizational structure than does a homogeneous team.[26]

Without specific processes, organizing principles, and operational structures, interdisciplinary teams will quickly revert to being simply a room full of experts who ultimately drift back to their previous work patterns. That is, the experts will not be a team at all; they will be a group of experts individually working in some general problem space.[27]

 

Can Technology Help?

There are potential technological alternatives to multifaceted teams. For example, an Electronic Performance Support System (EPSS) is a large database that is used in conjunction with expert systems, intelligent agents, and decision aids.[28] Although applying such a system to intelligence problems might be a useful goal, at present, the notion of an integrated EPSS for large complex data sets is more theory than practice.[29] In addition to questions about the technological feasibility of such a system, there are fundamental epistemological challenges. It is virtually inconceivable that a comprehensive computational system could bypass the three complicating variables of expertise described earlier.

An EPSS, or any other computational solution, is designed, programmed, and implemented by a human expert from one domain only, that of computer science. Historians will not design the “historical decision aid,” economists will not program the “economic intelligent agent,” chemists will not create the “chemical agent expert system.” Computer scientists may consult with various experts during the design phase of such a system, but, when it is time to sit down and write code, the programmer will follow the heuristics with which he is familiar.[30] In essence, one would be trading the heuristics of dozens of domains for those that govern computer science. This would reduce the problem of processing time by simplifying and linking data, and it might reduce pattern bias. It would not reduce heuristic bias, however; if anything, it might exaggerate it by reducing all data to a binary state.[31]

This skepticism is not simply a Luddite reaction to technology. Computational systems have had a remarkable, positive effect on processing time, storage, and retrieval. They have also demonstrated utility in identifying patterns within narrowly defined domains. However, intelligence analysis requires the expertise of so many diverse fields of study that it is not a task a computational system handles well. Although an EPSS, or some other form of computational system, may be a useful tool for manipulating data, it is not a solution to the paradox of expertise.

 

Analytic Methodologists

Most domains have specialists who study the scientific process or research methods of their discipline. Instead of specializing in a specific substantive topic, these experts specialize in mastering the research and analytic methods of their domain. In the biological and medical fields, these methodological specialists are epidemiologists. In education and public policy, they are program evaluators. In other fields, they are research methodologists or statisticians. Whatever the label, each field recognizes that it requires experts in methodology who focus on deriving meaning from data, recognizing patterns, and solving problems within a domain in order to maintain and pass on the domain’s heuristics. They become in-house consultants—organizing agents—who work to identify research designs, methods for choosing samples, and tools for data analysis.

Because they have a different perspective than do the experts in a domain, methodologists are often called on by substantive experts to advise them on a variety of process issues. On any given day, an epidemiologist, for example, may be asked to consult on studies of the effects of alcoholism or the spread of a virus on a community or to review a double-blind clinical trial of a new pharmaceutical product. In each case, the epidemiologist is not being asked about the content of the study; rather, he is being asked to comment on the research methods and data analysis techniques used.

Although well over 160 analytic methods are available to intelligence analysts, few methods specific to the domain of intelligence analysis exist.[32] Intelligence analysis has few specialists whose professional training is in the process of employing and unifying the analytic practices within the field. It is left to individual analysts to know how to apply methods, select one method over another, weigh disparate variables, and synthesize the results—the same analysts whose expertise is confined to specific substantive areas and their own domains’ heuristics.

 

Conclusion

Intelligence agencies continue to experiment with the right composition, structure, and organization of analytic teams. Yet, although they budget significant resources for technological solutions, comparatively little is being done to advance methodological science. Methodological improvements are left primarily to the individual domains, a practice that risks falling into the same paradoxical trap that currently exists. What is needed is an intelligence-centric approach to methodology that will include the methods and procedures of many domains and the development of heuristics and techniques unique to intelligence. In short, intelligence analysis needs its own analytic heuristics that are designed, developed, and tested by professional analytic methodologists.

The desired outcome would be a combined approach that includes formal thematic teams with structured organizational principles, technological systems designed with significant input from domain experts, and a cadre of analytic methodologists. These methodologists would act as in-house consultants for analytic teams, generate new methods specific to intelligence analysis, modify and improve existing methods of analysis, and promote the professionalization of the discipline of intelligence. Although, at first, developing a cadre of analytic methodologists would require using specialists from a variety of other domains and professional associations, in time, the discipline would mature into its own subdiscipline with its own measures of validity and reliability.

 

Footnotes:

[1] A version of this chapter originally appeared as “Integrating Methodologists into Teams of Substantive Experts,” Studies in Intelligence 47, no. 1 (2003): 57–65.

[2] A method for estimating the probability of a given outcome developed by Thomas Bayes (1702–61), an English mathematician. See Thomas Bayes, “An Essay Toward Solving a Problem in the Doctrine of Chances.”

[3] W. Chase and H. Simon, “Perception in Chess.”

[4] American College of Radiology. Personal communication, 2002.

[5] A. Lesgold et al., “Expertise in a Complex Skill: Diagnosing X-Ray Pictures.”

[6] M. Minsky and S. Papert, Artificial Intelligence; J. Voss and T. Post, “On the Solving of Ill-Structured Problems.”

[7] O. Akin, Models of Architectural Knowledge; D. Egan and B. Schwartz, “Chunking in Recall of Symbolic Drawings”; K. McKeithen et al., “Knowledge Organization and Skill Differences in Computer Programmers.”

[8] M. Chi, P. Feltovich, and R. Glaser, “Categorization and Representation of Physics Problems by Experts and Novices”; M. Weiser and J. Shertz, “Programming Problem Representation in Novice and Expert Programmers.”

[9] W. Chase and K. Ericsson, “Skill and Working Memory.”

[10] W. Chase, “Spatial Representations of Taxi Drivers.”

[11] J. Paige and H. Simon, “Cognition Processes in Solving Algebra Word Problems.”

[12] Voss and Post.

[13] M. Chi, R. Glaser, and E. Rees, “Expertise in Problem Solving”; D. Simon and H. Simon, “Individual Differences in Solving Physics Problems.”

[14] J. Larkin, “The Role of Problem Representation in Physics.”

[15] C. Camerer and E. Johnson, “The Process-Performance Paradox in Expert Judgment.”

[16] H. Reichenbach, Experience and Prediction; T. Sarbin, “A Contribution to the Study of Actuarial and Individual Methods of Prediction.”

[17] R. Dawes, D. Faust, and P. Meehl, “Clinical Versus Actuarial Judgment”; W. Grove and P. Meehl, “Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures.”

[18] R. Dawes, “A Case Study of Graduate Admissions”; Grove and Meehl; H. Sacks, “Promises, Performance, and Principles”; T. Sarbin, “A Contribution to the Study of Actuarial and Individual Methods of Prediction”; J. Sawyer, “Measurement and Prediction, Clinical and Statistical”; W. Schofield and J. Garrard, “Longitudinal Study of Medical Students Selected for Admission to Medical School by Actuarial and Committee Methods.”

[19] L. Goldberg, “Simple Models or Simple Processes?”; L. Goldberg, “Man versus Model of Man”; D. Leli and S. Filskov, “Clinical-Actuarial Detection of and Description of Brain Impairment with the Wechsler-Bellevue Form I.”

[20] J. Fries, et al., “Assessment of Radiologic Progression in Rheumatoid Arthritis.”

[21] J. Evans, Bias in Human Reasoning; R. Heuer, Psychology of Intelligence Analysis; D. Kahneman, P. Slovic, and A. Tversky, Judgment Under Uncertainty; A. Tversky and D. Kahneman, “The Belief in the ‘Law of Small Numbers’.”

[22] L. Kirkpatrick, Captains Without Eyes: Intelligence Failures in World War II; F. Shiels, Preventable Disasters; J. Wirtz, The Tet Offensive: Intelligence Failure in War; R. Wohlstetter, Pearl Harbor.

[23] Dorwin Cartwright and Alvin Zander, Group Dynamics: Research and Theory; P. Fandt, W. Richardson, and H. Conner, “The Impact of Goal Setting on Team Simulation Experience”; J. Harvey and C. Boettger, “Improving Communication within a Managerial Workgroup.”

[24] M. Deutsch, “The Effects of Cooperation and Competition Upon Group Process”; D. Johnson and R. Johnson, “The Internal Dynamics of Cooperative Learning Groups”; D. Cartwright and A. Zander, Group Dynamics: Research and Theory; D. Johnson et al., “Effects of Cooperative, Competitive, and Individualistic Goal Structure on Achievement: A Meta-Analysis”; R. Slavin, “Research on Cooperative Learning”; R. Slavin, Cooperative Learning.

[25] J. Cannon-Bowers, E. Salas, and S. Converse, “Shared Mental Models in Expert Team Decision Making”; L. Coch and J. French, “Overcoming Resistance to Change”; M. Deutsch, “The Effects of Cooperation and Competition Upon Group Process”; L. Festinger, “Informal Social Communication”; D. Johnson et al., “The Impact of Positive Goal and Resource Interdependence on Achievement, Interaction, and Attitudes”; B. Mullen and C. Copper, “The Relation Between Group Cohesiveness and Performance: An Integration”; W. Nijhof and P. Kommers, “An Analysis of Cooperation in Relation to Cognitive Controversy”; J. Orasanu, “Shared Mental Models and Crew Performance”; S. Seashore, Group Cohesiveness in the Industrial Work Group.

[26] T. Mills, “Power Relations in Three-Person Groups”; L. Molm, “Linking Power Structure and Power Use”; V. Nieva, E. Fleishman, and A. Rieck, Team Dimensions: Their Identity, Their Measurement, and Their Relationships; G. Simmel, The Sociology of Georg Simmel.

[27] R. Johnston, Decision Making and Performance Error in Teams: Research Results; J. Meister, “Individual Perceptions of Team Learning Experiences Using Video-Based or Virtual Reality Environments.”

[28] An Expert System is a job-specific heuristic process that helps an expert narrow the range of available choices. An Intelligent Agent is an automated program (bot) with built-in heuristics used in Web searches. A Decision Aid is an expert system whose scope is limited to a particular task.

[29] R. Johnston, “Electronic Performance Support Systems and Information Navigation.”

[30] R. Johnston and J. Fletcher, A Meta-Analysis of the Effectiveness of Computer-Based Training for Military Instruction.

[31] J. Fletcher and R. Johnston, “Effectiveness and Cost Benefits of Computer-Based Decision Aids for Equipment Maintenance.”

[32] Exceptions include: S. Feder, “FACTIONS and Policon”; R. Heuer, Psychology of Intelligence Analysis; R. Hopkins, Warnings of Revolution: A Case Study of El Salvador; J. Lockwood and K. Lockwood, “The Lockwood Analytical Method for Prediction (LAMP)”; J. Pierce, “Some Mathematical Methods for Intelligence Analysis”; E. Sapp, “Decision Trees”; J. Zlotnick, “Bayes’ Theorem for Intelligence Analysis.”

