REPORT ON REGIONAL EVALUATION SURVEY

                                                         Page No.
Summary                                                         1
Background                                                      2
RMP Evaluation Resources                                        4
    Evaluation Directors                                        4
    Staff                                                       5
    Consultants                                                 5
    Evaluation Committees                                       6
    Expenditures for Evaluation                                 6
RMP Evaluation Activities                                       7
    Project Evaluation                                          8
    Related Activities                                         11
    Program Evaluation                                         12
    Relationship of Evaluation to Decisionmaking               13
Conclusion: Findings and Recommendations                       14
Appendices
    Regions Visited -- Evaluation Survey                        A
    Issues and Questions for RMP Evaluation Visit               B
    Regional Medical Programs Directors of Evaluation           C

Office of Planning and Evaluation
Regional Medical Programs Service
March, 1972

I. SUMMARY

This is a report of a survey of the present state of evaluation resources and activities in Regional Medical Programs. The survey, conducted in 1971, was designed to obtain information and insights regarding:

* How evaluation was defined and viewed -- its function, importance, and visibility -- at the regional level.

* The RMP staff, other resources, and organizational arrangements for evaluation.

* The scope and nature of evaluation efforts and activities being carried out by RMPs.

* The effect or impact of these evaluation activities upon RMP decisionmaking. That is, are evaluation results actually being utilized to monitor and control performance, to modify or, where indicated, discontinue RMP-supported activities or projects, and/or to establish or alter program objectives, priorities, and strategies.

* What major problems RMP evaluation efforts confront.

This survey was prompted by a number of factors and considerations. Among them:

* By 1971, RMP as a program had been underway for almost five years, and many individual RMPs were entering their third or fourth year of operational activity. It seemed a natural juncture in the program to take stock of what was actually happening in the Regions with respect to evaluation.

* It was very unclear, simply based upon a continuing perusal of grant applications and progress reports, how Regions were evaluating and addressing problems relating to evaluation; little in the way of evaluation "outcomes" was reflected in these materials.

* The National RMP Conference and Workshop on Evaluation held in Chicago in September 1970 reinforced the impression that gaining an overall picture of regional evaluation activities required some special effort or endeavor.

* That conference also strongly suggested the need to improve communication between RMPS and the Regions, as well as among the Regions themselves, in the area of evaluation. A survey such as that conducted was seen as one possible way of initiating better communications and understanding.

The survey resulted in a number of findings and highlighted certain problems. Some recommendations and suggestions are made in view of these. The following are among the more salient findings:

* A significant fraction of total RMP resources is being devoted to evaluation, with an estimated $3.5-4 million being expended for evaluation activities and purposes.

* Nearly all present RMP evaluation efforts and activities are directed at assessing operational projects. There is, conversely, little or no evaluation of core activities.

* Only a few Regions are beginning to grapple with the problem of program evaluation, and these efforts have not been very fruitful to date.

* Although evaluation is defined and viewed by the Regions primarily as "a management tool for decisionmaking," there does not seem to be any significant relationship between evaluation and decisionmaking in most RMPs.
* Certain promising new approaches and techniques are being tried by a number of RMPs. Project site visits and evaluation committees, for example, are being utilized increasingly. These and other devices may prove helpful in tying evaluation more closely to regional decisionmaking.

* There does not appear to be any significant communication or cooperation among RMPs with respect to evaluation of similar activities or common problems.

II. BACKGROUND

This survey was largely conducted during the first nine months of 1971. Although it did include a review of current applications, progress reports, and other documentation available within RMPS on all 56 Regions, the principal mechanism employed was that of visits to eleven RMPs. (See Appendix A for the Regions visited, persons contacted, and OPPE staff making the visits.)

A number of factors were taken into consideration in selecting the Regions to be visited. For example:

* One basic criterion was to get a "mix" of Regions reflecting various staffing and organizational patterns (e.g., small and large evaluation staffs, with and without evaluation committees).

* Another important factor was whether the Regions appeared to have some semblance of an evaluation strategy or, as a minimum, whether evaluation seemed to have been built into most or all of their funded operational activities.

* Some attempt also was made to ensure that the Regions visited were collectively more rather than less "representative" in terms of certain salient characteristics (e.g., type of grantee, urban vs. rural).

In order to achieve some degree of comparability among the data collected, a series of open-ended issue questions relating to the purposes of the study was developed. (See Appendix B for the document employed, "Issues and Questions for RMP Evaluation Visit.") So that the Regions taking part in the survey would be apprised of the kinds of information being sought, the document was sent to them approximately two weeks before the visit was made. During the course of each visit, interviews were conducted with the Coordinator (or Program Director), the Evaluation Director and his staff, other RMP staff responsible for evaluation-related activities (e.g., data collection, project monitoring), the Regional Advisory Group Chairman and/or other RAG members, the Chairman of the Evaluation Committee if one had been established, and several project directors.

Given the questionnaire-interview methodology employed, it should be obvious that what is presented in the way of information, findings, and conclusions is based largely on limited (as opposed to hard) data or constitutes informed speculation. Such reasonably hard data as are presented (e.g., academic backgrounds and salaries of Evaluation Directors and members of staff) are quite limited; moreover, they generally relate to those matters which are of lesser or minor significance.

A conceptual construct which figured in the design and conduct of this survey was the functional evaluation "schema" described in the ADL Report on A Study of the Regional Medical Programs. That schema, in summary, views evaluation as functioning essentially to serve one of three basic purposes:

"Justification -- to defend what is planned or what has been done.

"Control -- to obtain performance details to assist management in making behavior conform to a standard.

"Learning -- to help the evaluated activity grow by developing new goals, techniques, or strategies, creating new expectations and standards rather than conforming to old ones."
The survey itself was developed and conducted by the RMPS Office of Planning and Evaluation, and the report was drafted by Mr. Harold O'Flaherty of that Office's Evaluation Branch. Although nearly every member of the OPE staff contributed in some measure to the actual conduct of the study, Miss Rhoda Abrams, Assistant Chief of the Evaluation Branch, and Mr. O'Flaherty were the principal contributors to its overall development as well as its actual conduct. (Miss Abrams is now Chief, Program Planning and Reporting Branch, HMOS; and Mr. O'Flaherty is an Operations Officer with the Mid-Continent Desk.)

III. RMP EVALUATION RESOURCES

A. Evaluation Directors

Fifty-three (53) of the 56 RMPs have an Evaluation Director. This individual obviously is a key staff person for evaluation purposes. Fifty-one (51) of these Evaluation Directors were full-time or major part-time (i.e., 75% or greater), with the other five (5) only part-time. (Three Regions identified two individuals in effect jointly sharing the Evaluation Director position; thus, the total of 56.) The following chart summarizes the academic background of the 54 RMP Evaluation Directors who hold degrees.

Academic Background of the Evaluation Directors

Discipline                       Total   Bachelors   Masters   Doctorate
Behavioral/Social Sciences         23        1          10         12
    economics                      (4)      (1)         (1)        (2)
    psychology                    (10)       -          (3)        (7)
    sociology                      (9)       -          (6)        (3)
Biological/Physical Sciences        2        -           1          1
Business Administration             3        2           1          -
Education                           8        -           2          6
Planning                            2        -           1          1
Statistics                          5        -           4          1
Medicine/Public Health              6        -           1          5
    community medicine             (1)       -           -         (1)
    internal medicine              (1)       -           -         (1)
    preventive medicine            (2)       -           -         (2)
    public health                  (1)       -           -         (1)
Other                               5        2           3          -
Totals                             54        5          23         26

The salaries of the 56 Evaluation Directors ranged from a low of $7,140 to a high of $35,450. The average salary was $20,892 and the median was $19,750. As might be expected, there was a direct relationship between salary and academic achievement -- that is, those with a Ph.D. or M.D. degree were in the upper half of the range, while those with a Masters or Bachelors degree were largely in the lower half.

An attempt also was made to assess the Evaluation Directors' staff level within their own programs. Factors taken into account were (1) salary, (2) the relative placement of the Evaluation Director's position within the core staff hierarchy, and (3) academic background. Based upon these factors, it was judged that about 23 were at what might be termed a "high" level, 18 at a "medium" level, and 15 at a "low" level.

Someone at the "high" level would exhibit all or most of the following characteristics: possess an M.D. or Ph.D. degree; be designated an Associate or Assistant Director; have a salary only slightly less than the Program Director's; report directly to the Program Director; and be full-time. Someone at the "medium" level would for the most part be at the Masters level; report to someone other than the Program Director; have a salary less than other senior core staff members; and be employed less than full-time. Someone at the "low" level would for the most part fill a staff position; have no supporting staff under him; have a comparatively low salary; and have either a Bachelors or Masters degree.

This attempt to judge the staff level of RMP Evaluation Directors was made because of its possible significance as an indicator of (1) the priority a Region placed upon evaluation and (2) the Evaluation Director's influence in terms of decisionmaking. Whether or not there is any positive correlation, however, was not established.

B. Staff
The RMP Evaluation Directors are supported by an additional 110 professional staff members. About 90% of these are full-time, and the great majority (approximately 80%) have been trained in the behavioral or social sciences.

C. Consultants

All of the Regions visited used outside consultants for evaluation purposes, and probably most or nearly all other RMPs have as well. These outside consultants appear to be most frequently drawn from medical schools, university departments of sociology and psychology, university-based computer centers, and state health departments.

Regions varied with respect to how frequently outside resources were used. For example, in Kansas, where there was a large core evaluation staff, outside consultants were infrequently employed, whereas in Western New York, with an Evaluation Director but no supporting staff, outside consultants were frequently utilized.

The role played by these outside consultants can be defined in three ways: (1) to provide a review and critique of proposed evaluation strategies; (2) to carry out a statistical analysis of the collected evaluation data; and (3) to take part in site visits to ongoing or new projects for the purpose of reviewing the evaluation strategy and, if appropriate, recommending necessary changes.

D. Evaluation Committees

Evaluation Committees, which now have been established in 27 Regions, appear to be an important resource. These Committees perform at least three roles: (1) to critique evaluation strategies; (2) to monitor ongoing activities; and (3) to serve as a liaison between the core staff and the Regional Advisory Group.

Eight of the eleven Regions visited had appointed Evaluation Committees. Seven of these were comprised entirely of Regional Advisory Group members. In the 19 Regions not visited which had Evaluation Committees, all but two were also made up of RAG members.

There is some evidence that the evaluation effort is materially augmented when the Regional Advisory Group establishes or appoints an Evaluation Committee. Specifically, the Evaluation Directors in the Regions visited indicated they could use an Evaluation Committee to support their efforts, particularly when a project was experiencing difficulty. As previously indicated, a major function of these Evaluation Committees is to establish and carry out annual site visits to ongoing projects. It was found that in all cases site visit reports were made available to the Regional Advisory Group, the Project Director, and the Program Director. These reports, augmented by the results accruing from core-staff-conducted evaluation, seem to be accorded some real weight by Regional Advisory Groups in terms of their decisionmaking with respect to project priorities and funding.

E. Expenditures for Evaluation

In order to estimate the total expenditures for evaluation by the 56 Regions, each of the eleven visited was asked to provide information regarding staff salaries, consultant costs, travel, and computer usage. They were requested to separate expenditures in these areas for evaluation purposes from those relating to the collection of data. They were then asked to determine what percentage evaluation costs and data collection costs represented of their total core budget. It should be noted that in the Regions visited, project budgets did not contain funds for evaluation except in three instances. The percentage estimate of core dollars being spent for evaluation varied among these Regions.
It ranged from 4% in Mountain States to 22% in Arkansas, with the average expenditure for the eleven Regions visited being approximately 10%.

In fiscal year 1971 the grants to all 56 Regional Medical Programs totaled roughly $81 million; of this total, $39 million went for the support of core activities. If the average expenditure of core dollars going for evaluation purposes for all Regional Medical Programs is roughly the same as for the eleven Regions visited (10%), the estimated outlay of RMP dollars going for evaluation was about $3.5-$4 million in fiscal year 1971. As previously mentioned, this would not include core dollars being allocated for the collection and analysis of health and demographic data.

The percent of core funds being spent for data purposes varied among the eleven Regions visited, ranging from a high of 10% in Western New York to a low of 1% in Florida, with the average being 4%. Again, if the average of 4% is representative nationally, the 56 RMPs are spending $1.4-$1.6 million for data collection.

In summary, the 56 RMPs, based on the eleven visits made, are spending $4.9-$5.6 million, or from 12% to 16% of their aggregate core funds, for evaluation and data collection purposes.

IV. RMP EVALUATION ACTIVITIES

As a backdrop to the evaluation activities being carried on, an attempt was made to determine how the eleven Regions visited perceived and defined evaluation in functional terms. Thus, the Program Coordinator, the Evaluation Director, and the Regional Advisory Group Chairman in each case were asked to delineate what they felt to be the most important reasons for establishing an evaluation process.

Evaluation Directors generally indicated that the data would provide a meaningful baseline from which to work with ongoing activities to improve their overall performance. Coordinators suggested that the results of the evaluation process should provide insight regarding what activities have the greatest payoff, as well as serve as a major mechanism for further planning, including charting out future programmatic direction. Regional Advisory Group Chairmen for the most part reported that, albeit in future terms, evaluation-related information should be used for purposes of decisionmaking. Each of the above-mentioned groups of individuals implied that evaluation is a management tool to be used as a major force in decisionmaking. (As noted below, there was little evidence that evaluation data and results were actually used in this way.)

Each of the Evaluation Directors also was requested to spell out the approaches the Region was utilizing to evaluate funded activities. In summary, four approaches and methodological techniques were most frequently encountered: (1) the goal attainment model, used in social science and education to retrospectively measure progress in terms of predetermined standards; (2) managerial control, where projects are continuously and systematically monitored to determine overall strengths and weaknesses; (3) on-site peer review, with site visit teams inspecting projects to determine their overall accomplishments and problems; and (4) program reporting systems, consisting of standardized reporting forms submitted at regular intervals for review and analysis.

The Regions visited varied with respect to how evaluation was approached. Arkansas, for example, utilized almost exclusively a management approach. Intermountain relied largely on the goal attainment model to determine how effective the educational process had been in terms of changing knowledge.
Western New York employed a variety of approaches and techniques, including the goal attainment model, a program reporting system, on-site peer review, and a special assessment of the effectiveness of the program as perceived by others in the Region. (The last was carried out as part of a larger study funded by RMPS with the Harvard Center for Community Health and Medical Care to develop, field test, and assess a new methodological tool for program evaluation (Information Support System) to assist RMPs in reviewing their own activities and the future development of their programs.)

A. Project Evaluation

Most project evaluation being carried out in the Regions visited was retrospective (i.e., at a point in time a determination is made as to whether or not an activity or project has thus far accomplished its stated objectives). However, a growing number of Regions were beginning to establish program reporting systems, a form of prospective evaluation defined as systematic, continuous monitoring of events or occurrences to determine whether or not an activity continues to meet its stated objectives.

The principal methodological technique used for carrying out project evaluation was the goal attainment model as used in social science and educational research. The model consists of the following six steps: (1) determination of project goals; (2) determination of project objectives; (3) determination of measures of objective attainment; (4) establishment of standards; (5) collection of data on performance; and (6) comparison of actual performance with the standards previously set.

The above-mentioned model was being used in ten of the eleven Regions visited. An example of the effective use of this model was seen in the Coronary Care Unit Nurse Training project being carried out in the Texas Regional Medical Program. The objectives of this project were: (1) to increase the number of nurses trained in CCU management; (2) to increase on a statistically significant basis the knowledge level of nurses upon completion of the course; and (3) to determine the impact of the project upon the subsystem in which it was being carried out. Through interviewing the project director, it was learned that: (1) the number of trained nurses had been increased; (2) the knowledge level had been increased on a statistically significant basis; (3) the project had affected the attitudes of other health providers to the extent that hospital administrators were willing to reallocate resources to take over support of the project; and (4) physicians became cognizant of the fact that they had need for a similar type of course, which was subsequently put on at the physicians' expense. It should be noted that because of the evaluation done of this project, the Texas RAG had given it a number one priority ranking.

The evaluation of this project was broader in scope than most of the evaluation going on in the Regions visited. It took into consideration such factors as the supply of manpower and the broker-facilitator effect of the project upon the subsystem where it was located. In terms of educational projects, the latter factor appears to be the most difficult to measure. As previously mentioned, the primary foci of the educational project evaluation being carried out within the Regions visited related to knowledge, performance, and attitudes. All of the Regions visited emphasized at least one of these parameters.
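The comparison at the heart of the goal attainment model (steps 5 and 6 above) can be made concrete with a brief sketch. The following is an editorial illustration only, written in present-day Python; the objective, scores, and standard are hypothetical and are not drawn from any Region's actual instrument.

    # Illustrative sketch of a goal attainment comparison: measure performance
    # (here, pre- and post-course knowledge scores) and compare the observed
    # change against a predetermined standard. All figures are hypothetical.

    def goal_attainment(objective, standard_gain, pre_scores, post_scores):
        """Compare the observed gain in mean score against the standard set in advance."""
        mean_pre = sum(pre_scores) / len(pre_scores)      # step 5: collect performance data
        mean_post = sum(post_scores) / len(post_scores)
        observed_gain = mean_post - mean_pre
        return {                                          # step 6: compare with the standard
            "objective": objective,
            "standard_gain": standard_gain,
            "observed_gain": round(observed_gain, 1),
            "objective_met": observed_gain >= standard_gain,
        }

    # Hypothetical CCU nurse-training knowledge scores on a 0-100 scale.
    result = goal_attainment(
        objective="Increase knowledge level of CCU nurses",
        standard_gain=15.0,                               # step 4: predetermined minimum acceptable gain
        pre_scores=[52, 61, 48, 57, 63],
        post_scores=[78, 85, 70, 74, 88],
    )
    print(result)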
The Intermountain RMP, for example, in its CCU Nurse Training project has developed a pre- and post-test for assessing changes in knowledge and performance on the part of those trained. The test was rated by a panel of nurses and cardiologists drawn from all parts of the country, and it was one of the more standardized instruments encountered for RMP project evaluation purposes. Nurses take the test at the beginning of the course, on the last day of the course, and three months after the course has been completed. The test assesses changes in knowledge through the use of objective and essay questions, and changes in performance by asking trainees to determine how they would respond to a typical situation occurring in a CCU. When the series of tests has been completed for each class, the collective and individual results are fed back to the project director and used as a mechanism for course improvement and as a basis for consulting with trainees.

The Mountain States RMP also heavily emphasized continuing education as the modality for launching its program. The focus of the evaluation effort related to determining the number and types of professionals and paraprofessionals attending continuing education and training courses, their attitudes toward the material presented to them, and suggestions for improvement in course content. This evaluation was done by implementing a computerized monitoring system which feeds the above-mentioned data back to project directors on a quarterly basis.

The Kansas RMP also was primarily geared to making available continuing education and training opportunities. The project evaluation process in this Region emphasized all three factors -- changes in knowledge, attitudes, and performance. Changes in knowledge were measured through the use of a pre- and post-test design developed jointly by the project director and the Evaluation Director. Changes in attitudes were measured through the use of a pre- and post-opinionnaire, and changes in performance through administering a follow-up questionnaire to the hospital administrator. The latter questionnaire attempted to get a fix upon what, if anything, the trainee was doing differently upon his or her return to the hospital setting. Further, project staff, on a selected basis, carried out on-site visits to hospitals sponsoring trainees to determine whether or not there was indeed any significant behavioral change.

As can be seen from the above three examples, the foci of the evaluation processes are quite different. In Intermountain the emphasis was on changes in both knowledge and behavior. In Mountain States the primary issue was documenting the numbers attending courses as well as changes in attitudes. In Kansas the evaluation process was set up to measure changes in knowledge, attitudes, and performance. While the evaluative foci are somewhat different in these three Regions, the programmatic emphases are basically the same, i.e., each of the three programs had given high priority to continuing education and training.

In each of the three examples mentioned above, the goal attainment model was the primary methodological technique used. Other Regions visited had accomplished similar evaluation-related goals but had employed different evaluation strategies.
For example, in Western New York an evaluation design was approved by the Evaluation Committee and built into the project from the outset; a program reporting system was being utilized to constantly and systematically monitor each project; and on-site visits were made by the Regional Advisory Group, Evaluation Committee, and core staff in order to carry out an overall assessment of the project.

In Arkansas the evaluation process was primarily management-oriented. When a project was being developed, the Evaluation Director and his staff worked with the project director to specify objectives and develop a record-keeping system relating to project objectives. Once the project was funded, quarterly reports were submitted to the Evaluation Director which addressed such areas as what the project had done to meet its stated objectives, the problems that were hampering the satisfactory implementation of the project, and whether project funds were being spent in the most appropriate manner. An annual assessment was then done by the Evaluation Director and his staff and fed back to the project director. Roughly three months later an evaluation staff site visit team visited the project to determine whether or not recommended changes had been accomplished. Based upon this visit, a recommendation was made to a project review committee of the Arkansas RAG regarding the future duration of the project. Through the use of this process a recommendation had been made for the early termination of five projects, two of which had been terminated by the Regional Advisory Group.

B. Related Activities

In the course of the eleven visits made, Evaluation Directors also were questioned about activities other than project evaluation with which they and their staffs were involved. Two major kinds of related activities were described: (1) the conduct of special studies and (2) the collection of health and demographic data.

Three of the eleven Regions visited (Intermountain, Kansas, and Texas) indicated that special studies had been initiated to analyze salient programmatic trends. Each of these three Regions, it should be noted, has a comparatively large evaluation staff, from three to nine professionals. Examples of such special studies included a Task Analysis of Nurses in the Texas Region. That RMP, in conjunction with the Texas Hospital Association, was analyzing hospital nursing tasks. In this study nurses were asked to define the tasks they perform; then the tasks of nurses, LPNs, and orderlies were timed over a two-week period, 24 hours a day, in a 16-bed unit. It was aimed at identifying what nursing functions might be carried out by supportive personnel and how the nurses might more judiciously use their time. The Kansas RMP was engaged in a Coronary Care Unit Survey of Hospitals. This survey was investigating the type of equipment used in area hospitals, staffing patterns, procedures followed, and the number of nurses trained by the Kansas RMP functioning in the unit. The results of this study are intended for use by the State Heart Association in establishing norms of levels of care. In addition, the Program Coordinator indicated that the results of this investigation will be used by his staff and the Regional Advisory Group for determining future programmatic activity in the area of coronary care.

The second evaluation-related activity carried out by evaluation staffs was the collection and analysis of health and demographic data.
Each of the Regions visited, to a greater or lesser extent, collected and assembled some of these kinds of data. From a historical perspective, each of these Regions indicated that when their programs were getting off the ground, a great deal of time and effort was spent in the collection and assembling of health and demographic data. For example, an estimated forty percent of the first planning grant awarded to the Alabama RMP was spent for this purpose. As Regions became more project-oriented this changed drastically, to the extent that very little such data were collected. Three of the Regions visited (Alabama, Northlands, and Texas) continue to assemble community or county profiles, however. Other health agencies, as well as the RMPs visited, appeared to be backing off from massive data collection efforts.

In several of the Regions visited, the establishment of health data consortia is being considered. It is hoped that these consortia (usually consisting of the State CHP, State health department, RMP, and other interested State agencies such as social and rehabilitative services) would be able to provide morbidity and mortality statistics as well as population and resource data needed for health planning and evaluation purposes. It was the perception of those contacted that the establishment of these consortia would reduce the cost and improve the efficiency of the collection and assembling of health and demographic data.

C. Program Evaluation

Very probably the most pressing evaluation problem confronting not only the eleven Regions visited but all Regional Medical Programs is how an RMP can assess its total program -- its overall programmatic impact. In the eleven Regions visited no endeavor of this nature had yet been initiated or tried. (Since that time a small number of RMPs, including at least one of the eleven visited, have tried to assess or evaluate their total programs by having their RAGs apply the RMP Review Criteria to their own programs.)

Several problems and issues have characterized and probably hampered program evaluation efforts by RMPs to date. First, the term "program" has been defined in different ways: (1) some view it in terms of a group of related activities which can be organized into a single thrust, such as improvement of coronary care facilities and services, whereas (2) others see all the activities carried out under the purview of an RMP as constituting the program to be evaluated. Second, goals, objectives, and priorities as delineated by the various RMPs have generally lacked the required degree of specificity, including target dates. Third, because of the broad scope and fluid character of a Regional Medical Program, i.e., multiple types of changing projects and core staff activities, none of the customary or more common methodological techniques or approaches appears to be appropriate for carrying out such an assessment, at least in the minds of those who have prime responsibility for this task at the regional level. Fourth, to carry out an assessment of total programmatic impact would very possibly involve a significant dollar expenditure to develop and implement an appropriate instrument and procedures.

In two Regions (Kansas and Intermountain), however, certain phases of program activity were being measured. The Kansas RMP had trained over 600 registered nurses in coronary care.
Each hospital that sent one or more nurse trainees to a course offered under the aegis of that Region is being asked to supply data such as changes in death rates in the CCU, changes in the management of the unit, and its perception of how the trained nurse has affected both of the above factors. These data, when collected and analyzed, will be used to determine future programmatic endeavors in the CCU field. In the Intermountain RMP a cancer registry and cancer information system were being utilized to measure how effective the training provided to physicians and nurses had been with respect to improvements in the early recognition and diagnosis of cancer.

In addition, it was learned that several of the Regions were beginning to experiment with the use of the national RMP Review Criteria as a means of assessing their overall program effectiveness. These criteria provided a mechanism for those Regions using them to determine areas of both strength and weakness as perceived by RAG members, core staff, and others.

D. Relationship of Evaluation to Decisionmaking

Historically speaking, evaluation has not markedly affected regional decisionmaking. In eight of the Regions visited, statements were made by the Coordinator, RAG Chairman, and Evaluation Director to the effect that when projects were initially developed little emphasis was placed upon evaluation. Therefore, it has become necessary to build evaluation into projects well after they were approved and initiated.

The Regions visited did indicate, however, that with decentralization of project review and funding authority, it became necessary to better document the basis for allocational decisions. Evidence of the developing relationship between evaluation and the regional decisionmaking process is found in the fact that the Arkansas, Florida, Oregon, Texas, and Western New York RMPs have prematurely terminated projects. In each case core staff were able to determine that serious problems existed in terms of the day-to-day management and implementation of the activities terminated. These data were reported to committees of the Regions' Advisory Groups. Following this action, site visits were held and the reports were made available to the Regional Advisory Groups for final action. It might be noted that in all the Regions mentioned except Arkansas, site visit teams were comprised of RAG members, core staff members, and outside consultants; Arkansas used core staff only.

In addition, it did not appear that evaluation-related data have had an impact upon the deliberations of the Regional Advisory Groups with respect to the development and delineation of goals, objectives, and priorities. Evaluation still appeared to be a peripheral function in most Regional Medical Programs; where evaluation data were having an impact upon decisionmaking and program development, the Regional Advisory Group played a significant role in the evaluation process, primarily through making site visits. It would appear that Evaluation Directors need to be in more frequent contact with the Regional Advisory Groups so that they know what regional decisionmakers need from evaluation. In Regions where there was only one professional staff member functioning in the area of evaluation, mechanisms need to be developed to facilitate feedback between Evaluation Directors and project directors.
To accomplish this end, program reporting systems or management information systems were being implemented in at least several of the Regions visited (Mountain States, Texas, and Western New York). A management information system should provide a project director with data regarding progress and problems. These data form the basis of an outside evaluation that can be used by the project director to alter the direction of the project.

V. CONCLUSION: FINDINGS AND RECOMMENDATIONS

Evaluation as an integral facet of the development and implementation of the various Regional Medical Programs appeared to be receiving increased visibility. Over 90% of all projects underway in the eleven Regions surveyed included an evaluation strategy. These strategies varied in sophistication from a simple recapitulation of the number of those attending continuing education and training courses to an assessment of the impact of the project upon the subsystem in which it was being carried out.

Given the fact that continuing education and training in most Regions was the major vehicle for launching the program and establishing credibility in the Region, it follows that this area of program endeavor has received primary consideration for evaluation purposes. Further, there already had been established some tools, applicable in the RMP context, that were beginning to measure changes in knowledge, attitudes, and performance. Therefore, it is not surprising that the evaluation of the educational process had progressed to a rather sophisticated state. However, except for a few instances, little evaluation of the impact of a project upon its target population was being carried out.

In fiscal year 1971 core staff funding totalled approximately $39 million. While project evaluation has received increased visibility and efforts are being made to think in terms of program evaluation, it did not appear that the Regions visited were systematically evaluating core staff activities. Rhetorically, the question must be asked: who would evaluate the effectiveness of core staff and the activities carried out under its purview? It would seem that the logical group for carrying out this exercise would be the Regional Advisory Group. Therefore, it is recommended that the Regional Advisory Groups take whatever steps are necessary to evaluate core activities, and to evaluate and rank the more or less discrete components of core activity in much the same manner as operational proposals and ongoing projects.

One of the most difficult tasks confronting the Evaluation Director, his support staff, the Program Director, and the Regional Advisory Group was found to be the development and implementation of some workable approach to assessing total programmatic impact. The major problem Evaluation Directors appeared to be facing was the dearth of guidelines and definitions for program evaluation; these individuals also indicated that there did not appear to be any existing methodologies that might be used for these purposes. To relieve this situation, RMPS, working with the Regions, should strive to delineate guidelines that might be followed by a Regional Medical Program in carrying out program evaluation:

(1) A determination by the RAG of overall program effectiveness through the use of the review criteria.

(2) An assessment by the RAG of whether or not the activity is meeting its stated objectives.
(3) Assignment of a funding level to each phase of program activity based on the above information as well as through the use of a priority ranking system reflecting Federal and regional goals, objectives, and priorities.

Program evaluation, however defined, clearly is one of the major problems facing both the Regions and RMPS. Therefore, RMPS, working with the Regions, the Ad Hoc RMP Evaluation Committee, and others, needs to intensify its efforts to develop workable approaches and techniques that will help meet this problem.

It appears that evaluation data are being used most effectively in those Regions that developed mechanisms, such as Evaluation Committees, for involving their RAGs in the evaluation process. Where these committees had been established, RAG members participated in annual site visits to ongoing and proposed projects. The information gathered from carrying out these visits was used more consistently in decisionmaking. Therefore, it is recommended that evaluation not be considered an isolated function, but rather be viewed as an integral facet, organizationally speaking, of the total program.

Frequently RMP core staff evaluation units are too small to systematically and continuously monitor each funded project. Therefore, it is necessary that ways be found to ameliorate this situation. The results of the study indicated that the use of outside consultants -- from university departments of sociology, psychology, and education, medical schools, and other health-related groups -- as well as Evaluation Committees can be of considerable value in enhancing the quality and utilization of the evaluation data.

Although core staff expenditures generally, and those for evaluation and data collection specifically, are significant when viewed in the aggregate, there still appeared to be a paucity of talent available in many of the Regions to function in the area of evaluation. Those Regions that demonstrated a great deal of activity in the area of evaluation also had a significant number of staff working on the problem. In those Regions that do not have available a cadre of well-trained staff working in the area of evaluation, and where there is also a paucity of outside resources that can be employed on a consultant basis, RMPS staff needs to provide assistance either through direct involvement or by making available to the Regions appropriate consultative expertise. In order to provide this service to the Regions, it is recommended that RMPS personnel who deal directly with the Regions be provided "training" that would enable them to identify evaluation-related problems and to communicate these identified deficiencies to key RMP staff.

One of the major stumbling blocks that has hampered evaluation activities at the regional level has been the lack of interregional communication. It is safe to say that there has been a great deal of "reinventing the wheel" with respect to the development of evaluation strategies and methodologies. In view of this, it is suggested that RMP Evaluation Directors consider how they might better relate to one another and share their experiences, successes, and problems to a greater extent. Further, contiguous Regions should consider the feasibility of establishing and implementing multi-regional evaluation efforts. If this were accomplished, certain costs could be reduced and quite possibly a better evaluation product produced in many instances.
In conclusion, there appears to be an increased awareness and sensitivity regarding the role evaluation plays in a program. It is quite obvious that many problems still exist, but it does seem that the conditions for their mutual solution have been created.

APPENDIX A

REGIONS VISITED -- EVALUATION SURVEY

Region: Alabama
Dates: Feb. 22-23, 1971
RMPS Staff Making Visit: Rhoda Abrams, Assistant Branch Chief, Evaluation Branch; Harold O'Flaherty, Program Analyst, Evaluation Branch; Roland L. Peterson, Director, Office of Planning and Evaluation
Persons Contacted:
  Core Staff: John M. Packard, M.D., Director; M. D. Plowden, Deputy Director; Douglas Patterson, Acting Associate Director for Evaluation; Dr. Ed Smith, Evaluation Consultant; James Robertson, Associate Director for Program Management; M. Lee, Assistant Director - Nursing; D. Cusic, Associate Director - Planning; L. Gilmore, Associate Director - Education
  RAG Members: Rush Jordan, Secretary
  Project Staff: Dorothy Scarbrough, Project Director; Dr. Jeanette Redford, Project Director

Region: Arkansas
Dates: May 18, 1971
RMPS Staff Making Visit: Rhoda Abrams; Joan Ensor, Program Analyst, Evaluation Branch; Harold O'Flaherty
Persons Contacted:
  Core Staff: Charles Silverblatt, M.D., Coordinator; Ed Rensch, Associate Coordinator; Roger Warner, Director, Division of Planning and Evaluation; Mrs. Dortha Jackson, Project Evaluator, Division of Planning and Evaluation; Mrs. Norma Haughay, Systems Analyst, Division of Planning and Evaluation; Mrs. Jacqueblyn Walter, RN, Evaluator, Division of Planning and Evaluation
  RAG Members: Dr. Greifenstein
  Project Staff: Sally Kasalko, Project Director; Bill North, Project Director

Region: Florida
Dates: June 2-3, 1971
RMPS Staff Making Visit: Spero Moutsatsos, Program Analyst, Evaluation Branch; Harold O'Flaherty
Persons Contacted:
  Core Staff: Dr. G. W. Larimore, State Director; Dr. H. Hilleboe, District VIII Area Coordinator; Dr. G. Engebretson, Associate Director, Continuing Education; Mr. J. Walker, Assistant Director, Administration
  RAG Members: Dr. R. P. Hampton
  Project Staff: Dr. J. S. Neill, Project Director

Region: Intermountain
Dates: April 8-9, 1971
RMPS Staff Making Visit: Rhoda Abrams; Spero Moutsatsos; Harold O'Flaherty
Persons Contacted:
  Core Staff: Robert Satovick, M.D., Coordinator; Mitchell Schorow, Assistant Coordinator, Education Planning and Evaluation; Dona Harris, Associate for Evaluation, Education and Planning Section; Kenneth Denne, Health Research Associate, Education and Planning Section; Michael Hogben, Ph.D., Associate for Educational Design, Education and Planning Section; Ed Catmul, Associate for Computer Data Analysis, Education and Planning Section; Arthur Ruby, Administrative Director for Heart Disease Projects; Vaughn Pulsipher, Administrative Director for Cancer Projects
  RAG Members: Sister Ann Josephine, Ph.D., CSC
  Project Staff: Marion Ford, Project Director

Region: Kansas
Dates: August 9-12, 1971
RMPS Staff Making Visit: Harold O'Flaherty; Larry Witte, Senior Health Services Officer, Planning Branch
Persons Contacted:
  Core Staff: Robert Brown, M.D., Coordinator; Ivan Anderson, Associate Director; Chuck Adair, Ph.D., Coordinator, Research and Evaluation Unit; Thelma Schneider, Research Associate, Research and Evaluation Unit; Chuck Hine, Coordinator, Institutions and Administration; Tom Adams, Research Associate, Research and Evaluation Unit; Bill Morris, Coordinator, Special Services; J. Dale Taliaferro, Ph.D., Director, Social Systems Research; Margaret Brown, Research Associate, Research and Evaluation Unit;
Dr. Hinshaw, Subregional Coordinator, Wichita; Phil Patterson, Assistant Subregional Coordinator, Wichita
  RAG Members: Roy House, Chairman, Regional Advisory Group Evaluation Committee
  Project Staff: Desi Shafer, Project Director; Sharon Lunn, Project Director; Dr. Ernest Crow, Project Director

Region: Mountain States
Dates: Sept. 7-8, 1971
RMPS Staff Making Visit: Harold O'Flaherty; Lyman Van Nostrand, Senior Program Analyst, Planning Branch
Persons Contacted:
  Core Staff: Alfred M. Popma, M.D., Regional Director; J. W. Gerdes, Ph.D., Deputy Regional Director; Sidney C. Pratt, M.D., Director - Montana; Fred O. Graeber, M.D., Director - Idaho; J. B. Deisher, M.D., Director - Nevada; Claude O. Grizzle, M.D., Director - Wyoming; C. E. Smith, Ph.D., Coordinator for Planning and Evaluation; J. Breeden, Staff Associate, Montana; L. G. Larson, R.N., Nursing Coordinator; H. Thomson, Information Specialist; Donald Erickson, M.Ed., Education Specialist, Wyoming Office
  RAG Members: J. B. Gramlich, M.D., Regional Advisory Group Evaluation Committee; Louise Haney, R.N., Regional Advisory Group Evaluation Committee; William Johnstone, Regional Advisory Group Evaluation Committee
  Project Staff: Dona Freshman, Project Director

Region: Northlands
Dates: January 18-20, 1971
RMPS Staff Making Visit: Rhoda Abrams; Harold O'Flaherty
Persons Contacted:
  Core Staff: W. R. Miller, M.D., Program Director; R. J. Wilkins, Associate Director; L. B. Stadler, Program Management Director; L. G. Berglund, Project Management Coordinator; E. D. Leyasmeyer, Continuing Education Coordinator; R. N. Hill, Evaluation Officer; M. J. Deschler, Rehabilitation Coordinator; L. F. Cole, Research Sociologist; L. A. Sonderegger, Research Assistant
  RAG Members: Judge Stephen Maxwell, Past Chairman
  Project Staff: Judith Thierer, Nursing Course Director; Paul B. O'Donovan, M.D., Assistant Medical Director; Anita Smith, Ph.D., Project Director; Martin Leet, Evaluation Analyst

Region: Oregon
Dates: August 24, 1971
RMPS Staff Making Visit: Rhoda Abrams; Loretta Brown, Program Analyst, Evaluation Branch; Eugene Piatek, Program Analyst, Planning Branch
Persons Contacted:
  Core Staff: R. S. Reinschmidt, M.D., Coordinator; Kan Yagi, Ph.D., Consultant for Education and Evaluation; Mr. Bob Rasmussen, Coordinator for Program Administration; Miss Susan Rich, RN, Coordinator for Nursing and Allied Health; Mrs. Dale Caldwell, Coordinator for Information and Communications
  RAG Members: Dr. Hutchinson, Chairman; Mr. George Dewey, Chairman, Evaluation Committee
  Project Staff: Mrs. Elizabeth Burke, RN, Project Director; Mrs. Fern Martinsen, RN, Project Director

Region: Texas
Dates: Jan. 27-28, 1971
RMPS Staff Making Visit: Rhoda Abrams; Harold O'Flaherty
Persons Contacted:
  Core Staff: Charles B. McCall, M.D., Coordinator; David Ferguson, Acting Deputy Director; Stanley Burnham, Ph.D., Director of Professional Programs; Nathaniel D. Macon, Operations Officer; Robert O. Humble, Chief of Planning and Evaluation; Hubert Reese, Data Acquisition Specialist
  RAG Members: N. C. Hightower, Chairman
  Project Staff: Levi V. Perry, M.D., Project Director; Richard I. Evans, Ph.D., Associate Director for Evaluation

Region: Western New York
Dates: April 22-24, 1971
RMPS Staff Making Visit: Harold O'Flaherty
Persons Contacted:
  Core Staff: John Ingall, M.D., Program Director; Elsa Kellberg, Associate for Research and Evaluation
  RAG Members: Harry Sultz, DDS, Associate Professor, School of Medicine, State University of New York at Buffalo
  Project Staff: John Vance, M.D., Project Director; Joe Reynolds, Project Director

Region: Wisconsin
Dates: Sept. 2-3, 1971
RMPS Staff Making Visit: Spero Moutsatsos; Eugene Nelson, Program Analyst, Planning Branch
Persons Contacted:
  Core Staff: Dr. John Hirschboeck, Coordinator; Dr. Paul Tracy, Associate Coordinator for Program Development and Evaluation; Charles Lemke, Director of Evaluation; Paul Nutt, Assistant Coordinator for Program Development; Norma Lang, Nursing Coordinator; William Sheeley, Coordinator for Allied Health Manpower; Dr. Al Rim, Evaluation Consultant, Comprehensive Renal Disease Program
  RAG Members: Judge Rodney Lee Young, Chairman; Harold Gunther, Chairman, Review and Evaluation Committee; Kenneth Clark, Review and Evaluation Committee; Dr. Glen Hobekg, Review and Evaluation Committee; Dorothy Hutchinson, Review and Evaluation Committee; Dr. John Peterson, Review and Evaluation Committee; Dr. George Rowe, Review and Evaluation Committee; Dr. P. Richard Shall, Review and Evaluation Committee; Dr. Philip White, Review and Evaluation Committee
  Project Staff: Janet Kraegel, Project Director

APPENDIX B

ISSUES AND QUESTIONS FOR RMP EVALUATION VISIT

I. EVALUATION STAFFING AND RESOURCES

1. How is the evaluation function organized within the core? Where does it fit into the overall core organizational structure and how is it staffed (e.g., number of staff; full-time/part-time, etc.)?

2. What is the training and experience of the evaluation director as well as other staff functioning in this area? Are there any projected staffing needs for evaluation purposes?

3. Is there an RMP Evaluation Committee in the Region? If so, what is the composition and function of this Committee and what have been its major accomplishments?

4. What other resources outside of the RMP core are used for evaluation purposes? For example: medical school departments, schools of public health, departments of sociology, psychology, economics, etc., and to what end?

5. How much core money is being spent for the development and implementation of evaluation at the program and project level? What portion (%) of the core budget does this figure represent? In developing this figure you should consider staff salaries, consultant fees, travel and contracts. Estimate how much and what percentage of the amount awarded for the support of projects is being spent directly for evaluation purposes.

6. How much core money is being spent for the collection, analysis and storage of health and demographic data? In developing this figure you should consider staff salaries, consultant fees, travel, contracts and computer time.

II. PURPOSES AND STRATEGIES FOR CARRYING OUT EVALUATION

1. What are the major reasons and purposes served by carrying out evaluation in this Region? To accomplish these purposes what strategies have been developed? Who is responsible for carrying out these strategies?

III. PROJECT EVALUATION

1. At what point in the development of a project does the evaluator become involved? What is the extent and character of the involvement of the evaluator in proposed and ongoing projects?

2. What are the primary evaluative methodological approaches utilized, e.g., epidemiology, economics, sociology, systems analysis, education, peer judgment, psychology, biostatistics, etc.? What is usually measured?

3. Who conducts the evaluation? What steps are taken, if any, to encourage the acceptance of evaluation at the institutional level?

4. Have guidelines or a model been developed and disseminated to project staff and project sponsors to be followed in carrying out evaluation activities?
5. What feedback mechanisms, if any, have been developed for evaluation and how frequently do evaluators meet with project directors?

6. What proportion of projects are evaluated? How are these selected?

7. What have been the most significant project evaluations done to date?

IV. PROGRAM EVALUATION

1. Has the Region developed a philosophy, approach and/or methodology for measuring programmatic impact? If yes, what is to be measured, how, and who is responsible for carrying it out?

2. What is the nature of the Region's decisionmaking process with respect to assessing, reviewing and approving the Region's program evaluation strategy and methodology?

V. DATA

1. What ongoing data collection systems, if any, are or will be related to evaluation?

2. Are special data collection activities conducted for evaluation purposes?

VI. RELATIONSHIP OF EVALUATION TO DECISIONMAKING

1. Has any process been established to relate evaluation to the Region's decisionmaking process?

2. What are the program and project evaluation activities of the Regional Advisory Group (both retrospective and prospective)? What priority does the Regional Advisory Group place upon evaluation?

3. Have the results of the evaluation activities resulted in any significant program changes or modifications?

VII. PROBLEMS

1. What are or have been the most significant evaluation problems? What steps have been taken to alleviate these? What constraints have inhibited adequate solutions?

OPPE 2/4/71

APPENDIX C

REGIONAL MEDICAL PROGRAMS DIRECTORS OF EVALUATION

(Region -- Name and Title -- Discipline -- Percent of Time)

Alabama -- Ida Martha Reed, Coordinator, Community Research and Development -- 100%
Albany -- Raymond Forer, Ph.D., Assistant Coordinator for Evaluation -- Sociology -- 40%
Arizona -- Allen Humphrey, Ph.D., Evaluation -- Biostatistics -- 50%
Arkansas -- Roger Warner, M.S., Director of Planning and Evaluation -- Psychology -- 100%
Bi-State -- Ralph T. Ove, Ph.D., Planning Director -- Nuclear Chemistry -- 100%
California -- Jack E. Thomon, Ed.D., Coordinator for Evaluation -- Education -- 100%
Central New York -- Robert A. Schneider, M.D., Coordinator of Program Planning and Evaluation -- Instructional Technology -- 100%
Colorado/Wyoming -- James C. Syner, M.D., Associate Director, Project Administration and Health Systems Division -- Internal Medicine -- 100%
Connecticut -- NONE
Florida -- H. E. Hilleboe, M.D., Director, Planning and Evaluation -- Preventive Medicine and Public Health -- 50%
Georgia -- Donald Trantow, Director of Assessment -- Operations Research -- 100%
Greater Delaware Valley -- Donald Dyinski, F.S., Associate Director for Planning and Evaluation -- Electrical Engineering -- 100%
Hawaii -- Ruth Denney, M.A., Chief of Planning and Research Services -- Sociology
Illinois -- Harry Auerbach, M.P.H., J.S.D., Assistant Director for Research and Evaluation -- Biostatistics and Administration -- 100%
Indiana -- John Svann, Ph.D., Director, Educational Services -- Education -- 100%
Intermountain -- Mitchell Schorow, Ph.D., Assistant Coordinator, Education Planning and Evaluation Section -- Educational Psychology -- 100%
Iowa -- Phil Latessa, M.A., Director of Health Statistics -- Economics -- 100%
Kansas -- Charles H. Adair, Jr., Ph.D., Assistant Coordinator for Research and Evaluation -- Social Psychology -- 100%
Louisiana -- Patrick Scheer, M.S., Evaluator -- Business Administration -- 100%
Maine -- NONE
Maryland -- Vern McM=in, B.S., Associate Coordinator for Evaluation -- Economics -- 100%
Memphis -- Lewis N. Amis, Ph.D., Chief of Planning, Research and Evaluation -- Medical Economics -- 100%
Metropolitan Washington, D.C. -- Joel W. Novak, M.S., Director, Office of Program Appraisal -- Psychology -- 100%
Michigan -- Gaetane Laroque, Ph.D., Associate Program Coordinator for Program Planning and Evaluation -- Program Planning -- 100%
Mississippi -- Edwin B. Bridgforth, M.D., Program Evaluator -- Statistics -- 50%
Missouri -- Philip E. Morgan, M.D., Director of Planning and Methodology -- Ophthalmology -- 100%
Mountain States -- C. E. Smith, Ph.D., Coordinator for Planning and Evaluation -- Counseling and Psychology -- 100%
Nassau-Suffolk -- Rajah Prasad, M.A., Evaluator -- Urban Planning -- 100%
Nebraska -- Geo. L. Morris, Jr., Ed.D., Project Administrator, Operations and Evaluation for Continuing Education -- Psychology -- 100%
New Jersey -- James P. Harkness, Ph.D., Deputy Program Coordinator -- Sociology and Anthropology -- 100%
New Mexico -- Dudley Griffith, M.A., Assistant Director for Planning and Evaluation -- Psychology -- 100%
New Mexico -- Manuel F w, Ph.D., Associate for Human Relations and Evaluation -- Psychology -- 100%
New York Metropolitan -- John Eller, M.A., Evaluation Specialist -- Sociology, Methodology and Statistics -- 100%
North Carolina -- Manley Fishel, M.P.H., Acting Director of Evaluation -- Public Health -- 100%
North Dakota -- Lorraine P@.Ker, M.S., Associate Director -- Counseling and Guidance -- 100%
Northeast Ohio -- Leonard Chansky, M.A., Assistant Director, Evaluation -- Computer Science and Education -- 100%
Northern New England -- Edgar W. Francisco, III, Ph.D., Director of Planning and Evaluation -- Psychology -- 100%
Northlands -- Russell N. Hill, Ph.D., Evaluation Officer -- Education and Sociology -- 100%
Northwestern Ohio -- Keith Jenkins, M.S., Program Evaluator -- Education and Educational Administration -- 100%
Ohio State -- William A. Ternent, M.A., Director of Planning and Evaluation -- Communication -- 100%
Ohio Valley -- Anne B. Cook, B.S., Research Associate -- Business Administration -- 100%
Oklahoma -- R. W. Bexfield, M.A., Associate for Evaluation and Review -- Sociology -- 100%
Oregon -- Kan Yagi, Ph.D., Consultant for Evaluation and Education -- Psychology -- 50%
Puerto Rico -- Carmen Allende de Rivera, M.P.H.E., M.S., Head, Section of Biostatistics -- Biostatistics -- 100%
Puerto Rico -- Marta Tejada, M.S., Social Scientist -- Social Science -- 100%
Rochester -- NONE
South Carolina -- Clarence W. Bowman, B.S., Associate Coordinator, Planning, Operations and Evaluation -- Pharmacy -- 100%
South Dakota -- George R. Halter, Ed.D., Acting Director of Continuing Education -- Educational Administration -- 100%
Susquehanna Valley -- David Taylor, B.S., Coordinator of Research and Evaluation -- Business Administration -- 100%
Tennessee/Mid-South -- Michael Zubkoff, Ph.D., Head, Medical Economics -- Medical Economics -- 100%
Texas -- Robert O. Humble, M.A., Chief of Planning and Evaluation -- Sociology -- 100%
Tri-State -- Harold W. Keairnes, M.D., Coordinator for Evaluation -- Preventive Medicine -- 15%
Virginia -- Jack L. Mason, Ph.D., Education Sciences Officer -- Education -- 100%
Washington/Alaska -- Gaylord Duren, Ed.D., Assistant Director for Evaluation -- Education -- 100%
West Virginia -- David S. Hall, Ph.D., Behavioral Scientist -- Sociology -- 85%
West Virginia -- Joseph Costello, M.S., Biostatistician -- Statistics -- 100%
Western New York -- Elsa Kellberg, M.A., Associate for Assessment and Research -- Sociology -- 100%
Western Pennsylvania -- David E. Reed, M.D., Assistant Director for Evaluation -- Community Medicine -- 100%
Wisconsin -- Charles W. Lemke, M.P.H., Evaluation Coordinator -- Biology and Chemistry -- 100%