TO:       U/Associate Administrator for Life & Microgravity Sciences and Applications
          M/Associate Administrator for Space Flight
          JSC/AA/Director
          JSC/YA/Phase I Program Manager

FROM:     W/Assistant Inspector General for Inspections, Administrative Investigations, and Assessments

SUBJECT:  Observations and Recommendations on the Phase I NASA-Mir Science Program

The Office of Inspector General is conducting an inquiry into United States (U.S.) participation in the Russian Mir Space Station Program. As part of this inquiry, we briefly reviewed the planning, management, and accomplishments of the Phase I NASA-Mir Science Program (NMSP). We provided the Agency a draft of this report for comment and have incorporated its response, included in its entirety as Appendix G, as appropriate.

Although the program has produced some important benefits [Note 1], we believe improvements should be made that would enhance the overall success of the program as well as that of the International Space Station (ISS) science program. Specifically, NASA should: (1) require the Principal Investigators (PIs) to be more timely in reporting results and providing data on the experiments to the Agency; (2) improve the metrics established to evaluate program results; (3) ensure that the cognizant Headquarters-level oversight committee makes periodic, formal assessments of research progress and results; and (4) utilize independent science advisory or working groups to evaluate the results and value of the NMSP science program.

    I. Background. In 1992, the U.S. and the Russian Space Agency (RSA) agreed to a joint Shuttle-Mir science program in which one U.S. astronaut would spend approximately 3 months on board the Mir space station conducting research. This mission was known as NASA-1/Mir-18. Subsequently, NASA expanded cooperative efforts with the Russians and increased the total planned long-duration stays by U.S. astronauts on the Mir to seven. The NASA-Mir program was implemented quickly as a result of a high-level agreement between the U.S. and Russia for greater international cooperation.

    There was little time for the normal long-range planning and preparation that goes into a NASA program of this type. However, Agency managers viewed the U.S. presence on the Mir and participation in the program as an opportunity to perform some science in a long-duration microgravity environment.

    To help plan the science for the Mir program, NASA used a Payload Steering Committee (PSC). The PSC obtained advice and recommendations from several external advisory groups, including the Mir Science Working Group (chaired by Dr. Jeffrey Borer, Cornell University) as well as the NASA Advisory Committee and the Life and Microgravity Sciences Advisory Committee.

    The PSC was formed to review and approve, as appropriate, integrated NASA science and technology payload plans and allocations for Phase I of the ISS Program. It established the overall NMSP program guidelines, priorities, and performance metrics, and also recommended the specific experiments to be flown on the missions. The committee was chaired by the Deputy Associate Administrator for Life and Microgravity Sciences and Applications (Code U). The membership included the Deputy Associate Administrators for the Office of Space Science (Code S) and the Office of Mission to Planet Earth (Code Y), as well as the Johnson Space Center (JSC) Phase I Program Manager (See Appendix A for a description of the PSC) [Note 2].

    The Mir Science Working Group (Borer Group) was formed on an ad hoc basis to advise NASA on an overall approach for conducting science on board the Mir, as well as to review and recommend individual experiments. It first met in July 1993. During its deliberations, the group acknowledged various constraints on a Mir science program, including limited PI and crew planning time, potential electrical power limitations, and limited physical space and crew availability. Because of these constraints, the group recommended minimizing hardware and technical complexities, simplifying crew training requirements, and focusing on expanding on projects previously carried out on the ground and during short-duration flights [Note 3].

    The scientific research planned for Missions 2 through 7 involved 7 different disciplines: Advanced Technology, Earth Sciences, Fundamental Biology, Human Life Sciences, ISS Risk Mitigation, Microgravity, and Space Sciences. Within these disciplines, there were 80 different experiments planned, some of which NASA expected would be performed on more than 1 mission (See Appendix C) [Note 4].

    While the PSC retained responsibility for evaluating research implementation, responsibility for overall program implementation was assigned to the Phase I Program Manager at JSC [Note 5]. Within the JSC office, several Mission Scientists and a Science Working Group manage the implementation. The Mission Scientists are responsible for establishing the operational requirements for the experiments, integrating the science payloads into the mission plans, monitoring in-flight operations, evaluating experiment and program results, and helping to ensure final reports and data are made available to the scientific community and public. The Mission Scientists also coordinate related matters with the research PIs (drawn from within and outside NASA) who propose and conduct the experiments and publish the results. The JSC Science Working Group, headed by the lead Mission Scientist and a Russian counterpart, makes decisions regarding the scientific aspects of all the experiments and equipment requirements during the implementation phase. In doing so, this group routinely interacts with the PIs, Mission Control, the ISS office, and others involved with the Mir missions and science (See Appendix D for a description of the JSC/Russian working group structure).

    Missions 2 through 5 had been completed at the time of our review, Mission 6 was in progress, and Mission 7 was planned for January 1998. The NASA-5 mission produced significantly reduced science results, primarily because of a collision between a Progress supply craft and the Mir. The collision required the Mir crew to seal off the Spektr module, which housed many of NASA's microgravity experiments and related hardware. To minimize the impact on the science program, NASA planned to transport replacement hardware and related items for some of those experiments to the Mir on Missions 6 and 7. However, recurring problems continue to cause changes to the science experiments and mission schedule (e.g., powering off of systems due to the failure of Mir's central NCS computer on January 2, 1998).

II. Science Program Changes Would Enhance Mission Success.

    A. More Timely Dissemination of Scientific Data and Results. Part of NASA's strategic mission is to communicate to the scientific community, the commercial sector, and the public the science and technology that result from its programs. Some important findings have been publicized, and some experiment results have been published in scientific journals. Nevertheless, there are a substantial number of experiments conducted on completed missions for which the data and results have not been widely disseminated or made available to the scientific and commercial communities and the general public [Note 6]. Obtaining the NASA-Mir scientific data and final results and making them available in the most timely way contributes to achieving the Agency mission. It also provides Congress and the public evidence of the value of NASA's investment in this cooperative program.

    The Agency does not have a uniform or formal (contractual) requirement or timeframe by which PIs must submit their reports and data. Instead, it communicates through various means the expectation that PIs will provide data and reports to the Agency and subsequently publish the results. For example, PIs whose experiments were selected for the NMSP through a NASA Research Announcement (NRA) were informed that, "Investigators will have a 1-year period from the end of flight to complete their investigation, analyze results, and provide a final experiment report to NASA with suitable data sets for general release to the science community." [Note 7] That statement put the PIs on notice that NASA expected to receive results and data and that the Agency had a role in disseminating them.

    Also, the JSC Mission Scientists communicated the reporting requirement to the Phase I PIs through letters requesting that they submit certain reports and data on the experiment results [Note 8]. The Mission Scientists requested this information to allow NASA to monitor the experiments, ensure the PIs received all their data and samples, and help promote timely release of the results and data to the public.

    To evaluate how timely the PIs were in submitting the reports, data, and results on their experiments, we reviewed information on the NASA-2 and -3 missions. These missions had been completed long enough for the PIs to have issued all Preliminary and some Final Reports. Using information from the JSC Mission Science office on the reports received as of November 7, 1997, we identified 48 experiments performed on these 2 missions for which the PIs should have submitted either a Preliminary or Final Report by that date [Note 9]. Of the 48, 3 still lacked a Preliminary Report, and Final Reports had not been submitted for 20 of the 26 NASA-2 experiments. In addition, many of the reports that had been submitted were overdue at the time they were received. Of the Preliminary Reports submitted, 34 (70 percent) were late (by an average of 8 weeks), and 4 of the Final Reports were late (by an average of 1 month).
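    For illustration, the timeliness test we applied follows the reporting schedule described in Note 8: a Preliminary Science Results report is due 180 days, and a Final report 1 year, after the PI receives the flight data and/or samples. The following is a minimal sketch of that test (in Python, with hypothetical dates; it is not a NASA tool):

        from datetime import date, timedelta

        # Due dates per Note 8, measured from the date the PI receives
        # the flight data and/or samples (not necessarily landing day).
        PRELIMINARY_DUE = timedelta(days=180)
        FINAL_DUE = timedelta(days=365)

        def report_status(data_received, submitted, due_offset, as_of):
            """Classify one report as on time, late, overdue, or pending."""
            due = data_received + due_offset
            if submitted is None:
                return "overdue" if as_of > due else "pending"
            return "late" if submitted > due else "on time"

        # Hypothetical example, checked against the November 7, 1997,
        # cutoff date used in our analysis.
        print(report_status(date(1996, 4, 1), date(1996, 12, 1),
                            PRELIMINARY_DUE, date(1997, 11, 7)))  # late
        print(report_status(date(1996, 4, 1), None,
                            FINAL_DUE, date(1997, 11, 7)))        # overdue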

    The PIs we contacted cited several factors causing delays in submitting the reports. These included delays in the PIs obtaining the data or samples, a lack of the additional funding needed by the PIs to complete the analysis, PIs considering the reporting requirements an administrative burden or a low priority (particularly if the preliminary data analysis did not reveal any new or interesting results), and PIs electing to batch results from an experiment conducted on multiple flights into a single report at the conclusion of the last mission [Note 10].

      While some delays may be justified, we believe that the PIs have an obligation to make the results and data available to NASA as promptly as possible. Further, program officials have a responsibility to establish and enforce standards for timeliness [Note 11]. Without a formal reporting requirement in the contract or grant with the PI, the Agency can neither control nor enforce timely report submissions. Ensuring that the results of the remaining Phase I experiments are released in a timely way and widely distributed could have several positive effects: it would provide a basis for considering and selecting these PIs for future NASA research and for ensuring all lessons learned from Mir are applied; it would contribute to more fully achieving the Agency mission to generate knowledge and enhance commercial technology; and it would greatly aid in justifying future ISS science missions to Congress and the public and in obtaining the necessary funding.

      Recommendation 1. The Associate Administrator, Code U, should coordinate with other appropriate Headquarters offices to:

      a. Develop an effective Agency policy that requires timely analysis, reporting and dissemination of data and results on Mir and ISS-related science. Strong consideration should be given to: (1) structuring the language of the individual grants and contracts governing the experiments to include specific reporting requirements and timeframes, (2) adequately funding analysis, and (3) holding some experiment funding in reserve to enforce the reporting requirements.

      b. Include a criterion that considers prospective researchers' timeliness and effectiveness in reporting on prior NASA-funded experiments when reviewing and selecting future proposals for science and research work on flight projects.

    Management Response. Management stated that the external peer review process was working well. It stated that withholding funds from PIs as an incentive to submit their reports and data in a timely and complete manner was not practical because the cost of research is an ongoing requirement. Management further stated that several years may elapse between the time the PI receives the data and the actual publication date, since neither NASA nor the PI controls the schedule of scientific meetings and journal publication. Therefore, the Agency did not understand how withholding funds would address the issue.

    OIG Evaluation and Comments. The response indicates that NASA is satisfied with the timeliness in making Phase I NMSP results and related data available to the public and scientific community, and that no action is needed. As discussed in the report, there were numerous experiments for completed Mir missions for which the expected preliminary and final reports and data either had not been submitted to NASA or were submitted late. We continue to believe that this situation can, and should, be improved. Accordingly, we reaffirm the recommended actions and request that management reconsider its position and agree to make improvements in this area.

    The high cost of performing flight experiments and the limited opportunity to reproduce these experiments make it even more imperative that NASA receive the operational accomplishments reports, preliminary science results analyses, and the final reports and data in a timely manner. While NASA and the PIs cannot control acceptance and publication of results in peer-reviewed journals, they can control submission of these reports and data and their timely dissemination to the public and scientific community. Code U is currently revising its policy on the review, selection, and support of research. Any new policy should clearly identify PI responsibilities for submitting final reports and complete data sets to NASA. This policy should also include specific language defining the timeliness of those submissions.

    We accept management's position that the peer-review process is effective in selecting the best research. However, it is our understanding that the peer review and OLMSA (Code U) evaluations of prior results in proposals focus mostly on what has been published in peer-reviewed journals and sources. The recommended action was based on the belief that, when the peer group or OLMSA ranks and selects proposals, some consideration should be given to the timeliness, completeness, and effective dissemination of the information, and not solely to peer-reviewed publication.

    In reconsidering final action on these issues (as well as others in the report), management is encouraged, where it disagrees with the OIG's recommendations, to identify and implement alternative actions that would be appropriate and effective.

    Recommendation 2. The JSC Mission Scientist(s) on the NMSP should:

      a. Re-emphasize to all PIs the importance of timely submission of the Preliminary and Final Reports and data sets, and take prompt follow-up action to obtain those that are overdue.

      b. Identify and correct the impediments that hamper the PIs' timely completion and dissemination of the results.

    Management Response. Management indicated that symposia had been held, and others were planned, to present some of the NMSP research results, and that there had already been 15 related publications. Management also stated that the JSC mission scientists already emphasize timely submission of Preliminary and Final Reports and associated data sets, including pursuing corrective action as necessary. However, it was noted that contractual leverage is reduced once a flight experiment has flown because most of the NASA funding has already been applied.

    OIG Evaluation and Comment. We were aware of the symposia and commend NASA on its efforts in this regard. These symposia and the publications help communicate the NMSP results. However, the report addressed the more fundamental issue of ensuring that NASA obtains the required reports and related data sets in a timely and complete manner. NASA can then monitor the research it has funded and disseminate the data on a wide basis to the public and interested scientists. The final data sets are particularly important since they are used by other scientists both in their own research and for further analysis and comparison. The importance of NASA rigorously ensuring the receipt of final data sets from the PIs after the 1-year proprietary period has been emphasized by the Space Studies Board of the National Research Council in its report "Archiving Microgravity Flight Data Samples" (National Academy Press, Washington, DC, 1996).

    Also, we recognize that the JSC mission scientists' responsibilities cover many aspects of the NMSP research beyond monitoring and evaluating progress. The scientists we spoke to seemed genuinely interested in ensuring the reports and data were submitted promptly by the PIs. However, as noted in management's response, their leverage to enforce or ensure this was limited. The intent of our recommendation is to suggest improvements to that situation. Because we still consider obtaining and disseminating all the Phase I data in the most timely way to be very important, we reaffirm the need for the actions in Recommendation 2 and request that management consider this issue more closely. In deciding final action, we also suggest that management consider tasking one of the science advisory or working groups to review this area, determine the full extent of the issues and their causes, and recommend improvements to help ensure similar issues do not occur on ISS.

    B. Metrics for Evaluating Program Performance Could Be Improved. To measure accomplishment of the NMSP goals and objectives, the PSC developed and approved program metrics. In October 1996, Code U distributed the metrics to the JSC Phase I Program Manager and others involved in the program for use in evaluating Phase I missions and results.

    Eleven metrics were established to measure areas such as the success rate for integrating and flying the experiments, the efficiency of experiment data/sample return, crew-time spent on science, and the number of experiments publishing results in scientific journals and other sources [Note 12].

    The PSC and Code U efforts to establish metrics have resulted in relevant and useful tools for evaluating the program's overall success. However, we believe NASA should continue to refine and improve the use of metrics to evaluate individual mission and overall program success.

      1. Better Metrics. Our review of the metrics issued by the PSC to evaluate NMSP performance and success identified areas that could be improved. First, the only metric that specifically addresses the quality of the research is #9, the number of publications in respected, peer-reviewed journals. Providing oversight to ensure that NASA-funded research is of the highest quality is an important program and Agency function. Accordingly, one or more metrics should be developed to evaluate other quality aspects of each experiment. Some of these metrics should be applied post-flight, including whether the research objectives were achieved and whether the experiment procedures and equipment were adequately designed and tested. Post-flight assessment of quality would add to the overall evaluation of the program, for example, by helping to determine whether the PIs and their experiments should be selected for future ISS and other missions.

      Second, cost management has not been addressed as a metric. During our discussions with some of the PIs, we learned that analysis is sometimes not carried out due to a lack of funds. According to them, this can happen either as a result of minimal funds being allotted initially for the analysis or of funds running out at the end. Reasonably estimating and controlling the costs of the experiments, including ensuring sufficient funds for analysis, is vital to any assessment of program success. For this reason, we believe one or more cost-related metrics should be developed for the NMSP program (as well as for future ISS science).

      Finally, certain metrics could be reworded to be more effective [Note 13].

    Recommendation 3. The Associate Administrator, Code U, as PSC Chair, should request that the committee:

      a. Develop and issue additional metrics to evaluate the overall quality and cost control of NMSP research and results.

      b. Use ongoing assessments to re-evaluate the existing metrics as a success measurement tool.

    Management Response. See Recommendation #4.

    OIG Evaluation and Comment. See Recommendation #4.

      2. Use of Metrics. In discussing the NASA-Mir science with the lead JSC Mission Scientist, we asked what metrics were established and being used to evaluate the NMSP scientific results. We were told that there had not been time to establish formal metrics due to the other tasks involved in getting the experiments flown and operational. The only two PSC-issued metrics that we found being monitored and used in preparing the Science Mission Summaries for each mission were (a) the number of experiments actually flown vs. the number in the mission plan, and (b) the number of individual experiment sessions [Note 14] accomplished vs. the number scheduled (metrics #2 and #3, respectively, as shown in Appendix E).

      We believe that the other PSC metrics (except #10 and #11) [Note 15] are also relevant to evaluating the science on each mission and are important tools that the JSC Program Manager and Mission Scientists should be using, as part of their program implementation responsibilities, to assess mission performance. For example, metric #4 measures the operational success rate and efficiency of data return for each experiment or instrument. Routinely tracking this and the other applicable established metrics for each mission would allow management to monitor program progress more effectively and identify needed changes.

    Recommendation 4. The Associate Administrator, Code U, as PSC Chair, should ensure that all key NMSP program officials having responsibility over the Mir science experiments, especially the JSC Program Manager and Mission Scientists, use the PSC approved metrics to prepare their reports assessing performance on each mission.

    Management Response. In commenting on Recommendations 3 and 4, management stated that other metrics, in addition to those established by the PSC, would be used to assess NMSP results and identify "lessons learned." Those were (a) milestones in the Mir contract with Russia, and (b) metrics established by OLMSA for the ISS program. Management also explained that OLMSA reviews NMSP research activities prior to, during, and after commencement of each new Mir stage. Finally, management indicated that JSC had been requested to contact all investigators to assure that they have adequate funds. Since no investigators had contacted NASA regarding that issue, management believes the report's assertion that there was inadequate funding is unsubstantiated.

    OIG Evaluation and Comments. The PSC-established metrics were considered to be adequate tools for assessing NMSP performance, as was noted in the report. However, we did recommend adding some metrics to specifically address the "quality" and "cost-control" aspects of the individual experiments. Using the Russian contract and ISS program metrics, along with the PSC metrics, to assess overall NMSP performance and success is acceptable, as long as the metrics used include some that specifically address experiment quality and cost control. We understand that the ISS metrics referred to in the response are still under development as part of an effort to establish strategic planning performance measures for all types of Code U research. Therefore, we reaffirm Recommendation 3.a. and request that management include the recommended metrics in those developed and issued for the ISS program. Regarding the cost metric, we would envision a simple metric such as determining whether the PI achieved the stated research objectives within the negotiated budget.
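    For illustration only (the report does not prescribe a particular formulation), such a cost metric might be expressed as

        \[ \text{cost performance} = \frac{\text{actual cost of the experiment, including analysis}}{\text{negotiated budget}} \]

    with a ratio of 1.0 or less, together with achievement of the stated research objectives, counting as success.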

    The actions addressed in Recommendations 3.b. and 4 dealt more with the need to apply the metrics in assessing research results and performance immediately following each NMSP mission. Since the last NASA-Mir mission is underway and the entire NMSP program is therefore almost complete, we do not feel there is a compelling need to assess each individual mission against the metrics. Therefore, management should decide whether to perform such an evaluation. We will monitor NASA's use of the established metrics to evaluate the overall NMSP research program and the actions taken to apply the "lessons learned" and other results to the ISS and other flight research programs.

    Regarding management's comment on inadequate funding for research, as explained in the report, this was only one of several reasons cited by PIs for not submitting their reports and data to NASA. We did not challenge these reasons; we reported them so that management could follow up on all the experiments that were late, determine if the delays were justified, and make appropriate adjustments to ensure timely products. We still believe that it would be prudent and beneficial to do so.

    C. The Process for Evaluating Scientific Results Needs Improving.

      1. No PSC Evaluation. At the time of our review, the PSC had neither received nor reviewed any implementation reports or overall assessments (based on the established metrics) from the JSC Program Management office to monitor research implementation. According to the PSC Chairman, there was no current plan to perform an evaluation until after all missions are completed. Nevertheless, we believe it is important for the PSC to monitor ongoing research and results and, to the extent practicable, make any corrections to priorities or other areas based on program results. The group clearly recognized the need to maintain independent oversight of the research when it developed its charter. We agree that this is a key Headquarters function and that the PSC needs to perform it on a routine basis to help maximize the science program's results and success.

    Recommendation 5. The Associate Administrator, Code U, as PSC Chair, should request all science and technical reports or assessments prepared by JSC (or others) regarding overall Phase I implementation. The PSC Chair should periodically convene the PSC to review this information and evaluate NMSP research progress, results, and priorities.

    Management Response. Management agreed that oversight is an important Headquarters function. Because the Phase I NMSP program was nearing its end and there had been many organizational changes affecting the PSC membership, management decided to disestablish the PSC and transfer its functions to the Space Station Utilization Board (see Appendix H).

    OIG Evaluation and Comments. The transfer of PSC responsibilities regarding Phase I NMSP research seems prudent and acceptable. While management's response neither agreed nor disagreed with the recommended action, we presume that the intention is for the Utilization Board chairperson to take the action described.

      2. Contractor Science Assessment Questioned. The problems created by the collision of the Progress supply craft and Mir caused NASA to reassess continuation of the Phase I program. As part of this reassessment, in July 1997, Code U tasked a contractor (ANSER Corporation) to provide the Agency with:

      a. An expert assessment of the scientific benefits accrued to date from the NMSP.

      b. An expert assessment of the anticipated scientific value of continuing the NMSP into the sixth and seventh long-duration missions with the Mir.

      c. An assessment of the tangible research results developed thus far in the NMSP.

      d. A preliminary assessment of the potential for future U.S.-Russian collaborative space flight research.

    The contractor issued a final report on September 30, 1997. Although Agency officials considered the report information useful in deciding to continue the NASA-Mir program, we have two concerns with using a contractor in this instance.

    First, we do not believe that the first two objectives (assessing the scientific benefits to date and the value of continuing the program) were an appropriate task for a contractor. Such assessments, while certainly germane and vital to the decision whether to continue the program, are more appropriate for the peer review and advisory committee process that the Agency routinely uses. The contractor's assessment represents a single opinion and is not as broad or objective as the peer review process allows. Therefore, peer reviews are a more appropriate and objective method with which to assess the results and value of the NMSP [Note 16], as well as of ISS science and research.

    Second, we believe that it was neither cost-effective [Note 17] nor appropriate for a Headquarters program office to pay a contractor to collect and review data that was readily available at JSC. We learned that most of the information on the experiments and results used in the ANSER assessment was obtained either from JSC or from the PI presentations at the August 1997 science symposia [Note 18]. Also, as discussed earlier in this report, assessments of program results are something that should have been (and, to some extent, already were) prepared and used by the JSC Program Manager's office to monitor implementation. We believe Code U officials should have obtained information directly from JSC and performed their own assessment of program performance. Regarding any assessment of the value of the scientific results, we believe that is something more appropriate for the current advisory and peer review processes.

    Recommendation 6. The Associate Administrator, Code U, should:

      a. Ensure that all required expert assessments regarding the scientific results, benefits, and value of the NMSP or ISS be conducted by an independent group, ideally through one of the established NASA Science Advisory or Working Groups.

      b. Obtain all future information on the implementation and results of the NMSP from the JSC Program Management office rather than commission a contractor to provide it.

    Management Response. The response indicated that NASA had a peer review process that was reviewed and approved by high-level external groups such as the National Academy of Sciences and was held to be a model for others. It was explained that OLMSA had always obtained technical information from JSC program managers but reserved the right to obtain independent reports from others. In that regard, the ANSER tasking was considered appropriate, and the report was regarded not as a scientific assessment but as one that met a very specific requirement for a semi-qualitative, top-level, independent assessment of whether the post-collision Mir configuration was capable of continuing to support research objectives.

    Management also cited concern with a statement in the Conclusion section of the report that "it is essential for the Agency to ensure that the scientific experiments are value added." Management believes that this questions the underlying scientific validity of the Phase I program. Accordingly, they suggested that the phrase be reworded to "continue to communicate the high scientific merit."

    OIG Evaluation and Comments. The OIG did not intend to question the quality or appropriateness of NASA's peer review process. In discussing our concerns about the tasks assigned to the contractor, ANSER, we stated our belief that the expert assessments called for (of scientific benefits and of continuing the Mir missions) were more appropriate for an independent group such as one of the established NASA Science Advisory and Working Groups. We used the term "peer review" to include those processes. Discussions subsequent to the draft report revealed that NASA, along with some in the scientific community, defines the term "peer review" as relating strictly to the initial screening and selection of proposals (for flight, in the case of the NMSP), rather than including, as we did, a possible post-flight assessment process. We have reworded Recommendation 6.a. to clarify this point.

    We recognize the right, and at times the necessity, of NASA to obtain contractor support and independent reports. However, in this case, we continue to believe that it was neither appropriate nor cost-effective to do so. In our opinion, the JSC Program Management Office (and Mission Scientists) should be monitoring and evaluating the experiments and results, and generating adequate reports and other information needed by the Program Scientist or NASA Headquarters for decision-making. Whenever possible, Code U, in its oversight role, should have internal processes and personnel to analyze program information on the NMSP and ISS rather than unnecessarily rely on a contractor.

    Regarding management's comment on our use of the term "value added," we have reworded the Conclusion to clarify the apparent misunderstanding. Our use of the term was meant to apply to ensuring that selected research projects continue to provide new, valuable results and data after launch and into operational status.

    D. Conclusion.

    Because of the limited flight opportunities for scientists to perform microgravity experiments, it is essential that valuable scientific results and related data be disseminated as promptly and widely as possible, and that the overall program objectives be achieved. We believe the actions recommended will help achieve these results.



    David M. Cushing

    8 Enclosures

    cc:
    Distribution


    DISTRIBUTION

    National Aeronautics and Space Administration (NASA) Officials-In-Charge

    A/Administrator
    AD/Acting Deputy Administrator
    AT/Associate Deputy Administrator (Technical)
    G/General Counsel
    I/Associate Administrator for External Relations
    J/Associate Administrator for Management Systems and Facilities
    L/Associate Administrator for Legislative Affairs
    P/Associate Administrator for Public Affairs
    Z/Associate Administrator for Policy and Plans

    NASA Advisory Officials

    Chairman, NASA Advisory Committee
    Chairman, NASA Aerospace Safety Advisory Panel
    Chairman, Advisory Committee on the International Space Station
    Chairman, Shuttle-Mir Rendezvous and Docking Missions and ISS Operational Readiness Task Force
    Chairman, Life and Microgravity Sciences Advisory Committee
    Chairman, Mir Science Working Group

    Chairman and Ranking Minority Member of each of the following Congressional Committees and Subcommittees:

    Senate Committee on Appropriations
    Senate Subcommittee on VA-HUD-Independent Agencies
    Senate Committee on Commerce, Science and Transportation
    Senate Subcommittee on Science, Technology and Space
    Senate Committee on Government Affairs
    House Committee on Appropriations
    House Subcommittee on VA-HUD-Independent Agencies
    House Committee on Government Reform and Oversight
    House Subcommittee on National Security, International Affairs, and Criminal Justice
    House Committee on Science
    House Subcommittee on Space and Aeronautics


    For copies of appendices, contact Dana Mellerio, as above.


    REPORT END NOTES

    Note 1. Important NMSP results include a better understanding of the microgravity and radiation environments in which NASA Space Station crews operate.

    Note 2. The PSC was chaired by the Associate Administrator for Code U, who was previously the Deputy Associate Administrator. Subsequently, on January 26, 1998, after our office issued the draft report, the PSC was abolished to reflect changes in NASA's organizations (see Appendix H).

    Note 3. See Appendix B, Recommendations of the Mir Science Working Group (Borer Group).

    Note 4. The exact number of experiments planned and/or flown can vary because there are several ways to count experiments; e.g., some experiments may have multiple investigator teams working on different aspects of what is listed as one experiment, and some experiments flown on more than one mission may be listed multiple times.

    Note 5. The Task Force on Shuttle-Mir Rendezvous Docking Missions recommended establishing a separate Phase I program management office at JSC.

    Note 6. Results and data on NASA-funded research are disseminated in several ways. PIs can publish their results in professional, peer-reviewed journals. NASA can make Final and Preliminary results available during symposiums, on web sites and in other ways. Also, it can make data (stored in repositories) available to various user communities.

    Note 7. NRAs are widely distributed public announcements describing the opportunities for science and research experiments and soliciting proposals for performing them. NMSP experiments for Microgravity and Life Sciences were selected via NRAs. Experiments sponsored by other enterprises such as Earth Sciences (formerly Mission to Planet Earth) were selected in other ways. Some were an add-on to experiments already approved and funded and others were selections from proposals already received by NASA from prior solicitations.

    Note 8. The requested reports and data are as follows: an Operational Accomplishments report 30 days after flight describing how well the hardware operated and what kind of data was obtained, a Preliminary Science Results report (and data) 180 days after flight describing any substantive findings or observations resulting from early analysis, and a Final report (and data) within 1 year after flight describing conclusive results. The 1-year period generally begins when the PI receives the data and/or samples, which on some experiments is not until after the end of the mission.

    Note 9. Reports for experiments in the ISS Risk Mitigation and Space Medicine areas are not submitted to the Mission Scientist, so we did not include them in our analysis. We also excluded the Operational Accomplishments reports and experiments that were characterized as "facilities." The latter support other experiments and are not considered true experiments (e.g., the glove box). Final reports for the NASA-3 mission were not yet due at the time of our analysis and thus were also excluded.

    Note 10. In some instances, batching results for several missions into a single report is appropriate, for example, with experiments covering several missions or where crew confidentiality must be protected.

    Note 11. JSC mission personnel track the science/research via scheduled sessions with the crew and through the crew timeline process. Also, some informal reviews are conducted shortly after each mission to discern "lessons learned" and to monitor whether PIs have received or will receive the data they are expecting. However, this information is not detailed and is not widely shared with the science community or the commercial sector. As a result, these groups lack information that would be useful to them, for example, in better designing flight experiments and/or applying results to other related research, including ground-based research.

    Note 12. See Appendix E for details on these metrics. It should be noted that Code U has an ongoing effort to clarify and evaluate the metrics.

    Note 13. For example, metric #5 does not measure the time allocated to science; rather, it addresses total U.S. crew time on Mir. To be useful in evaluating science, metric #5 should either be reworded to include only that time spent on the NMSP experiments or another metric developed to measure actual science time. Metric #6 may not be very useful as currently worded since the RSA contract requirement is a "not-to-exceed" figure of 25 percent of total cosmonaut time available. Therefore, the amount of time "promised" as described in the denominator of the metric is not a specific, fixed number. Rewording metric #6 to reflect the "scheduled" time (which is a known, specific number) for the experiments on each mission would provide a more accurate and meaningful measure for the program.
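    For illustration only (our notation; see Appendix E for the metric definitions as issued), the rewording of metric #6 could be expressed as

        \[ \text{metric \#6 (current)} = \frac{\text{cosmonaut time spent on NMSP experiments}}{\text{cosmonaut time "promised" under the RSA contract (capped at 25 percent, not fixed)}} \]

        \[ \text{metric \#6 (reworded)} = \frac{\text{cosmonaut time spent on NMSP experiments}}{\text{cosmonaut time scheduled for the experiments on the mission}} \]

    where the scheduled time in the denominator is a known, specific number for each mission.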

    Note 14. A session is a single observation, sample, or similar event in performing the experiment (e.g., taking a blood sample).

    Note 15. These two metrics are more relevant to the entire program rather than individual missions and, thus, are excluded.

    Note 16. At the conclusion of Mission 7, a thorough review of the science program needs to occur to promote quality research for ISS Phases II and III. As discussed in the background section, the NMSP was hurriedly assembled under many constraints, with NASA selecting many experiments that related to ongoing or prior research (see Appendix F).

    Note 17. The cost of this report (the amount that might have been saved) was not available because it was not individually negotiated and priced but was performed under a larger $4.1 million long-term contract (1-year basic and four 1-year options) between Code U and ANSER.

    Note 18. A post-flight science symposium was held at JSC in August 1997 to present and discuss the then-available results of some of the NMSP experiments.