[Code of Federal Regulations]
[Title 14, Volume 4]
[Revised as of January 1, 2008]
From the U.S. Government Printing Office via GPO Access
[CITE: 14CFR417.305]

[Page 578-579]
 
                     TITLE 14--AERONAUTICS AND SPACE
 
     CHAPTER III--COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION 
              ADMINISTRATION, DEPARTMENT OF TRANSPORTATION
 
PART 417_LAUNCH SAFETY--Table of Contents
 
                     Subpart D_Flight Safety System
 
Sec. 417.305  Command control system testing.

    (a) General. (1) A command control system, including its subsystems 
and components, must undergo the acceptance testing of paragraph (b) of
this section when new or modified. For each launch, a command control 
system must undergo the preflight testing of paragraph (c) of this 
section.
    (2) Each acceptance and preflight test must follow a written test 
plan that specifies the procedures and test parameters for the test and 
the testing sequence. A test plan must include instructions on how to 
handle procedural deviations and how to react to test failures.
    (3) If hardware or software is redesigned or replaced with hardware 
or software that is not identical to the original,
the system must undergo all acceptance testing and analysis with the new 
hardware or software and all preflight testing for each launch with the 
new hardware or software.
    (4) After a command control system passes all acceptance tests, if a 
component is replaced with an identical component, the system must 
undergo testing to ensure that the new component is installed properly 
and is operational.
    (b) Acceptance testing. (1) All new or modified command control 
system hardware and software must undergo acceptance testing to verify 
that the system satisfies the requirements of Sec. 417.303.
    (2) Acceptance testing must include functional testing, system 
interface validation testing, and integrated system-wide validation 
testing.
    (3) Each acceptance test must measure the performance parameters 
that demonstrate whether the requirements of Sec. 417.303 are 
satisfied.
    (4) Any computing system, software, or firmware that performs a 
software safety critical function must undergo validation testing and 
satisfy Sec. 417.123. If command control system hardware interfaces 
with software, the interface must undergo validation testing.
    (c) Preflight testing--(1) General. For each launch, a command 
control system must undergo preflight testing to verify that the system 
satisfies the requirements of Sec. 417.303 for the launch.
    (2) Coordinated command control system and flight termination system 
testing. For each launch, a command control system must undergo 
preflight testing during the preflight testing of the associated flight 
termination system under section E417.41 of appendix E of this part.
    (3) Command transmitter system carrier switching tests. A command 
transmitter system must undergo a test of its carrier switching system 
no earlier than 24 hours before a scheduled flight. The test must 
satisfy all of the following:
    (i) Automatic carrier switching. For any automatic carrier switching 
system, the test must verify that the switching algorithm selects and 
enables the proper transmitter site for each portion of the planned 
flight; and
    (ii) Manual carrier switching. For any manual carrier switching, the 
test must verify that the flight safety system crew can select and 
enable each transmitter site planned to support the launch.
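The automatic-switching check in paragraph (c)(3)(i) amounts to confirming, for every portion of the planned flight, that the switching algorithm selects the transmitter site the flight plan expects. A minimal sketch of such a verification harness follows; the site names, segment times, and the `demo_algorithm` stand-in are all hypothetical illustrations, not part of the regulation:

```python
from typing import Callable, List, Tuple

# Site plan: each portion of the planned flight (seconds from liftoff)
# paired with the transmitter site expected to carry it. Hypothetical values.
SitePlan = List[Tuple[float, float, str]]

SITE_PLAN: SitePlan = [
    (0.0, 120.0, "site-A"),    # liftoff through early ascent
    (120.0, 300.0, "site-B"),  # downrange handover
    (300.0, 480.0, "site-C"),  # final coverage segment
]

def verify_switching(select_site: Callable[[float], str],
                     plan: SitePlan, step: float = 1.0) -> List[str]:
    """Sample the flight timeline and record every mismatch between the
    switching algorithm's selected site and the planned site."""
    failures = []
    for start, end, expected in plan:
        t = start
        while t < end:
            selected = select_site(t)
            if selected != expected:
                failures.append(
                    f"t={t:.0f}s: got {selected}, expected {expected}")
            t += step
    return failures

# Stand-in for the switching algorithm under test (hypothetical).
def demo_algorithm(t: float) -> str:
    if t < 120.0:
        return "site-A"
    if t < 300.0:
        return "site-B"
    return "site-C"

print(len(verify_switching(demo_algorithm, SITE_PLAN)))  # 0 mismatches
```

An empty failure list corresponds to a passing test; any entry pinpoints the flight segment where the algorithm enabled the wrong site.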
    (4) Independent radio frequency open loop verification tests. A 
command control system must undergo an open loop end-to-end verification 
test for each launch as close to the planned flight as operationally 
feasible and after any modification to the system or break in the system 
configuration. The test must:

    (i) Verify the performance of each element of the system from the 
flight safety system displays and controls to each command transmitter 
site;
    (ii) Measure all system performance parameters received and 
transmitted using measuring equipment that does not physically interface 
with any elements of the operational command control system;
    (iii) Verify the performance of each flight safety system display 
and control and remote command transmitter site combination by repeating 
all measurements for each combination, for all strings and all 
operational configurations of cross-strapped equipment; and
    (iv) Verify that all critical command control system performance 
parameters satisfy all their performance specifications. These 
parameters must include:
    (A) Transmitter power output;
    (B) Center frequency stability;
    (C) Tone deviation;
    (D) Tone frequency;
    (E) Message timing;
    (F) Status of each communication circuit between the flight safety 
system display and controls and any supporting command transmitter 
sites;
    (G) Status agreement between the flight safety system display and 
controls and each supporting command transmitter site;
    (H) Fail-over conditions;
    (I) Tone balance; and
    (J) Time delay from initiation of a command at each flight safety 
system control to transmitter output of the command signal.
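Paragraphs (c)(4)(iii) and (iv) together require repeating the full measurement set for every display-and-control/transmitter-site combination and comparing each critical parameter against its performance specification. A sketch of that bookkeeping follows; the parameter names, spec limits, console and site identifiers, and the `demo_measure` readings are hypothetical placeholders (real limits come from the system's own performance specifications):

```python
import itertools
from typing import Callable, Dict, List, Tuple

# Hypothetical spec bands for a few of the parameters listed in (c)(4)(iv).
SPEC_LIMITS: Dict[str, Tuple[float, float]] = {
    "transmitter_power_w": (400.0, 600.0),
    "center_freq_mhz": (424.95, 425.05),
    "tone_deviation_khz": (28.0, 32.0),
    "command_delay_ms": (0.0, 50.0),
}

DISPLAYS = ["console-1", "console-2"]   # hypothetical display/control names
SITES = ["site-A", "site-B"]            # hypothetical transmitter sites

def check_combination(measured: Dict[str, float]) -> List[str]:
    """Return the names of parameters that fall outside their spec band."""
    return [name for name, value in measured.items()
            if not (SPEC_LIMITS[name][0] <= value <= SPEC_LIMITS[name][1])]

def run_open_loop_test(measure: Callable[[str, str], Dict[str, float]]):
    """Repeat the measurement set for every display/site combination,
    collecting any out-of-spec parameters per combination."""
    results = {}
    for display, site in itertools.product(DISPLAYS, SITES):
        measured = measure(display, site)  # external measuring equipment
        results[(display, site)] = check_combination(measured)
    return results

def demo_measure(display: str, site: str) -> Dict[str, float]:
    """Stand-in for independent measuring equipment (hypothetical readings)."""
    return {"transmitter_power_w": 500.0, "center_freq_mhz": 425.0,
            "tone_deviation_khz": 30.0, "command_delay_ms": 12.0}

out = run_open_loop_test(demo_measure)
print(all(not v for v in out.values()))  # True when every combination is in spec
```

Note that the regulation requires the measuring equipment to be physically independent of the operational system; here that independence is only represented by passing `measure` in as an external callable.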
    (d) Test reports. If a Federal launch range oversees the safety of a 
launch, the range's requirements are consistent with this subpart, and 
the range provides and tests the command control system, a launch 
operator need only obtain the range's verification that the system 
satisfies all the test requirements. For any other case, a launch 
operator must prepare or obtain one or more written reports that:
    (1) Verify that the command control system satisfies all the test 
requirements;
    (2) Describe all command control system test results and test 
conditions;
    (3) Describe any analysis performed instead of testing;
    (4) Identify by serial number or other identification each test 
result that applies to each system or component;
    (5) Describe any test failure or anomaly, including any variation 
from an established performance baseline, each corrective action taken, 
and all results of any additional tests; and
    (6) Identify any test failure trends.