
DEFENSE ACQUISITION GUIDEBOOK
Chapter 13 – Program Protection

13.7. Countermeasures

13.7.1. Anti-Tamper

13.7.1.1. Critical Technologies (CT)

13.7.1.2. Anti-Tamper Considerations

13.7.1.3. Anti-Tamper Execution

13.7.1.3.1. Process

13.7.1.3.2. Sustainment

13.7.1.3.3. Packaging

13.7.1.4. Anti-Tamper Disclosure Guidelines

13.7.1.5. DoD Anti-Tamper Executive Agent (ATEA) Office

13.7.1.6. Anti-Tamper Verification and Validation (V&V)

13.7.1.7. Anti-Tamper and Verification and Validation (V&V) Plan Approval

13.7.2. Information Assurance (IA)

13.7.2.1. Critical Program Information (CPI) in DoD Information Systems

13.7.2.2. Critical Program Information (CPI) in Other Than DoD Information Systems

13.7.2.3. Indicators of Achieving Baseline Information Assurance (IA) Protection of Critical Program Information (CPI)

13.7.3. Software Assurance

13.7.3.1. Development Process

13.7.3.1.1. Static Analysis

13.7.3.1.2. Design Inspection

13.7.3.1.3. Code Inspection

13.7.3.1.4. Common Vulnerabilities and Exposures (CVE)

13.7.3.1.5. Common Attack Pattern Enumeration and Classification (CAPEC)

13.7.3.1.6. Common Weakness Enumeration information (CWE)

13.7.3.1.7. Penetration Test

13.7.3.1.8. Test Coverage

13.7.3.2. Operational System

13.7.3.2.1. Failover Multiple Supplier Redundancy

13.7.3.2.2. Fault Isolation

13.7.3.2.3. Least Privilege

13.7.3.2.4. System Element Isolation

13.7.3.2.5. Input Checking/Validation

13.7.3.2.6. Software Encryption and Anti-Tamper Techniques (SW load key)

13.7.3.3. Development Environment

13.7.3.3.1. Source Code Availability

13.7.3.3.2. Release Testing

13.7.3.3.3. Generated Code Inspection

13.7.3.3.4. Additional Countermeasures

13.7.4. Supply Chain Risk Management (SCRM)

13.7.4.1. Scope of Supply Chain Risk Management (SCRM)

13.7.4.2. Supply Chain Risk Management (SCRM) Throughout the System Lifecycle

13.7.4.2.1. Criticality Analysis

13.7.4.2.2. Supplier Annotated Work Breakdown Structure (WBS) or System Breakdown Structure (SBS)

13.7.4.2.3. Securing the Supply Chain Through Maintaining Control Over the Information and Information Flow

13.7.4.2.4. Design and Engineering Protections

13.7.4.2.5. Supply Alternatives

13.7.4.2.6. Procurement Authorities

13.7.4.2.7. Enhanced Vulnerability Detection

13.7.5. Trusted Suppliers

13.7.6. System Security Engineering

13.7.6.1. Definitions

13.7.6.2. Context of Systems Security Engineering (SSE) Within SE

13.7.6.3. Systems Security Engineering (SSE) Across the Lifecycle

13.7.7. Security

13.7. Countermeasures

This section describes the guidance and expectations for Program Protection countermeasures. Countermeasures are cost-effective activities and attributes to manage risks to Critical Program Information (CPI) and critical functions and components. They vary from process activities (e.g., using a blind buying strategy to obscure end use of a critical component) to design attributes (e.g., Anti-Tamper design to defeat Critical Program Information (CPI) reverse engineering attempts) and should be selected to mitigate a particular threat. For each countermeasure being implemented, the program should identify someone responsible for its execution and a time- or event-phased plan for implementation.

Many countermeasures may have to be partially or completely implemented by the prime contractor and subcontractors on the program. See Section 13.13 for guidance on contracting for the implementation of Program Protection.

13.7.1. Anti-Tamper

Anti-Tamper (AT) comprises the systems engineering activities intended to deter and/or delay exploitation of critical technologies in U.S. defense systems, in order to impede countermeasure development, unintended technology transfer, or alteration of a system (DoDI 5200.39). Properly fielded Anti-Tamper (AT) should:

  • Deter countermeasure development that would decrease U.S. warfighting capabilities
  • Prevent the unauthorized or out-of-baseline augmentation or change of capabilities in Direct Commercial Sales (DCS), Foreign Military Sales (FMS), and International Cooperative Development Programs
  • Prevent unauthorized technology changes on Critical Program Information (CPI) or any released capability
  • Protect U.S. critical design knowledge and manufacturing processes from unauthorized transfer
  • Protect U.S. investment in weapon system capabilities, and avoid additional unplanned investment to regain the U.S. advantage

13.7.1.1. Critical Technologies (CT)

Critical Technologies (CT) are the subset of Critical Program Information (CPI) that resides within a weapon system, its training equipment, or its support equipment, and they must be considered for protection by Anti-Tamper (AT) techniques to delay or prevent Reverse Engineering (RE). Critical Technologies can be found in system hardware, embedded software, application software, and data. Critical Technologies should not be confused with Critical Technology Elements (CTE); in other words, Critical Technologies (CT) as they apply to Anti-Tamper (AT) are not a matter of maturity or integration level.

13.7.1.2. Anti-Tamper Considerations

Anti-Tamper (AT) is more cost effective when implemented at program onset. Therefore, Anti-Tamper (AT) considerations and techniques should be initiated prior to Milestone A, during early program development, preferably in the Materiel Solution Analysis phase:

  • The PM should include both Anti-Tamper (AT) requirements and costs in capability development, acquisition and Planning, Programming, Budgeting and Execution (PPBE) process planning cycles.
  • Anti-Tamper (AT) requirements may affect other aspects of a program, such as associated maintenance and training devices, and should include end item assessment of cost, schedule and performance if not considered at program onset.
  • Anti-Tamper (AT) requirements should be included in documents such as (but not limited to) the Request for Proposal (RFP), Statement of Objectives (SOO), Statement of Work (SOW), Initial Capabilities Document (ICD), Capability Development Document (CDD), Capability Production Document (CPD), Acquisition Strategy (AS), Work Breakdown Structure (WBS), Test and Evaluation Master Plan (TEMP), Information Assurance Strategy (IAS), Systems Engineering Plan (SEP), and Systems Engineering Management Plan (SEMP). Anti-Tamper (AT) should also be included in the DoD acquisition system review process (i.e., Systems Engineering Technical Reviews (SETRs)). Refer to the DoD Anti-Tamper (AT) Desk Reference for sample language and further guidance.
  • Anti-Tamper (AT) is also applicable to DoD systems during Pre-Planned Product Improvement (P3I) upgrades as new Critical Technologies (CT) may be added to the system. Additionally, Anti-Tamper (AT) should be specifically addressed in export sales (direct commercial sales, foreign military sales) and international cooperative programs if those systems have Critical Technologies (CT) to protect.
  • Anti-Tamper (AT) also involves risk management. The level of Anti-Tamper (AT) should be based on the risk of loss of U.S. control over the asset containing Critical Program Information (CPI) (level of exposure) and the operational impact (criticality and consequence) if the Critical Program Information (CPI) is lost or compromised. Refer to the DoD Anti-Tamper (AT) Guidelines for further guidance.

13.7.1.3. Anti-Tamper Execution

The DoD Anti-Tamper Executive Agent (ATEA) supports the PM by helping to determine whether or not to implement Anti-Tamper (AT), per DoDI 5200.39. The decision to use or not to use Anti-Tamper (AT) will be documented in a classified annex to the Program Protection Plan (PPP), referred to as the Anti-Tamper (AT) Plan. The Anti-Tamper (AT) Plan includes, but is not limited to, the following information:

  • The Program Manager (PM) recommendation and the Milestone Decision Authority (MDA) decision on Anti-Tamper (AT);
  • Identification of the Critical Technology (CT) being protected and a description of its criticality to system performance;
  • Foreign Teaming and foreign countries / companies participating;
  • Threat assessment and countermeasure attack tree;
  • Anti-Tamper (AT) system level techniques and subsystem Anti-Tamper (AT) techniques investigated;
  • System maintenance plan with respect to Anti-Tamper (AT);
  • Recommended Anti-Tamper (AT) solution set to include system, subsystem and component level;
  • Determination of how long Anti-Tamper (AT) is intended to delay hostile or foreign exploitation or reverse-engineering efforts;
  • The effect that compromise would have on the acquisition program if Anti-Tamper (AT) were not implemented;
  • The estimated time and cost required for system or component redesign if a compromise occurs; and
  • Key Management Plan.

13.7.1.3.1. Process

The Anti-Tamper (AT) Process consists of fifteen steps:

  1. Identify critical program information
  2. Refine results from Step 1 to determine Critical Technologies (CT)
  3. Evaluate Critical Technologies (CT) exposure level
  4. Evaluate criticality and consequence of compromise of Critical Technologies (CT)
  5. Identify Anti-Tamper (AT) protection level requirements
  6. Identify potential Anti-Tamper (AT) solution sets considered
  7. Describe Critical Program Information (CPI)/Critical Technologies (CT) engineering solution analysis
  8. Select initial architecture technique(s) and solution set(s)
  9. Identify Anti-Tamper (AT) requirements in the System Functional Baseline
  10. Develop Anti-Tamper (AT) architecture
  11. Identify Anti-Tamper (AT) implementations in allocated baseline
  12. Finalize Anti-Tamper (AT) architecture
  13. Implement Anti-Tamper (AT) Architecture and identify residual vulnerabilities
  14. Fabricate system and test (Verification & Validation)
  15. Publish Verification and Validation (V&V) results 60 days prior to deployment

(Consult the DoD Anti-Tamper (AT) Desk Reference for further guidance.)

Note: It is highly recommended that the program contact the Component Anti-Tamper (AT) Office of Primary Responsibility (OPR) to obtain the name of a Verification and Validation (V&V) lead. The V&V lead and team will follow the progress of the Anti-Tamper (AT) Plan implementation and provide consultation; determine whether the plan meets the required protection level; recommend whether the Component AT OPR should concur or non-concur with the plan; witness the actual Anti-Tamper (AT) testing; and provide a memo to the program stating whether the Anti-Tamper (AT) testing was completed. The V&V lead and team are provided to the program at no cost, but only as consultants. They will not develop the Anti-Tamper (AT) Plan; that remains the responsibility of the program office and contractor.

13.7.1.3.2. Sustainment

Anti-Tamper (AT) is not limited to development and fielding of a system. It is equally important during life-cycle management of the system, particularly during maintenance. Maintenance instructions and technical orders should clearly indicate the level at which maintenance is authorized; and include warnings that damage may occur if improper or unauthorized maintenance is attempted.

To protect Critical Technologies (CT) during maintenance, it may be necessary, as prescribed by the Delegation of Disclosure Authority Letter, to limit the level and extent of maintenance an export customer may perform. This may mean that maintenance involving the Anti-Tamper (AT) measures will be accomplished only at the contractor or U.S. Government facility in the U.S. or overseas. Such maintenance restrictions may be no different than those imposed on U.S. Government users of Anti-Tamper (AT) protected systems. Contracts, purchase agreements, memoranda of understanding, memoranda of agreement, letters of agreement, or other similar documents should state such maintenance and logistics restrictions. The contract terms and conditions should establish that unauthorized maintenance or other unauthorized activities:

  • Should be regarded as hostile attempts to exploit or reverse engineer the weapon system or the Anti-Tamper (AT) measure itself; and
  • Should void the warranty or performance guarantee.

Note: The U.S. Government and U.S. industry should be protected against warranty and performance claims in the event Anti-Tamper (AT) measures are activated by unauthorized maintenance or other intrusion. Such unauthorized activities are regarded as hostile attempts to exploit or reverse engineer the system or the Anti-Tamper (AT) measures.

Note: Programs should also plan and budget for Anti-Tamper (AT) maintenance to include government/contractor investigations of tamper events.

13.7.1.3.3. Packaging

Anti-Tamper (AT)-affected equipment may require specially designed and approved shipping containers that are ready upon delivery. While in the supply chain, the containers should provide the same level of protection from exploitation as the protected Critical Technologies (CT) within them, or the Anti-Tamper (AT) measures should remain active during shipping.

13.7.1.4. Anti-Tamper Disclosure Guidelines

Anti-Tamper (AT) processes and techniques cannot be discussed or revealed to non-U.S. or unauthorized U.S. persons:

  • The fact that Anti-Tamper (AT) has been implemented on a specific system is Unclassified/For Official Use Only (FOUO) unless otherwise directed (e.g., specific direction requiring that system-level Anti-Tamper (AT) information be handled at a higher classification level, or a system security classification guide classifying it higher)
  • The fact that Anti-Tamper (AT) has been implemented on a specific sub-system or even a component of a sub-system is classified SECRET. Refer to the DoD Anti-Tamper (AT) Security Classification Guide (SCG) for further clarification.

13.7.1.5. DoD Anti-Tamper Executive Agent (ATEA) Office

The DoD Anti-Tamper Executive Agent (ATEA) is responsible for all Anti-Tamper (AT) policy consistent with DoDI 5000.02 and DoDI 5200.39. The office has established a network of DoD Component Anti-Tamper (AT) points of contact (POCs) to assist program managers in responding to Anti-Tamper (AT) technology and/or implementation questions. Additionally, the Acquisition Security Database (ASDB) has been developed as a common shared database of Anti-Tamper (AT)-related information.

13.7.1.6. Anti-Tamper Verification and Validation (V&V)

Anti-Tamper (AT) implementation is tested and verified during developmental test and evaluation and operational test and evaluation.

The PM develops the validation plan and provides the necessary funding for the Anti-Tamper (AT) Verification and Validation (V&V) on actual or representative system components. The V&V plan, which is developed to support Milestone C, is reviewed and approved by the DoD Anti-Tamper (AT) Executive Agent, or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR), prior to the milestone decision. The program office conducts the V&V of the implemented Anti-Tamper (AT) Plan. The Anti-Tamper (AT) V&V team witnesses these activities and verifies that the Anti-Tamper (AT) techniques described in the Anti-Tamper (AT) Plan are implemented in the system and perform according to the plan. The validation results are reported to the Milestone Decision Authority.

13.7.1.7. Anti-Tamper and Verification and Validation (V&V) Plan Approval

The DoD Anti-Tamper Executive Agent (ATEA) has published DoD Anti-Tamper (AT) and Verification and Validation (V&V) Plan templates to assist program managers and contractors with the content required for approval. The latest templates can be downloaded by registered users at https://www.at.dod.mil/.

There is a two-step approval process for all Anti-Tamper (AT) Plans:

Domestic cases:

The Anti-Tamper (AT) Plans (Initial and Final) are to be created by the government or a government contractor and approved first by the program manager. They are then submitted to the Component Anti-Tamper (AT) Office of Primary Responsibility (OPR) 60 days prior to the Preliminary Design Review (PDR) for Initial Plans and 60 days prior to the Critical Design Review (CDR) for Final Anti-Tamper (AT) Plans. The same approval timeline holds for the Verification and Validation (V&V) plans (Initial and Final) if they are separate from the Anti-Tamper (AT) Plan.

After the program manager has approved the Anti-Tamper (AT) Plan, the Anti-Tamper (AT) Executive Agent or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR), provides an evaluation of the Anti-Tamper (AT) Plan and a letter of concurrence to the program office and Milestone Decision Authority.

Export cases:

  1. An Initial Anti-Tamper (AT) Plan MUST be submitted to the DoD Anti-Tamper Executive Agent (ATEA) (or designee) NLT 60 days prior to submission of a Letter of Offer and Acceptance (LOA) or contract signature, whichever comes first. Written DoD Anti-Tamper Executive Agent (ATEA) (or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR)) approval of the Initial Anti-Tamper (AT) Plan must be obtained prior to release of the Letter of Offer and Acceptance (LOA) or contract signature. At a minimum, the plan should include:
    1. A description of the architecture
    2. The Critical Program Information (CPI) and proviso limits requiring protection
    3. Level of Anti-Tamper (AT) required for each Critical Program Information (CPI) (1-5)
    4. Top-level solution w/core Anti-Tamper (AT) technology being implemented
    5. Penalty/Response initial thoughts
    6. Support and tamper event reporting concept
    7. Cost and Schedule Rough Order of Magnitude (ROM) for Anti-Tamper (AT) implementation
    8. Risk to implementation
  2. An update to the Initial Anti-Tamper (AT) Plan must be provided to the DoD Anti-Tamper Executive Agent (ATEA) (or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR)) within 60 days after contract award.
  3. A Final Anti-Tamper (AT) Plan must be submitted to the DoD Anti-Tamper Executive Agent (ATEA) (or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR)) NLT 60 days prior to Critical Design Review (CDR). Written DoD Anti-Tamper Executive Agent (ATEA) (or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR)) approval of the Final Anti-Tamper (AT) Plan must be obtained prior to Critical Design Review (CDR) closure.
  4. Verification and Validation (V&V) testing must be completed NLT 60 days prior to system export. Written DoD Anti-Tamper Executive Agent (ATEA) (or Component Anti-Tamper (AT) Office of Primary Responsibility (OPR)) concurrence of satisfactory Verification and Validation (V&V) completion must be obtained prior to system export.

13.7.2. Information Assurance (IA)

Information Assurance (IA) is defined as measures that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation.

All mission-critical functions and components, and information systems storing, processing, or transmitting Critical Program Information (CPI), must be appropriately protected, regardless of whether the information systems are owned and controlled by the Department of Defense or by external entities. Programs with identified CPI need to ensure that the CPI is protected in every computing environment that hosts it or over which it is transmitted. With the requirement to identify system critical functions and associated components, programs must also determine the Information Assurance (IA) controls needed for their protection. (See Chapter 7 for further details on IA implementation.)

13.7.2.1. Critical Program Information (CPI) in DoD Information Systems

DoDD 8500.01E and DoDI 8500.2 detail the policy, process, and procedures for implementing appropriate Information Assurance (IA) into DoD information systems. They mandate a controls-based approach, which considers a system's assigned Mission Assurance Category (MAC) and Confidentiality Level (CL) in determining the required robustness of Information Assurance (IA) controls to be implemented. DoD information systems with Critical Program Information (CPI) must be accredited in accordance with DoDI 8510.01 (DIACAP). The DoD Information Assurance Certification and Accreditation Process (DIACAP) establishes a standard process, set of activities, general task descriptions, and a management structure to certify and accredit information systems throughout the system lifecycle. The DoD Information Assurance Certification and Accreditation Process (DIACAP) provides an independent validation process that verifies that appropriate protection measures have been implemented, tested, and maintained, and that any residual risk is at an acceptable level for system operation.

It is important to differentiate between the implementation of Information Assurance (IA) in program support information systems (U.S. Government or contractor) for the protection of Critical Program Information (CPI) as opposed to the implementation of Information Assurance (IA) in the system being acquired. For example, an acquisition program office acquiring a new weapons system may utilize a DoD information system that hosts Critical Program Information (CPI). Similarly, that same program may have Critical Program Information (CPI) being processed or transmitted on the prime contractor or systems integrator's design, development, or support systems. The Information Assurance (IA) requirements and certification and accreditation process for each of these support systems are totally separate and distinct from those of the weapons system being acquired, which may also contain Critical Program Information (CPI).

In practice, the implementation of Information Assurance (IA) to protect Critical Program Information (CPI) is no different from the implementation to protect any other information type. DoD information systems with Critical Program Information (CPI) must have both a Mission Assurance Category (MAC) and Confidentiality Level (CL) designated in accordance with DoDD 8500.01E. The presence of Critical Program Information (CPI) may be a factor in the Confidentiality Level (CL) assigned (public, sensitive, or classified), if the Critical Program Information (CPI) sensitivity drives the assignment to a higher level.

13.7.2.2. Critical Program Information (CPI) in Other Than DoD Information Systems

As previously noted, adequate security must be provided to all Critical Program Information (CPI) released to or developed by and in the possession of offerors, DoD contractors, grantees, or other sharing partners, to include when it is stored or processed on information systems and networks that are not owned by or operated on behalf of the Department. This may be a very diverse group, and may include prime and subcontractors, system integrators, program management support contractors, Federally Funded Research and Development Centers (FFRDC), and independent test organizations. Critical Program Information (CPI) that is classified must be protected in contractor facilities with Defense Security Service (DSS) accredited information systems in accordance with the National Industrial Security Program Operating Manual (NISPOM). Unclassified Critical Program Information (CPI) that resides on unclassified non-DoD systems must be protected in accordance with Directive-Type Memorandum (DTM) 08-027, "Security of Unclassified DoD Information on Non-DoD Information Systems." These requirements should be incorporated as clauses in contracts or change orders, or as appropriate language in sharing agreements and grants.

13.7.2.3. Indicators of Achieving Baseline Information Assurance (IA) Protection of Critical Program Information (CPI)

For DoD information systems containing Critical Program Information (CPI):

  • A Confidentiality Level (CL) appropriate to the Critical Program Information (CPI) sensitivity and/or classification, and the applicable baseline Information Assurance (IA) controls, are implemented.
  • Authorization to Operate (ATO) or Interim Authorization to Operate (IATO) issued by the hosting system's designated accrediting authority (DAA) is current.
  • Information Technology (IT) security plan of action and milestones (POA&M) does not identify any security weaknesses impacting Critical Program Information (CPI) that are not sufficiently mitigated; Information Technology (IT) security plan of action and milestones (POA&M) indicates appropriate level of follow-up actions.
  • Inventory of Critical Program Information (CPI) (item, site, system hosting, and Information Assurance (IA) Point of Contact (POC)) is complete.
  • Any supplemental Information Assurance (IA) controls specified to protect Critical Program Information (CPI) are incorporated into the DoD Information Assurance Certification and Accreditation Process (DIACAP) implementation plan or equivalent security requirements traceability matrix.

For non-DoD information systems containing Critical Program Information (CPI):

  • Required Information Assurance (IA) protection measures are negotiated and agreed to in contract or sharing agreement. Protection requirements "flow down" through prime to subcontractors, as appropriate.
  • For DoD contractor systems accredited under the National Industrial Security Program Operating Manual (NISPOM), accreditation decision issued by the Defense Security Service (DSS) Designated Approving Authority (DAA) is current.
  • Reports of Information Assurance (IA) protection self-assessments are submitted to the program office periodically. Reports include appropriate levels of follow-up activity to clear discrepancies. Defense Security Service (DSS) will notify the Government Contracting Agency (GCA) of security compromises or serious security violations involving such systems and of a marginal or unsatisfactory security review rating for the facility.
  • Inventory of Critical Program Information (CPI) (item, site, system hosting, and Information Assurance (IA) Point of Contact (POC)) is complete.

The details of the program's Information Assurance (IA) approach to protecting all Critical Program Information (CPI) and system critical functions and components should be documented in the Countermeasures subsections of the Program Protection Plan (PPP), and should address the content of Sections 13.7.2.1 and 13.7.2.2, as applicable.

13.7.3. Software Assurance

The extensive use of commercial off-the-shelf (COTS), government off-the-shelf (GOTS), open source, and other off-the-shelf software, as well as developmental software, in DoD systems necessitates early planning for and design of software security protections that address the threats to those systems and the types of attacks those threats can orchestrate against them. Systems must be securely supplied, designed, and tested to assure mission success as well as the protection of critical functions, associated components, and Critical Program Information (CPI). Of particular interest are the protection and assurance activities undertaken during COTS integration and development, those aimed at mitigating attacks against the operational (fielded) system, and those that address threats to the development environment. Programs should develop a plan and statement of requirements for software assurance early in the acquisition lifecycle, incorporate the requirements into the Request for Proposal (RFP), and then use the plan to track software assurance protections throughout the acquisition. Progress toward achieving the plan is measured by actual accomplishments/results that are reported at each of the Systems Engineering Technical Reviews (SETRs) and recorded as part of the Program Protection Plan.

The Program Protection Plan (PPP) Outline and Guidance requires acquisition programs to address software assurance responsibilities for the planning and implementation of program protection countermeasures. Such countermeasures address the anticipated attacks a system may experience from the threats it will face by eliminating or reducing vulnerabilities. The countermeasures are selected with an understanding of which parts of the software are the most critical to the success of the mission. The plan includes a sample Software Assurance Countermeasures Table, which summarizes the planned and current state of a program's software assurance activities. The table is also used as part of a vulnerability assessment to identify operational, developmental, design, COTS, and software tool vulnerabilities that can be addressed by planning and implementing software assurance countermeasures.

The table in the PPP is divided into three sections that provide different vulnerability and countermeasure perspectives on software assurance plans and implementation:

  • Development Process – assurance activities conducted during the development process to mitigate and minimize attacks (e.g., threat assessment and modeling, attack surface analysis, architecture and design reviews, application of static and dynamic code assessment tools and services, penetration testing, and red teaming) that the developed system is likely to face when deployed into operation
  • Operational System – attack countermeasures and other assurance activities applied within the operational environment (e.g., failover, fault isolation, encryption, application firewalls, least privilege, and secure exception handling) to mitigate attacks against the delivered system and software interfaces, which may include COTS, GOTS, open source, and other off-the-shelf software
  • Development Environment – assurance activities and controls (e.g., access controls, configuration management, and release testing) applied to tools and activities (e.g., compilers, linkers, integrated development environments, run-time libraries, and test harnesses) used to develop and sustain software, to mitigate attacks

Given the constraints of cost, schedule, and performance, fully comprehensive assessment and testing is often not feasible. Thus, software assurance (SwA) planning should reflect priorities chosen to mitigate risk and deliver mission capability with acceptable levels of assurance. The coding language, source of code (e.g., custom, COTS, GOTS, open source), platform (e.g., web-based, mobile, embedded), and the results of criticality analysis (see 13.3.2.1) will be used to prioritize software assurance activities when planning for SwA.

13.7.3.1. Development Process

The purpose of this section of the table is to measure and explicitly capture the assurance activities conducted during software development and the integration of off-the-shelf components. As appropriate to the risk of compromise and criticality of the software in question, PMs are to analyze the development activities for:

  • potential introduction of vulnerabilities and risks based on the anticipated threat and the attacks the threats are capable of making against the system;
  • development of a plan for the assurance process as well as the technical disciplines and knowledge needed for Integrated Product Teams (IPTs);
  • how IPTs address the architecture, design, code, and implementation choices, to include the appropriate mitigations necessary to address the anticipated attacks and assure the critical-function software components; and
  • review points to track/assess the progress at the milestones in the Program Protection Plan.

Not all software will require the same level of software assurance activities and mitigation planning and implementation. In programs with millions of lines of code, there may be some functions (perhaps a monthly reporting feature) that are less mission-critical than others (perhaps a satellite station-keeping module). It may also be difficult to perform some types of assessment and mitigation activities on COTS software for which the source code is not available; note that in such cases software-related risks still exist and may be unmitigated. The software assurance table in the PPP recognizes these varying types of software and allows for differing plans and implementations of assurance as needed.

13.7.3.1.1. Static Analysis

Programs should investigate the applicability of automated static analysis tools to review source and/or binary copies of their software and, where advantageous, apply both static source code and static binary analysis to assist in identifying latent weaknesses that would manifest as operational system vulnerabilities and allow attackers to interfere with, manipulate, or otherwise suborn the system's mission capabilities. The use of these tools within the development activity (i.e., as an add-on to the developer's Integrated Development Environment (IDE)) as well as in Independent Test and Evaluation (IT&E) activities is both valuable and useful. Approaches that integrate such forms of continuous assessment into the developer's activities should be emphasized and encouraged.
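
A minimal sketch of wiring such a check into every build, in Python; the analyzer name, flags, and source path are placeholders for whatever tool the program actually adopts, not a specific DoD-mandated product:

  import subprocess
  import sys

  # Placeholder command line for a CWE-aware static analyzer; substitute
  # the analyzer and flags used in the program's IDE or IT&E activities.
  ANALYZER_CMD = ["static-analyzer", "--format=text", "src/"]

  def run_static_analysis() -> int:
      """Run the analyzer; fail the build if any findings are reported."""
      result = subprocess.run(ANALYZER_CMD, capture_output=True, text=True)
      findings = [line for line in result.stdout.splitlines() if line.strip()]
      for finding in findings:
          print(finding)
      # Treat any finding as build-breaking; a real program would triage
      # by severity and document accepted residual risk instead.
      return 1 if findings else 0

  if __name__ == "__main__":
      sys.exit(run_static_analysis())

Running such a step on every build makes the assessment continuous rather than a one-time gate.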

13.7.3.1.2. Design Inspection

The establishment and update of secure design and code standards by the program should address the potential types of attacks the system would face and draw upon DoD, Government, Federally Funded Research and Development Center (FFRDC), academic, commercial, and industry sources for mitigation approaches and methods to address those attacks that could impact the system's mission capabilities. The list of attack patterns captured in the Common Attack Pattern Enumeration and Classification (CAPEC™) collection can be used to help consistently analyze a system for the potential types of attacks it may face and to bring consistency into the validation activities when the program is verifying that the design and coding standards are being followed.

13.7.3.1.3. Code Inspection

Because most weaknesses in code that lead to unreliable, insecure, and brittle applications easily influenced by attackers are subtle, it is important that code inspections utilizing tools be part of the approach used to minimize these weaknesses. There are over 700 documented types of weaknesses in code, design, architecture, and implementation captured in the Common Weakness Enumeration (CWE™) catalog, but not all of them are equal threats to any specific application or system. Programs may wish to draw upon secure design and coding approaches defined on websites such as the "Top 10 Secure Coding Practices" (https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+...) and the Common Weakness Enumeration (CWE)/SysAdmin, Audit, Network, Security (SANS) Top 25 Most Dangerous Software Errors (http://cwe.mitre.org/top25/index.html) to establish and update their secure design and coding standards. At a minimum, the code inspection is used to check for conformance to the secure design and coding standards established for the program.
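
As a hypothetical illustration of the kind of weakness such an inspection targets, the sketch below shows CWE-78 (OS command injection), one of the CWE/SANS Top 25 entries, in Python, together with a fix conforming to a typical coding standard; the function and file names are invented for the example:

  import re
  import subprocess

  # CWE-78 (OS command injection): building a shell command from
  # unvalidated input lets an attacker append arbitrary commands,
  # e.g. filename = "report.txt; rm -rf /".
  def archive_file_unsafe(filename: str) -> None:
      subprocess.run("tar -czf backup.tgz " + filename, shell=True)

  # Conforming form: validate input against an allow-list pattern and
  # pass arguments as a list so no shell ever interprets the input.
  def archive_file_safe(filename: str) -> None:
      if not re.fullmatch(r"[\w.\-]+", filename):
          raise ValueError("invalid filename")
      subprocess.run(["tar", "-czf", "backup.tgz", filename], check=True)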

An important part of the code inspection is to identify the subset of the overall CWE collection to focus on initially. Alternate approaches to focusing in on a subset of the weaknesses are described in the CWE paragraph below (13.7.3.1.6.) and the CAPEC paragraph (13.7.3.1.5.). These approaches can be used independently or in combination if desired.

Because of the dynamic nature of the threat environment and information about how systems can be compromised through software weaknesses, the program should have a methodology to periodically update their secure design and coding standards so that reviews using them address new types of attacks and types of weaknesses.

The next three sections of this document describe the middle three columns of the PPP Software Assurance Table, which capture how the established vulnerability (CVE), weakness (CWE), and attack pattern (CAPEC) collections are being used by the project team to identify and mitigate the most dangerous types of vulnerabilities in the software. These columns are further defined below, but the most critical part of completing them is the analysis of which CVEs, CWEs, and CAPECs should be used as the denominator of the percentage calculations, and the documentation of the rationale and methodology followed in determining those lists and keeping them current as the system design, development, and testing progress and as the threat environment and other factors change.

13.7.3.1.4. Common Vulnerabilities and Exposures (CVE)

Common Vulnerabilities and Exposures (CVE®) information is used to identify, track, and coordinate mitigation activities for the publicly known vulnerabilities in commercial (COTS) and open source software, which are often used by threat actors/agents to attack systems. Programs that incorporate COTS software into their systems should perform regular searches of the CVE list before purchase and throughout the software lifecycle to understand vulnerabilities in those COTS software components and assess the potential threat to mission success.

The CVE list is a compilation of publicly known information about security vulnerabilities and exposures. The list is international in scope, free for public use, and referenced in most commercial tools that scan operational systems and networks for vulnerabilities. The CVE list can be used to identify publicly known software vulnerabilities that could:

  • Allow an attacker to execute unauthorized code or commands;
  • Allow an attacker to gain privileges or assume identities;
  • Allow an attacker to access and/or manipulate data that is contrary to the specified access restrictions for that data;
  • Bypass protection mechanisms;
  • Allow an attacker to hide their activities; and
  • Allow an attacker to conduct denial of service attacks.

CVE is intended for use by security experts, so it assumes a certain level of knowledge. Programs should use a tool during incremental software testing of their commercial and open source packages that scans those operational components and matches the results with the CVE dictionary. Alternatively, the CVE list can be reviewed directly for any publicly known vulnerabilities in the software packages being used by a DoD program. A list of CVE-compatible tools is available at http://cve.mitre.org/compatible/product.html.

The CVE column in the Program Protection Plan Software Assurance table reports the planned and actual percentages of software components that incorporate COTS or open source that have been analyzed and acceptably remediated against the CVEs from the CVE list that apply to those COTS and open source packages.

Supportive analysis by the project team must record the CVEs found, the remediation applied, and the residual risk to the mission of any unresolved CVEs. To identify which CVEs should be included in the analysis the list of CVEs for each COTS product and open source should be tracked and those that were remediated documented as such. For each COTS and open source package utilized as part of the system, the project staff should determine whether an explicit vulnerability advisory/alert activity is provided/offered by the provider/developer of those packages.

For those that do not provide publicly available advisories/alerts about security issues, the project staff should carefully consider the risk they are inheriting from a developer that does not provide patch information in a manner that allows CVE identifiers to be assigned. Without CVE identifiers, it is much harder to track and manage the state of deployed software within the DoD's vulnerability management practice and the automation tooling deployed within the DoD. 100% of developmental Critical Program Information (CPI) software and developmental critical-function software packages, whether COTS or open source, must be evaluated using CVE to surface exposures inherited by incorporating open source or COTS libraries or products.

Guidance on searching the CVE list is located at http://cve.mitre.org/about/faqs.html#c. An important aspect of applying CVE tools and reviews to a collection of COTS and open source software is to apply the Common Vulnerability Scoring System (CVSS) to determine which CVEs to mitigate first and to understand the severity of the remaining CVEs.

If the selected tool outputs any CVE with a CVSS score above medium (4), programs should mitigate the highest-priority vulnerability first and then work through the next-highest-priority issue until the residual risk represented by the remaining vulnerabilities is acceptable to the mission owner. CVEs that are included in any DoD Information Assurance Vulnerability Management (IAVM) alerts and advisories should be addressed in accordance with the priorities and timeframes included in the IAVM from DISA.
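
A minimal Python sketch of that prioritization; the finding records, CVE identifiers, and scores are illustrative placeholders, and a real program would take them from its scanning tool and the IAVM feed:

  # Illustrative CVE findings for the COTS/open source packages in use.
  findings = [
      {"cve": "CVE-0000-0001", "package": "examplelib", "cvss": 9.8},
      {"cve": "CVE-0000-0002", "package": "otherlib", "cvss": 3.1},
      {"cve": "CVE-0000-0003", "package": "examplelib", "cvss": 6.5},
  ]

  def triage(findings, threshold=4.0):
      """Findings above medium (4), highest CVSS first, per the guidance."""
      actionable = [f for f in findings if f["cvss"] > threshold]
      return sorted(actionable, key=lambda f: f["cvss"], reverse=True)

  # Mitigate in this order until residual risk is acceptable to the
  # mission owner; IAVM-listed CVEs follow the IAVM timeframes instead.
  for f in triage(findings):
      print(f"{f['cve']} ({f['package']}): CVSS {f['cvss']}")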

The CVE web site is at http://cve.mitre.org.

13.7.3.1.5. Common Attack Pattern Enumeration and Classification (CAPEC)

Common Attack Pattern Enumeration and Classification (CAPEC) is meant to be used for the analysis of common patterns of attack against systems, whether for understanding how attacks are conducted, scoping the relevant threats, templating malicious testing, or as a foil for thinking about the susceptibility of a system's architecture, design, and technical implementation to specific attacks.

CAPEC is a catalog of attack patterns, international in scope and free for public use, outlining information such as a comprehensive description of the phases and steps of attacks, the weaknesses they are effective against (using CWEs), and a classification taxonomy that can be used for the analysis of common attack patterns. CAPEC attack patterns cover a wide variety of families of attacks, including: data leakage attacks, resource depletion attacks, injection attacks, spoofing attacks, time and state attacks, abuse of functionality attacks, attacks using probabilistic techniques, attacks exploiting authentication, attacks exploiting privilege/trust, attacks exploiting data structure, resource manipulation attacks, network reconnaissance, social engineering attacks, as well as some physical security attacks and supply chain attacks.

The attack patterns in CAPEC can be a powerful mechanism to capture and communicate the attacker's perspective, organize the analysis of a system with respect to attacks, and prioritize weaknesses (CWEs) based on the anticipated attack patterns. They are descriptions of common methods for exploiting software. Identified attack patterns may influence the selection of the COTS and open source software products, programming languages, and design alternatives. By understanding the attacker's perspective and how a program's software is likely to be attacked, programs can directly consider these exploit attempt methods and mitigate them with design, architecture, coding and deployment choices that will lead to more secure software.

Programs should identify the set of attack patterns that pose the most significant risk and leverage them at each stage of the Software Development Lifecycle (SDLC). A discussion of how to use CAPEC in this manner is available on the "Engineering for Attack" page on the CWE site (http://cwe.mitre.org/community/swa/attacks.html). This is the same basic methodology described in the new ISO/IEC Technical Report 20004, "Refining software vulnerability analysis under ISO/IEC 15408 and ISO/IEC 18045", which describes an alternate approach for doing a vulnerability analysis of a software-based system under the Common Criteria regime. ISO/IEC 15408 and ISO/IEC 18045 are the two standards that guide and describe the Common Criteria evaluation methodology.

Basically, that page describes how an analysis using attack patterns to represent the expected threat can identify the subset of weaknesses those attacks would be effective at exploiting, and how that list can be used to influence choices about design and architecture, considering the planned operational use, the creation of security policies and requirements, and thinking through the risks related to the system's intended use. This list of weaknesses, the ones exploitable by the attack patterns the system's adversary is capable of using against it, can be used to identify the subset of relevant CWE weaknesses to avoid and to vet for during implementation. The list's associated CAPECs can be used to guide software testing by identifying high-priority test cases that should be created for risk-based security testing, penetration testing, and red teaming.[1]
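
The sketch below illustrates that flow in Python: represent the attack patterns judged germane to the system, derive the CWE subset from their published mappings, and emit misuse/abuse test-case stubs. The two entries follow commonly cited CAPEC-to-CWE mappings but are examples only; verify identifiers against the current CAPEC list:

  # Attack patterns judged germane to the system (illustrative entries).
  relevant_capecs = {
      "CAPEC-66": {"name": "SQL Injection", "cwes": ["CWE-89"]},
      "CAPEC-126": {"name": "Path Traversal", "cwes": ["CWE-22"]},
  }

  def weakness_subset(capecs):
      """CWEs exploitable by the anticipated attack patterns."""
      cwes = set()
      for entry in capecs.values():
          cwes.update(entry["cwes"])
      return sorted(cwes)

  def test_case_stubs(capecs):
      """One misuse/abuse test-case stub per anticipated attack pattern."""
      return [f"verify the system resists {e['name']} ({cid})"
              for cid, e in capecs.items()]

  print(weakness_subset(relevant_capecs))   # CWEs to vet for during implementation
  print(test_case_stubs(relevant_capecs))   # seeds for risk-based security testing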

The CAPEC column in the Program Protection Plan Software Assurance table reports the planned and actual percentages of developed software components that have been evaluated utilizing the attack patterns from the CAPEC list to identify the appropriate sub-set of CWEs, to consider alternate design and architectures or implementations, or to drive the creation of appropriate misuse and abuse test cases.

Supportive analysis by the project team must record the CAPECs identified as germane to the system, the CWEs identified as being susceptible to those CAPECs, and the remediation applied, along with an understanding of the residual risk to the mission of any CWEs that were not tested by simulating CAPECs against the system. To identify which CWEs should be included in the testing analysis based on CAPEC-inspired test cases, the list of CWEs reviewed by the static analysis tools/services should be tracked, and those that were identified, covered by the analysis tool/service, and appropriately remediated should be documented as such. For each CWE that was not covered by a static analysis tool/service, the project staff should determine whether an appropriate CAPEC-inspired test case or Red Team activity was conducted without finding an exploitable CWE.

For those CWEs that were not covered by static analysis or testing, the project staff should carefully consider the risk to the mission from the potential of those weaknesses remaining in the system. Without demonstrable evidence that the CWEs an attacker could exploit are mitigated, there will always be some level of risk, and it is incumbent on the project staff to document this residual risk for the end user so they can manage it when the system is deployed within the DoD. 100% of developmental Critical Program Information (CPI) software and developmental critical-function software should be evaluated against the CAPEC list.

The CAPEC web site is http://capec.mitre.org. A description of the CAPEC schema is located in the "Documentation" portion of the CAPEC Documents page at http://capec.mitre.org/about/documents.html.

13.7.3.1.6. Common Weakness Enumeration information (CWE)

The Common Weakness Enumeration (CWE) is international in scope and free for public use. CWE provides a unified, measurable set of software weaknesses to enable more effective discussion, description, selection, and use of software security tools and services to find weaknesses in source code and operational systems components as well as to better understand and manage software weaknesses related to architecture and design.

CWE is targeted to developers and security practitioners. Programs should use CWE-compatible tools to scan software for CWE. A list of CWE-compatible products is available at http://cwe.mitre.org/compatible/product.html.

The CWE column in the table reports the planned and actual percentages of developed software components that have been evaluated utilizing the weaknesses from the CWE list to identify the appropriate subset of CWEs and to consider alternate designs, architectures, or coding constructs.

Supportive analysis by the project team must record the subset of CWEs identified as being most germane to the secure operation of the system. The subset can be taken from the CWE/SANS Top 25 Most Dangerous Software Errors list or derived by utilizing the Common Weakness Risk Analysis Framework (CWRAF) to identify the CWEs that are most dangerous to the system's mission, given what the software does for the mission. CWRAF allows a project team to create its own list of the most dangerous CWEs based on the specifics of the system and on which failure modes are most important to mitigate or prevent.

The CWE/SANS Top 25 Most Dangerous Software Errors list on the CWE and SANS Web sites provides detailed descriptions of the top 25 programming errors along with authoritative guidance for mitigating and avoiding them.

The CWRAF methodology is described on the CWE web site and numerous examples are provided to help a project team learn how to apply the methodology to their system in combination with the Common Weakness Scoring System (CWSS).

By using the Common Weakness Scoring System (CWSS), a program can also reflect its specific list of dangerous CWEs in its tools, so that the risk rating of the weaknesses found during static and dynamic analysis or penetration testing reflects the relative importance of their mission impacts.
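
The sketch below is a deliberately simplified weighted ranking, not the actual CWSS formula; it only illustrates the idea of scoring tool findings by how much each CWE's failure mode matters to this particular mission (the weights, CWE entries, and findings are invented for the example):

  # Mission-specific weights for the program's dangerous-CWE subset
  # (illustrative; CWRAF/CWSS define the real derivation).
  mission_weights = {
      "CWE-89": 0.9,   # injection against the mission database
      "CWE-120": 0.7,  # buffer overflow in an embedded component
      "CWE-209": 0.2,  # information exposure via error messages
  }

  def rank_findings(findings):
      """Order tool findings by mission-weighted severity, highest first."""
      return sorted(
          findings,
          key=lambda f: mission_weights.get(f["cwe"], 0.1) * f["severity"],
          reverse=True,
      )

  findings = [
      {"cwe": "CWE-209", "severity": 8.0, "file": "login.py"},
      {"cwe": "CWE-89", "severity": 6.0, "file": "query.py"},
  ]
  print(rank_findings(findings))  # the CWE-89 finding outranks CWE-209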

The CWE web site is at http://cwe.mitre.org and the CWSS web page is at http://cwe.mitre.org/cwss/.

Additionally, the project team should have a documented understanding of the residual risk to the mission of any CWEs that were neither reviewed by static analysis tools/services nor tested by simulating the CAPECs that would be effective against those CWEs. For CWEs deemed dangerous but not covered by a static analysis tool/service, the project staff should determine whether an appropriate CAPEC-inspired test case or Red Team activity was conducted without finding an exploitable CWE.

For those CWEs that were not covered by static analysis or testing, the project staff should carefully consider the risk to the mission from the potential of those weaknesses remaining in the system. Without demonstrable evidence that the CWEs an attacker could exploit are mitigated, there will always be some level of risk, and it is incumbent on the project staff to document this residual risk for the end user so they can manage it when the system is deployed within the DoD. 100% of developmental Critical Program Information (CPI) software and developmental critical-function software should be evaluated against the identified subset of the CWE list.

In addition to the above listed MITRE websites, PMs should consider best practices identified at http://www.safecode.org/index.php.

13.7.3.1.7. Penetration Test

Programs should report what portion of the system will undergo penetration testing. The purpose of penetration testing is to subject the system to an attack exercise in order to raise awareness of exploitable vulnerabilities in the system and accelerate their remediation. Also, the knowledge that a system will undergo penetration testing increases the vigilance of the software engineers responsible for architecting, designing, implementing, and fielding the system.

The text should support the number with a brief explanation of the penetration testing performed and a reference to any supporting reports generated by that testing.

The units used for planned/actual percentages for this metric are at the discretion of the program. They should be explained in the text, be meaningful, and provide insight into the completeness of the testing. For example, a network that exposes a certain number of protocols may measure the percentages in the space of protocol states, while a system with an API may measure the number of interface functions probed.
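
A minimal Python sketch of the second example, computing the percentage of a system's interface functions probed during the test exercise (the function names are invented placeholders):

  # Interface functions exposed by the system's API (placeholders).
  api_functions = {"open_session", "close_session", "send_msg",
                   "recv_msg", "set_mode", "get_status"}
  # Functions actually probed during the penetration test exercise.
  probed = {"open_session", "send_msg", "recv_msg"}

  coverage_pct = 100.0 * len(probed & api_functions) / len(api_functions)
  print(f"API functions probed: {coverage_pct:.0f}%")  # 50%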

13.7.3.1.8. Test Coverage

Programs should report on their planned and actual test coverage. Units and metrics for test coverage are at the discretion of the program, but should be meaningful and yield insight into the completeness of the testing regimen.

Possible measures for test coverage include the percentage of statements exercised, the percentage of API calls and exception conditions exercised, and the number of function points tested.
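
As a worked example of reporting such measures (the counts are invented; real numbers would come from the program's coverage tooling):

  # Illustrative totals from a coverage tool run.
  statements_total, statements_exercised = 12500, 10375
  api_calls_total, api_calls_exercised = 240, 228

  def pct(exercised: int, total: int) -> float:
      return 100.0 * exercised / total

  print(f"statement coverage: {pct(statements_exercised, statements_total):.1f}%")  # 83.0%
  print(f"API-call coverage: {pct(api_calls_exercised, api_calls_total):.1f}%")     # 95.0%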

13.7.3.2. Operational System

This section refers to the software and firmware on the fielded system. Software assurance countermeasures are a rapidly evolving area. Successful assessments, techniques, applications, and example outcomes are frequently published in papers that can be found on DoD, Government, Federally Funded Research and Development Center (FFRDC), and commercial web sites. The FFRDC Carnegie Mellon Software Engineering Institute (SEI) and MITRE both have searchable libraries containing information about the approaches to software assurance indicated in the Program Protection Plan Outline & Guidance, Table 5.3.3-1, Application of Software Assurance Countermeasures.

13.7.3.2.1. Failover Multiple Supplier Redundancy

Identical code for a failed function will most likely suffer the same failure as the original. For redundancy in software, therefore, a completely separate implementation of the function is needed. This independence reduces the probability that the failover code will be susceptible to the same problem.
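
A minimal Python sketch of the idea, assuming two independently developed implementations of the same function; both bodies here are simple stand-ins for separately sourced code:

  def altitude_primary(raw: bytes) -> float:
      """Primary implementation (stand-in for supplier A's code)."""
      return int.from_bytes(raw, "big") / 100.0

  def altitude_backup(raw: bytes) -> float:
      """Independently written implementation (stand-in for supplier B)."""
      value = 0
      for b in raw:
          value = value * 256 + b
      return value / 100.0

  def altitude(raw: bytes) -> float:
      """Fail over to the independent implementation on any fault."""
      try:
          return altitude_primary(raw)
      except Exception:
          # A defect or attack affecting the primary is unlikely to
          # recur in code with a separate implementation history.
          return altitude_backup(raw)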

13.7.3.2.2. Fault Isolation

Design principles applied to software to isolate faults include functions to trap, log, and otherwise prevent element failures from affecting other elements and the larger system. Logs help trace the sources of operational faults and can also be examined to determine whether there was a malicious attack.

Programs reporting a 'Yes' in the table should be prepared to elaborate with technical detail on how the fault isolation mechanisms were employed in the architecture and design for the particular component or sub-system. Failover or fault isolation is also where logging of the failure event, and capture of the relevant data needed to determine the root cause of the failover event, is best included.
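
A minimal sketch of such a boundary in Python, assuming a generic wrapper around each element; the element names, fallback values, and log file are illustrative:

  import logging

  logging.basicConfig(filename="faults.log", level=logging.WARNING)

  def run_isolated(element_name, func, *args, fallback=None):
      """Run one system element, trapping and logging any failure so it
      cannot propagate to other elements or the larger system."""
      try:
          return func(*args)
      except Exception:
          # The log captures the failing element and its inputs,
          # supporting root-cause analysis, including checks for a
          # malicious trigger, as noted above.
          logging.exception("element %s failed with args %r",
                            element_name, args)
          return fallback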

13.7.3.2.3. Least Privilege

Least privilege is a design principle applied to software that limits the number, size, and privileges of system elements. It includes separate user roles, authentication, and limited access to enable all necessary functions while minimizing the adverse consequences of inappropriate actions.

Programs reporting a 'Yes' in the table should be prepared to elaborate with technical detail on how least privilege principles were employed in the architecture and design for the particular component or sub-system.
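
A minimal Python sketch of separate user roles with limited access; the roles, rankings, and operations are invented for illustration:

  import functools

  # Each operation declares the minimum role it requires.
  ROLE_RANK = {"viewer": 0, "operator": 1, "maintainer": 2}

  def requires_role(minimum: str):
      def decorator(func):
          @functools.wraps(func)
          def wrapper(caller_role, *args, **kwargs):
              if ROLE_RANK[caller_role] < ROLE_RANK[minimum]:
                  raise PermissionError(
                      f"{func.__name__} requires role {minimum!r}")
              return func(caller_role, *args, **kwargs)
          return wrapper
      return decorator

  @requires_role("maintainer")
  def update_firmware(caller_role, image: bytes):
      ...  # privileged operation, denied to viewer/operator roles

  @requires_role("viewer")
  def read_status(caller_role) -> str:
      return "OK"  # permitted to every authenticated role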

13.7.3.2.4. System Element Isolation

Design principles applied to software to allow system element functions to operate without interference from other elements.

Programs reporting a 'Yes' in the table should be prepared to elaborate with technical detail on how system element isolation principles were employed in the architecture and design for the particular component or sub-system.

13.7.3.2.5. Input Checking/Validation

The degree to which software element inputs are checked and validated according to defined criteria and functionality. Input checking and validation should ensure that out-of-bounds values are handled without causing failures and that invalid input events are logged.

Programs reporting a 'Yes' in the table should be prepared to elaborate on the architectural and design criteria governing the extent of input checking/validation employed.
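
A minimal Python sketch of bounds checking with logging of invalid input events; the input name, bounds, and log file are illustrative:

  import logging

  logging.basicConfig(filename="input_events.log", level=logging.WARNING)

  HEADING_RANGE = (0.0, 360.0)  # defined criteria for this input

  def set_heading(value) -> bool:
      """Apply a heading only if it parses and is in bounds; log rejects."""
      try:
          heading = float(value)
      except (TypeError, ValueError):
          logging.warning("non-numeric heading input rejected: %r", value)
          return False
      low, high = HEADING_RANGE
      if not (low <= heading < high):
          logging.warning("out-of-bounds heading rejected: %r", value)
          return False
      # ... apply the validated value to the element ...
      return True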

13.7.3.2.6. Software Encryption and Anti-Tamper Techniques (SW load key)

The degree to which executable software code is encrypted or otherwise protected (e.g., by checksums or cyclic redundancy checks) from corruption between factory delivery and use in a military mission. Defense Acquisition University (DAU) currently teaches an anti-tamper course, which covers some Anti-Tamper (AT) techniques that can be used for software encryption.

Programs reporting a 'Yes' in the table should be prepared to elaborate on the specific anti-tamper techniques included in the architecture, design, and implementation of the software component or sub-system and on the risks those techniques are intended to mitigate.
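
One of the checksum-style protections named above can be sketched as follows in Python; the digest value is a placeholder, and a real program would deliver the expected digest through a protected, authenticated channel:

  import hashlib

  # Placeholder; the expected digest would come from the factory via a
  # protected, authenticated channel.
  EXPECTED_SHA256 = "0" * 64

  def verify_software_load(image_path: str) -> bool:
      """Load the software only if it hashes to the expected digest."""
      digest = hashlib.sha256()
      with open(image_path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              digest.update(chunk)
      return digest.hexdigest() == EXPECTED_SHA256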

13.7.3.3. Development Environment

Software tools used in the development environment (as opposed to the actual fielded software) are another source of risk to warfighting capability and should be considered in the Program Protection Plan (PPP). In particular, a compromised development environment could be leveraged by an attacker to insert malicious code, exploitable vulnerabilities, and/or software backdoors into the operational software before it is fielded.

Examples of software development tools include:

  • Compilers, assemblers, pre-compilers, and other code generating tools such as design templates
  • Structured code editors and pretty printers
  • Code static analysis tools
  • Debugging and timing analysis tools
  • Code configuration management tools
  • Accounts and access controls on development computers and networks
  • Test management tools, test data generators, test harnesses, automated regression testing tools

Examples of compromising tools to achieve malicious insertion include:

  • Modifying a compiler to generate or insert additional functionality into the operational code
  • Modifying a math library so that malware is incorporated into the operational code

Programs should tailor the list contents of the SW Product column in this section of the table to enumerate the software tools pertinent to the program's development environment(s). For each SW product listed, table entries should address the items enumerated in the following columns.

13.7.3.3.1. Source Code Availability

When source code is available, it becomes easier to answer some questions about the behavior of the tool and detect potential compromise.

Is source code available for the tool? A simple yes or no should suffice. If further information (e.g., coding language, code size, licensing cost constraints) would provide useful insight, annotate the entry with a note.

13.7.3.3.2. Release Testing

Software tools are often updated. These updates are a potential path for an attacker to compromise the development environment and thus the operational software.

Indicate whether testing for indications of malicious insertion or tool compromise is performed on each update of the tool before that update is incorporated into the development environment.
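
One hedged sketch of such testing (tool paths and invocation are hypothetical; it assumes the tool is invoked on one input file and writes its output to stdout): run both the trusted and the updated tool version over a fixed corpus and flag any divergence for inspection before adopting the update.

  import hashlib
  import pathlib
  import subprocess

  # Hypothetical fixed test corpus.
  CORPUS = sorted(pathlib.Path("test_corpus").glob("*.src"))

  def output_digest(tool, source):
      # Run the tool on one corpus input and hash its output.
      out = subprocess.run([tool, str(source)], capture_output=True,
                           check=True).stdout
      return hashlib.sha256(out).hexdigest()

  def divergent_inputs(trusted_tool, updated_tool):
      # Inputs for which the update changes the tool's output; each
      # difference should be explained before the update is adopted.
      return [s.name for s in CORPUS
              if output_digest(trusted_tool, s) != output_digest(updated_tool, s)]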

13.7.3.3.3. Generated Code Inspection

Indicate whether and how any generated code for the system is examined for malicious code or exploitable vulnerabilities potentially inserted by the software tool in question.

In general, the problem of how to effectively inspect generated code for malicious insertion remains an open area of research. From a practical standpoint, it is better to perform some inspection than to ignore the problem entirely; that at least raises the bar for what an attacker must do to compromise the system undetected.

Potential code inspection countermeasures include:

  • Manual inspection of a representative sample of the generated code
  • Analysis of the code with reverse engineering tools
  • Identification of the libraries compiled into an executable
  • A sanity check of components output by the tools against components expected
  • Comparison to baselines generated by previous versions of the tool
  • Manual inspection of tool outputs against a known/analyzable test corpus
  • Advanced/experimental techniques such as automated function extraction

Note that in many instances simple sanity checks can be effective in detecting some injected malware. For example, extracting, sorting, and comparing strings might point to a trigger string used to open a backdoor, and decompiling an executable may reveal the presence of op codes not normally generated by the compiler.
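
A minimal sketch of the string-comparison check (file paths hypothetical): extract the printable strings from the new build, in the manner of the Unix strings utility, and report any that were absent from the previous baseline.

  import pathlib
  import re

  def printable_strings(path, min_len=4):
      # Printable ASCII runs of at least min_len bytes.
      data = pathlib.Path(path).read_bytes()
      pattern = rb"[\x20-\x7e]{%d,}" % min_len
      return {m.group().decode("ascii") for m in re.finditer(pattern, data)}

  def unexplained_strings(baseline_binary, new_binary):
      # Strings added since the baseline build; an unexplained addition
      # may point to a trigger string used to open a backdoor.
      return printable_strings(new_binary) - printable_strings(baseline_binary)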

Where generated code inspection is deemed of benefit, programs should tailor the form of inspection to the unique aspects of the program and report planned and actual percentages appropriately.

13.7.3.3.4. Additional Countermeasures

Programs should consider adding columns to this area of the software assurance table, with rationale for the additions, if they judge the corresponding countermeasures to significantly reduce the risk of malicious insertion.

Additional countermeasures may include:

  • Access controls and other mechanisms to detect malicious behavior or suspicious artifacts in the development environment
  • Information assurance controls to safeguard technical data in the development environment (networks, computers, test equipment, and configuration systems)
  • Controlling and accounting for the printing of technical manuals and other documentation

[1] http://capec.mitre.org/documents/An_Introduction_to_Attack_Patterns_as_a_Software_Assurance_Knowledge_Resource.pdf

13.7.4. Supply Chain Risk Management (SCRM)

This section describes how a program can manage supply chain risks to critical program information and critical functions and components through a variety of risk mitigation activities across the entire system lifecycle. The Supply Chain Risk Management (SCRM) guidance in this section identifies references that establish a sample of practices for managing supply chain risks. As SCRM techniques and practices continue to evolve, additional guidance will be published to refine the Department's understanding and implementation of SCRM. A variety of existing resources are available to aid in the understanding and implementation of SCRM, including, but not limited to, the following foundational documents:

  • DTM 09-016 – Supply Chain Risk Management (SCRM) to Improve the Integrity of Components Used in DoD Systems - Establishes authority for implementing Supply Chain Risk Management (SCRM) throughout DoD and for developing initial operating capabilities.
  • DoD Supply Chain Risk Management (SCRM) Key Practices and Implementation Guide – Provides a set of practices that organizations acquiring goods and services can implement in order to proactively protect the supply chain against exploitation, subversion, or sabotage throughout the acquisition lifecycle.
  • National Defense Industrial Association (NDIA) System Assurance Guidebook – Provides guidance on how to build assurance into a system throughout its lifecycle, organized around the Undersecretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) Life Cycle Management Framework.
  • DoD Instruction O-5240.24, Counterintelligence (CI) Activities Supporting Research, Development, and Acquisition (RDA)

13.7.4.1. Scope of Supply Chain Risk Management (SCRM)

Currently, Program Protection Planning Supply Chain Risk Management (SCRM) pertains to Information and Communications Technology (ICT). In the digital age, where supply chains are distributed globally and design, manufacturing, and production often occur internationally, supply chains have greater exposure to threats and exploitation.

Supply chain risk management provides programs with a framework for analyzing all the risks associated with the supply chain, which enables the determination of what risks may be mitigated, and what risks may be accepted. This determination will vary based on the purpose and mission being performed. Applying Supply Chain Risk Management (SCRM) early in the production lifecycle will allow for earlier mitigations and a more strategic approach for managing risk.

13.7.4.2. Supply Chain Risk Management (SCRM) Throughout the System Lifecycle

The protection of mission-critical systems (including the information technology that composes those systems) must be a priority throughout the entire system development life cycle (i.e., during design and development, manufacturing, packaging, assembly, distribution, system integration, operations, maintenance, and disposal). This is accomplished through threat awareness and by the identification, management, and potential elimination of vulnerabilities at each phase of the life cycle using complementary, mutually reinforcing strategies to mitigate risk.

Figure 13.7.4.2.F1 illustrates how key Supply Chain Risk Management (SCRM) activities align with the steps in the DoD Acquisition Lifecycle. Activities are organized by the various roles that should perform the indicated functions/procedures. Due to the multidisciplinary nature of Supply Chain Risk Management (SCRM), Program Protection requires careful planning and coordination across multiple stakeholders.

Figure 13.7.4.2.F1. Supply Chain Risk Management (SCRM) Activities throughout the System Lifecycle


Mitigation of supply chain risks is most effective when identification and implementation occur early in a program's acquisition planning and contracting. Generally, mitigation choices narrow and become more expensive the further into the lifecycle they occur. Given the amount of information and supply choices that are present in the marketplace, Operations Security (OPSEC) related to the acquisition process for programs is vital.

13.7.4.2.1. Criticality Analysis

Information on Criticality Analysis can be found in Section 13.3.2.1.

13.7.4.2.2. Supplier Annotated Work Breakdown Structure (WBS) or System Breakdown Structure (SBS)

A cornerstone for the identification of supply chain risks and the development of supply chain risk management strategies and mitigations for critical components is the criticality analysis. The Work Breakdown Structure (WBS) or System Breakdown Structure (SBS) may be used to annotate the suppliers of the critical components identified by the criticality analysis. A supplier-annotated Supply Chain Risk Management (SCRM) WBS or SBS is a helpful tool for tracking and managing supply chain risks: a detailed breakdown identifying all system assemblies, subassemblies, and components and their suppliers for, at a minimum, all critical components identified through criticality analysis. It may also include alternative suppliers for all critical components down to the Commercial off-the-shelf (COTS)-item level, with cost, schedule, and performance impact data for each alternative. Although the SCRM WBS or SBS is not a current requirement, it may be an effective way to record, track, and manage the potential suppliers of critical functions as the trade-off among security, performance, and cost is examined.
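
As a purely illustrative sketch (the WBS identifiers, components, and suppliers below are invented), a supplier-annotated WBS can be recorded as structured data so that single-source critical components surface automatically for risk review:

  # Hypothetical supplier-annotated WBS entries.
  wbs = [
      {"wbs_id": "1.2.3", "component": "guidance processor",
       "critical": True, "supplier": "Supplier A", "alternates": ["Supplier B"]},
      {"wbs_id": "1.4.1", "component": "crypto module",
       "critical": True, "supplier": "Supplier C", "alternates": []},
  ]

  # Flag critical components that have no alternate source.
  for entry in wbs:
      if entry["critical"] and not entry["alternates"]:
          print(f"{entry['wbs_id']} {entry['component']}: single-source critical item")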

The Supply Chain Risk Management (SCRM) System Breakdown Structure (SBS) may provide insight into any teaming arrangements based on an understanding of the defense industrial base and subsequent supply chain. Prior to Milestone B, manufacturers typically develop their supplier lists and enter into teaming agreements; programs may therefore consider requiring oversight of and input into any supplier teaming arrangements. The program could put controls in place so that supplier lists provide alternative suppliers for critical components. Between Preliminary Design Review (PDR) and Critical Design Review (CDR), the SCRM WBS or SBS should be provided by suppliers to the government for review and vulnerability/risk assessment. It is essential that the DoD work with potential prime contractors to develop supplier lists and gain insight into potential teaming arrangements. This input is supported by contract clauses such as Consent to Subcontract.

13.7.4.2.3. Securing the Supply Chain Through Maintaining Control Over the Information and Information Flow

OPSEC

Sensitive information must be protected from suppliers and potential suppliers. Operations Security (OPSEC) and appropriate classification guides should be employed to protect system supply chain, design, test and other information from potential adversaries. This includes limiting the sharing of information with suppliers and potential suppliers at multiple tiers sufficient to manage risk. Confidentiality of key information (such as user identities, element uses, suppliers, and their processes, requirements, design, testing, etc.) must be protected when critical to mission success.

Provenance

It is important to establish and maintain the origin, development, delivery path, and mechanisms to protect the integrity of critical components, tools, and processes, as well as their associated changes, throughout the lifecycle. This enables accurate supply chain (SC) risk assessment and mitigation, which requires accurate information on the origin of components, how they are developed, and how they are delivered throughout the supply chain. This includes strong system and component configuration management to ensure traceability against unauthorized changes. Selecting suppliers who maintain provenance is the first step to reducing supply chain (SC) risks.
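
A minimal sketch (field names hypothetical) of a provenance record that captures a component's origin, supplier, receipt date, and a digest supporting traceability against unauthorized change:

  import datetime
  import hashlib
  import pathlib

  def provenance_record(component_path, origin, supplier):
      # Record where the component came from, who supplied it, when it
      # was received, and a digest that detects later modification.
      digest = hashlib.sha256(
          pathlib.Path(component_path).read_bytes()).hexdigest()
      return {
          "component": component_path,
          "origin": origin,
          "supplier": supplier,
          "received": datetime.datetime.now(datetime.timezone.utc).isoformat(),
          "sha256": digest,
      }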

13.7.4.2.4. Design and Engineering Protections

Once critical functions and components have been identified, design and engineering protections can be employed to reduce the attack surface and reduce what is considered critical. These protections should further protect intrinsically critical functions and reduce existing unmediated access to them.

System elements may still have unintentional or intentional vulnerabilities (whether in isolation or when combined) even if they all come from trustworthy suppliers. Defensive design helps reduce the attack surface and limit the exposure of vulnerabilities. Defensive approaches reduce opportunities to expose or access an element, process, system, or information and minimize the adverse consequences of such exposure or access. In particular, defensive design should be used to protect critical elements and functions by reducing unnecessary or unmediated access within the system design.

13.7.4.2.5. Supply Alternatives

Application-specific integrated circuits (ASICs) should be acquired from a trusted supplier because if they are compromised, then unique DoD designs could be exposed and critical system information could become available to adversaries. For information on trusted suppliers of microelectronics, please refer to Section 13.7.5.

13.7.4.2.6. Procurement Authorities

Procurement language has been developed and is available for use to help mitigate supply chain risk through contractual requirements in the Statement of Work (SOW). Refer to Section 13.13.1.2 below for suggested language.

Supplier Security Practices

Organizations can help mitigate supply chain risk down the contract stack by requiring and encouraging suppliers and sub-suppliers to use sound security practices and to allow transparency into their processes and security practices. It is recommended that contract vehicles require, encourage, or provide incentives for suppliers to deliver up-to-date information on changes that affect supply chain (SC) risk, such as changes in their suppliers, locations, processes, and technology.

Use of the acquisition and procurement process early in the system lifecycle is a key way to protect the supply chain by defining and creating supplier requirements and incentives; using procurement carve-outs and controlled delivery path processes; and using all-source intelligence in procurement decisions. Source selection criteria and procedures should be developed in order to encourage suppliers to provide detailed visibility into the organization, elements, services, and processes. Other procurement tools may be available to manage the criticality of components and address risk in acquisition planning and strategy development.

13.7.4.2.7. Enhanced Vulnerability Detection

Due diligence analysis for items of supply is performed to counter unintentional vulnerabilities, intentional vulnerabilities (e.g., malicious wares and processes), and counterfeits. It may include software static analysis, dynamic analysis (including the use of simulation and white- and black-box testing), penetration testing, and verification that the component or service is genuine (e.g., tag, digital signature, or cryptographic hash verification). Tools can include development, testing, and operational tools.

Refer to Section 13.7.3 for more information on Software Assurance.

13.7.5. Trusted Suppliers

In the context of Program Protection, trusted suppliers are specific to microelectronic components and services. The Department is currently developing new policy on the criteria for using trusted suppliers. This content will be updated when that policy is published.

Defense Microelectronics Activity (DMEA) maintains a list of accredited suppliers on its website at http://www.dmea.osd.mil/trustedic.html.

13.7.6. System Security Engineering

13.7.6.1. Definitions

System Security Engineering (SSE): "An element of system engineering that applies scientific and engineering principles to identify security vulnerabilities and minimize or contain risks associated with these vulnerabilities. It uses mathematical, physical, and related scientific disciplines, and the principles and methods of engineering design and analysis to specify, predict, and evaluate the vulnerability of the system to security threats." (MIL-HDBK-1785)

System Assurance (SA): "The justified confidence that the system functions as intended and is free of exploitable vulnerabilities, either intentionally or unintentionally designed or inserted as part of the system at any time during the life cycle." (National Defense Industrial Association (NDIA) Guidebook, Engineering for System Assurance, Ver. 1.0)

Therefore, Systems Security Engineering (SSE) comprises the security-related Systems Engineering (SE) processes, activities, products, and artifacts for System Assurance (SA). Chapter 4 discusses the need to apply Systems Engineering (SE) early in the acquisition lifecycle. Accordingly, Systems Security Engineering (SSE) must also be considered early (and often).

13.7.6.2. Context of Systems Security Engineering (SSE) Within SE

In order to be cost-efficient and technically effective, Systems Security Engineering (SSE) must be integrated into Systems Engineering (SE) as a key sub-discipline. In fact, Section 5.3.5 of the Program Protection Plan (PPP) Outline indicates that the Program Protection Plan (PPP) should "Describe the linkage between system security engineering and the Systems Engineering Plan" and answer the question, "How will system security design considerations be addressed?"

DAG Chapter 4 provides comprehensive guidance for Systems Engineering (SE). In this chapter, Section 13.7.6.3 provides an overview of Systems Security Engineering (SSE) as a key countermeasure and Section 13.14 elaborates on how to include Systems Security Engineering (SSE) within Systems Engineering (SE). As a contextual starting point, the evolution of specifications and associated baselines across the acquisition lifecycle is shown in Table 13.7.6.2.T1.

Table 13.7.6.2.T1 Evolution of Specifications/Baselines

  The…                                 is developed by…    and forms the…
  System Requirements Document (SRD)   the Government      Requirements Baseline
  System Specification                 the Contractor      Functional Baseline
  Lower-level Subsystem Spec           the Contractor      Allocated Baseline
  Fully-decomposed Component Spec      the Contractor      Product Baseline

Each of these specifications should baseline the developing system security requirements to be included in system design by applying the methods and tools of good Systems Engineering (SE). For example, as described in Section 13.3.2.1, repeated application of the Criticality Analysis (CA) methodology, reinforced by Systems Engineering (SE) tools such as fault isolation trees and system response analysis, will yield incremental refinements in the determination of what to protect and how to protect it.

Table 13.7.6.2.T2 shows the expected maturity of the baselines across the system lifecycle, according to the Systems Engineering Technical Reviews (SETR) events at which they should be assessed. It is noteworthy that even as early as the Alternative Systems Review (ASR), the preliminary system requirements should be identified. For further details concerning the appropriate system security content of the maturing baselines as they relate to the Systems Engineering (SE) review timeline, see Section 13.10.2 (Systems Engineering Technical Reviews).

Table 13.7.6.2.T2. Expected Maturity of Baselines at Each Systems Engineering Technical Reviews (SETR) Event

Typical required maturity of the baselines at each SETR or audit event (Requirements | Functional | Allocated | Design Release | Product; "--" indicates no required maturity at that event):

  • ASR: Preliminary | -- | -- | -- | --
  • SRR: Draft | Preliminary | -- | -- | --
  • SFR: Approved | Entrance: Draft, Exit: Established | Preliminary | Initial/Preliminary | --
  • PDR: Maintained | Approved and Maintained | Entrance: Draft, Exit: Established | Preliminary/Draft | --
  • CDR: Maintained | Maintained | Approved and Maintained | Approved | Exit: Initial Baseline Established
  • FCA: Maintained | Maintained | Maintained | -- | Controlled
  • SVR: Maintained | Maintained | Maintained | -- | Controlled
  • PCA: Maintained | Maintained | Maintained | -- | Finalized; Approved

13.7.6.3. Systems Security Engineering (SSE) Across the Lifecycle

While Systems Security Engineering (SSE) is categorized in this chapter as a countermeasure, it is important to realize that Systems Security Engineering (SSE) is actually an overarching Systems Engineering (SE) sub-discipline, within the context of which a broad array of countermeasures is appropriate. Some of these Systems Security Engineering (SSE)-related countermeasures represent an overlap with other countermeasure categories, such as Software Assurance (see Section 13.7.3) and Supply Chain Risk Management (SCRM) (see Section 13.7.4).

As a countermeasure in its own right, key Systems Security Engineering (SSE) activities are highlighted as follows:

  • Integrate Security into Requirements, Systems Security Engineering (SSE) Processes, and Constructs
    • Integrate security requirements into the evolving system designs and baselines
    • Use secure design considerations to inform lifecycle trade space decisions
  • Activity Considerations by Phase
    • Pre-Milestone A: Evaluate mission threads, identify system functions, and analyze notional system architectures to identify mission critical functions
    • Pre-Milestone B: Refine critical function list and identify critical system components and candidate subcomponents (hardware, software, and firmware)
    • Pre-Milestone C: Refine list of critical system components and subcomponents
    • Note: Consider rationale for inclusion or exclusion in the list and for priority assignments
  • Identify and implement countermeasures and sub-countermeasures
    • Assess risks and determine mitigation approaches to minimize process vulnerabilities and design weaknesses
    • Perform cost/benefit trade-offs where necessary

Key Systems Security Engineering (SSE) criteria can be specified for each of the phases leading up to a major program Milestone, and it is important to establish these criteria across the full lifecycle in order to build security into the system. Further details are provided in Section 13.14.

13.7.7. Security

This section will be updated in the next Defense Acquisition Guidebook (DAG) update.
