Chapter 1: Introduction

This part of the VVSG, Testing Requirements, contains requirements that apply to the conformity assessment conducted by test labs, and it is intended primarily for their use.

This part contains five chapters, organized as follows:

  • Chapter 2: an overview of the conformity assessment process and related requirements;
  • Chapter 3: an overview of general testing approaches;
  • Chapter 4: requirements for documentation and design reviews; and
  • Chapter 5: requirements for different testing methods.

NOTE: Requirements in Part 3 do not contain “Test Reference:” fields, as the testing reference is implied by the requirement and its context within Part 3.

1.1 Changes from VVSG 2005 and Previous Versions of the Standards

1.1.1 Reorganization of testing-related material

Part 3, Testing Requirements, focuses on test methods and avoids repeating requirements from Parts 1 and 2. By contrast, Volume II of VVSG 2005 contained voting equipment requirements as well as testing information.

The hardware testing vs. software testing distinction is no longer a guiding principle in the organization of the Guidelines. Although different testing specialties are likely to be subcontracted to different laboratories, the prime contractor must report to the certification authority on the conformity of the system as a whole.

1.1.2 Applicability to COTS and borderline COTS products

To clarify the treatment of components that are neither manufacturer-developed nor unmodified COTS, and to allow different levels of scrutiny to be applied depending on the sensitivity of the components being reviewed, new terminology has been introduced: application logic, border logic, configuration data, core logic, COTS (revised definition), hardwired logic, and third-party logic. Part 3: Table 1-1 describes the resulting categories.

Table 1-1 Levels of scrutiny

CATEGORIES                                           | LEVEL OF SCRUTINY  | TESTED? | SOURCE CODE/DATA REQUIRED? | CODING STANDARDS ENFORCED? | SHOWN TO BE CORRECT?
COTS                                                 | Black-box          | Yes     | No                         | No                         | No
third-party logic, border logic, configuration data | White-box          | Yes     | Yes                        | No                         | No
application logic                                    | Coding standards   | Yes     | Yes                        | Yes                        | No
core logic                                           | Logic verification | Yes     | Yes                        | Yes                        | Yes

COTS may be tested as a black box (i.e., exempted from source code inspection). Whether it is exempted from specific tests depends on whether the certifications and scrutiny it has previously received suffice for voting system certification purposes. This determination is made by the test lab and justified in the test plan, as described in Requirement Part 2: 5.1-D.

Notably, the distinction between software, firmware, and hardwired logic does not impact the level of scrutiny that a component receives; nor are the requirements applying to application logic relaxed in any way if that logic is realized in firmware or hardwired logic instead of software.

By requiring "many different applications," the definition of COTS deliberately prevents any application logic from receiving a COTS designation.

Finally, the conformity assessment process has been modified to increase assurance that what is represented as unmodified COTS is in fact COTS (Part 3: 2.4.3.4 “Unmodified COTS verification”).

1.1.3 New and revised inspections

1.1.3.1 Source code review for workmanship and security

In harmony with revisions to the requirements in Part 1: 6.4 “Workmanship”, the source code review for workmanship now focuses on coding practices with a direct impact on integrity and transparency, and on adherence to published, credible coding conventions in lieu of conventions embedded within the standard itself. A separate section for security has been added to focus source code reviews on security controls, networking-related code, and code used in ballot activation.

1.1.3.2 Logic verification

This version of the VVSG adds logic verification to the testing campaign to achieve a higher level of assurance that the system will count votes correctly.

Traditionally, testing methods have been divided into black-box and white-box test design. Neither method has universal applicability; each is useful for testing different items.

Black-box testing is usually described as focusing on testing functional requirements, those requirements being defined in an explicit specification. It treats the item being tested as a "black box," making no examination of its internal structure or workings. Instead, black-box testing develops and applies detailed scenarios, or test cases. Each test case specifies a set of inputs to be applied to the item being tested; the output produced for a given input is then compared against a previously defined set of expected results.
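For illustration only (this sketch is not part of the Guidelines), a minimal black-box test in Python might look like the following. The tally_votes function is hypothetical; only its specified interface is assumed, and its internals are never examined by the test:

    import unittest

    def tally_votes(ballots):
        """Stand-in for the item under test; the test treats it as opaque."""
        counts = {}
        for selection in ballots:
            counts[selection] = counts.get(selection, 0) + 1
        return counts

    class BlackBoxTallyTest(unittest.TestCase):
        def test_specified_input_output_pair(self):
            # A test case: specific inputs and a previously defined expected result.
            self.assertEqual(tally_votes(["Alice", "Bob", "Alice", "Alice"]),
                             {"Alice": 3, "Bob": 1})

        def test_empty_election(self):
            # Boundary case taken from the specification, not from the internals.
            self.assertEqual(tally_votes([]), {})

    if __name__ == "__main__":
        unittest.main()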

White-box testing (sometimes called clear-box or glass-box testing to suggest a more accurate metaphor) allows one to peek inside the "box," and focuses specifically on using knowledge of the internals of the item being tested to guide the testing procedure and the selection of test data. White-box testing can discover extra non-specified functions that black-box testing would not know to look for and can exercise data paths that would not have been exercised by a fixed test suite. Such extras can only be discovered by inspecting the internals.
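A white-box counterpart, again purely illustrative, selects its test by reading the implementation rather than the specification. The write-in handling branch below is invented for this sketch; it stands for the kind of non-specified internal data path that a fixed black-box suite might never exercise:

    # The test below is chosen by inspecting the code: the write-in branch
    # and its normalization are internals that no specification-driven
    # test case would know to target.
    def record_selection(counts, selection):
        # Internal, non-specified data path: write-ins are routed to a
        # separate bucket and normalized before counting.
        if selection.startswith("WRITE-IN:"):
            key = ("write-in", selection[len("WRITE-IN:"):].strip().lower())
        else:
            key = ("candidate", selection)
        counts[key] = counts.get(key, 0) + 1
        return counts

    def test_write_in_normalization_path():
        # Exercises the write-in branch discovered by code inspection and
        # checks the normalization it performs.
        counts = record_selection({}, "WRITE-IN:  Jane Roe ")
        assert counts == {("write-in", "jane roe"): 1}

    test_write_in_normalization_path()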

Complementary to any kind of operational testing is logic verification, in which it is shown that the logic of the system satisfies certain constraints. When it is impractical to test every case in which a failure might occur, logic verification can be used to show the correctness of the logic generally. However, verification is not a substitute for testing because there can be faults in a proof just as surely as there can be faults in a system. Used together, testing and verification can provide a high level of assurance that a system's logic is correct.
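As a toy illustration of how verification complements testing, the following sketch (not drawn from the Guidelines) exhaustively checks a correctness property over every possible election in a small bounded model, rather than sampling a handful of cases. Production logic verification would instead argue the property for all inputs, for example by induction or machine-checked proof:

    # Bounded exhaustive check of a tally invariant: a finite stand-in for
    # showing the logic correct "generally" rather than for sampled cases.
    from itertools import product

    CANDIDATES = ("A", "B", "C")

    def tally(ballots):
        return {c: sum(1 for b in ballots if b == c) for c in CANDIDATES}

    def conservation_property(ballots):
        # Invariant: every ballot is counted exactly once, so per-candidate
        # totals must sum to the number of ballots cast.
        return sum(tally(ballots).values()) == len(ballots)

    # Check the invariant for every possible election of up to 6 ballots.
    for n in range(7):
        for ballots in product(CANDIDATES, repeat=n):
            assert conservation_property(list(ballots)), ballots
    print("conservation holds for all elections with <= 6 ballots")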

A commonly raised objection to logic verification is that, in the general case, it is exceedingly difficult and often impractical to verify any nontrivial property of software. Voting systems, however, need not be the general case. While these Guidelines try to avoid constraining the design, all voting system designs must preserve the ability to demonstrate that votes will be counted correctly. If a voting system is designed in such a way that it cannot be shown to count votes correctly, then that voting system does not satisfy Requirement Part 1: 6.1-B.

1.1.4 New and revised test methods

1.1.4.1 End-to-end testing

The testing specified in [VSS2002] and [VVSG2005] is not required to be end-to-end but may bypass portions of the system that would be exercised during an actual election ([VVSG2005] II.1.8.2.3).

The use of test fixtures that bypass portions of the system may lower costs and/or increase convenience, but the validity of the resulting testing is difficult to defend. If a discrepancy arose between the results reported by test labs and those found in state acceptance tests, it could be attributable to this practice.

Language permitting the use of simulation devices to accelerate the testing process has been tightened to prohibit bypassing portions of the voting system that would be exercised in an actual election, with few exceptions (Part 3: 2.5.3 “Test fixtures”), and a volume test analogous to the California Volume Reliability Testing Protocol [CA06] has been specified (Requirement Part 3: 5.2.3-D).
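The contrast can be sketched in code. In the hypothetical two-stage system below (all component names are invented for illustration), a bypassing fixture would inject pre-made records directly into the Tabulator, while an end-to-end fixture drives every stage a real ballot would traverse, so interactions between stages are exercised too:

    # Hypothetical two-stage pipeline: scan, then tabulate.
    class BallotScanner:
        def scan(self, paper_marks):
            # Interpretation stage that a bypassing fixture would skip.
            return [mark.strip() for mark in paper_marks if mark.strip()]

    class Tabulator:
        def __init__(self):
            self.totals = {}
        def add_record(self, record):
            for selection in record:
                self.totals[selection] = self.totals.get(selection, 0) + 1

    def run_end_to_end(paper_ballots):
        scanner, tabulator = BallotScanner(), Tabulator()
        for paper in paper_ballots:
            tabulator.add_record(scanner.scan(paper))  # full path, no bypass
        return tabulator.totals

    # Volume-style driver: many simulated ballots through the whole pipeline.
    ballots = [[" Alice "], ["Bob"], ["Alice"]] * 1000
    assert run_end_to_end(ballots) == {"Alice": 2000, "Bob": 1000}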

1.1.4.2 Reliability, accuracy, and probability of misfeed

Previous versions of these Guidelines specified a Probability Ratio Sequential Test [Wald47][Epstein55][MIL96] for assessment of reliability and accuracy. No test was specified for assessment of probability of misfeed, though it would have been analogous.

The Probability Ratio Sequential Tests for reliability and accuracy ran concurrently with the temperature and power variation test. There was no specified way to assess errors and failures observed during other portions of the test campaign.

Reliability, accuracy, and probability of misfeed are now assessed using data collected through the course of the entire test campaign. This increases the amount of data available for assessment of conformity to these performance requirements without necessarily increasing the duration of testing.
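For readers unfamiliar with the method, the following sketch implements Wald's Probability Ratio Sequential Test for a Bernoulli failure rate, pooled over all operations observed in a campaign, following [Wald47]. The rates and risk parameters (p0, p1, alpha, beta) are illustrative values, not figures taken from these Guidelines:

    from math import log

    def sprt(failures_seen, trials, p0=0.0001, p1=0.001, alpha=0.05, beta=0.05):
        """Return 'accept', 'reject', or 'continue' after the observed data.

        H0: failure rate p0 (acceptable); H1: failure rate p1 (unacceptable).
        """
        # Decision boundaries on the log-likelihood ratio.
        upper = log((1 - beta) / alpha)   # crossing above -> reject (unreliable)
        lower = log(beta / (1 - alpha))   # crossing below -> accept (reliable)
        llr = (failures_seen * log(p1 / p0)
               + (trials - failures_seen) * log((1 - p1) / (1 - p0)))
        if llr >= upper:
            return "reject"
        if llr <= lower:
            return "accept"
        return "continue"

    # Pooling all campaign data: 0 failures in 2,000 operations is not yet
    # decisive; 0 failures in 4,000 operations supports acceptance.
    print(sprt(0, 2_000))   # continue
    print(sprt(0, 4_000))   # accept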

1.1.4.3 Open-ended vulnerability testing

This version adds Open-Ended Vulnerability Testing (OEVT) as a test method. OEVT is akin to vulnerability penetration testing, conducted by a team of testers in an open-ended fashion not necessarily constrained by a test script. The goal of OEVT is to discover architecture, design, and implementation flaws in the system that may not be detected by systematic functional, reliability, and security testing and that could be exploited to change the outcome of an election, interfere with voters’ ability to cast ballots or have their votes counted, or compromise the secrecy of the vote.

OEVT is generally not called out in “Test Reference:” fields; the assumption is that any requirement in the VVSG, or any aspect of voting system operations, is “fair game” for OEVT. In particular, OEVT should be useful for testing those requirements that call for source code inspection as a test method.