[108th Congress House Hearings]
[From the U.S. Government Printing Office via GPO Access]



 EXPLORING COMMON CRITERIA: CAN IT ASSURE THAT THE FEDERAL GOVERNMENT 
                   GETS NEEDED SECURITY IN SOFTWARE?

=======================================================================

                                HEARING

                               before the

                SUBCOMMITTEE ON TECHNOLOGY, INFORMATION
                POLICY, INTERGOVERNMENTAL RELATIONS AND
                               THE CENSUS

                                 of the

                              COMMITTEE ON
                           GOVERNMENT REFORM

                        HOUSE OF REPRESENTATIVES

                      ONE HUNDRED EIGHTH CONGRESS

                             FIRST SESSION

                               __________

                           SEPTEMBER 17, 2003

                               __________

                           Serial No. 108-126

                               __________

       Printed for the use of the Committee on Government Reform


  Available via the World Wide Web: http://www.gpo.gov/congress/house
                      http://www.house.gov/reform


                                 ______

92-771              U.S. GOVERNMENT PRINTING OFFICE
                            WASHINGTON : 2003
____________________________________________________________________________
For Sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpo.gov  Phone: toll free (866) 512-1800; (202) 512-1800  
Fax: (202) 512-2250 Mail: Stop SSOP, Washington, DC 20402-0001

                     COMMITTEE ON GOVERNMENT REFORM

                     TOM DAVIS, Virginia, Chairman
DAN BURTON, Indiana                  HENRY A. WAXMAN, California
CHRISTOPHER SHAYS, Connecticut       TOM LANTOS, California
ILEANA ROS-LEHTINEN, Florida         MAJOR R. OWENS, New York
JOHN M. McHUGH, New York             EDOLPHUS TOWNS, New York
JOHN L. MICA, Florida                PAUL E. KANJORSKI, Pennsylvania
MARK E. SOUDER, Indiana              CAROLYN B. MALONEY, New York
STEVEN C. LaTOURETTE, Ohio           ELIJAH E. CUMMINGS, Maryland
DOUG OSE, California                 DENNIS J. KUCINICH, Ohio
RON LEWIS, Kentucky                  DANNY K. DAVIS, Illinois
JO ANN DAVIS, Virginia               JOHN F. TIERNEY, Massachusetts
TODD RUSSELL PLATTS, Pennsylvania    WM. LACY CLAY, Missouri
CHRIS CANNON, Utah                   DIANE E. WATSON, California
ADAM H. PUTNAM, Florida              STEPHEN F. LYNCH, Massachusetts
EDWARD L. SCHROCK, Virginia          CHRIS VAN HOLLEN, Maryland
JOHN J. DUNCAN, Jr., Tennessee       LINDA T. SANCHEZ, California
JOHN SULLIVAN, Oklahoma              C.A. ``DUTCH'' RUPPERSBERGER, 
NATHAN DEAL, Georgia                     Maryland
CANDICE S. MILLER, Michigan          ELEANOR HOLMES NORTON, District of 
TIM MURPHY, Pennsylvania                 Columbia
MICHAEL R. TURNER, Ohio              JIM COOPER, Tennessee
JOHN R. CARTER, Texas                CHRIS BELL, Texas
WILLIAM J. JANKLOW, South Dakota                 ------
MARSHA BLACKBURN, Tennessee          BERNARD SANDERS, Vermont 
                                         (Independent)

                       Peter Sirh, Staff Director
                 Melissa Wojciak, Deputy Staff Director
                      Rob Borden, Parliamentarian
                       Teresa Austin, Chief Clerk
              Philip M. Schiliro, Minority Staff Director

   Subcommittee on Technology, Information Policy, Intergovernmental 
                        Relations and the Census

                   ADAM H. PUTNAM, Florida, Chairman
CANDICE S. MILLER, Michigan          WM. LACY CLAY, Missouri
DOUG OSE, California                 DIANE E. WATSON, California
TIM MURPHY, Pennsylvania             STEPHEN F. LYNCH, Massachusetts
MICHAEL R. TURNER, Ohio

                               Ex Officio

TOM DAVIS, Virginia                  HENRY A. WAXMAN, California
                        Bob Dix, Staff Director
                 Chip Walker, Professional Staff Member
                      Ursula Wojciechowski, Clerk
           David McMillen, Minority Professional Staff Member


                            C O N T E N T S

                              ----------                              
Hearing held on September 17, 2003
Statement of:
    Davidson, Mary Ann, chief security officer, Server Technology 
      Platforms, Oracle
    Fleming, Michael G., Chief, Information Assurance Solutions, 
      Information Assurance Directorate, National Security Agency
    Gorrie, Robert G., Deputy Director, Defensewide Information 
      Assurance Program Office, Office of the Assistant Secretary 
      of Defense for Networks and Information Integration, and 
      DOD Chief Information Officer
    Klaus, Christopher W., chief technology officer, Internet 
      Security Systems, Inc.
    Roback, Edward, Chief, Computer Security Division, National 
      Institute of Standards and Technology, U.S. Department of 
      Commerce
    Spafford, Eugene H., professor and director, Center for 
      Education and Research in Information Assurance and 
      Security, Purdue University
    Thompson, J. David, director, Security Evaluation Laboratory, 
      Cygnacom Solutions
Letters, statements, etc., submitted for the record by:
    Clay, Hon. Wm. Lacy, a Representative in Congress from the 
      State of Missouri, prepared statement of
    Davidson, Mary Ann, chief security officer, Server Technology 
      Platforms, Oracle, prepared statement of
    Fleming, Michael G., Chief, Information Assurance Solutions, 
      Information Assurance Directorate, National Security 
      Agency, prepared statement of
    Gorrie, Robert G., Deputy Director, Defensewide Information 
      Assurance Program Office, Office of the Assistant Secretary 
      of Defense for Networks and Information Integration, and 
      DOD Chief Information Officer, prepared statement of
    Klaus, Christopher W., chief technology officer, Internet 
      Security Systems, Inc., prepared statement of
    Putnam, Hon. Adam H., a Representative in Congress from the 
      State of Florida, prepared statement of
    Roback, Edward, Chief, Computer Security Division, National 
      Institute of Standards and Technology, U.S. Department of 
      Commerce, prepared statement of
    Spafford, Eugene H., professor and director, Center for 
      Education and Research in Information Assurance and 
      Security, Purdue University, prepared statement of
    Thompson, J. David, director, Security Evaluation Laboratory, 
      Cygnacom Solutions, prepared statement of

 
 EXPLORING COMMON CRITERIA: CAN IT ASSURE THAT THE FEDERAL GOVERNMENT 
                   GETS NEEDED SECURITY IN SOFTWARE?

                              ----------                              


                     WEDNESDAY, SEPTEMBER 17, 2003

                  House of Representatives,
   Subcommittee on Technology, Information Policy, 
        Intergovernmental Relations and the Census,
                            Committee on Government Reform,
                                                    Washington, DC.
    The subcommittee met, pursuant to notice, at 10:10 a.m., in 
room 2154, Rayburn House Office Building, Hon. Adam Putnam 
(chairman of the subcommittee) presiding.
    Present: Representatives Putnam, Clay and Watson.
    Staff present: Bob Dix, staff director; John Hambel, senior 
counsel; Chip Walker, professional staff; Ursula Wojciechowski, 
clerk; Suzanne Lightman, fellow; Erik Glavich, legislative 
assistant; Ryan Hornbeck, intern; David McMillen, minority 
professional staff member; and Jean Gosa, minority chief clerk.
    Mr. Putnam. The Subcommittee on Technology, Information 
Policy, Intergovernmental Relations and the Census will come to 
order. Good morning, and I apologize for running a few minutes 
late. I have 20 high school students in Washington for a week 
for a congressional classroom program to become familiar with 
the city and our government and how everything works. None of 
us were figuring on Hurricane Isabel, so we are trying to 
figure out a way to get 20 airline tickets in very short order, 
and it's not going to be terribly easy.
    Welcome to another important hearing on cybersecurity. 
Today the subcommittee continues its aggressive oversight and 
examination of the information security issues most important 
to our Nation. As many of you know, Secretary Ridge announced 
the creation of the U.S. Computer Emergency Response Team 
[U.S.-CERT] in conjunction with Carnegie Mellon University. 
This is an important step in the progress that needs to be made 
by our government in protecting the Nation's computers from 
cyber attack. It's no longer a question of if our computer 
networks will be attacked, but rather when, how often and to 
what degree.
    Experts from the government and the private sector who have 
come before this subcommittee are very concerned that the 
United States is not adequately prepared to ward off a serious 
cyber attack that could cause severe economic devastation as 
well as contribute potentially to the loss of life. Blaster and 
SoBig.F are stark examples of how worms and viruses can cost us 
billions of dollars in lost productivity and
administrative costs in a very short period of time. From the 
home user to the private enterprise to the Federal Government, 
we all need to take the cyber threat more seriously and move 
expeditiously to secure our Nation's computers. I look forward 
to continuing to work with the Department of Homeland Security 
and other key Federal agencies in this national security 
endeavor.
    Today's hearing will examine the Common Criteria and 
whether or not similar certification should be required for all 
government software purchases. For years, countries around the 
globe have wrestled with the lack of a commonly recognized 
method for evaluating security software. Out of this climate 
the Common Criteria evolved; they represent standards that are 
broadly useful within the international community.
    The international members of the Common Criteria share the 
following objectives: to ensure that evaluations of information 
technology products and protection profiles are performed to 
high and consistent standards, and are seen to contribute 
significantly to confidence in the security of those products; 
to improve the availability of evaluated, security-enhanced IT 
products; to eliminate the burden of duplicating evaluations; 
and to continuously improve the efficiency and cost-
effectiveness of the evaluation and certification/validation 
process.
    The Common Criteria are maintained by an international 
coalition and are designed to be useful within the widely 
diverse international community. Currently the recognition 
arrangement has 15 member countries. The National Security 
Agency and NIST represent the United States. Each member 
country accepts certificates issued by the other members, 
making the Common Criteria a global standard. The criteria are 
technology-neutral and are designed to be applied to a wide 
variety of technologies and levels of security.
    The criteria work by providing standardized language and 
definitions of IT security components. That standardization 
allows the consumer, in our case the Department of Defense, to 
create a customized set of requirements for the security of a 
product, or protection profile. This profile would include the 
level of security assurance that the customer desires, 
including the various mechanisms that must be present for 
achieving that assurance. Alternatively, the criteria allow the 
producer of the technology to develop its own set of claims, 
called a security target. An independent lab overseen by the 
participating agencies--in the United States' case, NIST and 
NSA--then tests the product against either the profile or the 
target and certifies that it satisfies the requirements.
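    [The profile-versus-target comparison described above can 
be pictured in a short sketch. The component identifiers below 
are real Common Criteria names, but the data structures and 
function are hypothetical illustrations, not NIAP tooling:]

        # A protection profile lists required security functions; a
        # product's security target lists claimed functions. At its
        # simplest, evaluation checks the claims against the
        # requirements. Everything here beyond the component IDs is
        # invented for illustration.
        PROTECTION_PROFILE = {
            "FAU_GEN.1",  # security audit data generation
            "FIA_UAU.2",  # user authentication before any action
            "FDP_ACC.1",  # subset access control
        }

        def unmet_requirements(security_target_claims):
            """Return profile requirements the target does not claim."""
            return sorted(PROTECTION_PROFILE - set(security_target_claims))

        claims = ["FAU_GEN.1", "FIA_UAU.2"]
        print("gaps:", unmet_requirements(claims))  # gaps: ['FDP_ACC.1']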
    Currently the Department of Defense requires Common 
Criteria certification for all security-related software 
purchases. NSA requires Common Criteria certification for all 
purchases for systems classified as national intelligence.
    One of the more useful aspects of the Common Criteria is 
that they allow the purchaser of security software to compare 
apples to apples. The protection profile, which is cast in the 
language of the Common Criteria, provides a view of security 
features independent of vendor claims. It allows the purchaser 
to determine with certainty the security features in a product 
and to compare that product with other similar ones in deciding 
which to purchase.
    The certification process, conducted by independent labs 
overseen in the United States by NIST, concentrates on 
analyzing the documentation provided by the vendor, testing the 
product, documenting the results and reporting them to the 
oversight agency. That agency then reviews the validation 
report and issues certification. The process is paid for by the 
vendor and can be both expensive and time-consuming. Estimates 
for operating systems run anywhere from 1 to 5 years, at costs 
in the millions of dollars.
    The expense and time commitment of the process has given 
rise to some questioning about the usefulness of the process. 
For example, the adoption of the Common Criteria could shut 
small vendors out of the acquisition process because they might 
not have the resources to go through certification. Another 
potential problem is the timing. Because certification takes a 
significant amount of time, the government might not get the 
most cutting-edge technology available. Conversely, the 
government does need to gain assurance that security features 
in products exist and function as advertised.
    This is the larger question that we are faced with: How can 
we--governmentwide--get the most secure products available in a 
timely and cost-efficient manner and at the same time have IT 
companies compete on a level playing field in a competitive 
market that rewards rather than stifles innovation? I look 
forward to the expert testimony we have assembled today, and I 
thank the witnesses for their participation.
    As with all of our hearings, today's hearing can be viewed 
live via WebCast by going to reform.house.gov. We will hold off 
on the other opening statements until the Members arrive, and I 
would ask that all of our witnesses comply with the light and 
the timing. Your written statement will be submitted for the 
record and will be included in its entirety, but we ask that 
you keep your oral summary to 5 minutes.
    [The prepared statement of Hon. Adam H. Putnam follows:]

    [GRAPHICS T2771.001-T2771.003 OMITTED]

    Mr. Putnam. With that, as is the custom of this 
subcommittee, we will swear in the witnesses. I will ask our 
first panel to rise and raise your right hands.
    [Witnesses sworn.]
    Mr. Putnam. Note for the record all of the witnesses 
responded in the affirmative. And we will move right to our 
distinguished panel.
    Our first witness is Edward Roback. Mr. Roback serves as 
the Chief of the Computer Security Division at the National 
Institute of Standards and Technology, supporting the agency's 
responsibilities to protect sensitive Federal information and 
promote security in commercial information technology 
products. As Chief, he leads the implementation of NIST 
responsibilities under FISMA and the Cyber Security Research 
and Development Act. Mr. Roback heads NIST's participation on 
the NIST-NSA Technical Working Group and serves on the 
Committee on National Security Systems. He has chaired the 
Federal Agency Computer Security Programs Managers Forum and 
co-authored An Introduction to Computer Security: The NIST 
Handbook. He has also recently authored NIST's Guidelines to 
Federal Organizations on Security Assurance and Acquisition/Use 
of Tested/Evaluated Products. For those of you who would like a 
copy, they will be available at Barnes and Noble afterwards, 
and he will be happy to autograph them for you.
    Mr. Roback, you are recognized for 5 minutes. Welcome to 
the subcommittee.

STATEMENT OF EDWARD ROBACK, CHIEF, COMPUTER SECURITY DIVISION, 
NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, U.S. DEPARTMENT 
                          OF COMMERCE

    Mr. Roback. Thank you, Chairman Putnam. Thank you for the 
opportunity to testify today. In response to your invitation, I 
first would like to discuss what security assurance is and the 
role it plays in overall cybersecurity. I then would like to 
turn to the role that security testing and particularly the 
Common Criteria and the NIST-NSA-NIAP program play. I would 
like to leave you with some ideas as to what else the research 
community can do to improve the trust and confidence that we 
must have in the proper, correct and secure functioning of 
information systems. So let me start.
    What is security assurance? If we look at assurance 
broadly, it's the basis we need for overall trust and 
confidence in the correct and secure functioning of information 
systems. The question of assurance tries to address two 
questions: Does the system do what it is supposed to do, and 
does it not do the unintended? Within that context, security 
assurance, simply put, is the degree of confidence one has that 
the security mechanisms of a system work as intended. It is not 
an absolute guarantee that security is achieved.
    How do we get security assurance? There is no single way. 
One can get some degree by looking at how a system is built, 
the past use of a system, manufacturers' warranties or lack 
thereof, and, of course, independent testing and evaluation. 
This testing can vary from the straightforward and repeatable 
through the more complex and time-consuming. When we have a 
standard specification that is very precise, such as with an 
encryption algorithm, testing is straightforward, although not 
necessarily easy. When a specification is exact, the test can 
be correspondingly precise.
    On the other hand, when we look at more complex and diverse 
IT products, which lack a common standard specification at the 
bits-and-bytes level, we're often confronted with products 
containing millions of lines of code for which a standard spec 
does not exist, and testing is not so straightforward. Testing 
such products necessarily involves human subjectivity. NIST 
refers to such testing as evaluation. NIAP is such a testing 
program.
    Turning to the NIAP and the CC, in my written statement I 
have provided a summary of the development of each, the Mutual 
Recognition Arrangement and some of the uses of the criteria 
both domestically and overseas, and indeed there have been some 
very significant uses. Major issuers of bank cards have formed 
work groups to use the Common Criteria to develop a profile for 
smart cards; the Financial Services Business Roundtable is 
doing the same for the financial services community. The 
Process Control Security Requirements Forum is using the Common 
Criteria for SCADA systems security, and the health care 
community is trying to use the Common Criteria to define 
requirements for health care systems.
    But I think it's important to take a minute to review the 
meaning of a Common Criteria certificate. A Common Criteria 
evaluation is a measure of the information technology's 
compliance with the vendor's security claims. It is not a 
measure or a guarantee that the product is free from malicious 
code or that the overall composed system is secure. Any
product that has a Common Criteria specification can undergo an 
evaluation and receive its certificate if the evaluation 
process is completed. I provided additional details in my 
written statement.
    As you mentioned, we have issued advice to the agencies on 
the use of evaluated products for non-national security 
systems. We described the overall role that assurance can play. 
And, of course, the Committee on National Security Systems has
issued its Policy No. 11, and I will defer to my colleagues for 
additional comments on that.
    As to whether that policy should be extended, I believe 
that more data is needed from the CNSS policy experience before 
extension is considered or recommended for unclassified 
systems. One of the criticisms often leveled against NIAP is that
evaluations take too long and cost too much. We hear this from 
the small business community. Of course, one would expect to 
hear that of any evaluation process that is not free and 
instantaneous. However, these products do involve millions of 
lines of code. But given resolve, flexibility, resources and 
research, significant progress can be made.
    For example, the research community should look at new ways 
to develop enhanced security testing. We need new methods. The 
current process we have is too expensive and involves too much 
human subjectivity. We need to invest more in doing such 
research, because the sooner we do, the sooner we will have 
benefits from the results. We need to look outward at system-
level composability issues and enterprise architecture issues, 
and we need to look inward to some of the security issues that 
are present with things like protocols. You have to look across 
the entire spectrum.
    In summary, the Common Criteria provide a common means to 
develop security specifications and to conduct security 
evaluations. However, more can be done to streamline this
process through research and standards development, resources 
permitting. We must also keep in mind that technology alone 
will not achieve security, although we are focused on 
technology today.
    Thank you for the opportunity to testify today.
    Mr. Putnam. Thank you very much.
    [The prepared statement of Mr. Roback follows:]

    [GRAPHICS T2771.004-T2771.014 OMITTED]

    Mr. Putnam. Our second witness is Michael Fleming. Mr. 
Fleming currently leads the National Security Agency group 
responsible for development and customer implementation support 
of a broad set of IA solutions. Prior to this assignment, he 
held positions as Deputy Chief of the Network Security Group, 
Chief of the Network Security Systems Engineering Office, and 
Chief of the Network Security Products Office, and served a 
special technology transfer assignment with the NSA Deputy 
Director for Plans and Policy. Early in his NSA career Mr. 
Fleming served in a variety of technical and program management 
assignments in communications security and signals 
intelligence.
    He is a recipient of the NSA Meritorious Civilian Service 
Award and twice received the Presidential Rank Award for 
Meritorious Service.
    It is a pleasure to have you, and you're recognized for 5 
minutes.

 STATEMENT OF MICHAEL G. FLEMING, CHIEF, INFORMATION ASSURANCE 
SOLUTIONS, INFORMATION ASSURANCE DIRECTORATE, NATIONAL SECURITY 
                             AGENCY

    Mr. Fleming. Thank you for your interest in cybersecurity, 
information security, or information assurance. We have three 
words that describe this very important endeavor. I would like 
to provide a quick overview of the Common Criteria and the NIAP 
current status, some of the potential for even greater 
applications, and discuss some of the issues that you have 
already raised.
    As you stated, the Common Criteria establish a language, 
which is very important--a syntax. The criteria are, in fact, a 
language, a dictionary for describing user needs and vendor 
claims. They also establish the methodology for comparing how 
well those claims meet needs.
    I think it's important to note that the criteria do not 
apply to information technology products that make no 
information assurance claims. The criteria employ two distinct 
but related sets of requirements: functional requirements, 
which describe the mechanisms, and assurance requirements, 
which, as Mr. Roback described, concern gaining confidence that 
those mechanisms work correctly. There are seven assurance 
levels, one being the lowest and least rigorous, seven being 
the highest and most rigorous.
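    [For reference, the seven Evaluation Assurance Levels carry 
standard names in the Common Criteria; the enum below is only 
an illustrative summary of that ordering, not part of any 
evaluation scheme:]

        # Common Criteria Evaluation Assurance Levels, lowest to
        # highest. Rigor, evidence demands, and cost rise with the
        # level, as the witnesses note.
        from enum import IntEnum

        class EAL(IntEnum):
            EAL1 = 1  # functionally tested
            EAL2 = 2  # structurally tested
            EAL3 = 3  # methodically tested and checked
            EAL4 = 4  # methodically designed, tested, and reviewed
            EAL5 = 5  # semiformally designed and tested
            EAL6 = 6  # semiformally verified design and tested
            EAL7 = 7  # formally verified design and tested

        assert EAL.EAL7 > EAL.EAL1  # higher level, more rigor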
    In 1997, we entered a partnership with NIST called NIAP to 
promote demand for and investment in security products and to 
establish a commercial security evaluation capability. To 
support that demand, the Committee on National Security Systems 
in January issued NSTISSP 11, which stipulated that the 
acquisition of commercial IA products and IA-enabled products 
would be limited to those evaluated under formal schemes such 
as NIAP.
    In terms of demand, we have defined 21 protection profiles, 
and 31 more are in development. These profiles address key 
technologies such as operating systems, firewalls, and 
intrusion detection systems, among others. And the demand trend 
there is encouraging.
    As profiles are introduced for a technology, the number of 
evaluation claims is increasing. For example, all the operating 
systems in evaluation or that have been evaluated are 
compliant. All public key infrastructure products are 
compliant. And about half of the firewall and intrusion 
detection systems are either claiming compliance or are 
compliant with a protection profile.
    As for the second goal of NIAP, establishing the labs: 
eight labs are accredited and have completed 38 evaluations, 
with an additional 55 underway and more being negotiated 
continuously.
In terms of expanding the use across a broader spectrum of 
environments than just the Department of Defense, the 
requirements for information assurance in the national security 
market are almost identical to those in other mission-critical 
government or commercial systems. Common Criteria can be 
leveraged to converge these markets. The larger market would 
result in greater return on investment for the vendors, and 
everyone in the buying sector would benefit from that leverage.
    Regarding limitations, you addressed cost and timeliness. 
Evaluation timeliness and cost are actually a function of a 
number of factors: product complexity, the assurance level 
aspired to, and the vendor's preparedness to undergo an 
evaluation. And any problems found during evaluation typically 
must be fixed before the evaluation is completed. All of those 
can add time and cost. But a vendor can capitalize on an 
initial evaluation investment by reusing parts of it in 
subsequent evaluations of subsequent releases.
    While the criteria make every attempt to identify and 
correct security vulnerabilities, there is no assurance that 
these products are bulletproof, especially at the lower 
assurance levels. Vulnerabilities can be introduced in a number 
of ways, from poor design to inappropriate operation. Source 
code evaluation is not required until you get to the higher 
assurance levels, which many vendors don't aspire to. And 
vulnerabilities in an IA-enabled product introduced by 
unevaluated nonsecurity functionality may go undetected. 
Mechanisms complementary to the Common Criteria are needed to 
increase our ability to find and eliminate malicious code in 
large software applications.
    In conclusion, information systems require assurance that 
they were specified and designed properly, that they were 
independently evaluated against a prescribed set of security 
standards, and that they will maintain proper operation during 
their lifetime, even in the face of malicious attacks and human 
error. The Common Criteria and NIAP are working. The trends are 
up, and process improvements continue. A converged market for 
security products would benefit all potential IA buying 
sectors.
    The Common Criteria and NIAP are not a panacea for all 
security issues in all information technology. We need
complementary activities. Security needs to be baked into 
information systems starting with specification. It cannot just 
be evaluated in at the end nor sprinkled in after a system is 
fielded. And I think this is an important point in terms of 
improving the overall process. This is all about making sure 
that a security product is, in fact, secure and doing its job.
    It has certainly been my pleasure to discuss the Common 
Criteria and share the work of the NIAP with the subcommittee, 
and I thank you for the opportunity.
    Mr. Putnam. Thank you very much, Mr. Fleming.
    [The prepared statement of Mr. Fleming follows:]

    [GRAPHICS T2771.015-T2771.034 OMITTED]

    Mr. Putnam. Our third witness for this first panel is 
Robert Gorrie. Mr. Gorrie is a National Security Agency 
integree serving as the Deputy Director of the Defensewide 
Information Assurance Program [DIAP] Office, in the Office of 
the Assistant Secretary of Defense for Networks and Information 
Integration. Prior to his retirement from the Army after a 26-
year career as a signal officer, he was Chief of the 
Information Assurance Division on the Joint Staff and Deputy 
Chief of NSA's Information Security Customer Support Office. 
Following his retirement, he was employed by Titan Systems 
Corp. as vice president of operations in its managed IT 
security services group. He is a graduate of Gannon College and 
Penn State University in Pennsylvania as well as both the Naval 
and Air War Colleges.
    Welcome to the subcommittee. You're recognized.

  STATEMENT OF ROBERT G. GORRIE, DEPUTY DIRECTOR, DEFENSEWIDE 
 INFORMATION ASSURANCE PROGRAM OFFICE, OFFICE OF THE ASSISTANT 
SECRETARY OF DEFENSE FOR NETWORKS AND INFORMATION INTEGRATION, 
               AND DOD CHIEF INFORMATION OFFICER

    Mr. Gorrie. Thank you, sir, and I am honored to be here and 
pleased to have the opportunity to speak with your committee 
about some of the efforts DOD has initiated with respect to the 
evaluation of information assurance and information assurance-
enabled products.
    As demonstrated in recent operations, U.S. forces have been 
extremely successful in the battlefield. They have been able to 
translate IT into combat power. However, as our dependence on 
IT increases, it creates new vulnerabilities as adversaries 
develop new ways of attacking and disrupting U.S. forces.
    No one technology, operation, or person is capable of 
protecting the Department's vast networks. In October of last 
year, the Department published its capstone information 
assurance policy. The policy establishes responsibilities and 
prescribes procedures for applying integrated, layered 
protection for DOD information systems and networks.
    The DOD's IA strategies and policies are central to the 
committee's Common Criteria question. As I stated, no one 
single person, technology or operation can assure DOD's vast 
global networks. The Common Criteria, the NIAP evaluation 
program, the national and DOD policy addressing IA evaluations 
and the evaluated products themselves are part of an integrated 
DOD IA strategy.
    Even with the solid defense-in-depth strategy in place, we 
must be confident in the security and trustworthiness of the 
products we use to implement that strategy. New vulnerabilities 
in the equipment we use are identified daily. Through the 
Department's IA Vulnerability Alert [IAVA] process, users are
made aware of the vulnerabilities and associated fixes. The 
IAVA process serves us well, minimizing the effects of recent 
cyber incidents on DOD networks. The IAVA process has also 
highlighted the alarming rise in the number of vulnerabilities, 
the risks they represent and the cost of associated 
remediation. Although we continue to improve the efficiency and 
effectiveness of the IAVA process, unless we can take proactive 
measures to reduce the number of vulnerabilities in our systems 
and networks, our ability to respond will begin to degrade.
    Although no product will ever be totally secure, we can 
incorporate security into product design and, through testing, 
gain a reasonable sense of the risk we assume when we use those 
products.
However, that requires policy, enforcement and practice. The 
Committee on National Security Systems' National Information 
Assurance Acquisition Policy directs that the acquisition of 
all COTS IA and IA-enabled products to be used in national 
security systems be limited to those which have been evaluated. 
Our DOD policy goes further than that, requiring the evaluation 
of all IA and IA-enabled products.
    While vendors are primarily driven by product cost, 
functionality and time to market, security has also become a 
significant consideration. Recently the largest vendors have 
pledged to make security a priority. The decisions of those 
vendors are based on thorough business case analyses. None can 
afford the continuing cost of the ``penetrate and patch'' race 
to deal with latent vulnerabilities in software packages. The 
economic cost of that approach is enormous and does not result 
in a higher level of security.
Sound software engineering practices like those tested in a 
NIAP evaluation are an essential element in the elimination of 
vulnerabilities and critical to the reduction of postdeployment 
patching.
    Still there remains the cost of evaluation and time of 
evaluation. Both are functions of the complexity of a product, 
the level of evaluation, the quality of a vendor's product and 
the vendor's preparation for evaluation. Product complexity 
and the evaluation level are directly proportional to the 
amount of testing required, and the amount of testing is directly
proportional to the time and cost. A quality product may not 
require repeat testing. However, products that do get into a 
test, fix and test cycle incur additional cost not only for 
testing, but also for product modification.
    Some vendors, especially small vendors, are concerned about 
the cost and time of evaluation regardless of the product's 
complexity. During the development of DOD policy, we met with 
small businesses individually and in multivendor forums, and, 
based on their input, we developed policy that attempts to 
remedy some of their concerns.
    The evaluation process does what it was designed to do. It 
provides standardized evaluation reports that help us make 
informed risk management decisions with respect to the
security of our networks and systems. Expectations of evaluated 
products should not exceed what the evaluations are designed to 
provide. The type of testing that uncovers vulnerability can be 
done by the NIAP laboratories and will be done if required. The 
depth of evaluation depends on how much time and how much money 
we are willing to pay, as well as how much risk we are willing 
to accept. Evaluations do not guarantee security. The security 
comes from sound systems engineering, the combination of 
technologies, operations and people.
    The President's recent National Strategy to Secure 
Cyberspace requires a comprehensive review of NIAP to examine 
its effectiveness and expansion potential. We are conducting 
that review in collaboration with the Department of Homeland 
Security. DOD is also investigating the issue of software 
assurance with respect to all software, not just IA and IA-
enabled products, again working with the Department of Homeland 
Security.
    The challenges we face are the same challenges found 
throughout government and industry, challenges we are 
addressing in our IA strategic plan. DOD is making progress 
managing the risks successfully across all of our national 
security and defense missions. That success is documented in 
our FISMA reports as well as our annual IA report to Congress. 
Most importantly, however, it's reflected in our ability to act 
as an enabler and not as an impediment in the conduct of 
networkcentric operations in several theaters across the globe.
    I appreciate the opportunity to appear before your 
committee and look forward to your continuing support on this 
very critical issue. Thank you very much.
    [The prepared statement of Mr. Gorrie follows:]

    [GRAPHICS T2771.035-T2771.043 OMITTED]

    Mr. Putnam. Thank you very much. And we are delighted to 
have been joined by the ranking member of the subcommittee, the 
gentleman from Missouri Mr. Clay, and the distinguished 
gentlelady from California Ms. Watson. And at this time I will 
recognize the ranking member for his opening statement.
    Mr. Clay. Thank you, Mr. Chairman, and especially for 
calling this hearing.
    I'd like to reiterate two points that I made at last week's 
hearing. First, the government should use its power in the 
computer software marketplace to acquire safer software. 
Second, software vendors should be more aware of the security 
configuration of the software they produce. Let me briefly 
elaborate on these two points.
    The Federal Government spends billions each year on 
computer hardware and software. Those purchases have a strong 
influence on what gets produced and sold to the public. The 
Federal Government can use its market power to change the 
quality of software produced by only buying software that meets 
security standards. The result will be an increase in the 
security of all software and better protection for the public. 
This is a simple formula. The government doesn't have to 
regulate software manufacturers, it only has to use its 
position in the marketplace.
    Mark Forman, the former Federal CIO and regular witness 
before this subcommittee, incorporated an idea similar to this 
when he developed the Smart-Buy program. Mr. Forman realized 
that Federal agencies were buying the same software over and 
over again. Each agency was paying a different price for the 
same software, and the Federal Government was getting little or 
no leverage out of its position in the marketplace. No business 
would operate like that.
    I believe we should build on Mr. Forman's idea to buy not 
cheaper software, but better software. I hope the new CIO, 
Karen Evans, will work with the subcommittee to incorporate 
this concept into the Smart-Buy program. We don't have to wait 
for computer companies to develop new security procedures. 
There are some steps that can be taken very quickly to improve 
computer security. We saw this earlier this year when Microsoft 
began shipping software that was configured differently.
    The story Microsoft tells is that the company realized that 
it was shipping software with all the gates opened. A good 
computer manager systematically went through the software, 
closing gate after gate. Those with less training left the 
gates open, and the hackers walked in.
    Shipping software with secure configurations should be a 
first priority of all computer companies.
    I look forward to the testimony today of these witnesses, 
and I hope that our witnesses will consider my suggestions and 
provide the committee with their comments on it.
    Thank you, Mr. Chairman, for yielding.
    Mr. Putnam. Thank you very much.
    [The prepared statement of Hon. Wm. Lacy Clay follows:]

    [GRAPHICS T2771.044-T2771.045 OMITTED]

    Mr. Putnam. At this time we will recognize the gentlelady 
from California Ms. Watson for her remarks.
    Ms. Watson. Thank you so much, Mr. Chairman. I do 
appreciate this opportunity.
    Over the last decade or so, the Internet has become such a 
force in our society that it is difficult to identify critical 
networks in our Nation that are not connected to it. 
Electricity, traffic, water, freight--all these systems rely on 
the Internet to function. This reliance on the Internet has 
yielded tremendous gains in efficiency, yet we are constantly 
reminded of the vulnerabilities inherent in such reliance.
    Most recently, the Blaster and the SoBig viruses posed 
major challenges to the integrity of America's infrastructure. 
Thankfully, none of the cyber attacks known to us have resulted 
in cataclysmic damage to the United States, or to our people, 
or to our infrastructure, at least not yet. We have had many 
close calls. And in the wake of September 11, many analysts 
familiar with global terrorism blame America's leaders for 
missing the signs that we were vulnerable to conventional 
terrorism. If we in Congress do not wake up to the clear 
warning signs of our vulnerability, we would be committing just 
as grave a mistake.
    My experience in Micronesia, in my embassy, was that we 
were getting daily warnings by cable from the State Department 
of a virus that ran through our most sensitive computers in the 
embassy. That's a very scary notion when you depend on the 
Internet 24/7 to communicate.
    And so this hearing is very, very valuable to the basic 
security of our country, and I really would like to be here to 
hear every bit of the comments that are being made by the panel 
with such expertise. But we have a hearing on terrorism, and I 
do hope our enemies around the globe are not able to master the 
Internet to the extent that they know more than we do and can 
get our country's secrets.
    So thank you so much, panelists, in bringing your expertise 
to us. Thank you, Mr. Chairman. And I'm going to run down to 
that classified hearing.
    Mr. Putnam. You just throw us off like a bad habit.
    Ms. Watson. I want to find out what those real secrets are.
    Mr. Putnam. We will begin with the questions for the first 
panel.
    Mr. Gorrie, could you explain why DOD decided to adopt the 
Common Criteria requirement for all DOD procurement? What led 
to that decision?
    Mr. Gorrie. The original NSTISSP 11 requirement was only 
for national security systems, and if you look at the term 
``national security systems,'' that's a legislative construct 
that was brought into being in, I believe, the Nunn-Warner 
amendment to the Brooks Act. The Brooks Act established that 
all ADPE, automatic data processing equipment, would be bought 
through GSA. The Department of Defense found that GSA wasn't 
really responsive to that. This was in 1986. And the Warner 
amendment, as it was known, changed that to reflect the term 
``national security systems.'' So it said--and I have it here 
somewhere, and I'll get it for you later--that all systems that 
handled classified information, did intelligence, did 
cryptologic work, or were weapons systems were national 
security systems, with the exception of what it termed support 
systems: personnel systems, logistics systems and things of 
that nature.
    If you went out and talked to any commander around the 
globe and asked whether their personnel system or their 
logistics system was a critical part of their warfighting 
capability, they would undoubtedly all say yes. That is why we 
in DOD said you need security not only in your national 
security systems but in all the other systems that we use, 
because they all touch one another, and a security flaw in one 
could spill over to the others.
    Mr. Putnam. How would you evaluate your experience thus far 
in terms of the weaknesses of it, the strengths of it, lessons 
we can derive as we contemplate its usage beyond DOD?
    Mr. Gorrie. First, the weaknesses. A lot of it has to do 
with our interaction with our vendors. Some of the vendors--and 
even some of the users in DOD who have to follow the rule--are 
not familiar with exactly what the rule entails. Some of the 
criticism from small businesses that they can't realize a 
return on investment is born of ignorance of what the policy 
says, because the policy provides that, when contracting with 
the government to sell a particular product, the only thing 
they need to do is stipulate in the contract that they will 
have the product evaluated--not that they have to evaluate the 
product before the contract is established. That gives them the 
opportunity, if selected, to realize a return on that 
investment, because they can include the evaluation cost in the 
cost to the government.
    The number of systems which are being evaluated, although 
adequate right now, needs to be much, much higher, and the 
types of systems that are being evaluated need to be expanded. 
Those are our problems.
    As for benefits: as I said in my testimony, the ability to 
know what a product will do is one of the biggest benefits we can
have. You can get the glossy brochure from a vendor that says 
this is the best thing since sliced bread, but until you put it 
to the test, you don't know what that product will do. And an 
independent evaluation such as that provided by the NIAP is 
invaluable not only because you know what it will do, but when 
you certify and accredit a particular system to be able to be 
connected to our networks, you have to make a risk decision 
whether or not that system is safe enough. If you know exactly 
what those products are doing, then you can craft other things 
around that particular product to circumvent any shortcomings 
it may have, things like an operational procedure or some kind 
of policy control or other things. So in that particular sense, 
the reports that we get out of NIAP are invaluable in order to 
make our systems safe.
    Mr. Putnam. Thank you.
    Mr. Fleming, when you developed NSTISSP 11, the requirement 
that national security systems purchase software certified 
under the Common Criteria, what consideration was given to its 
impact on small business?
    Mr. Fleming. NSTISSP 11, first of all, comes out of the 
national security community, which comprises some 21 or 22 
Federal departments and agencies--the national security slice 
across those agencies, where in DOD it might go deeper than it 
would in some of the other agencies. It also requires an 
evaluation of all information assurance products.
    The NIAP process is only one of the schemes. NSA does 
evaluations for high-grade cryptography. So NSTISSP 11 applies 
more broadly than just NIAP.
    As far as small businesses are concerned, the cost of 
evaluation, as I mentioned in the testimony, varies 
considerably depending on the assurance level. And when NSTISSP 
11 was originally issued, it did not specify that all products 
had to be evaluated in the beginning. It put a date in there of 
July 2002. It came out in 1999. There was a period in there 
where it was something to be considered. And the idea there was 
to allow companies to get used to both the process and the 
profiles that were coming out. So the mandate did not start 
until, in fact, almost 2 years later in 2002. So the idea was 
to allow companies to grow toward what this was.
    The second thing was it didn't specify any particular 
evaluation level. The thinking at the beginning was that any 
evaluation level is better than none. And so the cost is, in fact,
considerably lower at the lower levels than it would be at the 
higher levels because of the demand for generating evidence. So 
the idea was to ramp this process up to allow companies to grow 
with it and, over time, ever increase the assurance level in 
these products.
    So that's how we wanted to consider, in fact, all vendors, 
but in particular the small companies.
    Mr. Putnam. In the beginning of your answer, you mentioned 
that this was to cut across national security systems. Do the 
Justice Department and Homeland Security and State also utilize 
the Common Criteria?
    Mr. Fleming. Yes. NSTISSP 11 includes all those agencies, 
and the opportunity to use the Common Criteria and the NIAP 
process is there for any buyer. But, yes, NSTISSP 11 covers 
those kinds of agencies for their national security systems.
    Mr. Putnam. Mr. Roback, as all of us are aware, and we have 
held hearings related to this, the Blaster worm exploited a 
flaw in Microsoft's operating systems to infect thousands of 
computers. Since that system was certified, why wasn't the flaw 
found? What is the weakness in the evaluation that does not get 
at code flaws?
    Mr. Roback. I think you have to look at the range of 
possibilities that the NIAP testing program offers. At the low 
end, where you are looking at things like documentation of how 
the product was developed, you are not getting into the very 
detailed code review that you get at the very, very high levels 
of assurance. So it sort of depends on what level you want to 
pick for your evaluation, which is the flexibility of it. A 
vendor can bring in a product and target any one of the seven
levels or create their own. So unless they target something at 
the very high level, which, by the way, costs a lot more and 
takes a lot longer, you are not going to get that level of 
review. And even if you do, it's subject to human subjectivity 
in the review.
    So you may not get--because we don't have very specific 
standards for this, and you probably couldn't at that level for 
millions of lines of code--standards against which you can do 
very quick, very exact testing. So there's some art in here, 
too.
    Mr. Putnam. Thank you very much.
    Mr. Clay.
    Mr. Clay. To all of the witnesses, I would just like to 
hear from you or hear your comments on the proposal to add 
secure configurations as another dimension of the Common
Criteria. Is this feasible, and how long would it take? We'll 
start with you, Mr. Roback.
    Mr. Roback. Actually under the Cybersecurity R&D Act that 
was passed by the Congress late last fall, they assigned to 
NIST the task of developing security configurations for 
specific IT products. And so we are holding a workshop later in 
September trying to invite the vendors in, and other Federal 
agencies and NSA and others have already developed some of 
these checklists for some very specific products. So some of 
these do exist.
    Actually I think it would be a very good thing, because if 
you look at a spectrum, first you want to have very strong 
standards. Then you want to have some testing program that 
tells you whether the standard was correctly implemented. And 
third, you want to have those configurations so that when a 
system administrator gets one of those products, they know 
where to set the settings. Even if products are shipped from 
the vendor with security turned on--which is not always the 
case, but sometimes it is--the settings are not necessarily 
right for the environment they're being put into.
    Configuration guidance is a very good thing. It's also 
important to remember there's a range of potential 
environments; that is, the security you would have for a home 
user might be very different from the security at NSA or the 
security of a large, centrally managed enterprise. So you have 
to keep that in mind, too, there's a range there, because 
there's a range of risks in the type of information that's 
being exposed.
    So it does get complicated, but I think checklists of that 
sort are very useful.
    Mr. Fleming. I would agree with everything Mr. Roback said. 
Security is a life cycle endeavor. It just doesn't stop when 
the product is certified and goes out into the field. Every day 
you've got to watch these products, particularly security 
products, which sometimes sit in the way of system performance. 
It is so often tempting to tweak that firewall a little bit to 
allow greater bandwidth, but you may, in fact, have left open a 
door you don't want open.
    So I would add to Mr. Roback's points the human dimensions 
of this. It boils down to how well trained is that system 
administrator or that security administrator; how well do they 
understand the multitude of configurations that these products 
can, in fact, take, and which ones are the good ones and the 
bad ones. So there's a dimension in this of awareness and 
training of individuals along with the ideas that Mr. Roback 
put forth in terms of having configuration guides. And we have 
been a very, very strong partner in the generation of these 
configuration guides for major IT systems, but there are many 
other technologies that need a similar kind of guide for a 
well-meaning, but sometimes difficult job called system 
administration, security administration.
    Mr. Clay. Mr. Gorrie, anything to add?
    Mr. Gorrie. Well, I can attest that it does work. In DOD we 
have been using security technical implementation guides 
[STIGs] for our products--for our operating systems and other 
things--for a long time. We have a process known as gold disk, 
where we will go out and put particular security settings on 
operating systems.
    STIGs work, and how tight they are set depends, again, as 
Mr. Roback stated, on what environment you want to use them in. 
If you are
using them for an inventory system in a gym, no sense in 
tightening it all down because you want to be more open and 
share those sorts of things. If you have a critical system that 
you need protected, it needs to be ratcheted down. And all the 
people who participate in that network have to have it 
ratcheted down to the same degree.
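    [A configuration checklist of the kind the witnesses 
describe reduces to comparing a system's actual settings 
against a hardened baseline. The sketch below is hypothetical; 
the setting names are invented for illustration and are not 
drawn from any real STIG:]

        # Compare actual settings against a hardened baseline and
        # report deviations, checklist style. All names are invented.
        BASELINE = {
            "telnet_enabled": False,
            "min_password_length": 12,
            "audit_logging": True,
        }

        def audit_config(actual):
            """Return (setting, expected, found) for each deviation."""
            return [(name, want, actual.get(name))
                    for name, want in BASELINE.items()
                    if actual.get(name) != want]

        system = {"telnet_enabled": True, "min_password_length": 12,
                  "audit_logging": True}
        for name, want, found in audit_config(system):
            print(f"finding: {name}: expected {want}, found {found}")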
    Mr. Clay. In your opinion, would it be possible to certify 
software configurations separate from the Common Criteria 
evaluation?
    Mr. Gorrie. I don't know. I would have to defer on that.
    Mr. Clay. Anybody?
    Mr. Roback. Well, I am not sure if certification is the 
precise word, but I think that there are indeed--you can 
separate the two. Whether a product has been tested to know 
whether the security features work correctly is a separate 
question from where to turn on and turn off the security 
features.
    However, if you haven't gone through certification, the 
testing process, you are not going to have a great deal of 
assurance that even if you turn something on, the security is 
working. And the example I like to give is you go to a Web 
site, and you get the little lock in the corner on your 
browser. Well, why do you have any confidence whatsoever that 
it is doing anything other than showing you a little picture of 
a lock? If it hasn't gone through testing, you really don't 
know, other than it makes a nice little picture in the corner.
    So that is why testing is so important in addition to 
turning on the security.
    Mr. Clay. For all of the witnesses, again, I would like 
each of you to comment on my proposal that the Federal 
Government use its market power to improve the level of 
security for our purchases. Do you believe this is feasible?
    Mr. Fleming. I will start. During my testimony I used the 
phrase ``converged market,'' and I think it is along the lines 
of your reference back to the Smart Buy Program that Mr. 
Foreman put in place.
    The idea of a converged market would be to find that level 
of security goodness, that assurance level and that set of 
security mechanisms that a large buying sector could agree to, 
the DOD, the national security community, the other Federal 
agencies, the critical infrastructure marketplace that Ms. 
Watson referred to, such that a vendor would see a return on 
investment good enough for them to shoot for that level.
    And so this idea of getting a common level that all would 
buy into would, I think, be a good incentive for vendors. Make 
it appropriate so it is not a bridge too far, and then 
standardize on that level and let vendors shoot for that level 
so you can get this economy of scale.
    Mr. Clay. Thank you.
    Mr. Gorrie, one question from our other Member who had to 
leave. She says: It is good to hear that you understand the 
business costs of the reactive plug-and-patch approach, but how 
widely do you feel this view has been accepted throughout 
the technology industry? What can we do to spread this message 
and change the approach?
    Mr. Gorrie. I think if you will ask the panel members that 
follow us, those are the words that were given to me by them. I 
mean, they were the ones who told me those things. I didn't 
make that up.
    How can we spread it? I think it will spread itself. 
Vendors whose products are well developed and have fewer 
problems as far as having to go out and patch them and things 
of that nature will be bought more. People will see the benefit 
of buying them, not being able to be hacked, not having to go 
in and reengineer their systems every time a patch comes out.
    Those who have products which are constantly being patched 
will find their position in the marketplace becoming lower. It 
is a self-regulating system, and it will become more so in the 
future as more and more patches have to be made to accommodate 
shortcomings in software.
    Mr. Clay. OK. I thank the panel.
    Thank you, Mr. Chairman.
    Mr. Putnam. Thank you, Mr. Clay.
    Mr. Fleming, under Common Criteria evaluation, the product 
is tested by itself. Obviously it will be used in conjunction 
with a variety of other products. Is that taken into 
consideration at all? And how is that issue resolved in terms 
of the impacts or the problems that can occur with the 
connectivity?
    Mr. Fleming. Good. First of all, the product is typically 
tested in an environment. In order to make the test meaningful, 
there is an assumed environment. However, that may not 
represent all possible environments for the application in the 
real world.
    So there is--and these are my terms. There is this ``little 
c'' certification, which is certifying that the product is 
doing what its claim is. Then there is the application of that 
product, along with other products, into a larger system. So 
there needs to be another certification, which is at what I 
will call the ``big C,'' at the system level. There are 
processes in place in the DOD and across the Federal 
communities that go by somewhat different names, and they are 
kind of complicated names like DITSCAP, the Defense 
Information Technology Security Certification and 
Accreditation Process, but that is a 
system-level look. So I see the Common Criteria and any other 
evaluation program at the product level generating evidence 
about the performance of the various components. Then there 
needs to be a separate look at the much larger level, for what 
the total system security certification is.
    Now, I will state that the calculus for that is somewhat 
difficult, because it is not just saying product A has this 
level of goodness, product B has this level of goodness. You 
just don't add A and B and get C. In fact, it is a much more 
complex relationship when you start bringing many products 
together. But, nevertheless, there are processes in place in 
the DOD and beyond the DOD to bring this larger certification 
into play toward the ultimate accreditation of that system to 
operate in a real environment.
    Mr. Roback. If I can just add to that, that question of 
when you understand the property of one component and then the 
property of another, and you put them together, that is what 
the researchers call the composability issue, trying to 
understand in a rigorous way what you have when you put them 
together and add them up.
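    [The composability issue can be illustrated with a classic 
case: two steps that are each reasonable in isolation compose 
into a time-of-check-to-time-of-use (TOCTOU) race. This is a 
deliberately vulnerable C sketch with an illustrative file 
path, shown only to make the point; it is not anyone's actual 
system.]

      /* TOCTOU: the check and the use are individually fine, but
       * their composition opens a race an attacker can win. */
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          const char *path = "/tmp/report.txt";   /* illustrative */

          if (access(path, R_OK) == 0) {          /* check: looks safe */
              /* ...but between the check above and the use below, an
               * attacker can replace the file with a symlink to a
               * sensitive target, so the open no longer refers to the
               * file that was checked. */
              FILE *f = fopen(path, "r");         /* use */
              if (f) fclose(f);
          }
          return 0;
      }
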
    If I could just add to Mr. Gorrie's comment earlier about 
the software quality and patching, I think one of the problems 
we face is the whole developmental cycle in the industry of 
software products. And you have to really look at how software 
products are developed in a rigorous sense of specifications 
and so forth.
    If you really want to improve the overall security and get 
away from this problem of continually chasing our own tail and 
trying to patch, you have to address that development cycle. 
We run a Web site where we put up vulnerabilities in 
commercial products so people can learn about them and go find 
fixes for them. Right now we have over 6,000 vulnerabilities 
up there.
    And, you know, the tolerance of the marketplace for these 
products that come out with flaws is just astounding. We really 
need to look at the overall quality issue of the products; not 
just the security, but the overall quality. Do they do what 
they are intended to do? It is a challenge.
    Mr. Putnam. Mr. Gorrie, several testimonies mentioned a 
waiver process for the Common Criteria. Under what 
circumstances would a waiver be granted?
    Mr. Gorrie. With the way that we have constructed policy 
within the DOD, I would find it very rare that you would have a 
request for a waiver. The only one that I could think of would 
be a situation where we were going to war and we needed a 
particular product because its security features were just so 
obviously great that we could not afford not to have it in our 
system. But even then, the way that we have policy built, it 
says that you need not have that piece of equipment evaluated 
before you contract to purchase it; you need only have it in 
the contract that it will be evaluated as a condition of 
purchase.
    However, the vendor always has the option to say, you may 
need this, this may be the best thing since sliced bread, but 
we are not going to have it evaluated. If we needed that 
product that badly, then the user of that product, or the 
person who wanted to put that product into their system, would 
have to petition for a waiver, and then we would either have 
it evaluated internally within DOD through some process, or 
just use it because it was so important to use. But in any 
regular process, because of the way policy is written, I do 
not see the need for waivers.
    Mr. Putnam. My final question for this panel will be this, 
and we will begin with Mr. Roback: Should the Common Criteria 
certification be extended to cover the entire Federal 
Government?
    Mr. Roback. That is a good question, one we are often 
asked. Let me just start by mentioning that it is policy for 
the nonnational security side of government that cryptographic 
products have to go through testing, and there are no waivers 
allowed for that under FISMA.
    I don't think the question is necessarily should we adopt 
that policy, yes or no. There is actually quite a range of 
options between doing nothing and adopting that policy, and 
even things beyond that policy. So you might ask yourself: 
Well, maybe it doesn't make sense to say something can be 
certified against any specification that is brought forward; 
maybe what we need to do is say that once we have good 
specifications for a specific technology, then if an agency is 
buying that technology, they should be buying something that 
has been evaluated against those specs.
    So not just that you can bring in--I think someone in their 
testimony talked about a product that paints the screen blue, 
and it can go through and get a certification. Well, I don't 
know if those products are going to do us any good. So I think 
there is some range of options we have here, and we really need 
to look at those, rather than just say: that is the policy for 
national security, so we should simply adopt it.
    I think we need to learn more from the experience as well. 
Is it really pushing the vendors toward more security or not?
    Mr. Putnam. Mr. Fleming.
    Mr. Fleming. We are putting our trust in networks, in 
things called security products. They have become sort of a 
foundation piece, a trust anchor, if you like. And so it would 
seem to me we should take extraordinary measures, not 
necessarily expensive, but take extra measures to ensure that, 
in fact, that trust is well founded.
    So having some rigor in how we look at security products 
is, I think, important. Independent evaluation is an important 
piece of that rigor. That is something different than the 
vendor claims. So where does one get this independent look? 
What is the most efficient way to get that independent look 
that in and of itself can be trusted by people who use these 
systems?
    So whether it is a Common Criteria-based system that we 
use, whether it is some derivative that may be the result of an 
evolution of the process, I believe that we need to put honest 
faith in our security products through some independent 
specification and evaluation process. It is too important just to 
sort of leave to the normal process.
    Mr. Putnam. Mr. Gorrie.
    Mr. Gorrie. There are two parts to this, as Mr. Fleming 
said. There is the independent evaluation portion of it, the 
Underwriters Laboratory, if you will. Should that be extended 
to the rest of the Federal Government? The Department of 
Homeland Security thinks it should be. That is why they want 
us to work with them on a review of the NIAP process, to see 
what the extensibility of the process is to the rest of the 
Federal Government.
    Is it extensible to the rest of the civil population? No 
one forces a consumer to buy a lamp that has the Underwriters 
Laboratory stamp on the cord, and perhaps no one should be 
forced to buy a piece of IT security equipment with a NIAP 
certificate associated with it.
    There is the evaluation program itself, and then there is 
the regulatory and policy piece that goes along with it. And 
although I think the evaluation portion of it can go forward, 
because knowledge is power, the--how you instantiate that in 
regulation and in national policy is a different matter 
altogether.
    Mr. Putnam. Thank you very much. And I want to thank all of 
our witnesses on this first panel and encourage you to stay and 
listen to the second panel, if your time and schedule allow.
    With that, the committee will stand in recess for 2 minutes 
while the first panel is dismissed and the second panel is 
seated.
    [Recess.]
    Mr. Putnam. The committee will reconvene. Before we swear 
in the second panel, I did want to announce publicly that the 
executive session on SCADA, which was scheduled for tomorrow by 
the subcommittee, has been postponed thanks to Hurricane 
Isabel.
    And with that, I would ask panel two to please rise and 
raise your right hands for swearing in.
    [Witnesses sworn.]
    Mr. Putnam. Note for the record that all of the witnesses 
responded in the affirmative, and we will move immediately to 
their testimony. Again, I would ask that you limit your remarks 
to 5 minutes, and your entire written statement will be 
submitted for the record.
    Our first witness is David Thompson. Mr. Thompson directs 
the CygnaCom Security Evaluation Laboratory. He has led a team 
to support certification for the Air Force Scope Command's High 
Frequency voice and data communications system, and managed 
Public Key Infrastructure products at several Department of 
Energy National Labs. He led a team to write a Common Criteria 
security target for Red Hat Linux 5.1, and helped translate 
high-assurance criteria into Common Criteria protection 
profiles.
    Previously Mr. Thompson evaluated the security of network 
and computing configurations for the space station and space 
shuttle, and assessed proposed uses of cryptography and 
distributed authentication at NASA. He was session chairman for 
the 1993 AIS Security Technology for Space Operations 
conference, and served on a board investigating a software 
configuration management failure in a space shuttle mission.
    Welcome to the subcommittee. You are recognized, Mr. 
Thompson.

 STATEMENT OF J. DAVID THOMPSON, DIRECTOR, SECURITY EVALUATION 
                 LABORATORY, CYGNACOM SOLUTIONS

    Mr. Thompson. Thank you. I would like to thank the 
committee Chair and all of its members for their interest in 
this issue and their leadership.
    The motivation for product testing that led to the creation 
of the Common Criteria came from the U.S. Government's 
certification and accreditation process for systems. Most 
systems included at least one computer with an operating 
system whose security functionality needed to be identified 
and assessed. 
Since operating systems are quite complex and have many key 
security functions, considerable effort is required to do an 
appropriate security assessment.
    As computers became commodities, the notion of performing 
these difficult evaluations once and using the results in 
repeated C&As took hold. In the early 1990's, as the 
expense of having products evaluated to different security 
criteria in different countries increased, Western governments 
began to seek a set of Common Criteria that they could endorse. 
We are still in the early stages of implementing the resulting 
Common Criteria. But the original government participants are 
still actively engaged, and additional governments are getting 
involved.
    Industry also began to see the value of a common security 
evaluation process. The CC defines seven sets of security 
assurances called evaluation assurance levels. EAL1 has the 
least assurance, and EAL7 the most. The most commonly used 
assurance levels are EAL2 and EAL4. EAL2 is an acceptable 
assurance level for most products, and EAL4 is often specified 
for products that are employed in the first line of defense, 
such as firewalls and operating systems.
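    [For reference, the seven levels and the standard short 
names given to them in Part 3 of the Common Criteria, written 
out here as a C enumeration for compactness:]

      #include <stdio.h>

      /* The seven CC evaluation assurance levels. */
      enum eal_level {
          EAL1 = 1, /* functionally tested                         */
          EAL2,     /* structurally tested                         */
          EAL3,     /* methodically tested and checked             */
          EAL4,     /* methodically designed, tested, and reviewed */
          EAL5,     /* semiformally designed and tested            */
          EAL6,     /* semiformally verified design and tested     */
          EAL7      /* formally verified design and tested         */
      };

      int main(void) {
          printf("most commonly used levels: EAL%d and EAL%d\n",
                 EAL2, EAL4);
          return 0;
      }
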
    Custom sets of CC assurances can also be chosen when one of 
the seven EALs is not precisely suitable. The result of a 
successful CC evaluation is a published security target that 
precisely documents the security functions that the product 
claims to meet, and establishes in precise terms that these 
claims are true. The security target can then be used to 
determine the product's suitability for a particular use and 
to compare its security functionality with that of other 
products.
    It is practically impossible to determine that a product of 
any complexity will be secure regardless of its configuration 
or that security will mean the same thing in all the situations 
in which the product is used. What CC testing does show is that 
to the specified level of assurance, the security functions the 
vendor claims the product has work as described, and that a 
coherent and mutually supportive set of security functions is 
available.
    Just because a product has been successfully evaluated 
under the Common Criteria does not mean that it has no 
vulnerabilities. Instead, it shows that the product is suitable 
for use as a component of a secure system. The evaluation is 
primarily focused on design and development process issues, 
although higher CC assurance levels, such as EAL7, can also 
significantly reduce the possibility of bugs.
    The Common Criteria evaluation process has several 
strengths. It provides 
consumers with an independent and well-monitored assessment of 
vendor security claims. It provides a precise description of a 
product's security features that is readily comparable to those 
of other evaluated products. It assesses the ability of a 
product to be used to build secure systems. It demonstrates 
that at least one configuration of a product meets the claimed 
security requirements. It allows precise tailoring of the 
security criteria to the capabilities of products. It uncovers 
design flaws and sometimes software bugs. It focuses vendors on 
security issues. It constitutes the most rigorous and thorough 
independent product testing process commercially available. It 
provides international mutual recognition so that vendors have 
to pursue only one evaluation against a single set of criteria.
    The Common Criteria evaluation process also has some 
drawbacks. It creates additional expense for product vendors. 
CC evaluation is applied to an exact version of a product in a 
precise hardware environment, making it sometimes hard to field 
a product that is strictly conformant. As consumer protection 
profiles evolve and as vendor products are revised, the 
products must be reevaluated. The evaluation process is 
complex and time-consuming, which means it requires a lot of 
vendor resources, understanding, and participation. Some of 
these demands conflict: for example, establishing the security 
of a product across a broad range of its configurations and 
many versions is more difficult and would further increase the 
expense.
    While large vendors are more easily able to absorb the cost 
of an evaluation than smaller vendors, small vendors benefit 
more from an independent product assessment that makes it 
easier for customers to compare their products' security 
features to those of their better-known competitors.
    CC evaluation offers a broad range of assurances and a 
correspondingly broad range of costs. EAL1 evaluation costs 
are in the low tens of thousands of dollars. EAL1 is adequate 
for many applications. Higher assurance has higher cost and is 
appropriate where the security risks are higher. The problem of 
eliminating security bugs from complex systems, such as those 
we read about regularly in the news, requires resources many 
orders of magnitude greater than those required for CC 
evaluations. There is little theory to support solutions to 
this problem, and it remains an art form.
    The most productive approaches to bug elimination involve 
improved software engineering practices to prevent the 
introduction of bugs in the first place. Finding and fixing 
bugs in existing products is always more expensive.
    The CC product evaluation process is a very effective tool 
when used in the right context. The international support and 
precise specification of security attributes minimizes the 
problems inherent in integrating diverse systems and 
components built in different countries into secure systems 
whose security attributes are well understood.
    The Common Criteria evaluation, however, does not serve 
every purpose. The fact that attempts are made to apply it to 
situations for which it was not designed shows how great the 
need is for other kinds of security testing, and how great the 
challenges are facing the available security evaluation 
services.
    Thank you for this opportunity to address you today.
    Mr. Putnam. Thank you very much, Mr. Thompson. We are glad 
to have you.
    [The prepared statement of Mr. Thompson follows:]

    [GRAPHIC] [TIFF OMITTED] T2771.046
    
    [GRAPHIC] [TIFF OMITTED] T2771.047
    
    [GRAPHIC] [TIFF OMITTED] T2771.048
    
    [GRAPHIC] [TIFF OMITTED] T2771.049
    
    Mr. Putnam. Our next witness is Mary Ann Davidson. Ms. 
Davidson is the chief security officer at Oracle Corp., where 
she has been for the last 14 years. As Oracle's CSO, she is 
responsible for Oracle product security, corporate 
infrastructure security and security policies, as well as 
security evaluations, assessments and incident handling.
    Ms. Davidson also represents Oracle on the Board of 
Directors of the Information Technology Information Sharing 
and Analysis Center, and is on the editorial review board of the 
Secure Business Quarterly.
    Prior to joining Oracle in 1988, Ms. Davidson served as a 
commissioned officer in the U.S. Navy Civil Engineer Corps, during 
which time she was awarded the Navy Achievement Medal. She has 
a BSME from the University of Virginia, and an MBA from Wharton 
at the University of Pennsylvania.
    We always appreciate your interaction with this 
subcommittee and your direct and candid remarks. Welcome. You 
are recognized.

STATEMENT OF MARY ANN DAVIDSON, CHIEF SECURITY OFFICER, SERVER 
                  TECHNOLOGY PLATFORMS, ORACLE

    Ms. Davidson. Mr. Chairman, Ranking Member Clay, on behalf 
of Oracle, I appreciate the opportunity to be here today to 
offer Oracle's perspective on the Common Criteria.
    Oracle is uniquely qualified to comment on information 
assurance policies. We have spent more than 25 years building 
information management systems for customers that I 
affectionately call the professional paranoids, which include 
U.S. intelligence agencies and the Defense Department.
    To gain and maintain the business of the most security-
conscious customers on the planet, we have made extraordinary 
investment in information assurance and have 17 independent 
security evaluations to show for it, with 4 more in process.
    The collective impact of Code Red, Blaster, and SoBig on 
our economy, which amounts to billions of dollars in repairs 
and down time, has sent a sobering message: It is long past 
time for the entire Federal Government to get serious about 
information assurance. The benefits go beyond secure Federal 
information systems. A strong Federal information assurance 
policy has the potential to change the entire software 
industry for the better. Let me tell you, there is no vendor 
who, when faced with this requirement, will build two versions 
of software: one that is strong, robust, and well-engineered 
for the government, and a buggy, crummy version for the 
commercial sector.
    Fortunately, some Federal agencies are listening. NSTISSP 
11 and DOD Directive 8500.1 draw a constructive, clear 
prosecurity line in the sand. The question is not whether 
NSTISSP 11 makes sense or not. We have had that debate, and it 
is over.
    The NSTISSP 11, with the linkage to the Common Criteria, 
the de facto worldwide evaluation standard, is making a 
positive, constructive difference in software development. The 
Common Criteria has three key benefits for vendors who do 
evaluations. First, you have more secure products. Evaluators 
find security vulnerabilities, which must be remedied prior to 
receiving a certificate. There has been a lot of discussion about 
the cost of evaluations, but I have done the analysis, and if 
we find or prevent even one significant security flaw in our 
products going through the evaluation, it more than pays for 
the cost of the evaluation, even at the highest assurance 
levels that are viable for commercial software--to say nothing 
of the expense to customers that we prevent by getting it 
right the first time.
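    [The break-even arithmetic behind this claim can be 
sketched as follows. Both dollar figures are illustrative 
assumptions only, loosely anchored to Mr. Thompson's ``low 
tens of thousands of dollars'' for the cheapest evaluations 
(higher levels cost more) and to Ms. Davidson's later estimate 
that fixing one shipped flaw across every version and platform 
can run about $1 million.]

      #include <stdio.h>

      int main(void) {
          /* illustrative assumptions, not actual vendor figures */
          double evaluation_cost      =  500000.0; /* assumed cost of a
                                                      high-assurance
                                                      evaluation */
          double cost_of_shipped_flaw = 1000000.0; /* assumed cost to fix
                                                      one flaw after
                                                      release */
          int flaws_prevented = 1;                 /* even a single flaw
                                                      caught up front */

          double net = flaws_prevented * cost_of_shipped_flaw
                       - evaluation_cost;
          printf("net saving from evaluation: $%.0f\n", net);
          return 0;
      }
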
    Second, a more secure development process. Evaluations 
actually force you to maintain security throughout the entire 
development process. Security can't be something that is 
thrown on in the last 2 weeks of a cycle; it has to be baked 
into the development process. That is what the 
evaluators are looking at.
    Third, and probably most important, a strong culture of 
security. If you do evaluations as part of development, then 
security becomes baked into your corporate culture. That is the 
biggest problem that we have in the industry: Security always 
seems to be someone else's job. At Oracle I tell our 
developers, you are personally responsible and accountable for 
every single line of code you write.
    Since NSTISSP 11 has gone into effect, we have seen very 
positive developments. More firms are doing evaluations. Firms, 
including Oracle, are sponsoring open-source evaluations. Many 
other industries are looking at certification efforts along the 
same lines as the Common Criteria. This has been successful 
because industry believes the Federal Government is serious 
this time, and that is a major victory. And thanks go to the 
people within DOD, the intelligence community, and Congress who are 
making an effort to make the process work.
    So what can we do to make this process work better? You can 
hold the line by maintaining an eternal but pragmatic vigilance 
through the no-waivers policy. I said a year ago that it was 
time for the government to chirp or get off the twig on 
information assurance, and there has been a lot of chirping 
going on.
    But there are still those who want to get off the twig by 
getting a waiver or seeking opt-outs. It sends a bad message to 
the marketplace to say that NSTISSP 11 does not apply to us. It 
really needs to apply to intelligence across the board.
    NSTISSP 11 should be extended beyond traditional national 
security systems, and specifically, I think DHS should look at 
applying this to their own systems. Clearly they have a mission 
of national security.
    NSTISSP 11 shouldn't be allowed to--protection profiles 
should not be agency-specific wish lists. I think vendors are 
willing to do an evaluation against a common protection 
profile, but they are not willing to do three of them, one for 
each separate agency.
    Country independence of laboratories should be maintained. 
We do our evaluations in the United Kingdom, because the cost 
is lower and the expertise is actually far higher than we have 
found in the United States for our particular product set. We 
still get resistance to foreign evaluations, and this is 
ridiculous. We are very happy to support U.S. labs as a 
competitive alternative, but competence knows no national 
boundaries.
    A couple of final points. There are three things that the 
government can do to foster better security beyond evaluations. 
We know that evaluation does not provide a silver bullet or perfect 
products. The Federal Government should require that products 
have a default setting that is secure out of the box. I think 
NIST can do a lot of work here. This would also provide a lot 
of immunity to a number of viruses and worms, because more 
systems would be locked down by default. It would lower the 
cost of operations for the government and other customers.
    The government should invest in cybersecurity research. 
Quite honestly, the reason vendors cannot find more faults in 
their products during development is that the tools do not exist 
to do so, and the venture capital community will not fund it 
because there is no way to make money on it. If we can stomp 
out smallpox through investing in medical research, we can 
certainly get rid of buffer overflows. It is just code.
    Finally, industry can do more to improve the security 
profession. I fully support an alternative to Common Criteria 
evaluations for consumer products, where it is perhaps 
inappropriate to do a Common Criteria evaluation; an example 
would be the Underwriters Lab. Most consumer products are just 
designed to be secure. Cuisinarts are designed, for example, 
so that you can't lose your fingers by sticking them in while 
the blades are whirring. They are just secure. Consumers don't 
have to do something special to make them operate securely.
    NSTISSP 11, DOD 8500.1, and the national strategy are 
welcome developments because they are moving the debate to the 
expectation that everything will be secure. I believe we have 
turned a corner, but it took 10 years and numerous sobering 
events to get us here. It will take continued vigilance and 
continued leadership here in Congress to keep us on this road.
    Thank you again, Mr. Chairman, for the opportunity to 
testify today.
    Mr. Putnam. Thank you very much, Ms. Davidson.
    [The prepared statement of Ms. Davidson follows:]

    [GRAPHIC] [TIFF OMITTED] T2771.050
    
    [GRAPHIC] [TIFF OMITTED] T2771.051
    
    [GRAPHIC] [TIFF OMITTED] T2771.052
    
    [GRAPHIC] [TIFF OMITTED] T2771.053
    
    [GRAPHIC] [TIFF OMITTED] T2771.054
    
    [GRAPHIC] [TIFF OMITTED] T2771.055
    
    [GRAPHIC] [TIFF OMITTED] T2771.056
    
    Mr. Putnam. Mr. Clay, if I heard her correctly, she said 
that she tells her developers that they are personally 
responsible for every line of code that they write. It is a 
good thing nobody holds us to that standard on the U.S. Code.
    Our next witness is Mr. Klaus. Christopher W. Klaus is the 
founder and chief technology officer of Internet Security 
Systems, Inc., a leading global provider of information 
protection solutions that secure IT infrastructure and defend 
key online assets from attack and misuse. Prior to founding 
Internet Security Systems, Mr. Klaus developed the Internet 
Scanner, the first vulnerability scanner, while attending the 
Georgia Institute of Technology.
    Mr. Klaus was honored in MIT's magazine, Technology Review. 
In addition, he received the award for Ernst & Young's 
Entrepreneur of the Year in 1999, in the category of Internet 
products and services.
    Welcome to the subcommittee. You are recognized.

 STATEMENT OF CHRISTOPHER W. KLAUS, CHIEF TECHNOLOGY OFFICER, 
                INTERNET SECURITY SYSTEMS, INC.

    Mr. Klaus. Thank you, Mr. Chairman. It is an honor to 
testify today. I am representing Internet Security Systems, 
offering the point of view of a small company that builds 
security products. We are in the process of going through 
Common Criteria and NIAP certification, and I want to share 
some of our experiences as a company going through it, and 
some of the benefits and failures of the process that we see 
today.
    We do believe the overall goal and intent of the Common 
Criteria and NIAP certification is positive, but we see at 
least three areas where major improvement needs to happen. If 
they do not get addressed, we believe that following this path 
of requiring the government to follow the guidelines of NIAP 
certification actually makes the government less secure. I 
will go through these three areas and talk about why they make 
both the government and others that follow the certification 
less secure.
    No. 1 would be the accuracy. The current different levels 
of evaluation do not reflect whether the security product is 
actually more accurate in protecting against vulnerabilities 
and exposures. To take a step back, let me explain two goals 
within security, so you understand what we are measuring.
    There are two major goals in security. One is to allow 
good people into the network, into an operating system, or 
into an application. What we typically think of for letting 
the good guys in is technology like your user name and 
password that allows you to get into the system. There are 
also biometrics, fingerprinting, and VPNs, virtual private 
networks. All of those technologies are great for helping 
certify that the right people get into the system.
    One of the problems, though, is that this assumes the 
infrastructure keeps the bad guys out. So the second goal of 
security is keeping the bad guys out. The problem we find is 
that the assumption that the infrastructure keeps the bad guys 
out is false. We know there are bugs in the code every day. 
These bugs lead to vulnerabilities that intruders, worms, and 
viruses can then leverage.
    So the second goal, keeping the bad guys out, is a major 
area of measurement. As a company that produces security 
products, we track this very closely: we are tracking over 200 
vulnerabilities every 3 months, every quarter. What is 
interesting is that, as we go through this process, products 
that are less accurate in finding those vulnerabilities have 
the same certification as products that are much more 
accurate. If you liken it to antivirus software, which most 
people are familiar with, and only 10 percent of the viruses 
were found with one product while 99 percent were found with 
another product, today they would be measured as equal in 
terms of the certification level. That is one of the major 
reasons why government agencies that believe they are getting 
a more robust product, just because they are purchasing a 
product certified at a higher level, may actually end up with 
a less robust and less accurate product.
    The next major area is speed. The current evaluation 
process is extremely slow and bureaucratic. It can take over a 
year to become certified. By the time it does become certified, 
it is outdated and behind the latest version of protection. 
The commercial sector can apply the latest version while the 
government lags behind in security. In the race against 
cybercrime threats, all organizations need to apply the most 
current, up-to-date security protection products.
    I would just add that there are over 40 IDS, or intrusion 
detection system, companies in the process today, but only two 
of them 
have actually been certified. So we have a long way to go 
before all of them have gone through this process.
    Security products, though, are at a much different stage 
in the technology industry: we are rapidly evolving the 
technology to keep up with the cyberthreats. A lot of older 
technologies, like operating systems, data bases, and Web 
servers, have been around longer, are more mature, and have a 
much longer development cycle. So in many cases, the larger 
the application and the longer the deployment cycle, the more 
likely you can keep pace with the certification. In security 
circles, the technology moves at a much faster rate.
    And finally, the cost part of this is that the current 
evaluation process is extremely burdensome and costly for 
security vendors to follow. And after following the process, 
the expense for us has not resulted in any security 
improvement. It has not found any buffer overflows. It has not 
found anything that many of the hackers and worms and viruses 
take advantage of. If the resources and capital we are 
spending on this were doing something to make our products a 
lot better, more protected, more robust, and more accurate, we 
would be in favor of it.
    So those three things, accuracy, speed and cost, are 
critical to improving, to make this thing worthwhile.
    Mr. Putnam. Thank you very much.
    [The prepared statement of Mr. Klaus follows:]

    [GRAPHIC] [TIFF OMITTED] T2771.057
    
    [GRAPHIC] [TIFF OMITTED] T2771.058
    
    [GRAPHIC] [TIFF OMITTED] T2771.059
    
    Mr. Putnam. Our next witness, our last witness on this 
panel, is Eugene Spafford. Dr. Spafford currently serves as the 
director of the Center for Education and Research in 
Information Assurance and Security at Purdue University, a 
position he has held for 5 years.
    He has written and spoken extensively on the topic of 
information security. His research focuses on the prevention, 
detection, and remediation of information security failures and 
misuse, including fault tolerance, software testing and 
debugging, and security policies.
    He holds a Ph.D. in information computer science from the 
Georgia Institute of Technology.
    We are delighted to have this level of expertise on the 
panel. And you are recognized for 5 minutes.

STATEMENT OF EUGENE H. SPAFFORD, PROFESSOR AND DIRECTOR, CENTER 
    FOR EDUCATION AND RESEARCH IN INFORMATION ASSURANCE AND 
                  SECURITY, PURDUE UNIVERSITY

    Mr. Spafford. Thank you, Mr. Chairman. And thank you also, 
Ranking Member Clay and members of the committee.
    The question posed to us for this hearing was can the 
Common Criteria ensure security for the Federal Government? And 
my answer to that is very definitely not. It will not.
    And that is not to say that the Common Criteria is not a 
valuable instrument. The many thousands of man-years of effort 
by experts around the world putting it together has resulted in 
a procedure and set of documents that have great value as 
guidance for those building systems and as a means to compare 
systems' level of quality. However, it does not 
actually address the problem of ensuring that the government 
systems or any systems that possess the certification are 
themselves secure. It is in some sense, if I may use the 
analogy, similar to wanting to be sure that your house will not 
burn down and believing that the Underwriters Laboratory seal 
on the cord of your toaster will ensure that. It is not the 
case. What it does do is it gives you a small added measure 
that one item involved is less likely to cause you damage, but 
it is certainly no guarantee for the whole enterprise.
    We can see that with an example that has been cited by many 
others. If we look at the Windows 2000 operating system, it is 
certified at the highest level currently available under the 
Common Criteria, and yet it was a target. It was vulnerable to 
the Blaster worm, the Nachi worm, dozens if not 
hundreds of current viruses, and has had nearly 100 patches 
issued for it--those are security patches, not functionality 
patches--since it was released. And that is something that is 
certified at the highest level.
    There are other examples. I have given a detailed list in 
my written testimony as to why I do not believe that the Common 
Criteria is going to give us the level of security that we 
want. And in the very limited time available here, what I am 
going to do is give a different approach to this, and I am 
going to do it by analogy.
    Let's take that toaster example that I was talking about. 
Suppose that you were the vendor of that toaster, and you 
wanted to compete on the market and decided that an evaluation 
was something that would give you a competitive advantage. So 
you submit it to a consumer testing lab. However, when you 
submit it to the testing lab, you submit it without a cord, and 
you tell the testing agency that you want to submit it as a 
bread storage device.
    Well, the agency is required to test it against the 
requirements that you gave them. So they will test it as a 
bread storage device. They will go against the checklist for 
all the devices in the kitchen, and they will discover there 
are no radioactive materials or explosives embedded in it, and 
that, in fact, it does meet all of the documentation that you 
provided, it was built by the engineers in the appropriate way, 
and it does store slices of bread. So they give you their 
highest level of certification.
    You then turn around and put it in the marketplace with a 
cord and with a consumer option to include a speaker phone, 
because while you are making toast, you want to call your 
neighbors and tell them to come over and have some of the 
toast. The problem there is that about every tenth piece of 
toast that you put in burns and possibly starts a fire. What is 
more, the speaker phone is defective and calls up your 
neighbors and starts fires in their toasters. And because parts 
of the toaster were built overseas in a country that was 
unfriendly to the United States, if you use the toaster in any 
government kitchen, it simply doesn't work. On top of that, the 
manual is badly written. Customers who buy it don't really 
understand how to use it. They attempt to toast jello, they use 
it in the bathtub. And when all of the various disasters occur, 
the fires and deaths, they find that the disclaimer that 
shipped with the toaster says that the vendor has no 
responsibility for anything that the user may do with the 
toaster, and therefore they have no legal recourse, and there 
is no penalty that comes back on the vendor.
    However, the toaster does very well in the marketplace 
because it is cheaper than the other toasters that aren't 
certified and happen to work without fault. Those who go out 
and buy in large quantity, using the lowest bid, proceed to 
make you the market leader.
    Certification does not guarantee that what you have is 
safe. It says that it meets the standards for the 
certification. It also does not tell you that it is going to be 
used safely or in an environment where it is appropriate to use 
it. That is why the Common Criteria is not appropriate.
    Quality needs to be built in from the very beginning. As 
has already been noted by others, we don't know how to do that 
well, because this is an area that has been underfunded. It is 
an area where we need more research. We need more resources put 
into the agencies that are involved in this, particularly NIST.
    This is a problem that we are going to continue to face for 
many years because we have such a large base of legacy code 
that is already in place and is too expensive to replace with 
something, even if we developed it tomorrow, that was much 
better.
    Thank you for--the committee for listening to us on this 
today, and I stand ready to answer questions.
    [The prepared statement of Mr. Spafford follows:]

    [GRAPHIC] [TIFF OMITTED] T2771.060
    
    [GRAPHIC] [TIFF OMITTED] T2771.061
    
    [GRAPHIC] [TIFF OMITTED] T2771.062
    
    [GRAPHIC] [TIFF OMITTED] T2771.063
    
    [GRAPHIC] [TIFF OMITTED] T2771.064
    
    [GRAPHIC] [TIFF OMITTED] T2771.065
    
    [GRAPHIC] [TIFF OMITTED] T2771.066
    
    [GRAPHIC] [TIFF OMITTED] T2771.067
    
    [GRAPHIC] [TIFF OMITTED] T2771.068
    
    [GRAPHIC] [TIFF OMITTED] T2771.069
    
    [GRAPHIC] [TIFF OMITTED] T2771.070
    
    [GRAPHIC] [TIFF OMITTED] T2771.071
    
    [GRAPHIC] [TIFF OMITTED] T2771.072
    
    Mr. Putnam. Sounds like this could be a fun panel. Mr. Clay 
will lead off this round of questions.
    Mr. Clay. Thank you, Mr. Chairman. And I will start with 
Mr. Spafford. You said your research center has a data base of 
computer vulnerabilities. In your search of that data base, how 
many of the software products identified were certified under 
the Common Criteria?
    Mr. Spafford. Sir, I didn't do a search specifically on 
that criterion, but from the numbers that I know from looking at 
similar searches, I would suspect that we have several hundred 
that apply to certified products. As I noted, there are over 
100 for Windows. A few of the other products, the firewalls and 
intrusion detection systems, the Oracle data base system has a 
few as well. We know that there are several hundred 
vulnerabilities for the bulk of certified products.
    Mr. Clay. You also point out that software certified at the 
highest level of the Common Criteria is subject to the same 
worms and viruses as software that has not been certified. The 
certification process is already a long and costly process. 
What would have to be changed in the Common Criteria to address 
these problems, and what would that do to the cost and time for 
certification?
    Mr. Spafford. Sir, the certification process is against the 
documentation that is provided by the vendors and against a set 
of specifications that have been put out by the groups that 
have set the Common Criteria, and those do not include issues 
such as resistance to malicious software.
    There are architectural issues to the way that the code is 
actually written that would need to be changed in the products 
fundamentally. So, for instance, taking macros out of word 
processing and spreadsheets, preventing e-mail programs from 
automatically executing attachments are ways to stop viruses, 
but they are not the only ways to stop those kinds of software 
problems.
    And those are not issues that are tested under the Common 
Criteria. Those are architectural features that are actually 
part of the product and the reason it is sold as it is.
    Mr. Clay. Mr. Klaus, you indicate that you do not believe 
that Common Criteria evaluation improves security. How would 
you propose the government determine in the procurement process 
that the software it buys is secure?
    Mr. Klaus. In talking about the security products--say, 
for example, the firewalls and the intrusion detection 
systems--today there is no certification that I am aware of 
that says: take Gene Spafford's data base of thousands of 
vulnerabilities and exploits and different ways hackers get in 
today, and evaluate whether a firewall or IDS system stops all 
of these vulnerabilities and all of these attacks.
    So today there is no benchmark or rating system among the 
security products to say which one is most robust against 
these types of attacks; and these are the known attacks, much 
less the unknown attacks that are continually evolving.
    We should at least measure how well security vendors are 
keeping up with the current pace of vulnerabilities, because 
there are a lot: over 200 vulnerabilities every quarter, as I 
said, that we are tracking; and we need to keep measuring the 
quality of the security vendors' products.
    It is a little bit counterintuitive, but if we look at 
some of the commercial certification companies out there, they 
have been able to hit the goals. When you look at the 
companies that certify antivirus products, the requirement is 
to catch 99.9 percent of all viruses, and the vendors have 
been able to hit it. The testers are testing what is out in 
the wild, the latest things that are happening, so they can 
quickly measure, at the end of the product cycle: did the 
security company keep pace? Did they hit the end result of 
what they said they would do for protecting against those 
threats?
    And then the onus of having a very robust security product 
and process is still left on the security vendor. And from a 
speed and cost perspective, because it only tests at the very 
end--did this thing catch all of the hacker exploits, all of 
the worm exploits, all of the different ways that systems get 
compromised--it is a much more 
lightweight process. You can accomplish it in a month. It 
doesn't take a year to go through that, and therefore the 
turnaround is much faster.
    Rather than trying to be completely overcomprehensive in 
your evaluation of every detail and aspect of the security 
design and architecture, I think that responsibility probably 
needs to be left to the security vendors themselves, because 
otherwise their products become part of Gene Spafford's data 
base of vulnerable systems.
    But, on the flip side, the most important thing in terms of 
protecting the government: Can they stop these risks? I think 
shrinking it down to a much more focused process would help 
drive lower costs, faster speed and a much more accurate 
measurement of whether a product is more secure or less secure 
for the government.
    Mr. Clay. Ms. Davidson, did you have something to add?
    Ms. Davidson. I did. Actually I had a couple of responses 
to that, one of them a personal anecdote. My company did look 
at deploying one of the hottest new security products in a 
particular sector, which is supposed to defend against certain 
classes of application vulnerabilities. This is something, a 
specialty firewall, that you would put out to protect yourself 
against various types of attacks. It claimed to be one of the 
market-leading products. My hacking team broke it in less than 
an hour using an attack the product was supposed to 
prevent.
    It is important that security products, because they are 
the early warning system, have some type of independent 
assessment of security worthiness.
    I am also aware that people's intrusion detection systems 
failed when Slammer was going around because of the 
composability of the systems. They were running back-end 
components which themselves were not secure. I would certainly 
be open to 
flexible ways of validating the security worthiness of security 
products, but it is not all about feature function. It does one 
no good to protect, allegedly protect, against certain classes 
of attacks only to find that the security system itself is 
badly flawed; in at least two cases that has been our 
experience.
    Mr. Clay. Thank you.
    Mr. Thompson, did you want to add something?
    Mr. Thompson. I just wanted to point out that the security 
evaluation process is designed to verify what a vendor claims, 
and there is a very publicly available statement of what the 
vendor claims. For example, if the toaster manufacturer has it 
evaluated as a bread storage device, it would be evaluated as 
a bread storage device. And if the government wanted to buy 
toasters, and they wrote a protection profile that specified 
toasters, the bread storage device probably wouldn't meet the 
protection profile for toasters. Somebody could write a 
security target for a bread storage device, and it would be 
classified as a bread storage device.
    The CC process avoids this confusion by allowing products 
to be 
compared by specifying their security criteria in a semiformal 
language that is easily comparable.
    Mr. Clay. Well, the security issue sounds more like a 
moving target, you know, as people come up every day with new 
viruses, new worms, new ways to penetrate computers.
    Mr. Thompson. Well, that is sort of a----
    Mr. Clay. Can we win? Can we win the battle of securing 
these computers?
    Mr. Thompson. That is a different question. Finding 
vulnerabilities and patching them is a very difficult, very 
expensive process, and I don't think that is the way we are 
going to win in the end. There are certainly things we can 
do--finding and applying patches is something we should do as 
long as we have these vulnerabilities--but the kind of 
software development that the Common Criteria encourages uses 
sound engineering principles and design life cycle processes.
    The higher assurance levels, like EAL6 and EAL7, encourage 
those kinds of things. In other words, if you are going to 
evaluate at EAL7, you have to document the development 
process. You have to formally prove that the product meets its 
security policies and things like that, and those engineering 
principles have to be applied to the development process. We 
think that is, in the long run, the only way you are going to 
get secure products.
    Mr. Clay. Mr. Klaus.
    Mr. Klaus. Within the Common Criteria, one of the issues 
that the previous panel pointed out repeatedly was that 
finding the vulnerabilities is still an art form, and more R&D 
money is needed for automating the tools.
    But at the end of the day, what we are finding is that it 
really requires subject matter experts who understand how to 
find buffer overflows, how to find heap overflows, and how to 
find many of the other techniques that hackers use to break 
into systems.
    And what we find is that a lot of the approved testing 
labs don't have the expertise to find these kinds of 
vulnerabilities; and from that perspective, we are not 
measuring whether the systems have those types of 
vulnerabilities. I think we should build a system that 
measures ``can the security products find and identify attacks 
and stop them''--that is what Mary Ann Davidson was pointing 
out that security products need to get better at and be 
evaluated on: can they identify the attacks?
    Just as important as identifying the attacks--for 
enterprises, almost more important--is making sure that we are 
also not generating false positives. This is where we see 
legitimate traffic and falsely identify it as, ``oh, here's an 
attack.'' The minute you do that, you start cutting off real 
business transactions and so on, and then your security 
product is no longer trusted; or you turn off that 
functionality within the security product, and you are now 
less secure.
    The other thing that needs to be tested for is evasion 
techniques. The hacker community has published lots of white 
papers on how to evade many of the security products, and many 
of the security product vendors have not responded to those 
techniques. They are still valid and still work.
    There is nothing in the certification process that 
reflects how well products identify the attacks, the false 
positives, or the evasion techniques. And, to answer another 
question, one thing I think it would be critical for security 
companies to be measured on is the effect of ``zero-day'' 
exploits.
    What I mean by zero-day is that when a worm or a virus 
comes out, the most impactful time is within the first 24 
hours, when it is spreading and nobody has protection. All of 
the security vendors are trying to respond to get the latest: 
What is that attack? OK, let's update our security products.
    There is a concept within the security industry that we 
are moving toward behavior-based security models, where I 
don't have to take a fingerprint of every virus and every 
worm. I am actually looking at the behavior of the program, so 
that if something tries to compromise a system and acts to 
propagate, format your hard drive, or change your registry and 
other things on the system, those are all bad behaviors, and 
it gets flagged; and you could stop it without ever having 
seen the virus before.
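    [A minimal sketch in C of the behavior-based idea 
described here: rather than matching a signature for each 
known worm, flag any program that exhibits a suspicious 
combination of behaviors. The behavior names and the flagging 
rule are illustrative assumptions, not a real product's 
logic.]

      #include <stdio.h>

      enum behavior {
          SELF_PROPAGATES = 1 << 0,
          FORMATS_DISK    = 1 << 1,
          EDITS_REGISTRY  = 1 << 2,
          MASS_MAILS      = 1 << 3,
      };

      /* flag anything that both spreads itself and tampers with
       * the host, without knowing which worm or virus it is */
      static int is_suspicious(unsigned observed) {
          int spreads = (observed & (SELF_PROPAGATES | MASS_MAILS)) != 0;
          int tampers = (observed & (FORMATS_DISK | EDITS_REGISTRY)) != 0;
          return spreads && tampers;
      }

      int main(void) {
          unsigned unknown_program = SELF_PROPAGATES | EDITS_REGISTRY;
          printf("flagged: %s\n",
                 is_suspicious(unknown_program) ? "yes" : "no");
          return 0;
      }
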
    And I think we should add some measurement of how well 
security products deal with zero-day threats. All you have to 
do is take an old version that hasn't been updated and test it 
against a new threat. Did it stop it? If it did, great, you 
get a point for that. If it didn't, you don't get a point, and 
you can start measuring across a lot of security products out 
there.
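    [The scoring Mr. Klaus proposes could look like the sketch 
below: replay threats that appeared after a product's last 
update and count how many it still stops. The threat names and 
pass/fail results are invented stand-ins for a real test 
corpus.]

      #include <stdio.h>

      struct threat { const char *name; int stopped; };

      int main(void) {
          /* hypothetical results of replaying post-update threats
           * against the un-updated product */
          struct threat corpus[] = {
              { "worm-A",  1 },
              { "worm-B",  0 },
              { "virus-C", 1 },
          };
          int n = sizeof corpus / sizeof corpus[0];
          int points = 0;

          for (int i = 0; i < n; i++)
              points += corpus[i].stopped;  /* one point per threat stopped */

          printf("zero-day score: %d out of %d\n", points, n);
          return 0;
      }
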
    Ms. Davidson. With all due respect, I think most of us 
believe in defense in depth and that security cannot be 
outsourced. If a vendor has a fault in their product, they 
cannot outsource the remedy for that, even to intrusion 
prevention.
    For example, the customer comes to me and says they found a 
fault in our software. I can't say to them, Do you have a 
firewall? Do you have an intrusion prevention system? Because if 
you do, I won't fix it. They will have my head.
    So I have to get it right the first time anyway. And if I 
get it wrong, it will still cost me a million dollars to fix it 
if it's on every single version of the product on every 
operating system.
    Everyone needs to write better code. In order to write 
better code, we need better tools. It's not just training, 
because developers are human; they make mistakes. One mistake 
and the hacker is in.
    Mr. Spafford. I wanted to add, we have mentioned that we 
need more research and tools. We need more personnel. This is 
an area where we have a very small pool of expertise. But one 
thing that would make a difference, I believe, is a matter of 
accountability. And the sentiment expressed by Ms. Davidson 
here has not been widespread enough within the industry, which 
is, if there is a problem in the code, then the people who 
wrote the code are held responsible.
    Currently, when the government buys systems, if they have a 
failure, then everybody rushes around and applies a patch and 
then goes on as if nothing else had happened until the next 
failure and the next patch.
    I really believe that if there's some negative feedback to 
the vendors involved, if they have a bad history of producing 
software that isn't reliable, then perhaps that should be 
figured into the next series of acquisitions. Perhaps there 
should be a penalty applied to some vendors if they 
consistently provide bad software. It's something worth 
considering because simply encouraging them by buying the next 
product cycle isn't resulting in the changes that we should be 
seeing.
    We are seeing vulnerabilities that have been known for 30 
years to be security problems and bad practice; and we are 
discovering that 50 percent of all the vulnerabilities that are 
being reported today, 2 or 3 a week, are those bad practices 
that are 30 years old and that my colleagues and I teach 
students in the very first few weeks to avoid. There should be 
some negative pressure to make vendors start paying attention 
to better practice.
    Mr. Clay. Thank the panel for their responses.
    Mr. Putnam. Could you give us an example of a 30-year-old 
vulnerability?
    Mr. Spafford. In the very introductory programming classes 
we teach, we tell the students they should check their inputs. 
For instance, if the user is asked to provide a number between 
1 and 10, the program should check that the value really is 
between 1 and 10. If the user is asked to provide a character 
string that is 10 characters long, the program should check to 
make sure it isn't given one that is 11 characters or 1,000 
characters.
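    [A minimal C sketch of the first-week lesson Dr. Spafford 
describes follows: read a bounded amount of input and check 
that the value really is between 1 and 10. The file name is 
illustrative.]

/* check_input.c -- validate input range and length (illustrative). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[64];

    /* fgets() reads at most sizeof(line)-1 bytes: a bounded read. */
    if (fgets(line, sizeof(line), stdin) == NULL)
        return 1;

    long n = strtol(line, NULL, 10);
    if (n < 1 || n > 10) {              /* check the stated range */
        fprintf(stderr, "value must be between 1 and 10\n");
        return 1;
    }
    printf("accepted: %ld\n", n);
    return 0;
}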
    When we talk about buffer overflows or when you've heard 
that mentioned by the panelists, that's a case where a program 
was expecting 20 characters and was given 2,000 and there was 
no check made to see that too many characters were provided. 
That is something that has been known for 30 years to be a 
problem. It has been exploited in many systems. We teach 
against it, and it's still occurring and being discovered at 
the rate of several a month.
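    [A minimal C sketch of the flaw itself follows: a routine 
expecting 20 characters that never checks how many it receives, 
next to the one-line check that prevents the overflow. The 
function names are illustrative.]

/* overflow_demo.c -- the 30-year-old bug and its fix (illustrative). */
#include <stdio.h>
#include <string.h>

void unsafe_copy(const char *input)
{
    char buf[20];
    strcpy(buf, input);   /* no length check: 2,000 characters would
                             overflow the 20-byte buffer */
}

void safe_copy(const char *input)
{
    char buf[20];
    if (strlen(input) >= sizeof(buf)) {   /* reject oversized input */
        fprintf(stderr, "input too long\n");
        return;
    }
    strcpy(buf, input);   /* now known to fit */
}

int main(void)
{
    safe_copy("fits fine");
    /* unsafe_copy(a_2000_character_string) would corrupt the stack */
    return 0;
}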
    Mr. Putnam. Ms. Davidson, Oracle began certifying products 
very early, in 1998. What led you to that decision, and how has 
it affected your business?
    Ms. Davidson. We initially began doing evaluations in the 
pre-Common Criteria days; we did the Orange Book and ITSEC 
evaluations. Actually, we did four of them at once, on two of 
our products. We did that because one of our core customer 
constituencies demanded it--at least we thought they demanded 
it, as I testified previously.
    They would occasionally wuss on the procurement 
requirement--that's a technical term--but we kept doing them 
anyway. We thought it was important. We found the benefits for 
us were substantial for the reasons that I previously laid out.
    I feel the cultural value--making security part of the 
corporate culture--has actually been the biggest benefit. I 
don't have discussions or arguments about whether we are going 
to hold a release because there's a security fault. Of course 
we do that. This is something we are measured on and something 
we are held accountable for.
    We certainly are not perfect. We have developers who have 
committed the sin of buffer overflows or of not checking input 
conditions, but I would consider myself successful if I could 
stamp out buffer overflows in our time. We need help to do 
this. We spend a lot of money training people.
    When I was in the Navy, there was an expression, ``To err 
is human; to forgive is not Navy policy.'' People do make 
mistakes in programming. If there are 21 conditions that have 
to be validated and our developer checks 20 of them, the hacker 
only needs to find the one they missed.
    If complexity is the enemy of security, so are manual 
processes. The more that we can automate some of these checks 
in addition to training people and holding them accountable, 
the easier it will be for people to do the right thing. And 
right now it is really hard, because you are only as good as 
every single person checking every single possible programming 
condition; and they're not perfect, and they never will be 
perfect. It's been good for us as a business. And I don't 
think the Common Criteria is a solution for all security ills, 
but people have to bake security into their development 
processes, however we get there. Just checking the air 
conditioners will also not make us secure; you need both.
    Mr. Putnam. How would you respond to the toaster metaphor? 
You know your company is committed to it, you follow through, 
you are believers in Common Criteria. Mr. Spafford and to a 
certain degree Mr. Klaus have laid out a series of arguments 
why it will not get us where we hope that it will. How do you 
respond to that?
    Ms. Davidson. I think it is a great analogy on a lot of 
levels.
    The other counter-argument is this: even if you glue all 
the pieces together, you may not get a secure house; but if you 
don't start off with a secure foundation, you certainly will 
not get a secure house.
    Yes, evaluations do not make for perfect products. But 
would you really want to plug your toaster in, even with the 
cord, and have no idea how strong the building's foundations 
were, whether the engineers had done their jobs, whether the 
building inspectors had been in? You need to do a lot of things 
to have secure software.
    I think evaluations are part of the answer because they 
will change the way people build software. It changed the way 
we build software. The second piece is better validation. Most 
security faults are not faults in the security mechanisms; they 
are the result of bad programming. If we had better automated 
checks for good programming practice, we would also be able to 
add a level of robustness.
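    [A minimal C sketch of the kind of automated programming-
practice check Ms. Davidson describes follows: scan a source 
file and flag calls to gets(), an unbounded read that can never 
be used safely. Real tools do far deeper analysis; this crude 
textual scan is illustrative only.]

/* check_gets.c -- crude automated check for calls to gets(). */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.c\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 2;
    }
    char line[512];
    int lineno = 0, findings = 0;
    while (fgets(line, sizeof(line), f)) {
        lineno++;
        char *p = strstr(line, "gets(");
        /* make sure the match is not fgets() or some longer name */
        if (p && (p == line ||
                  (!isalnum((unsigned char)p[-1]) && p[-1] != '_'))) {
            printf("%s:%d: call to gets() -- unbounded read\n",
                   argv[1], lineno);
            findings++;
        }
    }
    fclose(f);
    return findings ? 1 : 0;
}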
    And the third piece I mentioned earlier is, many vendors, 
despite our best efforts, don't deliver products that are 
secure enough out of the box. We give our customers long lists 
of things to do and to tweak to become secure. And most system 
administrators never have enough hours in the day to do that.
    You have to make it easy for people. You have to install 
your product so that ideally people don't have to do anything--
like the Cuisinart--to have it operate securely. If you do 
that, it not only lowers the people cost of operation and 
increases their security; I think you will also get resistance 
to some viruses and worms, which typically exploit things that 
are left wide open or left lying around on your system because 
a vendor shipped them and the customer didn't know how to 
secure them.
    Common Criteria is a strong necessity, but I would agree it 
is not sufficient.
    Mr. Klaus. I think, from the house perspective--I just went 
through the process of finishing a house--the certification and 
compliance process is typically at the end. You have the 
government come out and look at the house and make sure it's up 
to code and meets the criteria. At the very end, you are 
looking at whether the house met all the necessary standards, 
and the inspection looks for the critical issues: are 
sprinklers in place, and so on.
    What you don't see in the house certification process--and 
this is where I think we are failing with the Common Criteria--
is the opposite approach, where the architect, the designer of 
the house, has to sit there and document everything as it goes 
through the process. It took long enough to finish the house; 
imagine adding another year to building it just to make sure 
everything was documented. I trust my architect to make sure 
it's designed to be built strongly.
    The government shouldn't have to go in there and ask, did 
you use all the metrics to make sure it's going to stand? 
Instead, at the end, the government checks to make sure the 
most important issues are addressed and certifies the result. 
We should move the Common Criteria more toward, did the 
important issues get addressed?
    And I think you could look at it this way: a lot of these 
applications, especially business applications, are very 
complex--many lines of code, and so on. But if you actually 
identify the most common ways that hackers, worms, and viruses 
break into a system, the majority of the risk is in the network 
protocol code. You know, if you look at why did 
Blaster get into the operating system, well, it was because 
there was an RPC service running on every Windows box that had 
this vulnerability.
    You can say there's millions and millions and millions of 
lines of code within operating systems and these business 
applications, but the most important thing is to look at, what 
are the things that are exposed at the network level? That 
tremendously reduces what you have to evaluate.
    We do a lot of security penetration tests and security 
assessments, trying to figure out how a hacker would break into 
a system, and we always start at the network layer. If the 
certification process looked at a system the same way the 
hackers, worms, and viruses do--how does somebody break in?--
you'd start saying, OK, check the doors and the windows. You 
don't try to evaluate every wall and floor of the whole house. 
You evaluate the areas that hackers get into.
    And that's kind of intuitive, but we should focus on the 
bigger issues--measuring whether you have a good security 
product or not--and less on whether the overall process was 
followed, because right now the certification process is not 
helping us find the buffer overflows and other problems in the 
product at the end.
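    [A minimal C sketch of the network-first approach Mr. Klaus 
describes follows: before evaluating anything deeper, enumerate 
which TCP ports a host actually exposes. The port list is an 
illustrative assumption, drawn from services worms of that era 
probed, such as the RPC port Blaster exploited.]

/* portscan.c -- toy check of which TCP ports a host exposes (POSIX). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* A successful TCP connect() means the port is exposed at the
 * network level and belongs in the evaluation. */
static int port_is_open(const char *host, int port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons((unsigned short)port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    int open = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return open;
}

int main(void)
{
    /* illustrative list; 135 is the RPC port Blaster exploited */
    const int ports[] = {21, 22, 23, 25, 80, 135, 139, 443, 445};
    for (size_t i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
        if (port_is_open("127.0.0.1", ports[i]))
            printf("port %d is exposed at the network level\n", ports[i]);
    return 0;
}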
    Mr. Putnam. Dr. Spafford, you started this metaphor, and I 
would ask for you to talk a little bit about what the better 
alternative is. Software assurance, how do we get there if it's 
not Common Criteria?
    Mr. Spafford. I didn't mean my comments to suggest that 
Common Criteria has no value, because I believe it does. It 
provides guidance on how to go about building a quality 
product. But it's building that quality product that is the key 
to what we are talking about.
    It's not simply a matter of security. We want to have 
greater trust in our systems, but we also want them to be 
reliable in the face of failure and unexpected circumstances.
    It appears, for instance, that the blackout that occurred 
on the East Coast was the result of several unfortunate 
circumstances happening at once, without sufficient capacity 
and reserve to make up for the failure. We don't want that to 
occur with our computer systems either.
    That means going back and looking at the fundamental 
assumptions made about how we build these systems. What are the 
features that we really want? How is it being built? Is it 
being built using good tools, and by people who understand the 
technology? Are they putting in more features than are really 
necessary? I believe that is the root cause of a number of the 
problems that we see. And what about the documentation and the 
interface--are those two items put together in such a way that 
the average user is able to understand how to use the system 
and how to configure it?
    Again, I do not believe that is the case. The average user 
currently is very often someone at home who doesn't understand 
what a firewall is or a virus is or what it means to have their 
system connected all the time.
    Then we have to have better testing tools, known reasons to 
test, and known test sets to work against. We have 
to be able to test in real environments, so that if we are 
going to deploy something in a large-scale system, we have to 
have testbeds to do that; and again, we have to have the people 
trained to do that.
    And last of all, we have to have a mechanism so that when 
we need to apply the technology to new arenas, we understand 
how to go back into the process and change the technology, 
rather than simply reusing the old technology because that's 
where we have a large investment. We should be using the most 
appropriate tools for the tasks at hand.
    If we look at what has happened in computing over the last 
30 years, there have been incredible strides, from mainframes 
and small networks to where we are now, with global, 
international activity in our systems. We don't even know where 
some of our software comes from because of the international 
trade and development that goes on.
    We spent those 30 years trying to make the technology work, 
and I think we have done a really admirable job of that. So 
much of our society, so much of our dominance in the world has 
come about through our ability to create good technology. But 
now we have to change our mind-set to think about how to most 
appropriately use that technology and how to make it safe, and 
that means really taking a step forward, leaving behind some of 
the technologies of the past.
    So again, to summarize, it's not a simple step. It's going 
to be a whole number of steps throughout the life cycle of 
building and designing software. And to revisit Mr. Klaus' 
comments about the architect: the architect has been through 
many years of professional training and probably served an 
apprenticeship with a master architect to understand what he is 
doing. And if the architect designed Mr. Klaus' house, and it 
ends up getting built with no doors and collapses afterward, 
Mr. Klaus has some recourse. It's possible that architect will 
not be able to sell a design again in the future.
    We haven't done that in the software arena. We need to 
start thinking in terms of how we're going to protect our 
future. Are we going to continue to reward bad performance?
    So it's a long answer--I apologize--but it's a very 
multifaceted problem.
    Mr. Putnam. The $64,000 question: Should we expand Common 
Criteria to civilian agencies?
    Mr. Spafford. I believe that, on balance, it should not be 
mandatory. As a voluntary step, it may be good, but as a 
mandate, it will not solve the basic problems. There are 
certified products that won't work as required.
    The process is not easy to understand. The Common Criteria 
standard document is 700 pages long, and many of the people who 
are going to be buying and deploying these systems won't 
understand what the certification means, which is why I used 
the analogy of the toaster. The average consumer won't 
understand what that means.
    It can help to get some vendors to pay more attention, but 
I believe that the additional overhead, time and costs that 
were discussed by the other panelists are probably 
counterproductive to government's needs. I believe there are 
other steps that should be taken first.
    Mr. Klaus. My answer would be: not until, at least within 
the 3 years we talked about, better measurement of what you are 
trying to do with the Common Criteria is in place--meaning, is 
this product actually providing better protection against the 
known threats, and is it making progress over time? Because if 
it takes a year to get our products out the door to help the 
government, the government is going to be a year behind the 
commercial sector. And overall, I think the cost is high, so 
startups will have a hard time entering the government sector.
    But most importantly, if the cost were moving us toward 
making the products better, I'd be in favor of it. Today there 
is very little value in it.
    Mr. Putnam. Ms. Davidson.
    Ms. Davidson. If it's not too expensive and doesn't take 
too long to do.
    With all due respect to Mr. Klaus' company and their fine 
products, we have more complex products. We get certificates 
out within 6 months of the production release, and we release 
major versions of product every year to 18 months. It is cheap 
compared with the alternative.
    We are already paying for bad security. I believe that it 
should be extended, clearly, at least to entities that have a 
national security focus. And if the Department of Homeland 
Security is not doing national security, what is it that 
they're doing?
    As I mentioned earlier, there are other things we can do, 
but unless we fundamentally, as an industry, change the way 
that we build product, nothing will ever change. And this is 
the government's last chance on this. If we abandon information 
assurance efforts and go only to a testing approach, you will 
never know whether someone developed a good product.
    Testing alone, while I think it is an important add-on to 
Common Criteria evaluations, also will not solve the problem.
    Mr. Thompson. I think I agree with Gene that we have to 
encourage good software development and good software design, 
and encourage companies to develop products that are safe from 
the beginning--not just thrown on the market to let the rest of 
the world find the bugs one at a time, or let the hackers find 
them. Vendors need to produce good software, and they need to 
be held accountable when they don't.
    And the Common Criteria approach to evaluation encourages 
vendors to do that. It is not designed to find bugs in a 
particular release of a product; it is designed to encourage 
vendors to use good software engineering and to build that 
house according to good architectural principles in the 
beginning--not to find, after the fact, where the beams were 
left out or where undersized beams were used.
    Expanding the market for evaluated products would encourage 
that; it would send a signal to industry that the government is 
serious about good software engineering and development, and 
that products should be secure from the get-go. And anything 
you can do to make vendors accountable for putting out bad 
software would further the government's ability to buy good 
software. Everyone would have better software available if 
vendors were held accountable.
    Mr. Putnam. Thank you, Mr. Thompson.
    I want to thank all of our witnesses today, particularly 
the second panel, for their efforts in helping us to better 
understand this very complicated issue.
    Gaining assurance that the software the government buys to 
protect itself actually can do the job is an important goal.
    I also want to thank Mr. Clay and Ms. Watson for their 
participation. In the event that there may be additional 
questions that we did not have time for, the record shall 
remain open for 2 weeks for submitted questions and answers.
    With that, the subcommittee stands adjourned.
    [Whereupon, at 12:15 p.m., the subcommittee was adjourned.]