National Science Foundation
InfoBriefs

Institutions Increase Networking Capacity, Gap Between Doctorate- and Nondoctorate-Granting Institutions Widens

NSF 10-328 | September 2010

by Leslie Christovich[1]

Cyberinfrastructure resources at doctorate-granting institutions are substantially greater than at institutions that do not grant doctorates, according to new data from the biennial Survey of Science and Engineering Research Facilities, sponsored by the National Science Foundation (NSF).[2] This is reflected both in networking capacity, reported here as network speed or bandwidth, and in computing capacity, characterized here by the number, type, and performance of the computing systems.

Over a decade ago, scientific and engineering research methods at U.S. academic institutions began experiencing fundamental change. Significant advancements in computing, communications, and information technology enabled traditional research methods to be expanded or even replaced.[3] Some of these technological advances allow the collection and storage of enormous amounts of data, and others have led to exponential increases in the speed with which data can be analyzed. Research-performing academic institutions continue to expand these technologies, including networking and computing capacities.

Networking

Academic institutions can have multiple networking resources. In FY 2007 all institutions had direct or indirect connections to the commodity internet—the public, multiuse network often called the Internet—but the speed of these connections varied across institutions.[4] In addition, many institutions had direct or indirect connections to high-performance networks that support the development and use of advanced applications and technologies.[5] Generally, in the academic community these high-performance networks are Internet2, the National LambdaRail (NLR), and connections to federal research networks.

Total institutional bandwidth may include access both to the commodity internet and to high-performance networks. In FY 2007 total bandwidth at 65% of all research-performing academic institutions ("institutions") was faster than 100 megabits per second (Mbps) (table 1). Over 34% of institutions had total bandwidth of 1 gigabit per second (Gbps) or faster. In FY 2005, just 2 years earlier, 21% had bandwidth of at least 1 Gbps. Forty-two percent of all institutions estimated they would reach bandwidth of at least 1 Gbps in FY 2008. (The FY 2007 survey asked respondents to estimate what their capacities would be in FY 2008.)

TABLE 1. Total bandwidth to commodity internet and Internet2 at academic institutions, by type of institution: FY 2005–08.


The number of institutions with bandwidth at the very fastest speeds is also increasing. By FY 2007 the total bandwidth at 16 institutions (4% of all institutions) was faster than 10 Gbps (table 2). Six percent anticipated their institutions' bandwidth would be greater than 10 Gbps in FY 2008.

TABLE 2. Academic institutions with total bandwidth greater than 10 gigabits per second: FY 2007.


In FY 2007 a majority of institutions (70%) had access to Internet2, one of the two major high-performance networks (table 3). A growing number had access to multiple high-performance networks. Seventy-two percent of institutions anticipated access to Internet2 by FY 2008. Although a smaller number of institutions had connections to the NLR, the percentage of institutions with NLR connections more than doubled, from 10% in FY 2005 to 25% in FY 2007. Thirty-one percent estimated that their institutions would have NLR access in FY 2008.

TABLE 3. Institutions with high-performance network connections, by type of institution: FY 2005–08.


Internal network speeds also increased at academic institutions (not shown in tables). The percentage of institutions with distribution speeds of at least 1 Gbps increased by 20 percentage points between FY 2005 and FY 2007, from 54% to 74% of all institutions. Seventy-eight percent anticipated having speeds of at least 1 Gbps by FY 2008.

The amount of dark fiber owned by institutions indicates an ability to expand existing network capabilities.[6] Dark fiber may exist between campus buildings or from the campus to the institution's internet service provider. The percentage of academic institutions with dark fiber to their internet service providers increased significantly from 29% in FY 2005 to 37% in FY 2007. The percentage of institutions with dark fiber between buildings remained stable.

Networking Capacity and Institutional Characteristics

Doctorate- and Nondoctorate-Granting Institutions

Bandwidth capacity at doctorate-granting institutions was significantly greater than at nondoctorate-granting institutions and was also more concentrated at the higher speeds (1 Gbps or more). In FY 2007 the percentage of doctorate-granting institutions with bandwidth of at least 1 Gbps (39%) was almost twice that of nondoctorate-granting institutions (20%) (table 1). Over half (62%) of the nondoctorate-granting institutions had bandwidth of 100 Mbps or less, compared with 24% of doctorate-granting institutions. In FY 2007 all 16 institutions with total bandwidth faster than 10 Gbps were doctorate-granting (table 2). The difference between doctorate- and nondoctorate-granting institutions likely increased in FY 2008, when almost half of doctorate-granting institutions estimated they would have bandwidth of 1 Gbps or faster, compared with 25% of nondoctorate-granting institutions.

The percentage of doctorate-granting institutions with access to Internet2 remained stable from FY 2005 to FY 2008: 82% in FY 2005 and 83% (anticipated) in FY 2008 (table 3). The percentage with NLR access increased significantly, from 11% in FY 2005 to 32% in FY 2007, and 37% estimated having NLR access in FY 2008. The increase in NLR access was due to institutions adding multiple networking connections. Nondoctorate-granting institutions increased their access to Internet2, from 38% in FY 2005 to 46% in FY 2007. Access to NLR was expected to increase to 15% in FY 2008 for nondoctorate-granting institutions.

Public and Private Institutions

The gap in bandwidth capabilities between public and private institutions was not as great as the gap between doctorate- and nondoctorate-granting institutions, and it disappeared entirely at the very highest bandwidth speeds. In FY 2007, 37% of public institutions had bandwidth of 1 Gbps or faster, compared with 24% of private institutions (table 1). At the very fastest speeds of 10 Gbps or greater, however, the percentages were equal. Forty-eight percent of public institutions estimated they would have bandwidth of 1 Gbps or faster in FY 2008, compared with 29% of private institutions.

The percentages of both public and private institutions with access to Internet2 increased slightly from FY 2005 to FY 2007 (table 3). Private institutions anticipated another small increase for FY 2008. Both public and private institutions experienced significant increases in the percentage with NLR access from FY 2005 to FY 2007. Public institutions anticipated further significant increases in FY 2008, but private institutions did not.

High-Performance Computing

The survey requested that respondents provide information on their high-performance computing (HPC) systems, defined as computing systems operating at a speed of at least 1 teraflop. These systems could include a variety of computing architectures, such as computing clusters or symmetric multiprocessor systems.[7] Historically, academic HPC capability has been available to recipients of research grants, departments of research grant recipients, or academic departments that make heavy use of HPC, but it has not been available campus-wide. In recent years a greater number of academic institutions have been expanding the availability of HPC to their entire campus community by centralizing the ownership and management of these resources.

In FY 2007 approximately 22% of research-performing academic institutions made at least some of their high-performance computing generally available to their campus community (table 4).[8] Among institutions with HPC, clusters were the most prevalent architecture (94%), followed by symmetric multiprocessor systems (21%). Among institutions with clusters, the most common peak performance was 3 teraflops or faster, reported by 39 institutions. Almost all HPC resources were located at doctorate-granting institutions; the seven nondoctorate-granting institutions with HPC capability all reported having clusters.

TABLE 4. Centrally administered high-performance computing, by type of institution and computing architecture: FY 2007.


Colleges and universities tended to share their HPC resources with organizations outside their own institutions (table 5). In FY 2007 institutions with HPC capability were most likely to share with other colleges and universities (71%); governments, industry, and nonprofit organizations constituted most of the other users (table 6). Doctorate-granting institutions were more likely than nondoctorate-granting institutions to have external users of their HPC.

TABLE 5. Academic institutions that share their high-performance computing with other organizations, by type of organization: FY 2007.


TABLE 6. Academic institutions with external users of their centrally administered high-performance computing, by type of user: FY 2007.


Data Sources

The data presented in this report were obtained from a census of colleges and universities that expended at least $1 million in science and engineering research and development funds. Each institution's level of expenditures was obtained from the NSF FY 2006 Survey of Research and Development Expenditures at Universities and Colleges.

The full set of detailed tables from the FY 2007 Survey of Science and Engineering Research Facilities will be available at http://www.nsf.gov/statistics/facilities/. Individual detailed tables from the 2007 survey may be available in advance of publication of the full report. For further information, please contact the author. Current survey data for individual institutions are available from the Computer-Aided Science Policy Analysis and Research (WebCASPAR) database system, a Web tool for retrieval and analysis of statistical data on science and engineering resources (https://webcaspar.nsf.gov/).

Notes

[1]  Leslie Christovich, Research and Development Statistics Program, Division of Science Resources Statistics, National Science Foundation, 4201 Wilson Boulevard, Suite 965, Arlington, VA 22230 (lchristo@nsf.gov; 703-292-7782).

[2]  The Survey of Science and Engineering Research Facilities collects data from academic institutions and nonprofit biomedical research institutions (hospitals and research organizations) receiving research funds from the National Institutes of Health. Networking and computing capabilities at biomedical institutions are not reported here.

[3]  National Science Foundation. 2003. Revolutionizing Science and Engineering through Cyberinfrastructure: Report of the National Science Foundation Blue-Ribbon Advisory Panel on Cyberinfrastructure. Arlington, VA. Available at http://www.nsf.gov/od/oci/reports/atkins.pdf.

[4]  Fiscal year in this report refers to each institution's fiscal year and thus varies across institutions. For example, for some it may be 1 January to 31 December, and for others it may be 1 July to 30 June.

[5]  A high-performance network is characterized by high bandwidth, low latency, and low rates of packet loss. Additionally, a high-performance network is able to support delay-sensitive, bandwidth-intensive applications such as distributed computing, real-time access, and control of remote instrumentation.

[6]  Dark fiber is unused fiber within fiber optic cables that have already been laid; thus, it is available for future use.

[7]  Cluster architectures use multiple commodity systems with an Ethernet-based or high-performance interconnect network to perform as a single system. Symmetric multiprocessor systems use multiple processors sharing the same memory and operating system to simultaneously work on individual pieces of a program.

[8]  Survey respondents were asked to report only their centrally administered high-performance computing. Centrally administered was defined as high-performance computing that is located within a distinct organizational unit with a staff and a budget and is generally available to the campus community. The unit has a stated mission that includes supporting high-performance computing needs of faculty and researchers. This InfoBrief reports only on centrally administered HPC.


National Science Foundation, Division of Science Resources Statistics
Institutions Increase Networking Capacity, Gap Between Doctorate- and
Nondoctorate-Granting Institutions Widens

Arlington, VA (NSF 10-328) [September 2010]

