National Energy Research Scientific Computing Center
  A DOE Office of Science User Facility
  at Lawrence Berkeley National Laboratory
 

2006 User Survey Results

Many thanks to the 256 users who responded to this year's User Survey. This represents a response rate of about 13 percent of the active NERSC users. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

You can see the 2006 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score    Meaning                    Number of Times Selected
7                     Very Satisfied             4,985
6                     Mostly Satisfied           3,748
5                     Somewhat Satisfied           832
4                     Neutral                      584
3                     Somewhat Dissatisfied        251
2                     Mostly Dissatisfied           75
1                     Very Dissatisfied             51

Importance Score      Meaning
3                     Very Important
2                     Somewhat Important
1                     Not Important

Usefulness Score      Meaning
3                     Very Useful
2                     Somewhat Useful
1                     Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.7 (very satisfied) to a low of 4.9 (somewhat satisfied). Across 111 questions, users chose the Very Satisfied rating 4,985 times, and the Very Dissatisfied rating only 51 times. The scores for all questions averaged 6.1, and the average score for overall satisfaction with NERSC was 6.3. See All Satisfaction Ratings.

For questions that spanned the 2006 through 2003 surveys, the change in rating was tested for significance (using the t test at the 90% confidence level). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.

Significance of Change
significant increase (change from 2005)
significant decrease (change from 2005)
not significant
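
The report describes the comparison only as "the t test at the 90% confidence level." As a rough illustration, the sketch below (in Python) applies a two-sample Welch t test to the kind of summary statistics published in the tables that follow (mean, standard deviation, and number of responses for each year). Whether NERSC used the Welch or pooled-variance form is not stated, and the 2005 standard deviation and response count in the example are hypothetical placeholders; only the 2005 mean of 3.95 for Seaborg batch wait time is quoted later in this report.

    # Sketch: flag a significant change in mean satisfaction between two survey years.
    # The report says only "t test at the 90% confidence level"; this uses Welch's
    # two-sample t test on summary statistics as one plausible reading.
    from scipy.stats import ttest_ind_from_stats

    def significant_change(mean_new, std_new, n_new, mean_old, std_old, n_old, alpha=0.10):
        """Return +1 for a significant increase, -1 for a decrease, 0 otherwise."""
        t_stat, p_value = ttest_ind_from_stats(
            mean_new, std_new, n_new,
            mean_old, std_old, n_old,
            equal_var=False,  # Welch's form: do not assume equal variances
        )
        if p_value >= alpha:
            return 0
        return 1 if mean_new > mean_old else -1

    # "Seaborg: Batch wait time": the 2006 mean/std/N come from the tables below;
    # the 2005 mean (3.95) is quoted later in this report, but the 2005 std and N
    # here are hypothetical placeholders for illustration only.
    print(significant_change(4.94, 1.57, 159, 3.95, 1.60, 180))  # prints 1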

Areas with the highest user satisfaction include the HPSS mass storage system, account and consulting services, DaVinci C/C++ compilers, Jacquard uptime, network performance within the NERSC center, and Bassi Fortran compilers.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated 1-7 (counts) | Total Responses | Average Score | Std. Dev. | Change from 2005
HPSS: Reliability (data integrity) | 2 22 69 | 93 | 6.70 | 0.59 | -0.03
Account support services | 1 1 4 2 47 147 | 202 | 6.64 | 0.76 | -0.09
HPSS: Uptime (Availability) | 1 2 29 62 | 94 | 6.62 | 0.59 | -0.06
DaVinci SW: C/C++ compilers | 1 3 9 | 13 | 6.62 | 0.65 | n/a
Jacquard: Uptime (Availability) | 2 2 26 55 | 85 | 6.58 | 0.66 | 0.73
CONSULT: Timely initial response to consulting questions | 1 3 2 6 50 136 | 198 | 6.57 | 0.81 | -0.08
Network performance within NERSC (e.g. Seaborg to HPSS) | 2 1 3 38 72 | 116 | 6.53 | 0.75 | -0.08
OVERALL: Consulting and Support Services | 4 8 7 58 159 | 236 | 6.53 | 0.85 | -0.20
Bassi SW: Fortran compilers | 1 1 3 18 50 | 73 | 6.52 | 1.02 | n/a

Areas with the lowest user satisfaction include Seaborg batch wait times; PDSF disk storage, interactive services and performance tools; Bassi and Seaborg visualization software; and analytics facilities.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated 1-7 (counts) | Total Responses | Average Score | Std. Dev. | Change from 2005
PDSF SW: Performance and debugging tools | 1 3 3 5 10 9 | 31 | 5.48 | 1.52 | -0.52
Seaborg SW: Visualization software | 1 12 5 15 9 | 42 | 5.45 | 1.19 | -0.08
PDSF: Ability to run interactively | 1 1 1 4 11 17 6 | 41 | 5.39 | 1.30 | -0.40
OVERALL: Data analysis and visualization facilities | 2 4 32 20 47 23 | 128 | 5.37 | 1.22 | -0.28
Bassi SW: Visualization software | 1 1 4 2 9 5 | 22 | 5.36 | 1.62 | n/a
PDSF: Disk configuration and I/O performance | 1 7 5 6 13 7 | 39 | 5.10 | 1.54 | -0.04
Seaborg: Batch wait time | 6 5 27 11 35 56 19 | 159 | 4.94 | 1.57 | 0.99
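
For rows where every rating received at least one response, the published Average Score and Std. Dev. can be recovered directly from the rating counts. A minimal sketch in Python, assuming the report uses the count-weighted mean and the sample standard deviation:

    # Sketch: recompute a row's Average Score and Std. Dev. from its rating counts.
    # Assumes the count-weighted mean and the sample standard deviation (ddof = 1).
    import math

    def score_stats(counts_by_rating):
        """counts_by_rating maps each rating (1-7) to the number of users who chose it."""
        n = sum(counts_by_rating.values())
        mean = sum(score * count for score, count in counts_by_rating.items()) / n
        ss = sum(count * (score - mean) ** 2 for score, count in counts_by_rating.items())
        return n, mean, math.sqrt(ss / (n - 1))

    # "Seaborg: Batch wait time" row above (counts for ratings 1 through 7):
    n, mean, std = score_stats({1: 6, 2: 5, 3: 27, 4: 11, 5: 35, 6: 56, 7: 19})
    print(n, round(mean, 2), round(std, 2))  # 159 4.94 1.57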

The largest increases in satisfaction over last year's survey are for the Jacquard Linux cluster; Seaborg batch wait times and queue structure; NERSC's available computing hardware; and the NERSC Information Management (NIM) system.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated 1-7 (counts) | Total Responses | Average Score | Std. Dev. | Change from 2005
Seaborg: Batch wait time | 6 5 27 11 35 56 19 | 159 | 4.94 | 1.57 | 0.99
Jacquard: Uptime (Availability) | 2 2 26 55 | 85 | 6.58 | 0.66 | 0.73
Seaborg: Batch queue structure | 1 4 5 13 21 61 48 | 153 | 5.77 | 1.27 | 0.72
Jacquard: Batch wait time | 1 3 5 10 40 23 | 82 | 5.87 | 1.13 | 0.71
Jacquard: overall | 2 2 10 28 46 | 88 | 6.27 | 1.01 | 0.49
Jacquard: Batch queue structure | 1 3 6 7 34 28 | 79 | 5.95 | 1.14 | 0.49
OVERALL: Available Computing Hardware | 3 5 29 108 92 | 237 | 6.19 | 0.82 | 0.30
NIM | 3 2 19 76 102 | 202 | 6.35 | 0.81 | 0.19

The largest decreases in satisfaction over last year's survey are shown below.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated 1-7 (counts) | Total Responses | Average Score | Std. Dev. | Change from 2005
PDSF SW: Programming libraries | 1 3 7 9 11 | 31 | 5.84 | 1.13 | -0.62
PDSF SW: General tools and utilities | 1 2 4 4 14 9 | 34 | 5.62 | 1.33 | -0.58
PDSF SW: Software environment | 2 1 6 14 13 | 36 | 5.92 | 1.25 | -0.52
Seaborg: Uptime (Availability) | 1 4 3 20 52 79 | 159 | 6.23 | 0.99 | -0.33
NERSC security | 2 1 7 9 10 72 134 | 235 | 6.30 | 1.11 | -0.31
Seaborg SW: Performance and debugging tools | 3 6 7 13 38 28 | 95 | 5.69 | 1.31 | -0.31
OVERALL: Available Software | 6 24 22 85 82 | 219 | 5.97 | 1.08 | -0.22
CONSULT: overall | 1 1 2 3 9 59 124 | 199 | 6.47 | 0.90 | -0.21
OVERALL: Consulting and Support Services | 4 8 7 58 159 | 236 | 6.53 | 0.85 | -0.20
OVERALL: Network connectivity | 8 10 19 69 124 | 230 | 6.27 | 1.02 | -0.18
CONSULT: Quality of technical advice | 1 2 3 8 66 113 | 193 | 6.46 | 0.84 | -0.16

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2006 NERSC took a number of actions in response to suggestions from the 2005 user survey.

  1. 2005 user survey: On the 2005 survey 24 users asked us to improve queue turnaround times. Seaborg wait time had the lowest satisfaction rating on the survey, with an average score of 3.95 (out of 7).

    NERSC response: In 2006, NERSC and DOE adjusted the duty cycle of NERSC systems to better balance throughput (reduced queue wait times) and overall utilization, and also agreed not to pre-allocate systems that are not yet in production. This approach has paid off: on the 2006 survey only 5 users commented on poor turnaround times, and the average satisfaction score for Seaborg wait times increased by almost one point.

  2. 2005 user survey: On the 2005 survey three Jacquard ratings were among the lowest seven ratings.

    NERSC response: In 2006 NERSC staff worked hard to improve Jacquard's computing infrastructure:

    • We implemented the Maui scheduler in order to manage the queues more effectively.
    • The system was greatly stabilized by reducing the system memory clock speed from 400 MHz to 333 MHz (more nodes were added to Jacquard to compensate for the reduced clock speed).
    • We worked with Linux Networx and its third party vendors to improve MVAPICH.
    • We worked with Mellanox to debug and fix several problems with the Infiniband drivers and firmware on the Infiniband switches that were preventing successful runs of large-concurrency jobs.

    On the 2006 survey four Jacquard ratings were significantly higher: those for uptime, wait time, and queue structure, as well as overall satisfaction with Jacquard.

  3. 2005 user survey: On the 2005 survey four users mentioned that moving data between machines was an inhibitor to doing visualization.

    NERSC response: In early 2006 the NERSC Global Filesystem was deployed to address this issue. It is a large, shared filesystem that can be accessed from all the computational systems at NERSC.

    Moving files between machines did not come up as an issue on the 2006 survey, and users were mostly satisfied with NGF reliability and performance.

  4. 2005 user survey: On the 2005 survey 17 users requested more hardware resources.

    NERSC response: In addition to deploying the Bassi POWER5 system in early 2006, NERSC has announced plans to deploy a 19,344-processor Cray XT4 system in 2007. User satisfaction with available computing hardware at NERSC increased by 0.3 points on the 2006 survey, and only ten users requested additional computing resources in the Comments about NERSC section.

Users are invited to provide overall comments about NERSC:

113 users answered the question "What does NERSC do well?" 87 respondents stated that NERSC gives them access to powerful computing resources without which they could not do their science; 47 mentioned excellent support services and NERSC's responsive staff; 27 highlighted good software support or an easy-to-use user environment; 24 pointed to hardware stability and reliability. Some representative comments are:

The computers are stable and always up. The consultants are knowledgeable. The users are kept well informed about what's happening to the systems. The available software is complete. The NERSC people are friendly.
NERSC runs a reliable computing service with good documentation of resources. I especially like the way they have been able to strike a good balance between the sometimes conflicting goals of being at the "cutting edge" while maintaining a high degree of uptime and reliable access to their computers.
NERSC has a lot of computational power distributed in many different platforms (SP, Linux clusters, SMP machines) that can be tailored to all sorts of applications. I think that the DaVinci machine was a great addition to your resource pool, for quick and inexpensive OMP parallelization.
The preinstalled application packages are truly useful to me. Some of these applications are quite tricky to install by myself.
NERSC makes possible for me extensive numerical calculations that are a crucial part of my research program in environmental geophysics. I compute at NERSC to use fast machines with multiple processors that I can run simultaneously. It is a great resource.

72 users responded to the question "What should NERSC do differently?"

In previous years the greatest areas of concern were dominated by queue turnaround and job scheduling issues. In 2004, 45 users reported dissatisfaction with queue turnaround times; in 2005 this number dropped to 24, and this year only 5 users made such comments. NERSC has made many efforts to acquire new hardware, to implement equitable queueing policies across the NERSC machines, and to address queue turnaround times by allocating fewer of the total available cycles, and this has clearly paid off. The top three areas of concern this year are job scheduling, more compute cycles, and software issues.

Some of the comments from this section are:

The move now is to large numbers of CPUs with relatively low amounts of RAM per CPU. My code is moving the opposite direction. While I can run larger problems with very large numbers of CPUs, for full 3-D simulations, large amounts of RAM per CPU are required. Thus NERSC should acquire a machine with say 1024 CPUs, but 16 or 32 GB RAM/CPU.
More adequate and equitable resources allocation based on what the user accomplished in the previous year.
Increased storage resources would be very helpful. Global file systems have been started and should be continued and improved.
The CPU limit on interactive testing is often restrictive, and a faster turnaround time for a test job queue (minutes, not hours) would help a lot.

67 users answered the question "How does NERSC compare to other centers you have used?" 41 users stated that NERSC was an excellent center or was better than other centers they have used. Reasons given for preferring NERSC include its consulting services and responsiveness, its hardware and software management, and the stability of its systems.

Eleven users said that NERSC was comparable to other centers or gave a mixed review and only four said that NERSC was not as good as another center they had used. Some users expressed dissatisfaction with user support, turnaround time, Seaborg's slow processors, the lack of production (group) accounts, HPSS software, visualization and the allocations process.

Here are the survey results:

  1. Respondent Demographics
  2. Overall Satisfaction and Importance
  3. All Satisfaction, Importance and Usefulness Ratings
  4. Hardware Resources
  5. Software
  6. Visualization and Data Analysis
  7. HPC Consulting
  8. Services and Communications
  9. Web Interfaces
  10. Training
  11. Comments about NERSC
