National Energy Research Scientific Computing Center
  A DOE Office of Science User Facility
  at Lawrence Berkeley National Laboratory
 

2006 User Survey Results


How satisfied are you with:

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item  1 2 3 4 5 6 7  Responses  Average  Std. Dev.  Change (2005)
HPSS: Reliability (data integrity) 2 22 69  93  6.70  0.59  -0.03
SERVICES: Account support 1 1 4 2 47 147  202  6.64  0.76  -0.09
HPSS: Uptime (Availability) 1 2 29 62  94  6.62  0.59  -0.06
DaVinci SW: C/C++ compilers 1 3 9  13  6.62  0.65
Jacquard: Uptime (Availability) 2 2 26 55  85  6.58  0.66  0.73
CONSULT: Timely initial response to consulting questions 1 3 2 6 50 136  198  6.57  0.81  -0.08
Network performance within NERSC (e.g. Seaborg to HPSS) 2 1 3 38 72  116  6.53  0.75  -0.08
OVERALL: Consulting and Support Services 4 8 7 58 159  236  6.53  0.85  -0.20
Bassi SW: Fortran compilers 1 1 3 18 50  73  6.52  1.02
CONSULT: Followup to initial consulting questions 1 2 3 9 55 117  187  6.49  0.86  -0.08
CONSULT: overall 1 1 2 3 9 59 124  199  6.47  0.90  -0.21
CONSULT: Quality of technical advice 1 2 3 8 66 113  193  6.46  0.84  -0.16
Seaborg SW: Fortran compilers 1 1 7 2 35 80  126  6.45  0.93  -0.04
PDSF SW: C/C++ compilers 3 13 17  33  6.42  0.66  -0.18
NGF: Reliability 3 6 17  26  6.42  0.99
WEB: Accuracy of information 1 6 7 79 106  199  6.42  0.75  0.02
DaVinci SW: Software environment 1 2 7 14  24  6.42  0.83
NGF: File and Directory Operations 1 1 8 12  22  6.41  0.80
Seaborg SW: Software environment 1 6 5 51 77  140  6.41  0.81  0.02
Bassi: Uptime (Availability) 1 4 4 31 52  92  6.40  0.85
Seaborg SW: C/C++ compilers 1 6 4 24 55  90  6.40  0.93  0.03
WEB: NERSC web site overall (www.nersc.gov) 2 4 8 92 105  211  6.39  0.74  0.10
HPSS: Overall satisfaction 1 2 1 5 37 58  104  6.38  0.96  -0.13
NGF: Overall 1 1 2 5 17  26  6.38  1.06
NIM 3 2 19 76 102  202  6.35  0.81  0.19
NGF: Uptime 1 1 1 7 16  26  6.35  1.16
TRAINING: New User's Guide 3 6 53 49  111  6.33  0.70  -0.04
Seaborg SW: Programming libraries 1 7 7 35 60  110  6.32  0.96  -0.09
GRID: Job Submission 1 1 1 2 14  19  6.32  1.53  -0.21
OVERALL: Satisfaction with NERSC 2 9 3 8 99 128  249  6.31  1.01  0.11
NERSC security 2 1 7 9 10 72 134  235  6.30  1.11  -0.31
Bassi SW: Software environment 2 2 4 29 44  81  6.30  1.17
HPSS: Data transfer rates 1 1 2 2 5 33 52  96  6.29  1.10  -0.11
SERVICES: Allocations process 1 1 5 12 70 76  165  6.28  0.85  0.12
Jacquard SW: Software environment 4 6 29 34  73  6.27  0.84  0.15
Jacquard: overall 2 2 10 28 46  88  6.27  1.01  0.49
Bassi SW: C/C++ compilers 2 2 2 15 27  48  6.27  1.18
Bassi SW: Programming libraries 1 1 4 3 15 36  60  6.27  1.25
CONSULT: Amount of time to resolve your issue 2 6 6 11 68 103  196  6.27  1.08  -0.14
OVERALL: Network connectivity 8 10 19 69 124  230  6.27  1.02  -0.18
Bassi: overall 2 3 5 2 30 57  99  6.26  1.23
GRID: Access and Authentication 1 2 6 14  23  6.26  1.29  -0.16
Jacquard SW: C/C++ compilers 2 1 4 19 28  54  6.26  1.10  0.11
SERVICES: Response to special requests (e.g. disk quota increases, etc.) 1 4 3 4 36 50  98  6.24  1.08  -0.11
Seaborg: Uptime (Availability) 1 4 3 20 52 79  159  6.23  0.99  -0.33
DaVinci SW: Fortran compilers 1 1 6 10  18  6.22  1.44
On-line help desk 1 1 6 11 41 55  115  6.21  1.02  0.04
WEB: Timeliness of information 1 13 21 69 90  194  6.21  0.92  0.09
GRID: Job Monitoring 1 2 4 13  20  6.20  1.54  -0.30
OVERALL: Available Computing Hardware 3 5 29 108 92  237  6.19  0.82  0.30
GRID: File Transfer 2 1 1 5 13  22  6.18  1.30  -0.10
SERVICES: E-mail lists 2 8 4 33 42  89  6.18  1.03  0.10
Seaborg SW: Applications software 1 8 6 35 41  91  6.18  0.97  0.01
NGF: I/O Bandwidth 1 3 9 10  23  6.17  0.98
Jacquard SW: General tools and utilities 3 3 24 17  47  6.17  0.82  0.19
Jacquard SW: Programming libraries 2 3 5 18 28  56  6.16  1.17  0.24
OVERALL: Mass storage facilities 4 17 13 52 86  172  6.16  1.08  -0.16
TRAINING: Web tutorials 4 10 44 31  89  6.15  0.79  -0.07
CONSULT: Software bug resolution 1 1 1 11 9 37 60  120  6.14  1.16  0.04
PDSF SW: Fortran compilers 1 3 6 7  17  6.12  0.93  -0.08
Jacquard SW: Visualization software 1 3 6 7  17  6.12  0.93  0.58
Jacquard SW: Fortran compilers 1 5 2 4 18 34  64  6.11  1.30  0.38
Seaborg: overall 1 4 7 17 75 64  168  6.10  1.00  0.18
Seaborg SW: General tools and utilities 2 8 6 45 37  98  6.09  0.97  -0.00
Jacquard SW: Applications software 1 3 2 21 17  44  6.09  1.14  0.31
Bassi: Disk configuration and I/O performance 1 1 1 5 2 36 30  76  6.08  1.16
HPSS: Data access time 1 1 3 2 9 37 38  91  6.08  1.17  0.08
DaVinci: overall 2 1 3 9 15  30  6.07  1.36  0.42
OVERALL: Hardware management and configuration 2 2 3 17 14 86 89  213  6.07  1.15  0.09
Jacquard: Disk configuration and I/O performance 1 1 8 3 25 30  68  6.06  1.16  0.18
OVERALL: Software management and configuration 8 19 17 65 89  198  6.05  1.13  -0.17
Bassi SW: General tools and utilities 1 1 5 3 20 22  52  6.04  1.17
SERVICES: Computer and network operations support (24x7) 1 1 3 11 4 21 47  88  6.03  1.37  -0.57
Bassi SW: Applications software 1 4 6 18 20  49  6.02  1.18
WEB: Ease of finding information 1 4 9 33 90 72  209  6.02  0.99  0.09
DaVinci SW: Visualization software 1 2 1 6 9  19  6.00  1.37  0.57
SERVICES: NERSC CVS services 4 2 8 10  24  6.00  1.10  -0.21
PDSF: Batch queue structure 1 3 3 20 11  38  5.97  0.97  -0.03
OVERALL: Available Software 6 24 22 85 82  219  5.97  1.08  -0.22
Jacquard: Batch queue structure 1 3 6 7 34 28  79  5.95  1.14  0.49
PDSF: Batch wait time 1 3 5 18 12  39  5.95  1.00  0.15
TRAINING: NERSC classes: in-person 4 2 3 9  18  5.94  1.26  -0.18
Seaborg: Disk configuration and I/O performance 1 1 4 13 13 55 49  136  5.92  1.19  -0.14
Bassi: Batch queue structure 1 2 9 7 38 29  86  5.92  1.16
PDSF SW: Software environment 2 1 6 14 13  36  5.92  1.25  -0.52
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) 1 5 10 4 19 64 63  166  5.89  1.33  -0.24
Jacquard: Batch wait time 1 3 5 10 40 23  82  5.87  1.13  0.71
PDSF SW: Applications software 2 2 4 10 10  28  5.86  1.21  -0.28
Bassi: Batch wait time 3 7 16 40 25  91  5.85  1.02
PDSF SW: Programming libraries 1 3 7 9 11  31  5.84  1.13  -0.62
WEB: Searching 3 13 18 41 35  110  5.84  1.09  0.14
PDSF: Uptime (availability) 4 3 6 12 17  42  5.83  1.31  -0.06
HPSS: User interface (hsi, pftp, ftp) 1 1 4 7 14 35 33  95  5.83  1.26  -0.29
PDSF: Overall satisfaction 1 3 1 4 23 11  43  5.81  1.20  -0.19
Bassi SW: Performance and debugging tools 1 2 2 3 6 20 19  53  5.77  1.45
Seaborg: Batch queue structure 1 4 5 13 21 61 48  153  5.77  1.27  0.72
Jacquard: Ability to run interactively 2 2 9 7 25 23  68  5.76  1.29  0.20
Live classes on the web 5 1 9 6  21  5.76  1.14  0.04
Seaborg: Ability to run interactively 2 10 13 19 42 45  131  5.71  1.32  0.18
Seaborg SW: Performance and debugging tools 3 6 7 13 38 28  95  5.69  1.31  -0.31
SERVICES: Visualization services 1 7 3 7 11  29  5.69  1.31  -0.14
PDSF SW: General tools and utilities 1 2 4 4 14 9  34  5.62  1.33  -0.58
Jacquard SW: Performance and debugging tools 4 1 2 6 20 11  44  5.59  1.45  0.24
Bassi: Ability to run interactively 2 4 3 8 8 25 25  75  5.55  1.60
PDSF SW: Performance and debugging tools 1 3 3 5 10 9  31  5.48  1.52  -0.52
Seaborg SW: Visualization software 1 12 5 15 9  42  5.45  1.19  -0.08
PDSF: Ability to run interactively 1 1 1 4 11 17 6  41  5.39  1.30  -0.40
OVERALL: Data analysis and visualization facilities 2 4 32 20 47 23  128  5.37  1.22  -0.28
Bassi SW: Visualization software 1 1 4 2 9 5  22  5.36  1.62
PDSF: Disk configuration and I/O performance 1 7 5 6 13 7  39  5.10  1.54  -0.04
Seaborg: Batch wait time 6 5 27 11 35 56 19  159  4.94  1.57  0.99

How important to you is:

3=Very important, 2=Somewhat important, 1=Not important

Item  1 2 3  Responses  Average  Std. Dev.
OVERALL: Satisfaction with NERSC 31 202  233  2.87  0.34
OVERALL: Available Computing Hardware 3 31 189  223  2.83  0.41
SERVICES: Account support 2 40 144  186  2.76  0.45
SERVICES: Allocations process 3 31 119  153  2.76  0.47
OVERALL: Consulting and Support Services 4 52 167  223  2.73  0.48
OVERALL: Network connectivity 5 49 159  213  2.72  0.50
SERVICES: Response to special requests (e.g. disk quota increases, etc.) 4 26 59  89  2.62  0.57
OVERALL: Hardware management and configuration 7 68 122  197  2.58  0.56
OVERALL: Software management and configuration 10 75 104  189  2.50  0.60
OVERALL: Available Software 9 90 109  208  2.48  0.58
NERSC security 21 83 121  225  2.44  0.66
OVERALL: Mass storage facilities 31 71 80  182  2.27  0.74
SERVICES: Computer and network operations support (24x7) 14 38 38  90  2.27  0.72
SERVICES: E-mail lists 19 40 24  83  2.06  0.72
OVERALL: Data analysis and visualization facilities 54 57 45  156  1.94  0.80
SERVICES: Visualization services 26 15 13  54  1.76  0.82
SERVICES: NERSC CVS services 23 11 6  40  1.57  0.75

Usefulness

Item  1 2 3  Average  Std. Dev.
SERVICES: E-mail lists 1 38 156  2.79  0.42
TRAINING: New User's Guide 1 25 69  2.72  0.48
TRAINING: Web tutorials 5 27 59  2.59  0.60
MOTD (Message of the Day) 18 71 82  2.37  0.67
SERVICES: Announcements web archive 15 87 68  2.31  0.63
Live classes on the web 7 13 12  2.16  0.77
Phone calls from NERSC 34 43 50  2.13  0.81
TRAINING: NERSC classes: in-person 11 11 12  2.03  0.83

What NERSC resources do you use? (1108)  Number
Seaborg  170
NIM  160
NERSC web site (www.nersc.gov)  148
HPSS  111
Jacquard  109
Bassi  107
Consulting services  89
SERVICES: Account support  85
PDSF  49
DaVinci  30
SERVICES: Computer and network operations support (24x7)  26
Visualization services  9
NGF  8
SERVICES: NERSC CVS services  4
Grid services  3

Desktop Mac systems (93)  Number
OS X  77
MacOS  15
Other  1

Desktop UNIX systems (222)  Number
Linux  188
Sun Solaris  21
IBM AIX  7
SGI IRIX  4
HP HPUX  1
Other  1

Desktop PC systems (130)  Number
Windows XP  113
Windows 2000  16
Other  1

What training methods would you like NERSC to offer? (317)  Number
Web documentation  124
Web tutorials on specific topics  114
Live web broadcasts with teleconference audio  26
Live classes at LBNL  20
In-person classes at your site  20
Live classes on the web  13

How long have you used NERSC?

less than 6 months  31  12.1%
6 months - 3 years  116  45.1%
more than 3 years  110  42.8%

Where do you perform data analysis and visualization of data produced at NERSC?

All at NERSC  10  3.9%
Most at NERSC  35  13.8%
Half at NERSC, half elsewhere  40  15.7%
Most elsewhere  90  35.4%
All elsewhere  71  28.0%
I don't need data analysis or visualization  8  3.1%

Do you feel you are adequately informed about NERSC changes?

Yes  215  96.8%
No  7  3.2%
Not Sure  0  0.0%

Are you aware of major changes at least one month in advance?

Yes  202  91.4%
No  19  8.6%
Not Sure  0  0.0%

Are you aware of software changes at least seven days in advance?

Yes  199  92.1%
No  17  7.9%
Not Sure  0  0.0%

Are you aware of planned outages 24 hours in advance?

Yes  214  98.2%
No  4  1.8%
Not Sure  0  0.0%

Are your data analysis and visualization needs being met? In what ways do you make use of NERSC data analysis and visualization resources (Escher, serial queues on Seaborg, visualization software, working with the visualization group, consulting help, etc). In what ways should NERSC add to or improve these resources?
   My needs are currently met well by the data analysis capabilities of DaVinci.
   NWChem is not working completely well. Certain modules like PMF do not work (at least on Jacquard). The task shell command used in NWChem does not work on Jacquard either.
   Not sure what 'data analysis' means in this context. 80% of what I do is called 'data analysis' and all is done on PDSF. And mostly fine, except slowness/outages.

I use ROOT at PDSF for 'visualisation' (making plots).
   DaVinci for large-scale data analysis
   yes, it is good. But, this year I experienced inefficiency of PDSF more often than last year, i.e., sometimes PDSF is terribly slow.
   I mainly use Matlab on Jacquard for data analysis. DaVinci's performance on running Matlab is very poor. I also tried to use Visit for data visualization but somehow the performance in speed is below my expectation.
   I am mostly pleased with the data analysis and visualization support at NERSC. 3D visualization might be a direction to pursue.
   I do the data analysis on my own PC. I only use the IPM module and I'm not really satisfied with it, since the results are displayed on the net one day later.
   I use my desktop for visualisation and data analysis
   No.
   N/A
   My basic problem is to get up to speed with what is available. I am reluctant to learn new things when I want to get something achieved. This is my problem and not NERSC's. Of the three software problems I have had, the staff has been extremely competent and helpful on two of these. The current problem is still ongoing and is something I need to better understand.

I guess to improve services, it would be difficult to identify what would be required. I have gone through the manuals but find it always easier when you talk to a human being. For visualization capabilities, I am unaware if a manual or sample case exists. This would be helpful. I have stored the IBM manual on my desktop to help debug problems and understand system usage. Does a similar capability exist for visualization?
   I'm satisfied with most of the services, hardware, and software.

But I'm mostly using my account from China. Sometimes when I connect to PDSF through ssh, the transfer is so slow that I can hardly work.

Can this be improved?
   I don't use this
   I do not use data analysis and visualization on NERSC machines
   I try to use Mathematica and Maple on DaVinci, but forwarding X-services is quite slow and tedious. Perhaps it's my network connection as well, but using X-windows remotely is too slow for me.
   I am a new user and I have several students using the facilities. We are getting up to speed on the systems and that is taking somewhat longer than we thought. This problem is one that is local.
   I do all post-processing and visualization off-site.
   We export our produced data to other non-NERSC machines for final analysis and visualization where we have better X connections, better control of software configuration, better uptime, etc. I have not explored non-PDSF options at NERSC for these things; PDSF is simply not stable enough or designed for this kind of work. For the most part our final analysis and visualization needs are fairly modest and are well served by a mini cluster under our own control rather than having to submit proposals, share a cluster with other users, etc. to use NERSC resources for these needs.
   My viz needs at NERSC currently have to do with the LLNL VisIt tool. This past year we (LLNL researchers using VisIt to analyze data from NIMROD runs) came to the NERSC viz group requesting help interfacing NIMROD & VisIt and received *excellent* support.
   I do not use those tools
   Overall, our network connection is too slow to even use Xwindows easily, so I usually just use a dumb terminal window.
   Most of my visualization is done in Matlab, requiring moving large blocks of data to my local computer. This can sometimes be time consuming.
   I know that the information is available but I don't have time to spend learning new software. My position, then, is that there is not enough information available to easily access the software. There might be a tutorial, but I am not aware of one; a tutorial with some examples of how to use the software and machines would help me get started.
   n/a
   So far I have been enjoying the various options for visualization software (mostly AVS and IDL) available on DaVinci. However, one of the major simulation codes I have recently been using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested the consideration of installation of Tecplot on DaVinci about a year ago, based not only on the need from my own project, but also on the more important fact that the installation of Tecplot will benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by the DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to restate my request and my concern about it.
   I have not worked with the visualization group yet. My approach so far has been to use IDL and python/gnuplot to run where the data is. I have not explored the use of DaVinci and if that will require moving large dump files (which will likely be less efficient than postprocessing where the data is).
   I usually do this elsewhere, so it is not important to me.
   Satisfied.
   We would like to use these services more. Providing more information of the form 'Getting Started with Analyzing your Data on DaVinci, serial queues on Seaborg' would be helpful.
   I do not use the data analysis and visualization resources on NERSC. All of that is handled on local machines.
   I usually check the websites before submitting large numbers of jobs. Given that I don't submit jobs on a regular basis, this has been very helpful.

   I use visualization software and had collaborations with the visualization group who have always been very helpful
   I don't use data analysis and visualization resources on machines at NERSC. I use local machines instead.
   We do most of our visualization in house with IDL, on serial machines. We have begun working with the visualization group for advanced visualization. The best addition would be stabilization and an increase in capacity of the shared file systems to make interoperation between the machines running codes and the analysis easier. Added capacity in scratch and other file systems would also be very helpful; we often need to store and analyze large data sets, which often requires special arrangements.
   Improve the support of add-on Python libraries.
   I really should do more with visualization. It is becoming increasingly important
   would be nice to have R (open-source S-PLUS)
   Seems OK
   I do all visualization and data analysis elsewhere, because I have everything set up and I do not need a lot of resources.
   Sometimes I just want to do simple visualization using tools such as Matlab. But the connection is very slow from my PC.
   It is easier for me to process data on a local machine because I don't have to wait in a queue for data analysis. For visualization, manipulating X windows, it is much better to be local.
   I think the current way that queues are structured has a very significant and adverse effect on the ability of users to do vis/analysis on NERSC resources. The difficulty is that for large data sets, massive computational power is needed for analysis and vis. Currently the only way of getting that power is by using the production batch queues on the big machines. The problem with this is that it almost entirely eliminates the possibility of doing actual interactive viz and data analysis. In one recent set of runs we were creating multiple 60 GB data dumps and needed to run complicated algorithms to analyze the data, and then we wanted to do viz. The problem is that we either have to run using:

1. Davinci

2. Interactive queues on the big machines.

I realize that it is an extremely difficult problem to schedule jobs that i) require many nodes and ii) need to be executed on demand. But this is a huge limitation currently when it comes to data viz and analysis.
   I've noticed that network response for IDLDE (the graphical UI with IDL) is very slow. It's typically been quicker to just copy everything to my local machine and work here. This isn't any great inconvenience for me, since I have most of what I need here.
   My group relied on help from NERSC visualization consultants in the past.

But it seems too hard for us as regular users to do all of it ourselves.
   At present, most of our visualization is done at DoD, but we intend to switch to doing more on DaVinci. We will then request considerable help from the Visualization Group at NERSC.
   More frequent on-site user training. The problem is that we do not have the resources to come to NERSC for such training. We would love to use the NERSC facility for visualization and data analysis.
   It is very cumbersome to use PDSF when you use modern tools. For instance, I edit files and want to use code management tools found on Mac OS X. However, I do not have enough disk space on AFS. Also, I cannot run batch jobs on PDSF using AFS.

What I want is to mount my PDSF files on my local computer. NERSC does not allow it. As a result, I use my own desktop most of the time. It is simply too hard to use NERSC.
   My analysis and visualization needs are being met. I use DaVinci a lot with very large data sets. Most often I use Matlab, Grads, and NCL. The only thing I can think of that I would like to see added is the mapping toolbox for matlab.
   I use Matlab and Mathematica, and I am satisfied with the current level of resources.
   Yes. Serial queues on Jacquard or Bassi with my own software.
   I am not familiar with the NERSC data analysis and visualization resources available to me. It would be helpful to better understand what resources are available to me.
   I do analysis and visualization at our facility. I'm not sure that's the best solution for us, but it's the way we do it now.
   N/A
   We analyze our data locally. Data analysis is inexpensive for our projects. We don't use data analysis and visualization software at NERSC.
   I use Matlab on DaVinci almost daily. Occasionally, I can't start it because of a license shortage.

   3D visualization software packages such as AVS, VisIt, etc., are hard to learn and to use for the typical researcher. That's why gnuplot is still the preferred tool for analysis and vis for many. NERSC's resources are extremely good for analysis and vis, but the thing missing is a closer working relationship between members of the vis group and the researchers. Tailored analysis and visualization tools for specific applications would be great, but researchers usually don't know what they want or what they are missing... Maybe the vis group should take the initiative of building a few of those vis tools for chosen applications and publicizing them.
   They are met.
   N/A
   It would be great for the visualization resources to be more visible - e.g., I don't really know what is available for users. Maybe you should publicize this more?
   I use python, pytables to access HDF5 data, then gnuplotpy. I do mostly batch generation of 2D plots, as network connectivity is too poor to do more. Also, the idea of moving data around NERSC to get it on the right machine is clumsy.
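A minimal sketch of the kind of batch workflow this respondent describes: read an HDF5 array with PyTables and write plots to image files without a display. The file name and dataset path are illustrative assumptions, and matplotlib's Agg backend stands in here for gnuplot.py.

    # Batch 2D plotting from HDF5 without an X connection.
    import tables                      # PyTables
    import matplotlib
    matplotlib.use("Agg")              # render straight to image files, no display needed
    import matplotlib.pyplot as plt

    # Hypothetical file and dataset names, for illustration only.
    with tables.open_file("run0001.h5", mode="r") as h5:
        field = h5.get_node("/fields/density").read()

    plt.imshow(field, origin="lower")
    plt.colorbar()
    plt.savefig("density.png", dpi=150)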
   I write my own analysis codes which must be run on the large machines (mainly Bassi) since DaVinci is not large enough. I move results from these to local resources where I visualise them and process further. I am happy with the situation. It would be nice to have a larger post-processing machine though, since post-processing development is quite iterative for me and this doesn't fit with the long queue times on the production machines.
   I use simple visualization tools such as gnuplot to do quick checks of data. More complex visualization is performed elsewhere. Typically you do not want to attempt to perform complex visualizations on a remote resource at NERSC because of slow internet connectivity. You would not be able to interactively work with the visualization.

   I don't use the available visualization tools.
   I know NERSC works at improving viz (which for me means both data analysis and visualization) but the codes we currently run at NERSC don't need any high end viz. Someday we may be doing long MD (versus QMC) calculations. Then I will want NERSC to have or install our real-time multi-resolution analysis software that allows us to detect stable structures and transitions, currently over 2-to-the-12th time scales.
   I have not had time this year to really explore use of DaVinci --- in FY07 I hope to really get to use it
   I have checked that the Matlab graphics work on Jacquard. However, I have used the software for real work at OSC - it is closer and in the same time zone if I need phone consultation.

   I don't know how to use that software, so I have to download the data to my local computer and use some Windows software.
   Most of my data analysis and visualization take place off site.
   Data transfer from NERSC to NREL in Colorado is so slow that I cannot use any visualization software at a production level. Sometimes I used XmakeMol, compiled by myself in my home directory, which is light and good for atomistic structural analysis. Can you offer lighter visualization/analysis software like that?
   To improve the speed of network connectivity so that remote visualization will be more convenient.
   It is important that a visualization server is available for dedicated data analysis and visualization, as well as software that can leverage the server.
   We use our own software for data analysis and do not rely on external, commercial software; our needs are satisfied by using the CERN ROOT package.

However, we currently lack any basic graphics visualization tools on PDSF. By this I mean a tool to look at PDF, GIF, PNG etc. We often create graphs in batch mode and these can only be viewed by copying them back to the desktop machine. We would like to see some basic graphics package installed on SL302 on PDSF.
   I have not started to use the NERSC data analysis and visualization resources. But these are very important and we will begin to realize and utilize these resources as much as possible.
   I would like to see the Climate Data Management System (CDAT) working on Seaborg (with the GUI)
   My needs are mostly satisfied. I use mostly IDL on DaVinci or other platforms. NGF made things easier in that respect for me.
   consulting help
   I'm happy with DaVinci.

   Make a tutorial web page on the use of visualization software.
   NA
   I use xmgr and gnuplot routinely. But that's about it.

If you would like to comment on NERSC hardware resources, please do so here:
   Hopefully Franklin will fix the long queue wait times.
   Disk servers at PDSF are faring reasonably well, but occasional crashes/outages occur. The move to GPFS has made disk more reliable, but occasional crashes still occur. These sometimes mean that PDSF is unavailable for certain tasks for up to several days (depending on the severity of the crash). This should be an area of continued focus and attention.
   I use multiple processors on DaVinci for computations with MATLAB. The multiple processors and rather fast computation are extremely useful for my research projects on climate and ice sheet dynamics. Via DaVinci NERSC has been a huge help to my research program.
   Please get more computers.
   I really like Bassi; however, the availability of Bassi for multiple small jobs is difficult, since only 3 jobs from a user can run at a time; this is difficult to deal with when I have many of these jobs, even when the queues are rather small.
   I run the CCSM model. The model runs a relatively small number of processors for a very long time. For example, we use 248 processors on bassi. On Seaborg, we could potentially get one model year/wallclock day. Since we usually run 130 year simulations, if we had 248 processors continuously, it would take 4.5 months to run the model. We didn't get even close to that. Our last seaborg run took 15 months real time, which is intolerably slow.

Bassi runs faster. On bassi, we get roughly 10 model years/wallclock day, a nice number. So it's cheaper for us to run on bassi, and better. bassi is down more frequently, and I get more machine related errors when running on bassi.

On both machines your queue structure does not give us the priority that we need to get the throughput that we have been allocated. For now it's working because bassi isn't heavily loaded. But as others leave seaborg behind and move onto bassi, the number of slots we get in the queue will go down, and we'll find ourselves unable to finish model runs in a timely fashion again.
   On jacquard, it might be nice to make it easier for users who want to submit a large number of single-processor jobs as opposed to a few massively parallel jobs. This is possible but in the current configuration, the user has to manually write code to submit a batch job, ssh to all the assigned nodes, and start the jobs manually. Perhaps that is intentional, but the need does arise, for instance when it is possible to divide a task such that it can be run as 1000 separate jobs which do not need to communicate.
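A minimal sketch of the manual workaround this respondent describes, run from inside a PBS batch job that has already been allocated its nodes. The node list comes from the standard $PBS_NODEFILE; passwordless ssh between batch nodes and the "./worker" command are illustrative assumptions.

    # Launch one independent serial task per allocated node, then wait for all of them.
    import os
    import subprocess

    with open(os.environ["PBS_NODEFILE"]) as f:
        nodes = sorted(set(line.strip() for line in f))   # unique host names

    workdir = os.environ["PBS_O_WORKDIR"]                 # directory the job was submitted from
    procs = [
        subprocess.Popen(["ssh", node, "cd %s && ./worker %d" % (workdir, i)])
        for i, node in enumerate(nodes)
    ]
    for p in procs:
        p.wait()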
   Seaborg is a little slow, but that is to be expected. The charge factors on the newer, faster machines are dauntingly high.
   The HPSS interface options are shockingly bad. e.g. Kamland had to resort to writing hsi wrappers to achieve reasonable performance. The SNfactory wrote a perl module to have a standard interface within perl scripts rather than spawning and parsing hsi and htar calls one by one. Even the interactive interface of hsi doesn't have basic command line editing. htar failures sometimes don't return error codes and leave 0 sized files in the destination locations. NERSC should provide C, C++, perl, and python libraries for HPSS access in addition to htar, hsi, etc. The HPSS hardware seems great, but the ability to access it is terrible.
   The queue structure could have some improvement, sometimes jobs requiring many nodes make the queue slow but I am sure that you are looking into this.
   My biggest problem with using PDSF has always been that the nodes regularly just freeze, even on something as simple as an ls command. Typically I think this is because some user is hammering a disk I am accessing. This affects the overall usability of the nodes and can be very frustrating. My officemates all use PDSF, and we regularly inform each other about the performance of PDSF to decide whether it is worth trying to connect to the system at all or if it would be better to wait until later.
   NERSC could have a clearer and fairer computing time reimbursement/refund policy. For example (Reference Number 061107-000061 for online consulting), on 11/07/2006, I had a batch job on Bassi interrupted by a node failure. The loadleveler automatically restarted the batch job from the beginning, overwriting all the output files from before the node failure. Later I requested a refund of the 1896 MPP hours wasted in that incident due to the Bassi node failure. But my request was denied, which I think is unfair.
   I have not done extensive comparisons of I/O and network performance. Hopefully, next year I'll be able to provide more useful information here.
   Our project relies primarily on our ability to submit parallel jobs to the batch queue on Seaborg. To that end, the current setup is more than adequate.
   Submission of batch jobs is not well documented.
   The charge factor of 6 for Bassi is absolutely ridiculous compared to Jacquard; it performs only half as well as Jacquard.
   The largest restriction for us is usually disk and storage; we have been able to work with consulting to make special arrangements for our needs, which has been very helpful.
   We have consistently found (and NERSC consultants have confirmed) a speedup factor of 2 for Bassi relative to Seaborg on our production code. Because the charge factor is 6, and because we see a speedup of 3 on Jacquard, Bassi is currently not an attractive platform for us, except for extremely large and time-sensitive jobs.

   I could not unzip library source code on Bassi because it limited the number of subdirectories I could create. That machine is useless to me unless I can get Boost installed.

The machine I have been able to effectively use is Davinci because it has Intel compilers. NERSC support has not been helpful at all in getting my software to run on various machines.
   Interactive use on PDSF is often too slow.
   The most unsatisfactory part for me is the confusing policy for queueing submitted jobs. In an ideal world, it should be first come, first served, with some reasonable constraints. However, I frequently find my jobs waiting for days and weeks without knowing why. Other jobs of similar types, or even those with low priority, sometimes jump ahead and run instantaneously. This makes rational planning of the project and account management almost impossible. I assume most of us are not trained as computer scientists with special skills who can find loopholes or know how to take advantage of the system. We only need our projects to proceed as planned.
   Every NERSC head node should be running GridFTP.

Every NERSC queuing node should be running GT4 GRAM.
   Network performance to HPSS seems a bit slower than to resources such as Jacquard. Not sure of how much of a hit this actually is. Just an impression.
   Logon behavior to Bassi can be inconsistent with good passwords sometimes being rejected, and then accepted at the next attempt. Molpro does not generally work well on multiple nodes. This is not too much of a problem on Bassi as there are 8 processors per node, but better scaling, with respect to number of nodes, is possible for this code.
   I don't understand why Bassi has a restriction on using a large number of nodes (i.e., > 48 nodes requires special arrangement).

   It is quite possible that I am unaware of a better alternative, but using BBFTP to transfer files to/from Bassi from/to NSF centers I see data rates of only 30-40 MB/sec. This isn't really adequate for the volume of data that we need to move. For example, I can regularly achieve 10x this rate between major elements of the NSF Teragrid. And that isn't enough either!
   Scratch space is small. My quota is 256 GB. The simulations we are currently running are on a 2048^3 grid and we solve for 3 real variables per grid point, giving a total of 96 GB per restart dataset. After 6 hours of running (the maximum walltime on Bassi), we continue from a restart dataset. But sometimes we need to do checkpointing (i.e. generate the restart files) halfway through the simulation. This amounts to needing to hold 3 datasets (initial conditions, the halfway checkpoint, and the end state), which is not possible. Moreover, for simulations of more scientific interest we solve for 5 variables per grid point. The restart dataset in this case is 160 GB. This means that we cannot run, checkpoint, and continue. This quota also prevents fast postprocessing of the data when several realizations of the fields (many datasets) are needed to get reliable statistical results.
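For context, the quoted sizes are consistent with single-precision (4-byte) values, which is an assumption: 2048^3 grid points x 3 variables x 4 bytes = 96 GiB per restart set, and with 5 variables the same grid gives 160 GiB.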
   We produce output files faster than we can transfer them to our home institution, even using compression techniques. This is usually not an issue, but it has been recently.
   Bassi is great. The good network connectivity within NERSC and to the outside world and the reliability of HPSS make NERSC my preferred platform for post-processing very large scale runs.
   Jacquard is much harder to use than the IBM SPs...
   

There have been persistent problems with passwords (to Seaborg) being reset or deactivated. In one case my password was deactivated but I was not informed (via email or otherwise). This may have been the result of a security breach at our home institute.

Several hours are lost trying to regain access to NERSC.
   Please assign more disk space to users.
   I think that the NGF and the general choice for GPFS is a great improvement over the previous NFS-based systems.

I am worried that in recent months we have seen the performance of the PDSF home FS drop significantly.
   The switch to GPFS from NFS on PDSF seems to be overall a good thing, but there are now occasional long delays or unavailability of the home disks that I don't like and don't understand...
   PDSF has way too few SSH gateway systems, plus they seem to be selected by round-robin DNS aliasing, and thus it is entirely possible to end up on a host with load already approaching 10 while there are still machines available doing absolutely nothing; what I normally do nowadays is look at the Ganglia interface of PDSF and manually log in to the machine with the smallest load. There is a definite need for proper load balancing here! Also, it may make sense to separate interactive machines into strict gateways (oriented toward minimal latency of connections, with very limited number-crunching privileges) and interactive-job boxes (the opposite).
   The low inode quota is a real pain.

    Hardware resources at NERSC are the best I have used anywhere. NERSC, and in particular Dr. Horst Simon, should be congratulated for setting up and running certainly one of the best supercomputing facilities in the world.

If you would like to comment on NERSC's software resources, suggest improvements, or list future needs, please do so here:
   Need latest version of NWChem to be installed in Jacquard.
   On Davinci, some Fortran library is needed
   The only software I use a lot on NERSC machines is MATLAB. Being able to run multiple MATLABs simultaneously on DaVinci, and fairly quickly, has been a huge help to my research program. If there is any way to run MATLAB code (regular code, not parallelized in any way) faster I would like to know about it. Overall I am very satisfied with the resource as it has allowed me to do computations that I would not otherwise have been able to do.
   VASP
   CHARMM performance on bassi is worse than that on jacquard, though bassi charges more than jacquard. I don't know whether it is bassi's problem or CHARMM's problem.
   I understand the reason why NERSC has to remove IMSL from Seaborg. But I am not happy about this action.
   I haven't used the system much since the cmb module was upgraded, but my initial impression previously was that support for many quite standard libraries was not immediately apparent (fftw3, gsl, boost, ATLAS, CBLAS, LAPACK). It is particularly surprising that the AMD math library interface is not compatible with the standard CBLAS interface, but that's obviously not your fault. I think the net result, though, is that there are many, many copies of these libraries floating around, which users have individually compiled themselves so their existing code would work. On the whole, though, the development support is still excellent.
   It would be great to add more support for highly parallel molecular dynamics code, most notably NAMD by Klaus Schulten's group.
   The OS of Seaborg seems a bit clunky. For example, users can't use the up arrow to get most recent commands, and <tab> doesn't do automatic completion.
   I wish there were a way to run some jobs for more than 24 hours on a small number of processors.
   See comments on HPSS software.
   I would like nedit to be available on Bassi.
   The PathScale compilers on Jacquard, particularly the Fortran one, have been quite awkward/awful for us to use. I keep on running into a variety of problems compiling and running our codes on Jacquard with PathScale, which are absent in other machines such as Bassi and Seaborg. Similar experience also applies to the mvapich library on Jacquard.

So far I have been enjoying the various options for visualization software (mostly AVS and IDL) available on DaVinci. However, one of the major simulation codes I have recently been using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested the consideration of installation of Tecplot on DaVinci about a year ago, based not only on the need from my own project, but also on the more important fact that the installation of Tecplot will benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by the DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to restate my request and my concern about it.
   Please, note that apart from Seaborg I have not used the other systems much to provide proper information for them.
   Porting some code to the pathscale compiler has been problematic on jacquard.
   As noted earlier, the scope of our project is served well by the current setup at NERSC.
   Interactive debugging of large parallel jobs remotely is difficult even with Totalview, due to network lags and the opacity of the PETSc library. It is impossible on machines such as Jacquard on which there can be long delays in the launch of "interactive" jobs.

   Even if it's heretical, please put Intel compilers on Jacquard. Some software does not support PathScale-specific options and will not compile. I have found Intel compilers, even on AMD machines, to be the most reliable and high-performing compilers for all of my programs.
   It would be great to have more up-to-date versions of quantum chemistry packages running at NERSC
   I think the main difficulty that I run into is not having an up to date version of Python available on all the machines. I would like to see versions:

2.3.6

2.4.4

2.5

These are the latest stable version of Python for each of the major releases.

The other thing that I would *really* like is to have more modern MPI implementations - specifically ones that support the MPI-2 spec.
   The GNU autotools for building software (autoconf, automake, etc) are frequently out of date which necessitates installing your own version to build some piece of software. Given that these are so commonly used, they should be kept up to date.

   The software on all the computers is excellent!
   I don't use DaVinci. The /project directory has been flaky. I don't want to move data around.

I would like to see Trilinos installed.
   Compilers are the bane of our existence. NERSC is no worse than any other site and probably slightly better. The failure of compilers to be ANSI compliant is not something I expect to be fixed any time in the near future. Indeed the reverse seems more likely. Perhaps NERSC, with DOE behind it, could be a leader. Certainly NSF is unlikely to. Otherwise how could they seriously propose Petascale computing.
   Debugging with Totalview still seems more painful than necessary --- in parallel mode it is not fun at all (although I have not used it much in parallel mode in FY2006).
   We recently switched operating systems on PDSF and are now using Scientific Linux 3.02. Unfortunately, the default installation is fairly bare bones, lacking any kind of graphics software to look at PDF, PNG, GIF etc.

   The "RedHat 8 for STAR" CHOS environment on PDSF lacks many small yet highly useful tools, like Midnight Commander for instance. As a result it is necessary to switch to the default profile, which slow things down and makes it impossible to use STAR-specific components together with such tools.
   On seaborg, compiling C++ with optimization is very slow.

    Software resources at NERSC are excellent, especially for research in the mathematical and physical sciences. NERSC makes special efforts from time to time to upgrade the software, and users are advised about the upgrades, etc. in sufficient detail. NERSC deserves thanks from the users for their efforts to provide the most recent upgrades.

What does NERSC do well; why do you compute at NERSC? (What aspects of NERSC are you most pleased with?; what are the reasons NERSC is important to you?)
   Lots of available computing time, Easy to get nodes.
   User services is great!
   Fast computers with the software we need
   Good computing infrastructure and excellent support.
   NERSC is doing a very good job. It is very important to me, since I need to analyze a large amount of data. NERSC is fast and stable.
   It is one of the few places where I can do the computations I need to do.
   Bassi is really fast; DaVinci's unlimited quota is my favorite.
   the computing resources are very good.

CPU-time allocation process is quick (also for additional time)
   pdsf
   Excellent hardware and software and good communications.
   Once on the system, I like the ability to run a job any time of day as well as how fast my program runs on the machine...
   The preinstalled application packages are truly useful to me. Some of these applications are quite tricky to install by myself.
   Most pleased with: short job waiting time, ample CPU resources.

Important to me: ample CPU resources.
   NERSC is very important for my research. Its computer power, support facilities, and the reliability are far better than those provided by other super computer centers.
   Different architectures available with a uniform "layout" - the huge number of available nodes allows one to test one's own codes with different amounts of data.
   As an experimentalist who requests NERSC time in order to collaborate with theorists, I am not a hands-on user (and therefore left most of this survey blank). Because my knowledge of the NERSC computer system is negligible compared to the average user, I expected it to be difficult or confusing to request time and manage an account. Yet the NERSC staff has always been very helpful and have made the process as easy and simple as possible. Thank you!
   NERSC has plenty of computing power, very good software configuration, and great support and consulting staff.
   Excellent management and support of integral high performance computing resources.
   The computing resources at our university (University of Utah) are limited. We need more computing resources to finish our projects in time.
   NERSC offers unique high performance computing capabilities that enable new avenues in scientific computing, such as our "reaction path annealing" algorithm to explore conformational changes in macromolecules in atomic detail.

NERSC is continuing to improve their computing capabilities and support to users.
   The facilities are good, queue times are shorter than at other facilities, and the administration is responsive and prompt at allocating time.
   

NERSC systems are very stable, and are thus an excellent place for developing code.
   This survey is much too long. Please try to streamline it next time.
   Staff has been very helpful. Proposal process is efficient. These resources are a tremendous help for our research. In fact, we could not do everything that we're currently doing without these resources.
   capability of massive parallelization
   NERSC has the only resources available to complete my computation in a timely manner.
   Your consultants are great; without them, NERSC wouldn't be very useful to us.
   Large parallel machines, turn around time, consultant support
   NERSC is extremely useful for my computing needs. I can effectively run my production jobs at NERSC.
   PDSF used to be wonderful - always up, easy to use, lots of user support, etc.
   NERSC provides excellent, world-class HPC resources in almost all aspects, from hardware to software. What distinguishes it most from other supercomputing centers is, in my opinion, its superior user support, in both consulting and services, although there is still room for improvement. That has made our scientific work more productive, and that's why NERSC is important to me.
   We enjoy the sizable computing resources in multi-way SMP nodes, in particular the 8-CPU and above nodes. The large number of nodes permits us to do large simulation batches. We find it possible to do considerable amounts of code performance enhancement on this hardware thanks to acceptable queue times on the debug and interactive queues and passable performance monitoring tools.
   I do electronic structure using quantum Monte Carlo, so having a robust large computer is of extreme importance to me.
   Provides access to large machines.
   NERSC provides very good user support. I am very satisfied with the way that NERSC people handle users' questions and requests; they are very professional. Also the NIM website is probably one of the most organized online management systems I have experienced.
    NERSC is the most reliable computational center on which I have ever run large parallel calculations. The systems are stable and the support people are competent; in most cases they come back with a solution, and they show that they take user problems seriously. A very professional team. As I'm working on developing parallel scientific applications, I always need to test and produce data on reliable machines.
   Computer resources are much better than at other centers (see below).

Just one little comment within my short experience of using NERSC: the interactive jobs for testing my own ideas are a little bit inconvenient before running longer/larger jobs.
   NERSC allows for large quantum chemistry jobs to be run quickly with MOLPRO.
   Queue management has been greatly improved recently, and things seem to move well. Networking is very good. Consulting is very helpful in resolving issues. The machines run well.
   I am extremely pleased with NERSC. The resources have always been available when I needed them, they keep me well informed of changes, the machines have been reliable and have performed well, and they have been very quick to solve my problems when I had them (usually expired passwords, which is my fault).
   Excellent overall picture. People are trying really hard to satisfy users' requests.
   I am most pleased with the services provided by NERSC staff. NERSC is important to me because it provides computing power that we do not have at our home institution.
   I have been using NERSC (or MFECC) for 26 years. It always has been and remains the best run supercomputer center in the world. The staff responds to requests and is very helpful in general.
   Machine is easy to access
   I work here and computing with NERSC is my job.
   Aside from the sheer number of CPU hours available, NERSC's strengths are its knowledgeable and responsive staff, and its comprehensive list of well-maintained and up-to-date software libraries and compilers. I also appreciate the timely updates about outages, the low numbers of such outages, and the queueing policies that make it possible to run many instances of codes that require 100s of processors for 100s of hours as well as those that use 1000s for 10s.
   NERSC provides significant resources and support for those with a minimum of hassle. It is an excellent example of a "user facility," with a sense that it really serves the users, not the people that manage it.
   Excellent high-performance computing access, very professionally managed. High reliability.
   Consulting services are very good.
   Frankly, NERSC has been an utter disappointment. I thought I would be able to run big jobs quickly and get a lot of science done but instead I've spent all my time trying to figure out how to compile stuff. I only compute at NERSC because it was easy to get the time and the queues are short.
   NERSC is important to me for the computation power and it is the main reason why i compute there.
   NERSC is important to me because it allows me to run relatively big parallel job I can not run somewhere else.
   Good variety of computer architectures and helpful consultants.
   NERSC has a lot of computational power distributed across many different platforms (SP, Linux clusters, SMP machines) that can be tailored to all sorts of applications. I think that the DaVinci machine was a great addition to your resource pool, for quick and inexpensive OMP parallelization.
   I remain quite satisfied with queue times and ease of use.
   NERSC is a window to the whole world for me. It is part of my academic life; I cannot do without it. I am very grateful to everyone at NERSC for their continuing good services.
   Discount charging program for large jobs is great.

   very satisfied with consulting, machine accessibility ....
   Consulting Service.

Large scale computations (can not be done locally)
   Running climate models requires large resources; a single Linux box or a couple of them just does not have the compute power. Bassi has much better turnaround than Seaborg, and if your application can only use fewer processors effectively, it is much better for the work/CPU-time ratio.
   I trust the expertise on technical issues and the reliability of the availability of the resources (hardware, software, people).
   Implements experiment software
   The software environment always works as expected. The time from uploading my code and data to having a working production environment is very competitive.
   I think NERSC is very well supported, with a very logical layout, and nearly all the tools I would need. This has allowed me to learn the system, and get useful work done in a relatively short time.
   I think the NERSC machines are generally well supported and that the organization is solid. Applications are generally well handled, and the organization gives an impression of running a "tight ship".

   NERSC is very important for me to accomplish important research projects.
   maintenance (uptime and stable operation of computing nodes)
   I think the NERSC consulting service and the ACTS software service are the best in the US.
   

1. (a) For the robust, stable computing environment.
1. (b) I compute at NERSC as certain problems require the memory of 1000 processors.
2. I'm pleased with the fair batch queuing system and prompt replies to inquiries.
   NERSC is important to me because I don't have enough computer resources in my group to perform the computation I need to do for my projects.
   Computing at NERSC is reliable. Documentation is complete and any information needed can be found online.
   Good facilities with good support. I've had good turnaround on jacquard (less good on seaborg). But since our code is better suited to jacquard, this is not a problem.
   NERSC is a very well managed center. The precision and uniformity of the user environment and support is outstanding. I am fairly new to NERSC (INCITE award) but it compares very favorably indeed with NSF centers.

Our research is totally dependent on very large scale computation. I hope we will be able to work with NERSC in the future.
   The computers are stable and always up. The consultants are knowledgeable. The users are kept well informed about what's happening to the systems. The available software is complete. The NERSC people are friendly.
   Consulting at NERSC is very good, and the consultants are kind and responsive to my needs.
   I use nersc because Seaborg has significantly more RAM/node than other clusters I have access to.
   NERSC offers state-of-the-art computing platforms and the necessary software for conducting scientific research. I am very satisfied with the support of NERSC in carrying out my research projects.
   Highly efficient clusters.
   NERSC provides excellent computational facilities and excellent support.

Since the late 80's NERSC has provided all computational resources for my research activity.
   Interactive runs on Seaborg - this is the ONLY useful means of debugging my MPP code available on NERSC or NCCS supercomputers.

   There are machines that fit my calculations and there is the possibility to raise (time and space) quotas to perform these very large, very long jobs. I could not do these runs on any other resource that I have access to.
   Consultant support at NERSC is very good - I rate it more highly than other supercomputer centers I have experienced.
   seems to be a reliable, well-maintained system. we take advantage of the parallel processing resources at NERSC.
   I am familiar with NERSC, and I think you guys provide a good, universal service with emphasis on HPC.
   Network connectivity is good. HPSS is reliable. Bassi and Seaborg are reliable. This makes post-processing large runs less of a headache than other places.
   NERSC provides me with tremendous computing power and availability. I have been a little disappointed with problems on Jacquard, related most likely to the MPI implementation. NERSC consulting was however able to help me with that, but it was still a considerable decrease in the usability of the machine.
   Your email promised this would take only a few minutes. I have run out of town and must leave. Sorry. You should have put these questions first if they are important.
   NERSC provides the easiest MPP access for many of us in the DOE sphere. For those of us who do science and do not program 12+ hours a day, the NERSC "interface" is relatively easy to use once you become familiar with it.
   Very helpful and important for my research
   fairly good consultant support
   I compute at NERSC since one of my programs needs a lot of memory and nodes, and runs long. NERSC is for me the next step up from the Ohio Supercomputer Center, which does not have the same machine capability. So I am using OSC when developing code or with smaller code, and for more I need to come to NERSC, and I am happy with that.

   The waiting time in seaborg has been getting worse and worse. Adding new machines such as bassi and jacquard was adequate. Currently I extensively use jacquard, which shows very reliable performance. However I feel that the home quota (5G) is rather small in jacquard even though I have an option to use NFS.
   Availability of many processors (>64).

Large memory jobs possible (with 64-bit compilation).

"Minimal" down time.

Jobs "eventually" get done.
   I really think the NERSC team is doing a great job of keeping the Fortran compilers and the math libraries working properly. I have used other clusters and have had tons of headaches. On Seaborg and Bassi, my experience compiling codes has been really smooth.
   One of the great benefits for us of using NERSC is the fact that the HPSS and PDSF systems are available. I think that the combination of the two is very powerful for experimental particle physics. We do not use the other resources offered by NERSC because they are not suitable for the type of analysis we do. However, being able to read a large data set from HPSS and process it on PDSF in a finite amount of time is very valuable. I also think that in general, the switch to GPFS as the filesystem of choice for NERSC has been an excellent decision.

I am also impressed by the ease with which one can request (small) resources for a start up project. I recently requested some computing resources for a new project we are planning for and was up and running in a few days. This helps us tremendously in trying to reach our scientific goals. Having worked with a number of computer centers, I have to say that NERSC does this very well.

I also think that NERSC is very sensible with the current overall computer security approach (see also below).

Furthermore, I am glad to hear that NERSC has decided to setup an open source software group. I hope that this group will work on some of the open source software that is in use at NERSC and build up detailed expertise using that software. One of the projects that I hope can be looked at is the Sun Grid Engine (SGE) - the batch queue software in use at PDSF. Perhaps this software can also be used on some of the other computer clusters.
   NERSC has excellent hardware and software resources, which are very important. I am most pleased with our request and acquisition of allocation hours, and with the outstanding help support (timeliness and accuracy).
   The best part about computing at NERSC is the support and the reliability of the computers. I could use our local computers (LLNL) but the support is not nearly as good nor are the machines as stable.
   NERSC provides a stable computing environment for work that I could not have done elsewhere. Much of my design and analysis in the area of accelerator modeling would not have been possible without NERSC computing power.
   The PDSF specific support staff are very good; they need more help.

HPSS can hold a lot of data.

Access to NERSC computing via ssh and scp is crucial for its overall usability. Please do not go to a keycard/kerberos/gridtoken etc. authentication. This would break much of the automation ability which is vital for large collaborative projects.
   Provides reliable HPC resources: hardware and software. Long-term time allocations and a sensible time allocation application process, both providing a good match to ambitious long-term scientific programs.

Straightforward and transparent account policies and procedures.

   NERSC runs a reliable computing service with good documentation of resources. I especially like the way they have been able to strike a good balance between the sometimes conflicting goals of being at the "cutting edge" while maintaining a high degree of uptime and reliable access to their computers.
   good
   There's a lot of computing power at PDSF and the system works. I like things that work.

   NERSC makes possible for me extensive numerical calculations that are a crucial part of my research program in environmental geophysics. I compute at NERSC to use fast machines with multiple processors that I can run simultaneously. It is a great resource.
   I use NERSC because I have access to a lot of processors on seaborg.

   Keeping the most advanced hardware available in a stable environment with easy access.
   Availability of resources is good. Performance of the computers is good. Documentation is good.
   We compute at NERSC because it has computing resources that far exceed those of our home site. NERSC's support staff has provided very timely responses to our inquiries, and has resolved the few issues we've encountered very quickly. NERSC's support staff has constantly monitored our quotas and usage, and has adjusted allocations for our project in proportion to our usage. The response time for these adjustments is very fast! NERSC's support staff has definitely added to the efficiency and productivity of our project.
   NERSC is very well managed and operated.
   Generally, NERSC does capacity computing very well, servicing a large community of users; it also has (or soon will have) excellent capability platforms for many jobs, both small and large.
   There is no other supercomputing facility in the world where I can carry out my theoretical and computational research in the physics and chemistry of superheavy elements. I have been using the facility for ~10 years and I am most satisfied with the hardware, software, consultants, etc., and my first choice would be to use the NERSC facility.
   Variety of hardware. Long-term support for hardware (even if newer generation hardware is already available).
   It is a reliable computing center; e.g., Seaborg is regularly up, and by today's standards it is still a powerful parallel computing tool (we will be using Bassi more in the future, though).

If you would like to comment on the NERSC web site or other web interfaces, please do so here:
   Please update the web pages with queue information more frequently.
   I guess I better start using some of these other services...
   http://www.nersc.gov/nusers/resources/PDSF/stats/ (found by going to www.nersc.gov -> PDSF -> Batch Stats) should show the SGE batch statistics.
   NIM should allow users to submit a file (pdf, word document, etc.) for allocations requests rather than fill in an online text box that doesn't allow for formatting, editing, figures, special characters, etc.
   Searching for relevant information, e.g. ways to optimize code on different architectures, suitability of numerical algorithms and libraries for solving specific problems, could and should be improved.
   The web site does not document the process for changing passwords clearly.
   the site certificate seems to have issues with some browsers
   The web site is really very useful, both for beginners & for advanced people. I am really very impressed.
   The NIM interface looks strange in Firefox because of the frames. Perhaps it's time for an upgrade.
   I occasionally come across outdated info. No other significant issues.
   I find the NIM interface hard to use, but I haven't spent too much time learning it, which might be the problem.

Searching in general hasn't been too useful, but I don't know if that's the search function or missing content.
   Sometimes it is very hard to find out about special flags of the xlC compiler.
   Messages left on the online help desk were answered usually within a few hours.
   In general the website is OK to good. However, I find the NIM web user interface very poor and non-intuitive.

   NIM.nersc has worked very well for me the few times I've needed it.

What should NERSC do differently? (How can NERSC improve?)
   Better/faster supercomputers. Lower point-to-point latency message passing.
   Shorten the queue or make the wait time more consistent: sometimes a job starts within 1 minute of submitting, sometimes it takes 24-48 hours. It's very difficult to plan jobs and choose which computer to use if the queue is so unpredictable. It would be really helpful if the computer could give me an estimate, after I've submitted something, for how long it will take to get through the queue. Even a crude estimate would help... 1 hour or 48 hours?...
   Seaborg needs improvement, it keeps crashing.
   It would be good to have a queue for long jobs at PDSF.
   I hope NERSC can improve its visualization software and hardware.
   The consulting people should put more time into solving customers' questions.
   A mass storage system with two servers and hsi/ftp access is uncomfortable; migrating the HOME file system to disk in the background is easier to handle and allows faster access.
   To improve the speed
   Get Franklin online ASAP
   Hard to say...
   I am already quite satisfied with NERSC.
   Seaborg is getting old and slow; it would be nice to have a new computer of a similar size to Seaborg.
   You need a much better queue structure! Not every job runs effectively on a vast number of processors, and those of us with long running jobs that need relatively few processors should be granted a way to use the time that we're allocated.
   Support for software in the field of molecular dynamics/biophysical chemistry could be somewhat improved, but the existing offerings definitely already provide a basis to work with.
   The applications process should allow for submission of a file rather than online text boxes. The charge factors for newer machines should be reduced.
   Improve web documentation and tutorials for software, etc.
   The CPU limit on interactive testing is often restrictive, and a faster turnaround time for a test job queue (minutes, not hours) would help a lot.
   Better information on the collection of software and promoting new tools for use in the scientific community. Help to simplify the use of computers.
   The biggest problem I have is with using the PDSF interactive nodes. They often become unresponsive or slow. I commented on this earlier in the survey.
   Having production accounts for collaborations would be quite helpful.

HPSS should have better interface options.

PDSF reliability is poor (bad nodes draining jobs, periodic slowdowns, etc.)

PDSF interactive responsiveness is poor even on a good day. It can take several seconds to start a vi session, source a script that sets env variables, etc. Login delays of 10s of seconds are common.

It is striking to me that the primary things that I am ranking poorly this year are the same things I complained about last year -- the HPSS software interface is still terrible, production accounts still don't exist, and PDSF is still understaffed/undersupported. The conversion of NFS diskvaults to GPFS based systems is the only thing I can think of that has actually improved at NERSC for me over the past year (and that was a huge improvement, to be fair).
   Do a better job on security.

Value users' time and effort. ... In the event of an unplanned outage, you should give users time to make a backup plan. You can't just lock the whole system out without any notice.
   Fix whatever is wrong with interactive use of PDSF. I cannot believe that the backup of user home directories can be having that big an effect on interactive use (what I was told when I filed a support request). I have asked if my problems are related to sl302 vs. rh8 and I am always told that I should not go backward to rh8, that everyone will be transitioned to sl302 soon. It seems like it has been ~1 year since I was first told that. I cannot even use emacs effectively in sl302, and I was advised to use rh8 instead! Conflicting advice, poor performance... if it doesn't improve soon, I'll stop using it.
   Fix the login problems on Bassi!
   It would be desirable to get rid of the PathScale compilers and the mvapich package on Jacquard and replace them with better options. They are bug-prone, and there is no obvious reason why they were chosen in the first place.

So far I have been enjoying the various visualization software options (mostly AVS and IDL) available on DaVinci. However, one of the major simulation codes I have recently been using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested consideration of installing Tecplot on DaVinci about a year ago, based not only on the need from my own project, but also on the more important fact that installing Tecplot will benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by the DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to restate my request and my concern about it.

NERSC could also implement a clearer and fairer computing time reimbursement/refund policy. For example (Reference Number 061107-000061 for online consulting), on 11/07/2006 I had a batch job on Bassi interrupted by a node failure. The LoadLeveler automatically restarted the batch job from the beginning, overwriting all the output files produced before the node failure. Later I requested a refund of the 1896 MPP hours wasted in that incident due to the Bassi node failure, but my request was denied, which I think is unfair.
   More and faster machines! Reliability of global filesystems would reduce wasted jobs.
   It is always necessary to keep increasing the number and speed of processors as much as possible, and also to keep the main libraries updated, but I think that is being done properly.
   Get more modern hardware. Seaborg, for instance, is quite old.
   The only thing I can think of is the time allocated for debug class jobs. It should be larger.
   It would be nice to have more quantum chemistry programs available on NERSC like ACESII or QCHEM 3.0.
   Increased storage resources would be very helpful. Global file systems have been started and should be continued and improved.
   Nothing, keep up the good work.
   Difficult to improve an already excellent organization
   The move now is to large numbers of CPUs with relatively low amounts of RAM per CPU. My code is moving in the opposite direction. While I can run larger problems with very large numbers of CPUs, for full 3-D simulations large amounts of RAM per CPU are required. Thus NERSC should acquire a machine with, say, 1024 CPUs but 16 or 32 GB RAM/CPU. This would be as much RAM as is available on Franklin!
   Have more processors for interactive runs
   In the big push to petascale computing, I would urge NERSC not to abandon the needs of smaller-scale codes (those most efficient on 1000 processors or less), which do not max out the capabilities of next-generation machines, but which do good science that cannot be effectively or economically replicated at smaller facilities or clusters.

   More variety of compilers on machines like Jacquard. It lacks a Fortran 2003 compiler, for example.

   Would you send the plumber to fix your roof? No. Why then does NERSC force, for example, a chemist, to do all the system administration and software support he's not very good at, instead of having tech support fix the problem quickly? I really can't fathom why no one at NERSC is willing to help me get my code running. I'm four months into my work at NERSC and have yet to run anything non-trivial.
   continue to expand resources
   I would like to see more quantum chemistry support, mostly in the form of keeping the existing software up to date.
   I think the emphasis on really large machines is not a good thing. The reason is that *most* of the time the large queues are not being used. I think more smaller machines should be built that focus on the queue sizes that are most used.
   One anxiety I've had is that my project typically only requires the submission of large jobs for short periods of time, followed by periods of testing new codes and working on other projects. But I still feel obligated to find ways to use my allocation, since it will be taken away otherwise, and failing to use a requested allocation puts me at risk for not having it renewed the next year.

I understand it might require additional administrative work, but it might be nice to have single-project applications where a specific block of additional allocation hours could be requested for a particular task.
   It is very very fine as it is.
   The Allocations process should make more use of external review and be more open to researchers not currently funded by the DOE.
   no suggestions - very happy
   I'd like to see an expanded set of hsi commands, like md5, chksum, gzip, bzip2 and pkzip.

I'd also like to be able to use htar to make a compressed archive either with gzip or bzip2, like the tar -j or tar -z options on gnu tar.
   It would be valuable to some of my applications to enable network access directly to and from the compute nodes from other DOE SC centers.
   Enable connectivity with PCs. Allow a way to use commercial tools found on a PC with the CPU power of NERSC.
   better grid support
   The queue structure, which has very few running jobs per user, heavily favors people running many-node jobs, not people running embarrassingly parallel problems. From a queueing perspective, it should not make a difference whether I do my science through one 128-cpu job or 128 1-cpu jobs. There is nothing saying "better science" is done with large jobs; in fact, embarrassingly parallel jobs likely have less overhead and better efficiencies in the queue. This can be accomplished by changing the queue structure so that it's not the number of JOBS that is limited, but the number of NODES, i.e., one user can have at most x nodes at a time, regardless of how he chooses to use those nodes.

   More adequate and equitable resource allocation based on what the user accomplished in the previous year.
   Do you have a large amount of memory available to MATLAB on DaVinci?
   Make it easier to get allocated resources for people who need to do their job on NERSC. For instance, I do not know one year in advance whether I will get certain support for doing certain computations. The startup account just does not have enough resources for those computation jobs I need to do. NERSC would mean nothing to me if I could not use it. Also, please be fair to people who use Monte Carlo algorithms. It may be trivial to parallelize a Monte Carlo code, but that does not necessarily mean it can tolerate high latency or low bandwidth on a home-brewed cluster when scaling to hundreds of processors. And, in some computational problems, Monte Carlo is a way to obtain the most accurate solution.
   I really don't know. I'm very satisfied.
   I am a little dismayed that NERSC is replacing Seaborg with the Cray XT. I should say that I am a strong supporter of Cray in general, but many centers are moving to "cluster-like" systems that have very weak capability on a per-node basis (for example, limited per-node memory) and the XT is at quite an extreme (but with an outstanding interconnect). Personally, I would have loved to see Seaborg replaced with a large Power6 with very large per-node memory. I expect the XT4 will have superior scaling at high processor count, but I doubt we will be able to use it at all due to its limited per-node memory. Pity.
   ??
   NERSC needs to improve the disk space available to users. Some users like me need a large disk space where daily generated huge files are stored.

But HPSS seems to be quite a headache for storing files, since one can't open and read files there, and transferring back and forth from it to other machines seems to be quite a slow process.
   Little more favorable policies for smaller jobs.
   One problem has been the long wait in the batch queue for large jobs. This was certainly a problem in 2005 (at some point I complained to Francesca Verdier about this). This year I have not been running as much, but it looks like the situation has improved.
   Improved reliability - software changes break my codes or change the results too often

Improved batch queue throughput for medium/large jobs
   Longer batch times on Bassi would be very helpful!
   mainly keep doing what it is doing: bringing on new systems while keeping the old ones available long enough to make the transition easy.
   Don't squeeze out the small (today) users completely in favor of the "big guys" (today); small projects occasionally grow into big ones.
   The long SEABORG outage a month or so ago was pretty inconvenient for me; the timing was awful. I sporadically have problems with F90 compilers, and the documentation on the compiler options is mostly incomprehensible.
   The waiting time for jobs to run is still on the high side.
   Notify users promptly by email if their passwords have been reset or deactivated.
   I think the best way will be to have a few nodes where the walltime is larger than 48 hrs, especially for the 64 GB nodes on Seaborg: if a calculation needs that amount of memory, it is also likely that it needs a longer walltime!
   I am worried that the PDSF cluster is not scaling up very well. As more experiments start running code on PDSF and potentially more machines get installed in the cluster, I think that more operational support will be necessary. I also have the feeling that in the recent year the system admins of PDSF have become more cavalier in their approach to the whole cluster.

I would like to see the possibility of having group production accounts. This is something that we have requested for the past couple of years. I fully realize that there are security and accounting implications, but there are ways of solving this issue in a way where it is fully trackable who submitted what when. I thought that there was a solution that was going to be implemented, but somehow this never happened.

   I don't understand why NERSC doesn't use one time passwords. It makes me a little nervous that the access is not controlled more like other large computer centers.

   The reduction of queue time definitely will improve my productivity and hence the advancement of science itself.
   I would prefer a less formal, more flexible allocation process. I find my need for computational resources can vary significantly through the year and is not always easy to predict in advance. Being required to commit to a certain level of usage a year in advance (with the implication that if it's not used, it will be difficult to get back to that level in subsequent years) seems likely to lead to a certain level of wasteful computing when averaged over all users. This type of allocation system has been in place at NERSC for many years, and I don't claim to have a detailed suggestion as to how to change it. However, it seems timely to consider going to some type of non-allocation-based approach to access to resources, as is used at other computing centers.
   Upgrading to faster machines will be a nice improvement.
   
   Jobs using a large number of nodes tend to have too long wait times. I know this is not a simple problem to solve, but perhaps there is room for improvement.

How does NERSC compare to other centers you have used? List the center(s) you are comparing us to. (What do other centers do that NERSC should do?)
   Very well.
   Very Well
   I've used AHPCRC (the Army High Performance Computing Research Center) and the supercomputers at the Minnesota Supercomputer Center. NERSC is much better about communicating problems/changes/new information.
   I am also a user of RCF at BNL. I prefer to work on NERSC (PDSF), which is faster than RCF.
   In terms of production runs, NERSC is doing very well, especially with the machine Bassi, compared to SDSC. However, data visualization still needs improvement.

   UCSD: UCSD has a smaller number of processors available.
   NIC, Germany: NERSC does very well in terms of allocation of CPU time compared to NIC. The mass storage system with migration is as simple to use as the NERSC HPSS file system.

   RCF
   Cf. COSMOS, Cambridge, UK: NERSC has much better facilities, and is considerably more stable.
   Your staff is more user friendly and that is crucial to success...
   It is better than ORNL-NLCF and PSC.
   Compared to ABCC, NERSC has a faster job turnover rate, shorter job waiting times, and many more CPUs to use.
   NERSC is the best one.
   NERSC has a greater amount of computational resources, easier access, and constant availability throughout the year.
   NERSC is better than NCCS at ORNL.
   NERSC machines are up more reliably than NCAR & ORNL

We have fewer hardware and software problems with NCAR machines.

We get better turnaround on ORNL and NCAR machines

ORNL has much more responsive consultants.
   I have used computing clusters at Fermilab, SLAC, CERN, and BaBar experiment clusters around the world. Generally the computing power at NERSC is better on paper, but the ease of use (production accounts, HPSS software, usable uptime/stability, etc.) seems worse at NERSC. Other centers allow production accounts for processing of collaboration data.
   Compared to Jazz at Argonne National Lab, I find NERSC provides much better computing resources in terms of availability and performance.
   The TJNAF computer farm (>100 Linux boxes) has a much faster turnaround for small (test) jobs, but there are no real consultants available.
   n/a
   NERSC is the best computing center I have used. The user support and system administration is top-notch. This is compared to a big cluster I attempted to use at LSU, the SNO grid computers, and locally administered mini-clusters.
   RCF used to be terrible and PDSF was the model site. They have switched places, in my opinion. I also use Livermore Computing, and that system is also better managed and operates more smoothly than PDSF these days.
   NERSC is the main center I use
   The allocation process at other centers (such as NCSA) is simpler.
   I rank NERSC as one of the top centers among all the centers I have used. The quality of service that NERSC provides is comparable to (or even better than) that of the Minnesota Supercomputer Institute or the National Center for Computational Sciences (ORNL). In my opinion, significant expansion of the current computing facility to accommodate grand challenges in science is probably the most important next step that NERSC should consider. The arrival of the new Franklin machine will definitely narrow the gap, and we are looking forward to testing and porting our codes to the Franklin platform as existing NERSC users.
   In terms of reliability and user support it is very good compared with NCCS at ORNL.
   Minnesota Supercomputing Institute, Minneapolis, MN
   No other centers.
   I have been extremely pleased with NERSC and it compares with the top centers I have used. I have used resources at LLNL, LANL, Sandia National Laboratory.
   NERSC is more professional than other centers that I have used
   Hands down the best. Much better than SDSC, OSCER (our local center), or HLRN
   Easy to get large jobs run. PSC,ARSC
   In addition to NERSC, I have made use of the NCCS at Oak Ridge as well as local clusters at my home institution. Compared to the NCCS, NERSC has far better reliability and software support, a less overtaxed and better-informed support-and-consulting staff, and much more informative web resources, with timelier information about system outages, upgrades, and similar issues. I also find NERSC's queueing policies more congenial, and its security measures less of an impediment to productivity.

   I have been using NCSA, PNNL, OSC and the Cornell facilities. NERSC compares well with all other centers. Its strength is reliability and access to large numbers of processors. Its weaknesses are long wait time on seaborg and the missing Fortran 2003 compiler.

   NERSC is generally better than most other centers
   Hands down the worst I've seen. Other centers give their users timely and effective tech support, even if it means actually spending some time on support requests. If I struggle for more than week installing a code at PNNL, they install it for me so I can get on with my project.
   NERSC has super people, not only very knowledgeable but also very friendly. Thank you so much to all of the people there.

   I have allocations also at San Diego and Pittsburgh. San Diego has a queuing policy that favors large jobs. Both of them operate strategic user programs to help the large users develop efficient codes.
   We use ERDC and NAVO systems of DoD.

NERSC has better websites, explanations .... for users.
   Good. ERDC MSRC.
   Very good center, very reliable. I ran at NCEP; they had stability problems.
   It is as good if not better than all facilities I utilize.
   as good as the best compute centers I've used
   The NASA Advanced Supercomputing Division's Columbia machine, which has much more flexible queueing policies than NERSC and quicker turnaround. (It also has fewer users, which isn't NERSC's fault.)
   Excellent
   NERSC is the best of all.
   Compared to Juelich and DESY: NERSC's documentation is far more complete. I also find the queueing system on Seaborg (debug class, etc.) much more efficient.
   Not really sure.
   LLNL, SDSC, PSC, NCSA, TACC.

I think you do better than any of these in terms of user support.
   NERSC is by far the best compared to ORNL NCCS, ARSC, SDSC, etc.
   NERSC is indeed an excellent facility, compared to the Oak Ridge supercomputing center.
   I only use NERSC facilities.
   N/A
   NERSC compares very favorably to other centers (eg ORNL, LANL).
   no knowledge
   NCCS.
   See above.

OSC has more different architectures around to test things out. That is great, though I would not recommend NERSC do the same; NERSC needs to concentrate on big machines and run them well.
   The Seaborg processors are quite a bit slower (~factor of 3-4) than the current Intel Xeon processors we have on our local parallel clusters. The availability of a large number of processors on Seaborg is attractive.

Local clusters can have extensive down times (several days).
   I have used Jazz at ANL. The cluster there is smaller, ~300 CPUs. However, the math libraries (i.e., SCALAPACK, FFTW, etc.) and the Fortran compilers are not as well integrated as they are on Seaborg. As a physicist I am more worried about the science than about software issues; therefore, the experience of porting codes to Seaborg has been smoother.
   I have used the RHIC Computing Facility at BNL and I believe that NERSC compares very favorably with that cluster. One of the reasons for this is that NERSC appears to have a more pragmatic approach towards the security burden. Most users realize that in these times we need to be careful with computer security, on the other hand, this should not overburden the user. I know that at RCF, some users can no longer do their work because of the security situation there. I think that NERSC has mostly solved this by careful network monitoring and isolation of machines.
   NERSC provides by far the best support. Compared to LLNL.
   I also use NCCS at ORNL. NERSC hardware is more stable and performs better. On the other hand, NCCS staff would assist users in more direct ways to improve the performance of applications on their machines.
   Very favorably, in particular as far as the development of a long-term research program is concerned, unlike some other supercomputing centers often aiming at quick benefits at low cost (short allocation periods, difficulties with extensions, etc.), e.g. MareNostrum at Barcelona SC.
   See response to the first question in this section.
   NERSC is my favorite center.
   We have used LBL's SCS (scientific cluster support) service in the past. NERSC support's response time and quality of cluster maintenance (both hardware and software-wise) is definitely in a higher league. This is likely due to the fact that SCS has many different cluster setups with different hardware and customized software, so issues are more complex. The only thing I can think of is that SCS's clusters seem to have a more secure gateway, namely that you need a secure key provided by a handheld device to log onto the system.

   LCF at ORNL. It compares well in consulting, although ORNL is getting better with the years.
   NERSC is very good compared to ORNL, OSC, PSC, SDSC.
   NERSC is among the top best.

If you would like to comment on NERSC services or request additional services please do so here:
   Off-hours support for PDSF is limited. The PDSF staff are always helpful and responsive, but if they are not available (off-hours or on travel), critical issues sometimes get delayed. For non-critical issues this is fine, but critical issues, such as GPFS usability (stale file handles) and filesystem slowness, should be addressed by off-hours support staff.
   Frequently I lose contact with SEABORG, or SEABORG goes down, yet the MOTD says nothing.
   I don't understand the comment about software changes. Is this with respect to computer libraries or routines or is this with regard to my program?
   I would like to express great gratitude for Dr. Andrew Rose's help with the PDSF service.
   Sometimes when the PDSF cluster goes down, I'm unsure whether to report it or not. I usually assume someone else has because so many people use it. I find that the PDSF webpage, which is supposed to report its current status, is usually very lacking in staying updated on an hour-to-hour basis for these types of crises.
   The notices are sufficient to inform me.
   By and large, I am very impressed with the quality and professionalism of the NERSC staff and organization. In my 35+ years of running on academic and government computer systems, NERSC is the best experience I have had.
   The MOTD on PDSF is nearly useless because so much scrolls by that the important messages are sometimes off the top of the screen by the time I get my prompt.
   I would like better email communication of unplanned outages and downtimes.
   I was not informed about the recent lock-out for more than a week. There was no announcement from NERSC. I believe it is NERSC's responsibility to inform users in advance of such a long recess. Have you ever considered that you wasted about 3% of users' annual research time? It is very precious, if you have not noticed that yet.
   The automatic reduction in allocated hours is very inconvenient, because it does not let us schedule the runs as to suit the needs of the project.
   Overall, I am satisfied with NERSC services. But I have never yet heard a response to requests (placed indirectly through my PI) for an increase in the inode quota on my home directories, which would make my work much easier.

   There are perhaps too many e-mail messages - I tend to lose track of more important ones (like major outages) amongst the many I receive from NERSC.
   The Allocations process should make more use of external review, and be more open to researchers who are not currently funded by the DOE.
   All seems to be fine.
   NERSC runs a first-class operation.
   The Allocations process in terms of ERCAP is good, but the overall DOE philosophy of strongly favoring the big-big projects is somewhat short-sighted.

Many of us work on projects which are small now but may grow to be very major players; i.e., from little acorns, mighty oaks grow, and so on. Squeezing us out when small may lead to massive difficulties later.

One can argue it is better to develop MPP codes when a "small" player than when a major player (where bugs/mistakes could waste millions of CPU hours).
   Correction of the security issue that occurred last month was handled very professionally and efficiently.
   

As mentioned earlier, I was not informed when my password was deactivated.
   I would like to see better off-hours support for PDSF. The system is currently being run with full support during business hours only; I would like to see this support expanded to off-hours.

The PDSF system has grown up to be a significant system in the NERSC infrastructure and deserves more attention during the non-business hours. One of the problems we occasionally encounter is that one of the batch nodes is "bad" (HW or SW malfunction) and that the node "eats" jobs. Jobs will start on the node, but immediately abort due to the problem on the compute node, then the following job starts, draining the queues down without actually completing the jobs. NERSC operators do not appear to be willing to fix these nodes by taking them out of the batch queue system or other approaches and a node like that can be malfunctioning for a whole weekend. There are some work-arounds for this problem, but they require fairly advanced knowledge of the batch system. I would like to see NERSC support PDSF in the off-hours.

   Generally, PDSF personnel seem to work very hard to respond to problems, even off-hours. Is off-hours support official, or just something they do to be nice? Really, PDSF should be officially supported 24/7 by NERSC.

   The MOTD on PDSF is too cluttered; it's hard to extract any useful information even if it hasn't scrolled out yet.
   The email updates work very well.
   Off-hours 24x7 Computer and ESnet Operations support currently does not provide account support (e.g. password resets). It would be more helpful if it provided account support off-hours as well in the future.
   I am most satisfied with the NERSC supercomputing facility. I enjoy using this facility even from Canada, and I am most grateful to DOE and my PI for providing me access to this state-of-the-art facility. Thanks.

If you would like to comment on NERSC training or request additional training topics or methods please do so here:
   I mostly use the PDSF-FAQ, Ganglia pages for PDSF (to see if a server is down if I cannot reach it) and support request form. These all are fine.
   A web tutorial on parallel debugging would be very useful.
   The last question is tough to answer. This is a resource issue. For example I may not telecommunicate because I do not have the inherent capability...
   I attended the ACTS workshop in 2005 and found it to be very useful and well organized.
   I hope NERSC can provide some travel support for my students to attend the on-site training.
   In general, I think my capabilities were pretty good before I started using NERSC facilities, so I haven't needed too much in the way of training. That's why I don't find the training services particularly useful. I'm sure they are useful for people who need them.
   More training on 3D visualization software and how to include user-developed modules for AVS Express and/or Visit. An easy step-by-step web tutorial would be great.
   I don't use training.
   Being in Ohio, my group takes courses offered by OSC. I have offered to send them to NERSC training; none have felt it necessary. I don't know whether any have used online tutorials. OSC does a good job for our general education. Once they have that, they can find it on the web.
   It would be good to have all past classes available as MPEG downloads if feasible (maybe you do this already...?).

If you would like to comment on NERSC consulting, please do so here:
   Debugging software is not easy. On basic problems, it works like a charm; however, on difficult issues, some additional thought is warranted. For example, somehow TABs were inserted into my software. It is difficult to identify the source of this: computer operations, my incorrectly using the editor, use of debugging tools, or causes yet to be determined.

It is funny, but just when you get things going, it is like a moving train that stops very suddenly...
   They have always been very quick to respond to problems by email or phone. Excellent work!
   Often when I send in a problem, I get a short reply that doesn't address my issue. Often it takes two or three exchanges of email to get my issue resolved, and since each exchange takes roughly a day, I'm often waiting 2-3 days for a fix. I run your machines very predictably. When I have a problem, as often as not, it's the machine that's misbehaving. Consultants nearly always assume that it's my error.
   Our old PDSF consultant, Iwona, was terrific. However, when she was not available there was really no one to adequately fill in for her. I have not had very much contact with the new PDSF consultant, so I have no comments, but I've heard he's good. I wish we had more staff who were qualified to fill in when the default PDSF consultant is out.
   Consulting staff is very responsive and proactive in answering questions and resolving issues. Nice work!
   Thanks for your prompt, good service!
   Support for systems especially in emergency situations outside of normal business hours is still somewhat lacking.
   PDSF is undersupported. Eric and company are very good at responding to problems in a timely manner during regular business hours, but I have found the NERSC help desk to be nearly useless for off-hours problems, even when they are quite major (a licensing server being down, a bad node draining jobs, ...). Overall I am very satisfied with the PDSF-specific consultants when they are available, and very dissatisfied with the help desk's ability to help (or even page the POC) with PDSF-related off-hours problems.
   NERSC consultants are your most important resource.
   I think the NERSC consultants are most likely the best that DOE has.
   Quality of technical response and advice varies considerably among different consultants. A more individually based rating system for each consultant might be more helpful.
   On the rare occasions where technical support was necessary, the technicians I spoke with were courteous and knowledgeable.
   Consultants have been extremely helpful.
   Excellent job
   In general NERSC consulting is the best in the world. Recently, requests have been turned down rather brusquely and some questions/problems have never been followed up on.
   The efficiency and expertise of the NERSC consulting staff is unmatched by any other high performance computing facility I've used, and is one of NERSC's best resources. Our one area of disappointment has been in the lag involved in addressing our difficulties in scaling our code to Bassi; I hope this will happen soon.

   I asked to have software supported and it took two months just to hear that it would not be supported. I am still waiting over two days for a request that should have been responded to in 4 hours. Basically, I cannot use NERSC because the support is so useless. This is in very stark contrast to all other experiences with DOE computing facilities. The support staff at PNNL is 1000 times more effective than anything I've seen at NERSC.
   Excellent consulting service. Many thanks (especially to Dave Turner)!
   Example: I discovered a substantial performance bug in the MPI implementation on Jacquard but did not receive a follow-up on the exact problem and the fix from either NERSC or the vendor.
   

They provide an excellent service, from technical questions to simple requests for increased disk allocation.
   I've been very satisfied with them. They work well with users (well, me at least) to track down issues and find solutions. I've never felt like they were talking down to me, but they also explain issues well when I'm not acquainted with particulars.
   Best consulting staff of all Supercomputing centers...
   NERSC provides an exemplary service overall. The uniformity of the programming environments on Seaborg, Bassi and Jacquard is excellent!
   One ongoing problem with our use of the PDSF cluster is the lack of support on nights and weekends. Often our processing pipeline, which requires < 24-hour turn-around on a large amount of data each night, is effectively brought to a halt because of, e.g., a bad node draining jobs from the queue. The existing PDSF staff is quite knowledgeable and helpful, but overworked; we have been trying to work around known problems in generic ways, but there's no substitute for having someone on call if an unforeseen problem arises.
   I have not used these services during the past year, but my colleague has, so his answers will be more informative.
   Very good and responsive folks.
   Great job!
   I am sorry not to be more positive. And I don't keep a log of which of the platforms we use at various sites drive us the most crazy. Remember, compilers are our problem; otherwise we have no problems. Generally we try to properly compile code before moving to NERSC, so our need for NERSC help is limited.
   I have requested info about the WIEN2k software and it has been over 10 days without a response... I have emailed back and hopefully someone will reply. I initially called the NERSC help line and understand there may not be a staff person at the moment with expertise on the WIEN2k software available on Seaborg.
   All the times I have used NERSC consulting this year I have been very happy; my kudos to the staff.
   I like the fast responses - I am impressed
   I am very impressed with the speed at which responses are answered, as well as the efficiency of technical support in solving problems.
   The issue of new temporary passwords is prompt.
   Want to thank all the consultants for a job well done.
   I've always been impressed with the consulting services at NERSC.
   The few times I have needed assistance, it has been QUICK AND COMPETENT. I feel very fortunate to have such good people working behind the scenes, even if I don't often need them.
   NERSC consulting staff is very user-friendly, and most consultants go out of their way to solve user problems. This group has been performing as a most reliable and very helpful team, and I have found that but for their help and advice I would not have accomplished as much in my research as I did. I believe NERSC consultants are the sine qua non of the NERSC supercomputing facility, as the users have to talk to the consultants, who know best how to talk to the computing machines. My sincerest thanks to you all in this group for a magnificent performance.

Respondents (257): individual respondent logins and names, each accounting for a single response (0.4% of the total).