palm Quick Start Guide
The SGI Altix systems at the NCCS are all
accessed through a single front
end system. In all, 1,152 Itanium2
processors are available in four
system images for a combined total
of almost 7 TFLOP/s. For links
to more information on palm, see
the NCCS Systems Overview.
The examples in the guide below answer
general questions and provide a quick start
for users accessing the Altix compute resources.
Step 1: Connect to the SGI Altix
front end (palm) from your workstation
(ssh)
System access is allowed only through the
secure shell (ssh) command
and only from an authorized workstation.
If you receive an access denied
error when attempting the following
commands, most likely your workstation
address is not entered into the
hosts.allow file. In that case
please contact the NCCS
User Services Group with your workstation address.
If your workstation address is entered in
the hosts.allow file, then enter
the following command:
% ssh your_userid@login.nccs.nasa.gov
At the PASSCODE: prompt, enter your 4-digit
NCCS PIN followed by the 6-digit tokencode
from your Authentication Key Token, together
as one number with no spaces.
At the host: prompt, enter the name palm (the
name of the Altix front end). Finally
enter your password at the password
prompt. Note that only the Altix
front end is accessible for direct
logins; all other compute nodes
in the Altix environment must be accessed through
PBS.
Step 2: Choose
your login shell
All new users will have their default login
shell set to /bin/bash. To change
your default login shell, contact
the NCCS
User Services Group. To
change your shell temporarily,
issue the csh, sh, or bsh commands
as applicable.
The startup files for the C shell are .login and
.cshrc. The startup file for the POSIX
shell is .profile. Sample startup
files are available.
Apply changes made to the .cshrc file to your
current session by issuing the command:
% source .cshrc
Be sure to include the dot in front of cshrc.
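As a purely illustrative sketch (the path and alias below are
placeholders; consult the NCCS sample startup files for
site-specific settings), a minimal .cshrc addition might look like:
# example .cshrc additions
setenv PATH ${PATH}:${HOME}/bin
alias ll 'ls -l'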
Step 3: Modify your .cshrc and .login startup
files (users of both Altix and
Irix environments)
Since the home file systems for both the SGI
Altix and Irix systems are shared,
it may be necessary to put architecture-specific
options into your login
files to set up the environment
you desire. When using bash, sh,
or ksh, you may include the following
syntax in your
.profile, .login, or .bashrc file
to set up a different environment
when logging into either the Altix
or the Irix systems:
SYSTYPE=`/bin/uname -m`
if [ "$SYSTYPE" == "" ]; then
    SYSTYPE=`/usr/bin/uname -m`
    if [ "$SYSTYPE" == "" ]; then
        echo "Cannot determine system architecture"
    fi
fi
if [ "$SYSTYPE" != "ia64" -a "$SYSTYPE" != "IP35" ]; then
    echo "Invalid system architecture"
fi
if [ "$SYSTYPE" == "IP35" ]; then
    echo "Put IRIX specifics here"
elif [ "$SYSTYPE" == "ia64" ]; then
    echo "Put Altix specifics here"
fi
When using csh or tcsh, the following syntax
may be included into your .cshrc file to set
up a different environment when logging into
either the Altix or the Irix systems:
switch ( `uname -m` )
case ia64:
    setenv SYSTYPE ia64
    echo "Put Altix specifics here"
    breaksw
case IP35:
    setenv SYSTYPE IP35
    echo "Put IRIX specifics here"
    breaksw
default:
    setenv SYSTYPE unknown
    echo "Unable to determine SYSTYPE"
    breaksw
endsw
Similar logic can be keyed to the hostname
or OS type. Adding the above to your login
files will ensure that you set up the correct
software modules or system-specific environment
variables at login.
Step 4: Transfer files from your
workstation (scp) to the SGI
Altix systems
You cannot transfer files, such as data sets
or application files, directly
to the Altix front end; you must
use the mass storage platform (dirac)
to transfer files from outside
to inside the NCCS environment.
Use the secure copy command
(scp) to transfer your code or
data from your workstation to dirac.
For more information, issue the
command man scp or see our ssh
page.
To scp the file "myfile" from your workstation's
home directory to your dirac home directory,
issue the following command:
scp /home/myfile your_userid@dirac.gsfc.nasa.gov:myfile
Similar to an initial login, you will be asked
to provide both your PASSCODE and your password.
Note that this is your dirac password, which
may differ from your SGI Altix password.
Depending on whether or not your login directory
is set up to be your home directory
on dirac, files may be transferred
either to your home directory or
your mass storage directory. Your
home directory, mass storage directory,
nobackup file systems, and others
are available on dirac. Check your source
and destination directories carefully
before transferring any files to
avoid accidentally overwriting
data.
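To copy an entire directory rather than a single file, scp's -r
option can be used; the directory name below is only an example:
% scp -r /home/mydir your_userid@dirac.gsfc.nasa.gov:mydir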
Step 5: Transfer files from the SGI
Altix systems to your workstation
If your workstation accepts incoming
file transfers, for example,
if it is running an ssh server,
you can initiate an scp from the SGI Altix
front end to your workstation using the
following command:
scp your_local_userid@your_workstation_address:myfile myfile
This will transfer your file to your
home directory on your own workstation.
Step 6: Load, unload, and swap
modules
Several versions of the Intel compilers
and other support applications
are available for users on the
SGI Altix systems. These applications
are loaded into your
environment through the use of
modules. When you log into the
Altix system, there are several
modules loaded by default. To see
which modules you currently
have loaded, issue the following
command:
% module list
To see which modules are available to you,
issue this command:
% module avail
The output will display a complete list of
modules that are available to be
loaded into your environment. You
can load, unload, and even swap
modules using the module command.
For more information about the
module command, see the man module page.
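As an illustration, a typical session might load, unload, or swap
modules as follows (the module names shown are examples taken from
elsewhere in this guide; use the names reported by module avail):
% module load pd-grads.1.9b4
% module unload pd-grads.1.9b4
% module swap intel-comp.7.1.042 intel-comp.8.1.030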
Step 7: Compile and run your code
(Fortran, C/C++)
The Intel compiler suite is installed on all
of the SGI Altix systems, but the
command used to compile your application
depends on which of several versions
of the Intel compiler suite you
are using. To see which
version you have loaded, issue
the module list command.
To compile your application with Intel compilers
before version 8, issue one of
the following commands, depending on whether
you are compiling Fortran or C code:
% ecc myprogram.c   (for C/C++ code)
% efc myprogram.f   (for Fortran code)
To compile with the Intel compilers version
8 and later, issue one of the following commands:
% icc myprogram.c   (for C/C++ code)
% ifc myprogram.f   (for Fortran code)
Many different optimization options are available,
and the man pages outline all available options.
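For example, a commonly used starting point is the -O3 flag for
aggressive optimization (the file name below is a placeholder):
% efc -O3 myprogram.f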
Step 8: Use message passing and shared memory
models (MPI, SHMEM)
To compile programs using MPI and link with
the necessary MPI libraries, the
-lmpi option needs to be added
to the compilation command, with
the assumption that the necessary
mpi.h header is included in the
source:
% ecc myprogram.c -lmpi   (for C/C++ code)
% efc myprogram.f -lmpi   (for Fortran code)
For more information about MPI in general,
see the man mpi page. For an index and some
examples of MPI, refer to the MPI
Forum Standard Index.
Complementary to MPI, a shared memory model
(SHMEM) can also be used in programs.
See the man shmem page for more
information. Assuming the necessary
headers are included in the source
(C/C++ applications require mpp/shmem.h),
use the following commands to compile
your applications:
% ecc myprogram.c -lsma   (for C/C++ code)
% efc myprogram.f -lsma   (for Fortran code)
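Once compiled, an MPI executable built against SGI's MPT is
typically launched with mpirun; the process count and program
name below are placeholders:
% mpirun -np 16 ./myprogram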
Step 9: Solve
any problems encountered with compiling
MPI applications
Sometimes the loaded modules do not correctly
set up the library paths the compilers need.
If you receive
the following error when compiling
an MPI application:
catastrophic error: could not open source
file "mpi.h"
the compiler is having difficulty finding
the necessary MPI libraries. There
are a couple of ways to solve this problem.
First, you can simply add the MPI path options
to the compilation line to point the compiler
to the MPI libraries and include files as follows:
% ecc myprogram.c -I/opt/sgi/mpt/1.11-100/include -L/opt/sgi/mpt/1.11-100/lib -lmpi
Note that this example assumes you are using
MPT library version 1.11.0.0.
Again, to see which version you
are using and to get more information
about the paths to the library
and include files, issue the following
commands:
% module list
% module display mpt.1.11.0.0
Another way of solving this problem is to
module swap to a later compiler. The following
command will swap a version 7 compiler for a
version 8 compiler, which usually resolves the
catastrophic error mentioned above:
% module swap intel-comp.7.1.042 intel-comp.8.1.030
Step 10: Use OpenMP (OMP)
OpenMP is an extension to standard Fortran,
C, and C++ that supports shared
memory parallel execution. Users can fairly
easily add directives to their source code
to parallelize their applications and specify
certain properties of variables. To compile
your application with OpenMP, pass the -openmp
option to the Intel compiler:
% ecc myprogram.c -openmp   (for C/C++ code)
% efc myprogram.f -openmp   (for Fortran code)
The Intel compiler supports the OpenMP 2 standard,
and MPI and OpenMP can be used
to create a mixed-mode parallel
approach. More information is available
in the Intel
Fortran compiler User's Guides.
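At run time, the number of OpenMP threads is typically controlled
through the standard OMP_NUM_THREADS environment variable; for
example, to request 4 threads under csh (the program name is a
placeholder):
% setenv OMP_NUM_THREADS 4
% ./myprogram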
Step 11:
Submit a job to the batch queue
(qsub, PBS)
To access the compute hosts in the
Altix environment, you must submit
jobs to the batch queues. See Batch
Queues on the SGI Altix for more
information about the available
batch queues on the Altix system
and the amount of resources that can be requested.
For
more information consult man pbs.
In general, you will create a batch script
and then submit that batch script
to PBS using the following command:
% qsub myscript
This assumes that all the necessary resource
requests are included in the batch script
itself as #PBS directive comments (a sample
script is shown in Step 18). Note that
you must provide your Computational
Project (formerly Sponsor Code
Account) when submitting a batch
job. Use the getsponsor command
to get your Computational Project
information.
To see the status of your job, issue the following
command:
% qstat -a
Step 12: Run interactive jobs on SGI Altix
compute hosts
Since all compute hosts in the Altix environment
must be accessed through the PBS batch system,
the only way to run an interactive job on one
of the compute engines is through the following
command:
% qsub -I
To specify the total number of CPUs
and wallclock time, you may include
those options at the command line.
For example, suppose you wanted
16 CPUs for a total of 4 hours
to run some interactive work. You
would issue this command:
% qsub -I -l ncpus=16,walltime=04:00:00
In some cases, your job will not be started
immediately but will start when
sufficient resources become available.
Step 13: Use performance analysis tools
Two tools can be used for profiling your code
on the SGI Altix systems.
First, the performance monitor pfmon is
a user interface to the performance
monitoring units available on the
Itanium2 processor chips. See man
pfmon for more information.
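As a rough illustration (the event name and program are
placeholders; consult the pfmon man page for the events supported
on Itanium2), a simple counting run looks like:
% pfmon -e CPU_CYCLES ./myprogram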
The SGI-supplied profiling Perl
script profile.pl is also available
for use. See Introduction
to Performance Analysis in SGI's Techpubs
Library for more detailed information.
Step 14: Use debugging tools
The SGI Altix software page explains in further
detail the debugging tools available on the
Altix systems. In short, the following debuggers
are available:
- idb, the Intel debugger
- gdb, the open source GNU debugger
- ddd, an open source data display debugger
- totalview, not yet licensed on the Altix systems
Step 15: Software on the SGI Altix
systems (mathematical library, netCDF, HDF)
In general, most software
packages are available on both the SGI Altix
and Irix systems, including SCSL (the SGI mathematical
and statistical libraries), netCDF, HDF,
etc.
Step 16: Use visualization tools (NCAR
graphics, idt, GrADS)
NCAR
graphics visualization software
(module pd-ncar.4.4.1) and
many sample programs are available
on the SGI Altix systems. The
X window interactive image
display tool idt is used to
visualize the graphics output
from NCAR graphics.
The graphics package GrADS (module
pd-grads.1.9b4) is also available
at /local/LinuxIA64/grads/1.9b4/bin.
GrADS can be loaded via:
% module load pd-grads.1.9b4
Step 17: Use file systems and data storage
Several different types of file
systems are available for storing
different types of data. The
following list shows a summary
of the different types of file
systems and their access methods.
Detailed information about filesystems
is available at NCCS
SGIs: Filesystem Access and Policies.
- Home File Systems
  - Available on all SGI hosts, Altix and Irix.
  - Should be used to store applications, scripts, etc.
  - Have a limited size and should not be used to store large data sets.
- Nobackup
  - Generally used to store large working files (input and output) for running applications, post-processing, analysis, etc.
  - Is not backed up; any files that need to be saved for long periods should be copied into the mass storage directories.
- Scratch
  - Is set up on each compute host; a temporary directory is created when a PBS batch job begins running.
  - Is accessed via the $TMPDIR environment variable and is the fastest-performing file system.
  - Is a temporary storage area that exists only for the life of a PBS batch job; any data that needs to be saved must be copied elsewhere before the job completes.
- Mass Storage
Step 18: Sample PBS Script
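The following is a minimal sketch, not an official NCCS script:
the resource amounts, file names, and mpirun line are placeholders,
and the exact directive for specifying your Computational Project
is site-specific (see man qsub or contact the NCCS User Services
Group).
#!/bin/csh
#PBS -l ncpus=16,walltime=04:00:00
# Specify your Computational Project here; the exact #PBS option
# is site-specific (see man qsub or contact User Services).
# Run from the fast, job-lifetime scratch area.
cd $TMPDIR
# Stage input data into scratch (placeholder path and file name).
cp ~/input.dat .
# Launch the MPI executable (placeholder name and CPU count).
mpirun -np 16 ./myprogram
# Save results before the job ends, since $TMPDIR is deleted when
# the job completes (placeholder destination).
cp output.dat ~/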
Step 19: Use the quota command
To check your disk quota and usage, issue the following command:
% quota -v your_userid