Talon 3 is a computing cluster: a network of many computing servers. This guide will show you how to gain access to and use Talon 3. (See the Tutorial page for detailed information about Talon 3's topology and configuration.)
To use Talon 3 and other HPC resources, you will first need to register for a UNT HPC account. Click here to request an account.
Once you have an HPC account, you can access Talon 3 and other UNT HPC resources.
Table of Contents
1. Accessing Talon 3
- Logging into Talon 3
- Changing password
- Navigating through Talon 3
- File Storage
- Transferring files
- Environment modules
- Other Talon 3 access interfaces
2. Running Jobs and Job Submission
- Overview
- Basic information about SLURM
- Talon 3 job submission
- The SLURM Job Script
- Interactive sessions
3. Compiling Software
- Overview
- Compiling procedures
- Compiling with MPI
- Package environment modules
- GPU compiling
- High-level languages
Talon 3 User Guide
1. Accessing Talon 3
- Logging into Talon 3
- Changing password
- Navigating through Talon 3
- File Storage
- Transferring files
- Environment modules
- Other Talon 3 access interfaces
Logging into Talon 3
To login, you will need to make a remote connection to one of Talon 3’s login nodes.
Requirements
- You must have an active account that has been approved by RITS.
- To request an account, fill out the New User Account Request Form
- You must be on a computer that is connected to the UNT network
- This includes the UNT Campus LAN, Eaglenet Wi-Fi, and the UNT Campus Virtual Private Network, VPN.
Accessing Talon 3 requires a Secure Shell (SSH) connection. Talon 3 logins are NOT accessible through a web browser. You will need an SSH client on your computer.
When you log in to Talon 3, you will be asked for the username and password of your UNT HPC account. Typically, your username is your UNT EUID, and a temporary password is assigned to you when your account is created.
If you have forgotten your password or are having problems logging in, email hpc-admin@unt.edu.
Connecting from a Windows Computer
You'll first need to download an SSH client application that supports the SSH-2 protocol. Example programs include PuTTY, OpenSSH, Cygwin/X, and SecureCRT. You can also use other approaches, such as enabling the Windows Subsystem for Linux and installing, for example, the Ubuntu app from the Microsoft Store.
Once you have installed an SSH client app, you can connect to Talon 3 by entering the hostname of the Talon 3 login nodes: talon3.hpc.unt.edu. (Caution: talon3.hpc.unt.edu will NOT work in a web browser. You MUST use an SSH client.)
For detailed instructions on connecting to Talon 3, please see our Tutorial page.
Connecting from Linux or MacOS
If you are logging in from a Linux or Mac machine, make sure it has an SSH client (ssh, openssh, etc.) installed. Then access Talon 3 by opening the Terminal application and running the command:
localhost$ ssh talon3.hpc.unt.edu
To connect from off campus
Connecting to Talon 3 is only possible if your computer is on the UNT network. If you are off campus, you may connect using the Campus VPN. Connecting to the VPN is outside the scope of this document; please consult the Campus VPN guide or IT Services' Remote Access to Campus Computers (RDP). Should you require further assistance, please contact the University IT Help Desk.
Changing your password
To change your password, follow these steps:
- Log in to talon3.hpc.unt.edu using your current password.
- At the Command Line Prompt, enter the command:
$ passwd
- Follow the prompts until completion and you receive the "Password successfully updated" message.
If you forget your password, please contact hpc-admin@unt.edu. A temporary password can be issued with a 48-hour expiration.
Account passwords expire every 180 days, so keep your password up to date to avoid any interruption of your use of Talon 3. If your password has expired, please contact hpc-admin@unt.edu.
Navigating through Talon 3
Talon 3 runs the CentOS 7 (Linux) operating system. When you connect to talon3.hpc.unt.edu via SSH, you are remotely connected to Talon 3's login servers and presented with a Linux command line interface. Familiarity with the Linux command line is essential for conducting research on Talon 3. Look out for workshops on Linux basics and HPC courses conducted by the HPC staff.
For more information about the topology of Talon 3, please visit our Tutorial page.
Talon 3 is set up as a network of computer servers (nodes) that fall into two categories: login nodes and compute nodes.
Talon 3 Login Nodes
Primary login nodes: The Talon 3 login nodes can be accessed at the domain name talon3.hpc.unt.edu. These login nodes are for users to set up jobs and do simple file editing via a Linux command line terminal. Users submit jobs via the SLURM queuing system to be dispatched on the compute nodes. There is NO X11 forwarding available on these nodes.
The Talon 3 login nodes are shared among all users. You should NEVER run computationally or disk-intensive processes on the login nodes, as there are regularly many users logged in simultaneously. Any code or computationally intensive process MUST be submitted through the batch queuing system. Any violation will result in account suspension.
Visualization login nodes: There are three login nodes that have X11 capabilities and are Slurm submission hosts. These hosts are intended for graphical software, e.g., gnuplot, matplotlib, and notebook features in software such as MATLAB and Mathematica. Users can use these nodes for post-processing tasks, but any compute-intensive task MUST be submitted through the Slurm queuing system. Also, X11 graphical sessions require high bandwidth to work effectively, and you may experience lag on a home DSL or cable ISP.
These nodes can be accessed at the domain name vis.hpc.unt.edu. X11 forwarding will need to be enabled in the SSH client that you are using. Alternatively, you can access a specific visualization node by entering one of the following domain names: vis-01.acs.unt.edu, vis-02.acs.unt.edu, vis-03.acs.unt.edu. See the Tutorials for a detailed walkthrough of logging in to the visualization nodes.
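For example, enabling X11 forwarding from a Linux or Mac terminal is typically done with the -X flag (a sketch; on macOS you will also need an X server such as XQuartz installed):

```shell
# Connect to the visualization nodes with X11 forwarding enabled
localhost$ ssh -X vis.hpc.unt.edu

# Or connect to a specific node; -Y requests trusted forwarding,
# which some client configurations require for graphical programs to display
localhost$ ssh -Y vis-01.acs.unt.edu
```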
Compute nodes
The Talon 3 compute nodes are where the users will run their software and other computationally intensive applications. Users do NOT directly login to these nodes. Users run their calculations on these nodes by submitting them to the Slurm queuing system.
For detailed information on each of the compute nodes available on Talon 3, please click here.
File Storage
Talon 3 mounts two main file systems for each user ($HOME and $SCRATCH). Both file systems are accessible from all the systems in the Talon 3 network (login and compute nodes).
The $HOME storage space is located at /home/$USER for each user while the $SCRATCH space is located at /storage/scratch2/$USER.
The $HOME space is for simple storage of input/output files and code. It is NOT intended for intensive file operations. This space is limited to 100GB of data storage.
The $SCRATCH file space is a Lustre file system that is intended for users to run their programs, and it can handle large, parallel file operations. This space is limited to 50TB of data storage.
There is also storage space available on Talon 3 as shared space for users with the same allocation (PI group), UNT courses, and other special projects. This space can be created with special permission from the UNT HPC staff by emailing hpc-admin@unt.edu.
For more information about the file systems on Talon 3, click here.
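To check how much of these limits you are using, commands such as the following may help (a sketch; the exact quota-reporting commands available on Talon 3 may differ):

```shell
# Summarize the total size of your home directory
$ du -sh /home/$USER

# Report usage and quota for your user on the Lustre scratch file system
$ lfs quota -h -u $USER /storage/scratch2
```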
Transferring files
To transfer files with a Windows machine, you will need SFTP/SCP client software such as WinSCP or Cyberduck installed. The client application will ask you to connect to Talon 3 (talon3.hpc.unt.edu). Once logged in, you can use the file transfer window within the program to drag and drop files between the local (your personal computer) and remote (Talon 3) machines.
To transfer files to and from a cluster on a Linux/Mac machine, you may use the terminal application and the secure copy (scp) command or secure file transfer protocol (sftp). The following is an example of uploading a file foo.f to Talon 3 from your local machine:
localhost$ scp foo.f talon3.hpc.unt.edu:
You can download files from Talon 3 to your local machine.
localhost$ scp talon3.hpc.unt.edu:/home/EUID/foo.f ./
To recursively copy an entire directory foo from inside directory project_foo in your home directory on Talon 3 to the current working directory on localhost, use the following command:
localhost$ scp -rp talon3.hpc.unt.edu:/home/EUID/project_foo ./
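Where it is installed, rsync is an alternative to scp that can resume interrupted transfers and skip files that are already up to date (a sketch; the paths are illustrative):

```shell
# Mirror a local directory into your scratch space on Talon 3,
# preserving permissions and timestamps and showing progress (-avP)
localhost$ rsync -avP project_foo/ talon3.hpc.unt.edu:/storage/scratch2/EUID/project_foo/
```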
For a detailed explanation of transferring files to and from Talon 3, please visit our Tutorial page.
Environment Modules
Talon 3 uses LMOD to manage your environment. It handles setting up access to software packages, libraries, and other utilities, including the compilers, system libraries, and math libraries that are typically needed to build scientific codes. If you need to use an application that the HPC staff maintains, you will need to load the appropriate module. Loading the module will modify the environment variables, such as $PATH and $LD_LIBRARY_PATH, that are needed to run that application.
To see the list of all available modules, run the command:
$ module avail
To see which modules are already loaded:
$ module list
To add a module for the current session:
$ module load <module_name>
where <module_name> is the name of the module you wish to load.
For example, if you need to use the Intel compilers, you can run
$ module load intel/compilers/18.0.3-AVX
This will load the Intel version 18.0.3 compilers.
To configure a module so that it will be loaded into your environment at login:
$ module initadd <module_name>
To remove a module:
$ module unload <module_name>
Other Talon 3 access interfaces
Rstudio
RStudio is a development environment for R that is available on the visualization nodes of Talon 3. You can access an RStudio interface via a web browser by going to one of the following addresses:
http://vis-01.acs.unt.edu:8787
http://vis-02.acs.unt.edu:8787
http://vis-03.acs.unt.edu:8787
The login credentials are the same as your Talon 3 login.
Once you log in, you have access to RStudio on the Talon 3 network. You can execute simple R scripts, transfer files, and visualize data.
Since RStudio runs on the visualization login nodes, any computationally intensive processes are prohibited and can result in expulsion from using UNT HPC resources. Any large R task MUST be submitted as a SLURM batch job.
More information about accessing and using RStudio on Talon 3 can be found on our Tutorial page.
Jupyter Notebooks
Talon 3 can use Jupyter Notebooks for a browser-based, interactive interface to Python. You can view your data and debug simple scripts on Talon 3, and create and share notebooks with other Talon 3 users.
You can start a notebook server using either
- One of the visualization login nodes
- Debug and visualize your code
- Run small Python tasks (large, computationally intensive tasks on these nodes will be killed)
- On the compute nodes
- Submit your notebook server as a SLURM job through the queuing system
- You can run large CPU and memory intensive tasks
If you want to use the visualization login nodes, first log in to one of the visualization nodes:
localhost$ ssh vis-01.acs.unt.edu
Then load the Jupyter Notebook module
$ module load jupyter
Then create a server by running the command:
$ jupyter notebook
You should be given the location of your Jupyter Notebook server and a token for the server. You can then set up another SSH connection from your computer:
localhost$ ssh -L 8888:localhost:8888 vis-01.acs.unt.edu   # replace 8888 with the port your notebook server created
You can then open a browser with the address of your notebook server.
http://localhost:8888   # replace 8888 with the port your notebook server created
You will need to enter in the token that was created with your notebook server.
If you want to use the compute nodes,
Login to Talon 3 and submit a SLURM batch job.
Example of a Jupyter Notebook SLURM batch job (a minimal sketch; adjust the resources to your needs):
#!/bin/bash
#SBATCH -J jupyter
#SBATCH -o jupyter.o%j
#SBATCH --qos general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 12:00:00

module load jupyter
jupyter notebook --no-browser --ip=$(hostname)
Once the job starts, the connection information is in the output file that was created for the job. You will need to set up another SSH connection from your computer:
localhost$ ssh -L 8888:compute-X-X-X:8888 vis-01.acs.unt.edu   # replace 8888 and compute-X-X-X with the port and compute node of your server
You can then open a browser with the address of your notebook server.
http://localhost:8888
More information about using Jupyter Notebooks on Talon 3 can be found on our Tutorial page.
2. Running Jobs and Job Submission
- Overview
- Basic information about SLURM
- Talon 3 job submission
- The SLURM Job Script
- Interactive sessions
In an HPC cluster, the users' tasks to be done on compute nodes are controlled by a batch queuing system. Queuing systems manage job requests (shell scripts generally referred to as jobs) submitted by all users on Talon 3. In other words, to get your computations done by the cluster, you must submit a job request to a specific batch queue. The scheduler will assign your job to a compute node in the order determined by the policy on that queue and the availability of an idle compute node. Currently, Talon 3 resources have several policies in place to help guarantee fair resource utilization from all users.
The SLURM Workload Manager is used to control how jobs are dispatched on the compute nodes.
Please see the Tutorial page for a more detailed look at submitting jobs through the queuing system.
Basic Information about Slurm
The Slurm Workload Manager (formerly known as the Simple Linux Utility for Resource Management, or SLURM) is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job such as MPI) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending jobs. Slurm is the workload manager on about 60% of the TOP500 supercomputers, including Tianhe-2, which, until 2016, was the world's fastest computer. Slurm uses a best-fit algorithm based on Hilbert curve scheduling or fat-tree network topology in order to optimize locality of task assignments on parallel computers.
Slurm Tutorials and Commands:
A Quick-Start Guide for those unfamiliar with Slurm can be found here:
https://slurm.schedmd.com/quickstart.html
Slurm Tutorial Videos can be found here for additional information:
https://slurm.schedmd.com/tutorials.html
Talon 3 job submission
Partitions
On Talon 3, the main partition that most jobs are submitted to is named public. There are also GPU and big-memory partitions as well.
| Partition | Description |
|---|---|
| public | Contains most of the compute nodes for Talon 3 (the r420 and c6320 compute nodes). Limit of 672 CPUs or 24 compute nodes. QOS: debug, general, large |
| gpu | Contains the r730 GPU compute nodes. Limited to 3 compute nodes and 1 week. No QOS specification |
| bigmem | Contains the r720 compute nodes that have 512GB of memory. Limited to 2 compute nodes and 1 week. No QOS specification |
There are other private partitions for users that need more computing resources. Please contact hpc-admin@unt.edu for more info about requesting these partitions.
Quality of Service, QOS
The Quality of Service (QOS) sets the priority of the job in the queuing system. There are three QOSs under the public partition. You do NOT need to specify a QOS for the gpu and bigmem partitions.
| Name | Description |
|---|---|
| debug | For running quick test computations for debugging purposes. Limited to two hours and two compute nodes. Exclusive jobs allowed. High priority |
| general | The default QOS for submitting jobs that take 72 hours or fewer. Time limit: 72 hours. Limit: 616 CPUs. Medium priority |
| large | For large jobs requiring more resources. Time limit: three weeks. Limit: 22 compute nodes. Exclusive jobs allowed. Low priority |
SLURM information
The following table lists frequently used commands.

| Slurm Command | Description |
|---|---|
| sbatch script.job | Submit a job |
| squeue [job_id] | Display job status (by job) |
| squeue -u $USER | Display the status of a user's jobs |
| squeue | Display queue summary status |
| scancel job_id | Delete a job in its current state |
| scontrol update | Modify a pending job |
When using squeue, the following job states are possible.

| State | Full State Name | Description |
|---|---|---|
| R | RUNNING | The job currently has an allocation. |
| CA | CANCELED | The job was explicitly canceled by the user or a system administrator. The job may or may not have been initiated. |
| CD | COMPLETED | The job has terminated all processes on all nodes. |
| CF | CONFIGURING | The job has been allocated resources, but is waiting for them to become ready for use (e.g., booting). |
| CG | COMPLETING | The job is in the process of completing. Some processes on some nodes may still be active. |
| F | FAILED | The job terminated with a non-zero exit code or other failure condition. |
| NF | NODE_FAIL | The job terminated due to failure of one or more allocated nodes. |
| PD | PENDING | The job is awaiting resource allocation. |
| PR | PREEMPTED | The job terminated due to preemption. |
| S | SUSPENDED | The job has an allocation, but execution has been suspended. |
| TO | TIMEOUT | The job terminated upon reaching its time limit. |
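For example, squeue output lists these state codes in the ST column (the job values shown here are illustrative):

```shell
$ squeue -u $USER
  JOBID PARTITION     NAME   USER ST    TIME  NODES NODELIST(REASON)
  55692    public Sample_J   euid  R    1:23      1 c64-6-32
  55693    public Sample_J   euid PD    0:00      2 (Resources)
```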
The following table lists common SLURM variables that you can use in your job script. For a complete list, see the sbatch manpage (man sbatch).
| SLURM Variable | Description |
|---|---|
| $SLURM_SUBMIT_DIR | Current working directory of the submitting client |
| $SLURM_JOB_ID | Unique identifier assigned when the job was submitted |
| $SLURM_NTASKS | Number of CPUs in use by a parallel job |
| $SLURM_NNODES | Number of hosts in use by a parallel job |
| $SLURM_ARRAY_TASK_ID | Index number of the current array job task |
| $SLURM_JOB_CPUS_PER_NODE | Number of CPU cores per node |
| $SLURM_JOB_NAME | Name of the job |
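As a sketch, a job script can use these variables to run from the submission directory and label its output (the resource values here are illustrative):

```shell
#!/bin/bash
#SBATCH -J var_demo
#SBATCH -o var_demo.o%j
#SBATCH --qos general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 0:10:00

# Run from the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# Record which job produced this output
echo "Job $SLURM_JOB_NAME ($SLURM_JOB_ID) used $SLURM_NTASKS task(s) on $SLURM_NNODES node(s)"
```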
Examples
Submitting a job
$ sbatch slurm-job.sh
Where slurm-job.sh is the name of the job script
List all current jobs that a user has in the queue
$ squeue -u $USER
Get job details
$ scontrol show job 106
Where 106 is the JOB ID number
Kill a job that is in the queue
$ scancel 106
where 106 is the JOB ID number of the job you want to be killed
Hold a job in the queue
$ scontrol hold 106
where 106 is the JOB ID number
Release a held job in the queue
$ scontrol release 106
The SLURM Job Script
At the top of your job script, include special directives beginning with #SBATCH, which set options for the job.
#SBATCH -p public
Defines the partition used to execute this job. The main partition on Talon 3 is 'public'; the gpu and bigmem partitions are also available.
#SBATCH -J job_name
The -J option defines the job name.
#SBATCH -o JOB.out
The -o option defines the output file name.
#SBATCH -e JOB.e%j
Defines the error file name.
#SBATCH --qos general
Defines the QOS under which the job will be executed (debug, general, and large are the only options).
#SBATCH --exclusive
Makes the job exclusive, not allowing other jobs to share the compute node. This is required for all large QOS submissions.
#SBATCH -t 1:00:00
Sets the wall-time limit for the job in hh:mm:ss.
#SBATCH -n 84
Defines the total number of CPU tasks.
#SBATCH -N 3
Defines the number of compute nodes requested.
#SBATCH --ntasks-per-node 28
Defines the number of tasks per node.
#SBATCH -C c6320
Requests the c6320 compute nodes. (You can also request the r420, r720, and r730 compute nodes.) See here for details on each compute node.
#SBATCH --mail-user=user@unt.edu
Sets up email notification (where user@unt.edu is your email address).
#SBATCH --mail-type=begin
Emails the user when the job begins.
#SBATCH --mail-type=end
Emails the user when the job finishes.
Examples
Simple serial job script example
#!/bin/bash
# Example of a SLURM job script for Talon 3
# Job Name: Sample_Job
# Number of cores: 1
# Number of nodes: 1
# QOS: general
# Run time: 12 hrs
######################################
#SBATCH -J Sample_Job
#SBATCH -o Sample_job.o%j
#SBATCH --qos general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 12:00:00
#SBATCH -C r420

### Load modules
module load intel/compilers/18.0.1

./a.out > outfile.out
Simple parallel MPI job script example
When submitting an MPI job, make sure you load the appropriate module and use mpirun to launch the application.
#!/bin/bash
# Example of a SLURM job script for Talon 3
# Job Name: Sample_Job
# Number of cores: 16
# Number of nodes: 1
# QOS: general
# Run time: 12 hrs
######################################
#SBATCH -J Sample_Job
#SBATCH -o Sample_job.o%j
#SBATCH --qos general
#SBATCH -N 1
#SBATCH -n 16
#SBATCH --ntasks-per-node 16
#SBATCH -t 12:00:00
#SBATCH -C r420

### Load modules
module load PackageEnv/intel17.0.4_gcc8.1.0_MKL_IMPI_AVX

### Use mpirun to run parallel jobs
mpirun ./a.out > outfile.out
Large MPI job script example
#!/bin/bash
# Example of a SLURM job script for Talon 3
# Job Name: Sample_Job
# Number of cores: 64
# Number of nodes: 4
# QOS: general
# Run time: 12 hrs
######################################
#SBATCH -J Sample_Job
#SBATCH -o Sample_job.o%j
#SBATCH --qos general
#SBATCH -N 4
#SBATCH -n 64
#SBATCH --ntasks-per-node 16
#SBATCH -t 12:00:00
#SBATCH -C r420

### Load modules
module load PackageEnv/intel17.0.4_gcc8.1.0_MKL_IMPI_AVX

### Use mpirun for MPI jobs
mpirun ./a.out > outfile.out
Big memory job script example
#!/bin/bash
# Example of a SLURM job script for Talon 3
# Job Name: Sample_Job
# Number of MPI tasks: 32
# Number of nodes: 1
# Run time: 12 hrs
######################################
#SBATCH -J Sample_Job
#SBATCH -o Sample_job.o%j
#SBATCH -p bigmem
#SBATCH -N 1
#SBATCH --ntasks-per-node 32
#SBATCH -t 12:00:00

### Load modules
module load PackageEnv/intel17.0.4_gcc8.1.0_MKL_IMPI_AVX

mpirun ./a.out > outfile.out
CUDA parallel GPU job script example (a minimal sketch; adjust the job name, resources, and executable to your needs)
#!/bin/bash
# Example of a CUDA GPU SLURM job script for Talon 3
#SBATCH -J Sample_GPU_Job
#SBATCH -o Sample_GPU_job.o%j
#SBATCH -p public
#SBATCH -C r730
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 12:00:00

### Load the CUDA module
module load cuda/75/toolkit/7.5.18

./foo.x > outfile.out
Interactive sessions
Interactive job sessions can be used on Talon 3 if you need to compile or test software. An interactive job starts a command line session on a compute node. If you have to run large tasks or processes, an interactive job allows you to run them without using the login nodes.
An example command of starting an interactive session is shown below:
$ srun -p public --qos debug -C r420 -N 1 -n 16 -t 2:00:00 --pty bash
srun: job 55692 queued and waiting for resources
srun: job 55692 has been allocated resources
[c64-6-32 ~]$ python test.py   # Run python on the compute node
[c64-6-32 ~]$ exit             # Exit the interactive job
This launches an interactive job session with a bash shell on a compute node. From there, you can execute software and shell commands that would otherwise not be allowed on the Talon 3 login nodes.
3. Compiling Software
- Overview
- Compiling procedures
- Compiling with MPI
- Package environment modules
- GPU compiling
- High-level languages
Talon 3 has a vast list of scientific software that has already been compiled and tested (please see the Scientific Software Guide section for how to use pre-compiled software). If software is not supported by the UNT HPC staff, users can compile and run their own builds of their code.
Compiling procedures
Talon 3 has various compilers and libraries available for compiling code.
Compilers supported by Talon 3 include the following.
- GCC - The GNU Compiler Collection
- INTEL- The Intel Compiler XE
- PGI - The Portland Group Compiler
- NVCC - Nvidia CUDA Compiler
To compile code, the module corresponding to the desired compiler needs to be loaded.
Modules are loaded by:
$ module load compiler/version
See Environment Module section for more information.
Intel compilers
The Intel Compilers are recommended for use on Talon3 to get the highest performance out of your code.
For example, to load the Intel compiler, version 18.0.3, the command would be:
$ module load intel/compilers/18.0.3-AVX
Below is a set of example commands using the Intel compilers to build simple code:
$ icc foo.c       # C code
$ icpc foo.cpp    # C++ code
$ ifort foo.f90   # Fortran 90 code
The Intel compilers also support OpenMP:
$ icc -qopenmp foo.c       # C code
$ icpc -qopenmp foo.cpp    # C++ code
$ ifort -qopenmp foo.f90   # Fortran 90 code
GNU compilers
The GNU compilers are also available on Talon 3:
$ module load gcc/8.1.0   # loads version 8.1.0 of the GNU compilers
$ gcc foo.c               # C code
$ g++ foo.cpp             # C++ code
$ gfortran foo.f90        # Fortran 90 code
$ gcc -fopenmp foo.c      # C code with OpenMP
Math libraries
Talon 3 supports various math libraries. To use them, the module corresponding to the desired math library needs to be loaded.
- MKL: Intel's Math Kernel Library (Must load Intel compilers first)
$ module load intel/mkl/18.0.3   # loads MKL
- OpenBLAS: BLAS library based on GotoBLAS2
$ module load openblas/0.2.19   # loads OpenBLAS
Using MKL
The MKL libraries contain optimized math routines and functions that include
- BLAS
- LAPACK
- ScaLAPACK
- Sparse Solvers
- Fast Fourier transforms
- Vector Mathematics
$ module load intel/compilers/18.0.3
$ module load intel/mkl/18.0.3    # load MKL
$ icc -mkl foo.c                  # compiles C code with MKL
$ icc -mkl=parallel foo.c         # links the 'threaded' version of MKL
$ icc -mkl=sequential foo.c       # links the 'unthreaded' version of MKL
Compiling with MPI
Talon 3 offers the Intel MPI and Open MPI libraries for parallel codes. These libraries are loaded as follows.
For Intel MPI
$ module load intel/compilers   # must load compilers first
$ module load intel/IMPI        # loads Intel MPI

$ mpiicc foo_mpi.c
$ mpiicpc foo_mpi.cpp
$ mpiifort foo_mpi.f90
For Open MPI
$ module load openmpi/gcc     # loads Open MPI for the GNU compilers
$ module load openmpi/intel   # loads Open MPI for the Intel compilers

$ mpicc foo_mpi.c
$ mpic++ foo_mpi.cpp
$ mpif90 foo_mpi.f90
See the SLURM section for information on submitting parallel MPI jobs.
Package environment modules
There are 'PackageEnv' modules that will load compilers, MPI, and MATH libraries in one module.
Examples of using the ‘PackageEnv’ modules
$ module load PackageEnv/intel18.0.3_gcc8.1.0_MKL_IMPI_AVX
# Loads the following modules:
#   Intel compilers version 18.0.3
#   GNU version 8.1.0
#   with MKL and Intel MPI

$ module load PackageEnv/gcc8.1.0_OBLAS0.2.19_OMPI2.1.0
# Loads the following modules:
#   GNU version 8.1.0
#   with OpenBLAS version 0.2.19
#   with Open MPI version 2.1.0
See Environment Module section for more information
GPU compiling
Talon 3 provides the NVCC compiler to compile and run GPU-enabled jobs.
To compile GPU code, first start an interactive job on the GPU nodes to use (and test) the NVCC compiler:
$ srun -p public --qos debug -C r730 -N 1 -n 1 -t 2:00:00 --pty bash
Load the CUDA module:
[gpu-8-7-1] $ module load cuda/75/toolkit/7.5.18
Compile the GPU program:
[gpu-8-7-1] $ nvcc foo.cu -o foo.x
More information about compiling software (including GPU code) can be found in the Tutorials section.
High-Level Languages
Talon 3 offers high-level languages such as:
- R
- Python
- Java
- Perl
- Matlab
- Mathematica
- Go
These languages can be accessed by loading the appropriate module.
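Each of these is typically used the same way: find and load the corresponding module, then run the interpreter (a sketch; the exact module names and versions on Talon 3 may differ, so check with module avail):

```shell
$ module avail python    # list the available Python modules
$ module load python     # load one (the exact version string may differ)
$ python my_script.py    # run lightweight scripts only; large runs go through SLURM
```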
The Tutorials section has more information about running code using some of these languages.