FSL is a comprehensive library of image analysis and statistical tools for FMRI, MRI and DTI brain imaging data. FSL is written mainly by members of the Analysis Group, FMRIB, Oxford, UK. See the FSL website for more information.
On the Biowulf cluster, FSL is installed in /usr/local/fsl. Users
who run FSL jobs regularly should set up the environment variables in their
startup files.
csh users should add the following to their .cshrc file:
setenv FSLDIR /usr/local/fsl
source $FSLDIR/etc/fslconf/fsl.csh
setenv PATH $FSLDIR/bin:$PATH
Bash users should add the following to their .bash_profile file:
FSLDIR=/usr/local/fsl
. $FSLDIR/etc/fslconf/fsl.sh
PATH=$FSLDIR/bin:$PATH
export FSLDIR PATH
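After starting a new shell (or sourcing the startup file), you can quickly confirm that the environment is set up correctly. This is just a sanity check, not part of the required setup; the expected output assumes the default /usr/local/fsl installation described above, and the fslversion file is part of standard FSL distributions:

echo $FSLDIR                  # should print /usr/local/fsl
which bet                     # should print /usr/local/fsl/bin/bet
cat $FSLDIR/etc/fslversion    # prints the installed FSL version number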
FSL v4.1.0 is now available on the Biowulf cluster. There are several important changes from versions prior to 4.0:
- The parallelization procedure has changed; instead of specifying the nodes with the FSLMACHINELIST variable, all parallelization is done through the fsl_sub command which submits a swarm of jobs.
- For versions prior to FSL 4.0, you were required to copy the fsl.sh file into /home/user/.fslconf to run programs in parallel. This is no longer necessary unless you wish to modify the default environment variables. Any old /home/user/.fslconf/fsl.sh files should be deleted, as they may cause problems at login time.
- All FSL programs must be submitted from an interactive node, rather than the Biowulf head node. An example is shown in the sample bedpost run at the end of this page.
FSL has been compiled as a 32-bit application, so that FSL jobs can run on all the 32-bit and 64-bit nodes in the Biowulf cluster. However, some users may need to handle larger files or address more memory than a 32-bit application allows. A 64-bit version of FSL has therefore been built and installed in /usr/local/fsl-64/. The fsl_sub program in that version has been modified to run only on the 64-bit nodes of the cluster.
To use the 64-bit version, replace /usr/local/fsl with /usr/local/fsl-64 in the startup file settings above (.bash_profile or .cshrc).
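For example, the bash settings shown earlier become the following; nothing else changes, since only the install path differs:

FSLDIR=/usr/local/fsl-64
. $FSLDIR/etc/fslconf/fsl.sh
PATH=$FSLDIR/bin:$PATH
export FSLDIR PATH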
FSL is typically used on Biowulf to process many images. This is most easily done via the swarm utility. Below is a sample swarm command file which runs mcflirt and betfunc in succession on each image.
# this file is called myswarmfile
mcflirt -in /data/user/fmri1 -out mcf1 -mats -plots -refvol 90 -rmsrel -rmsabs; betfunc mcf1 bet1
mcflirt -in /data/user/fmri2 -out mcf2 -mats -plots -refvol 90 -rmsrel -rmsabs; betfunc mcf2 bet2
mcflirt -in /data/user/fmri3 -out mcf3 -mats -plots -refvol 90 -rmsrel -rmsabs; betfunc mcf3 bet3
...
This file would be submitted as follows:
swarm -f myswarmfile

Note that by default, each line in the swarm file is processed by one processor on a node, so that all the processors on each node are in use simultaneously. Each processor can therefore use at most half (one-fourth on the dual-core nodes) of the node's memory, otherwise the node memory will become overloaded. The Biowulf nodes have a minimum of 1 GB RAM per processor. If the individual FSL programs in the swarm command file require more than 1 GB of memory, it may be necessary to specify a node type or to ensure that only one line is processed per node. If you need help optimizing your FSL swarm jobs, contact the Biowulf staff (staff@biowulf.nih.gov).
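For large numbers of images, the swarm command file can be generated with a short shell loop instead of being written by hand. The sketch below assumes inputs named /data/user/fmri1 through /data/user/fmriN, matching the hypothetical names in the example file above; adjust the paths, the image count, and the mcflirt options for your own data:

#!/bin/bash
# Generate a swarm command file: one mcflirt + betfunc pipeline per image.
N=20                      # number of images (adjust to your dataset)
> myswarmfile             # start with an empty command file
for i in $(seq 1 $N); do
    echo "mcflirt -in /data/user/fmri$i -out mcf$i -mats -plots -refvol 90 -rmsrel -rmsabs; betfunc mcf$i bet$i" >> myswarmfile
done
# Then submit it with:  swarm -f myswarmfile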
All parallelization in FSL v4.0 and up is done via the fsl_sub command.
You must allocate an interactive node to submit FSL jobs, as in the example below. The FSL job submission commands will not run correctly on the Biowulf head node.
The following programs in FSL can use parallelization: FEAT, MELODIC, TBSS, BEDPOST, FSL-VBM, POSSUM. See the FSL website for more information.
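These programs call fsl_sub themselves when they detect a batch system, so normally you just run the top-level command (for example bedpostx, as in the session below) from your interactive node. fsl_sub can also be invoked directly to queue a single FSL command; the minimal sketch below assumes the standard fsl_sub interface, in which the command to run is passed on the fsl_sub command line, and uses a hypothetical input image named struct:

# Queue a single brain extraction through fsl_sub from an interactive node.
# struct / struct_brain are hypothetical image names.
fsl_sub bet struct struct_brain -f 0.4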
Sample session running bedpost (User input in bold)
[susanc@biowulf home] qsub -I -l nodes=1
qsub: waiting for job 1581291.biobos to start
qsub: job 1581291.biobos ready

[susanc@p2 ~]$ cd mydir
[susanc@p2 mydir]$ bedpostx sampledataset
subjectdir is /data/susanc/bedpost/sampledataset
Making bedpostx directory structure
Queuing preprocessing stages
Queuing parallel processing stage
0 slices processed
Queuing post processing stage
/usr/local/fsl-4.1.0/bin/bedpostx: line 210: 27598 Terminated    ${subjdir}.bedpostX/monitor
[susanc@p2 mydir] exit
qsub: job 1581291.biobos completed
[susanc@biowulf ~]
The jobs can be monitored using the 'qstat' and 'jobload' commands, and the user monitor. Typically, you would see the pre-processing stage run immediately while the bedpost processing steps are in 'H' (hold) state. A few minutes later all the bedpost single-slice processing should run simultaneously, as in the example below.
[susanc@p2 mydir]$ qstat -u susanc

biobos:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
1580363.biobos  susanc   norm     STDIN       27081   1   1    --    --  R 00:09
1580463.biobos  susanc   norm     bpx_prepro   3137   1   1    --    --  R 00:00
1580464.biobos  susanc   norm     swarm1n106     --   1   1    --    --  H    --
1580465.biobos  susanc   norm     swarm2n106     --   1   1    --    --  H    --
1580466.biobos  susanc   norm     swarm3n106     --   1   1    --    --  H    --
1580467.biobos  susanc   norm     swarm4n106     --   1   1    --    --  H    --
1580468.biobos  susanc   norm     swarm5n106     --   1   1    --    --  H    --
1580469.biobos  susanc   norm     swarm6n106     --   1   1    --    --  H    --
1580470.biobos  susanc   norm     swarm7n106     --   1   1    --    --  H    --
1580471.biobos  susanc   norm     swarm8n106     --   1   1    --    --  H    --
1580472.biobos  susanc   norm     swarm9n106     --   1   1    --    --  H    --
1580473.biobos  susanc   norm     swarm10n10    --   1   1    --    --  H    --
1580475.biobos  susanc   norm     swarm11n10    --   1   1    --    --  H    --
1580476.biobos  susanc   norm     swarm12n10    --   1   1    --    --  H    --
1580477.biobos  susanc   norm     swarm13n10    --   1   1    --    --  H    --
1580478.biobos  susanc   norm     swarm14n10    --   1   1    --    --  H    --
1580479.biobos  susanc   norm     swarm15n10    --   1   1    --    --  H    --
1580480.biobos  susanc   norm     swarm16n10    --   1   1    --    --  H    --
1580481.biobos  susanc   norm     swarm17n10    --   1   1    --    --  H    --
1580482.biobos  susanc   norm     swarm18n10    --   1   1    --    --  H    --
1580483.biobos  susanc   norm     bpx_postpr    --   1   1    --    --  H    --

[... after 2 mins ...]

[susanc@p4 mydir]$ qstat -u susanc

biobos:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
1580363.biobos  susanc   norm     STDIN       23498   1   1    --    --  R 01:05
1580464.biobos  susanc   norm     swarm1n106  19394   1   1    --    --  R 00:00
1580465.biobos  susanc   norm     swarm2n106  25610   1   1    --    --  R 00:00
1580466.biobos  susanc   norm     swarm3n106  22687   1   1    --    --  R 00:00
1580467.biobos  susanc   norm     swarm4n106  30442   1   1    --    --  R 00:00
1580468.biobos  susanc   norm     swarm5n106   9794   1   1    --    --  R 00:00
1580469.biobos  susanc   norm     swarm6n106  30983   1   1    --    --  R 00:00
1580470.biobos  susanc   norm     swarm7n106  12429   1   1    --    --  R 00:00
1580471.biobos  susanc   norm     swarm8n106  12006   1   1    --    --  R 00:00
1580472.biobos  susanc   norm     swarm9n106   6919   1   1    --    --  R 00:00
1580473.biobos  susanc   norm     swarm10n10  17030   1   1    --    --  R 00:00
1580475.biobos  susanc   norm     swarm11n10  24613   1   1    --    --  R 00:00
1580476.biobos  susanc   norm     swarm12n10  24682   1   1    --    --  R 00:00
1580477.biobos  susanc   norm     swarm13n10  14467   1   1    --    --  R 00:00
1580478.biobos  susanc   norm     swarm14n10  18595   1   1    --    --  R 00:00
1580479.biobos  susanc   norm     swarm15n10  20954   1   1    --    --  R 00:00
1580480.biobos  susanc   norm     swarm16n10  18845   1   1    --    --  R 00:00
1580481.biobos  susanc   norm     swarm17n10  17404   1   1    --    --  R 00:00
1580482.biobos  susanc   norm     swarm18n10  25006   1   1    --    --  R 00:00
1580483.biobos  susanc   norm     bpx_postpr    --    1   1    --    --  H    --

[susanc@p4 bedpost]$ jobload susanc
         Jobs for susanc
                        Node     Load
1580363.biobos          p4        25%
1580464.biobos          p1483     96%
1580465.biobos          p1512     96%
1580466.biobos          p1513     96%
1580467.biobos          p1515     96%
1580468.biobos          p1517     95%
1580469.biobos          p1519     96%
1580470.biobos          p1520     96%
1580471.biobos          p1525     96%
1580472.biobos          p1535     96%
1580473.biobos          p1536     97%
1580475.biobos          p1538     96%
1580476.biobos          p1539     97%
1580477.biobos          p1540     97%
1580478.biobos          p1543     95%
1580479.biobos          p1544     95%
1580480.biobos          p1545     97%
1580481.biobos          p1546     97%
1580482.biobos          p1548     49%

User Average:  90%
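When the bpx_postpr job finishes, the results are collected into a directory named <subjectdir>.bedpostX (the same directory referenced by the monitor line in the bedpostx output above). A simple way to confirm the run completed is to list that directory, using the sampledataset paths from the session above:

# Check that the bedpostx results directory was created and populated
ls /data/susanc/bedpost/sampledataset.bedpostX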
- Documentation for individual FSL tools at the FSL website in Oxford.
- FSL FAQ
- FSL support and training.