Cluster
General Information
home directory quota
There is a 10GB quota limit enforced on $HOME directory (/global/home/users/username) usage. Please keep your usage below this limit. There will be NetApp snapshots in place in this file system, so we suggest you store only your source code and scripts in this area and keep all your data under /clusterfs/cortex (see below).
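To check how much of the quota you are currently using, something like the following is enough (a minimal sketch using standard tools):

du -sh $HOME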
data
For large amounts of data, please create a directory
/clusterfs/cortex/scratch/username
and store the data inside that directory.
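For example (a one-line sketch, assuming your login name is available in $USER):

mkdir -p /clusterfs/cortex/scratch/$USER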
Connect
get a password
- press the PASS WORD button on your crypto card
- enter your password
- press enter
- the 7-digit password is displayed (use it without the dash)
setup environment
- put all your customizations into your .bashrc
- for login shells, .bash_profile is used, which in turn loads .bashrc
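A minimal .bash_profile following this convention could look like the following (just a sketch, not a required setup):

# ~/.bash_profile: for login shells, simply load ~/.bashrc
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi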
ssh to the gateway computer (hadley)
note: please don't use the gateway for computations (e.g. matlab)!
ssh -Y neuro-calhpc.berkeley.edu (or hadley.berkeley.edu)
and log in with the password from your crypto card
Useful commands
Start interactive session on compute node
- start interactive session:
qsub -X -I
- start interactive session on particular node (nodes n0000.cortex and n0001.cortex have GPUs):
qsub -X -I -l nodes=n0001.cortex
Perceus commands
The perceus manual is here
- listing available cluster nodes:
wwstats
- list cluster usage
wwtop
- to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc
export NODES='*cortex'
- list loaded modules: module list
- list available modules: module avail
- get help on the module command: module help
- help pages are here
Resource Manager PBS
- the job scheduler is MOAB
- List running jobs:
qstat -a
- List a given job together with the nodes it runs on:
qstat -n 98
- sample script
#!/bin/bash
#PBS -q cortex
#PBS -l nodes=1:ppn=2:cortex
#PBS -l walltime=01:00:00
#PBS -o path-to-output
#PBS -e path-to-error
cd /global/home/users/kilian/sample_executables
cat $PBS_NODEFILE
mpirun -np 8 /bin/hostname
sleep 60
- submit script
qsub scriptname
- interactive session
qsub -I -q cortex -l nodes=1:ppn=2:cortex -l walltime=00:15:00
- list nodes that your job is running on
cat $PBS_NODEFILE
- run the program on several cores
mpirun -np 4 -mca btl ^openib sample_executables/mpi_hello
Matlab
note: remember to start an interactive session before starting matlab!
In order to use matlab, you have to load the matlab environment:
module load matlab
Once the matlab environment is loaded, you can start a matlab session by running
matlab -nojvm -nodesktop
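Putting the steps together, a typical Matlab session could look like this (the queue name, node request, and walltime are taken from the PBS examples above; adjust them to your needs):

qsub -X -I -q cortex -l nodes=1:ppn=2:cortex -l walltime=01:00:00
module load matlab
matlab -nojvm -nodesktop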
Sage
I've installed Sage in /clusterfs/cortex/software/sage. Sage's homepage is http://sagemath.org.
A sample PBS and MPI script is here:
~amirk/test
You can run it as:
% mkdir -p ~/jobs
% cd ~amirk/test
% qsub pbs
In your interactive session, if you want to have a scipy environment (run ipython, etc), first do:
% /clusterfs/cortex/software/sage/sage -sh
then you can run:
% ipython
or you can just do:
% /clusterfs/cortex/software/sage/sage -ipython
This is a temporary solution for people wanting to use scipy with MPI on the cluster. It was built against the default openmpi (1.2.8, icc) and mpi4py 1.1.0. For those using HDF5, I also built hdf5 1.8.3 (gcc) and h5py 1.2.
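As a quick sanity check that the MPI-enabled Python stack works, you can run something like the following from an interactive session (a sketch; hello_mpi4py.py stands for a small mpi4py test script of your own):

% /clusterfs/cortex/software/sage/sage -sh
% mpirun -np 2 -mca btl ^openib python hello_mpi4py.py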
--Amir
CUDA
I've installed the CUDA 2.2 toolkit here:
/clusterfs/cortex/software/cuda-2.2
The SDK is here:
/clusterfs/cortex/software/cuda-2.2/sdk
To your PATH, add:
/clusterfs/cortex/software/cuda-2.2/bin
To your LD_LIBRARY_PATH, add:
/clusterfs/cortex/software/cuda-2.2/lib
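For example, you could add the following lines to your .bashrc (a sketch using the paths above):

export PATH=/clusterfs/cortex/software/cuda-2.2/bin:$PATH
export LD_LIBRARY_PATH=/clusterfs/cortex/software/cuda-2.2/lib:$LD_LIBRARY_PATH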
--Amir
Support Requests
- If you have a problem that is not covered on this page, you can send an email to our user list:
redwood_cluster@lists.berkeley.edu
- If you need additional help from the LBL group, send an email to their email list. Please always cc our email list as well.
scs@lbl.gov
- In urgent cases, you can also email Krishna Muriki (LBL User Services) directly.