Two sets of computers will be available for the contest.
From the 12th to the 20th of July, five 8-core machines named edel-'X' and two 8-core, 2-GPU machines named adonis-'X' will be available for testing purposes.
During the contest days, 40 edel and 10 adonis nodes will be available, for a total of 400 cores and 1.2 TB of main memory.
Note that the default network interface for the edel nodes is IP over InfiniBand, whereas Gigabit Ethernet is the default for adonis.
Those machines have been pre-reserved and should be used with the following instructions:
1. Connect with ssh to access.grenoble.grid5000.fr using your personal PASCO2010 login/password. Nothing should be executed on access.grenoble.grid5000.fr except step 2.
2. From access.grenoble.grid5000.fr, connect with ssh to frontend.grenoble.grid5000.fr. This is the machine where you will reserve edel and adonis nodes for the programming contest. Please avoid compiling or running CPU-intensive programs on this host. It is then recommended to generate an ssh key without passphrase dedicated to the cluster (ssh-keygen command).
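As an illustration, the connection sequence could look like the following; the login name is a placeholder for your own PASCO2010 account:
$ ssh mylogin@access.grenoble.grid5000.fr      # step 1: gateway only, run nothing here
$ ssh frontend.grenoble.grid5000.fr            # step 2: reservation frontend
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # passphrase-less key dedicated to the cluster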
Your home directory on the cluster is shared (via NFS) between the two service hosts, access and frontend, and the computing nodes edel and adonis. You can transfer files from your computer to the cluster using scp to access.grenoble.grid5000.fr.
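For example, uploading an archive from your local machine might look like this (the login and file names are placeholders):
$ scp sources.tar.gz mylogin@access.grenoble.grid5000.fr:~/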
In order to develop and test your programs, you must make a reservation on one (or more) of the pre-reserved machines of the edel and adonis clusters. Reservations are performed with the pascosub command, for instance:
$ pascosub edel -n 1 -h 4
will ask for one edel node for 4 hours (the walltime parameter). Please be fair: do not reserve nodes if you cannot use all of the processing time, especially during the initial phase from July the 12th until July the 20th.
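If you need several nodes, the same options should apply; for instance, the following would presumably request four edel nodes for two hours:
$ pascosub edel -n 4 -h 2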
You can check node availability using the pascocheck command (those two pasco commands are in fact wrappers around OAR scheduler commands).
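A minimal sketch of the check, assuming pascocheck can be run without arguments from frontend (its exact options are not detailed here):
$ pascocheck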
Once your reservation has been accepted, you will automatically be connected to one of your nodes.
The full list of your nodes can be obtained through the OAR_NODEFILE variable:
nturro@chartreuse:~$ pascosub adonis -n 2 -h 4
[ADMISSION RULE] Modify resource description with type constraints
Generate a job key...
OAR_JOB_ID=1077997
Interactive mode : waiting...
Starting...
Initialize X11 forwarding...
Connect to OAR job 1077997 via the node adonis-6.grenoble.grid5000.fr
nturro@adonis-6:~$ cat $OAR_NODEFILE
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-6.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
adonis-7.grenoble.grid5000.fr
Note: each adonis hostname is repeated as many times as the number of cores on the machine.
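Since the file contains one line per reserved core, it can typically be passed to Open MPI as a machine file; a minimal sketch, assuming an executable named my_solver in your home directory:
$ mpirun -machinefile $OAR_NODEFILE ./my_solver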
A recent OFED stack containing Open MPI is already installed on the clusters. CUDA libraries (for the adonis cluster) and other libraries can be found in the module repository:
nturro@adonis-6:~$ source /applis/ciment/env.bash
****************************************************
Welcome into the CIMENT variables environment!
****************************************************
You have now access to the default applications, but you may need to load more variables using the 'module' command.
You can list available modules with:
  module avail
And to load a particular module (for example netcdf v4.0.1):
  module load netcdf/4.0.1
****************************************************
nturro@adonis-6:~$ module av
------------------------------------------- /applis/ciment/x86_64/share/modules/modulefiles/ciment --------------------------------------------
R/2.10.1              cuda/3.0              gcc/4.4.3            netcdf/nco/4.0.0      openmpi/1.3.2-intel    python/2.6.5
R/2.11.1(default)     elmer/5.4.1           irods/2.3            netcdf/ncview/1.93g   openmpi/1.4.1          python/2.6.5-intel
cmake/2.8.0           gcc/4.1.2(default)    netcdf/4.0.1         numpy/1.4.1           openmpi/1.4.1-intel    tcl/8.5.8
cuda/2.3(default)     gcc/4.3.3             netcdf/4.0.1-intel   numpy/1.4.1-intel     petsc/3.0.0-p12-intel
For instance, if you need to use CUDA 3.0, enter:
nturro@adonis-6:~$ module load cuda/3.0
which sets the environment variable "CUDA_INSTALL_PATH" to the correct path.
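A CUDA source file can then be compiled with the nvcc shipped under that path; a minimal sketch, assuming a file kernel.cu of your own:
$ $CUDA_INSTALL_PATH/bin/nvcc -o kernel kernel.cu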
Good programming!