Once it's done, you can connect directly to the cluster by typing `ssh u123456@cluster.calc.priv`
in your terminal (replacing `u123456` with your university ID).
You will then be in your [home](faq/home-info), from which you will have access to
the mass storage and to the compute nodes.
### 2.2 Very important points:
When you connect to the cluster, you are on the master.
**It is FORBIDDEN to make your calculations directly on the master.**
You must use `slurm` to send your analyses (jobs) to the compute nodes.
Slurm is a resource management system: it lets cluster users request the resources each job needs
and launches the jobs as soon as those resources become available.
To use slurm, you can either write a bash script that describes the resources needed
as well as the command(s) to run, or use an interactive slurm session in which you
run your commands directly.
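As an illustration, a minimal batch script might look like the following (the job name, partition, time limit and final command are placeholders; adapt them to your own analysis):

```
#!/bin/bash
#SBATCH --job-name=my_analysis   # placeholder job name, shown by squeue
#SBATCH --partition=all_5hrs     # one of the partitions you have access to
#SBATCH --time=01:00:00          # wall-time limit (hh:mm:ss)
#SBATCH --cpus-per-task=1        # number of CPUs requested
#SBATCH --mem=4G                 # RAM requested

# the actual command(s) to run on the compute node
echo "Running on $(hostname)"
```

Such a script would then be submitted with `sbatch my_analysis.sh`.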
You can find more information about slurm on [this wiki page](cluster/slurm/slurm_home).
The CECI also provides a well-explained [tutorial](https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html)
and [FAQ](https://support.ceci-hpc.be/doc/_contents/SubmittingJobs/SlurmFAQ.html).
In addition to this, there are some very important considerations related to the type of jobs you want to run.
- As the compute nodes are available to everyone, it is important not to launch several million jobs
at the same time, so that you do not use up all the available resources. To limit the number of jobs running in parallel,
you can use arrays (see below).
- In addition, you should avoid launching a large number of very short jobs one after the other.
If the overhead time slurm spends managing each job is larger than the job itself and
many such jobs are sent at the same time, slurm can crash.
It is recommended that each individual job takes at least 20 minutes.
If you have lots of small jobs, please combine several of them into one job that executes them one after the other
(for example with a for loop), so that each job managed by slurm lasts about 20 minutes or more,
and send several of these combined jobs in parallel using the array method explained below.
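The two points above can be combined in a single batch script: a job array whose concurrency is capped with the `%` syntax, where each array task bundles many small commands in a for loop. This is only a sketch; the array size, batch size and `process_sample.sh` command are hypothetical:

```
#!/bin/bash
#SBATCH --job-name=combined_jobs
#SBATCH --array=1-100%10     # 100 array tasks, at most 10 running at the same time
#SBATCH --time=00:30:00

# Each array task processes a batch of 50 small inputs, so that one
# slurm-managed job lasts about 20 minutes instead of a few seconds.
start=$(( (SLURM_ARRAY_TASK_ID - 1) * 50 + 1 ))
end=$(( SLURM_ARRAY_TASK_ID * 50 ))

for i in $(seq "$start" "$end"); do
    ./process_sample.sh "$i"   # hypothetical per-input command
done
```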
Also note that if you are launching a large number of jobs, you should avoid asking to receive an email
each time a job starts or ends. In the past, our server has occasionally been blacklisted for trying to
send more than 10,000 emails in one hour, and when this happens, no one receives emails anymore.
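If you still want notifications, slurm lets you restrict them, for example to failures only. A sketch (the email address is a placeholder):

```
#SBATCH --mail-type=FAIL               # email only when a job fails (NONE disables emails)
#SBATCH --mail-user=you@example.org    # placeholder address, replace with your own
```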
In addition, the mass storage is not optimized to store a very large number of small files.
If the outputs of your commands are small files, you should ideally generate them on the scratch disk (see below)
and then either concatenate them, gather the information that interests you into a single file,
or [group them in an archive](mass-storage/mass-storage-compression) before transferring it to the mass storage.
The same applies to slurm logs: if you have several thousand jobs and want to keep all the logs,
we recommend combining them into a single archive.
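For example, assuming your logs use the default `slurm-<jobid>.out` naming, they can be bundled into one compressed archive (a sketch; list the archive contents before deleting the originals):

```
# bundle all slurm logs into one compressed archive
tar -czf slurm_logs.tar.gz slurm-*.out

# verify the archive is readable, then remove the original log files
tar -tzf slurm_logs.tar.gz && rm slurm-*.out
```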
### 2.3. Operating system, programs and compute nodes:
The operating system of the cluster is CentOS, a Linux distribution.
The main programming languages are available on the cluster.
There is also a series of programs installed as `modules`.
These modules can be used by following the instructions on [this wiki page](cluster/software/cluster-module).
To use a module in an analysis, you must load the module in your script.
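For instance, browsing and loading a module might look like this (the module name `R` is only an illustration; use `module avail` to see what is actually installed on the cluster):

```
module avail     # list the modules installed on the cluster
module load R    # load a module (the name here is an example)
module list      # show the modules currently loaded in your session
```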
There are several compute nodes available on the cluster.
These nodes have different resources (number of CPUs and RAM available) and are grouped into "partitions".
By default, any GIGA member has access to the compute nodes in the partitions all_5hrs, all_24hrs and kosmos.
There is no time limit for jobs sent to the kosmos partition, but jobs sent to the two other partitions
will be killed by slurm if they do not complete within the indicated time (5h and 24h respectively).
You can see the nodes present in the partitions to which you have access by typing
```
module load slurm # load the slurm module (only needed once per session)
sinfo
```
And see the resources available on each node with the following two commands:
```
sinfo -lN
grep ^Node /etc/slurm/slurm.conf
```
If your lab bought some compute nodes, they are probably in a separate partition, and your lab's PI
needs to send a request to the [UDIMED/UDIGIGA](contacts) to add you to the list of people with access to it.
### 2.4 Interactive sessions
For an interactive session, you can use the `srun` command. An example of this command is the following: