documentacion-externa

Repository used for the external documentation of BIFI, available to everyone.

Members and contact:

  • Daniel Martínez Cucalon: daniel.martinez@bifi.es
  • John Díaz Laglera: john.diaz@bifi.es

Basic procedures on Agustina

Information about the management procedures on Agustina

Requesting access to the system

To obtain a user account on the Agustina system it is necessary to carry out a series of steps, which include creating a project, creating its associated accounts, activating them, and linking them to the created project.

The forms to fill in can be found at: https://soporte.bifi.unizar.es/forms/form.php

Logging in to the system

The command to access Agustina via ssh is:

ssh id_usuario@agustina.bifi.unizar.es

Example:

ssh john.diaz@agustina.bifi.unizar.es

Users can log in this way as long as they have a fixed IP address that has been opened in the firewall. Otherwise, access can be made through Bridge:

ssh id_usuario@bridge.bifi.unizar.es

and once logged in:

ssh id_usuario@agustina.bifi.unizar.es

Example:

ssh jdiazlag@bridge.bifi.unizar.es

ssh john.diaz@agustina.bifi.unizar.es

Text editors

The default editor is Vi, although Emacs (module load emacs) or Vim (module load vim) can also be loaded.

Storage usage

Once logged in to the Agustina system, the user has a small space (limited by quota) under their home directory. This space is not intended for storing data, since it is very small; it is meant for scripts, notes and other lightweight items.

Each user can also access scratch storage on a Lustre file system, with larger capacity and intended for more intensive work, located at /fs/agustina/id_usuario

Example:

	/fs/agustina/john.diaz/
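
As a sketch (assuming Lustre user quotas are enabled on /fs/agustina), a typical first step is to create a working directory under the scratch space and check the available quotas:

# create a personal working directory on the Lustre scratch space
mkdir -p /fs/agustina/$(whoami)/my_project

# check the quota of the home directory
quota -s

# check the Lustre quota of the current user (if user quotas are enabled)
lfs quota -h -u $(whoami) /fs/agustina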

Job management

IMPORTANT: Any job launched locally on the login nodes will be cancelled. Jobs must be submitted to the cluster.

Agustina uses Slurm as its queue management system. The usual procedure to run jobs is to create a script describing the characteristics of the task and to submit it with the sbatch command.

The system provides the following partitions:

	PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
	fat          up 7-00:00:00      3   idle afat[01-03]
	thin*        up 7-00:00:00     93   idle athin[01-93]
	full         up 7-00:00:00     96   idle afat[01-03],athin[01-93]
	hopper       up 7-00:00:00      2   idle agpuh[02-03]
	ada          up 7-00:00:00      9   idle agpul[01-09]
	RES          up 7-00:00:00      9   idle afat01,athin[01-02,04-06,09,11-12,14-16,18-23,25-30]

thin comprises the standard compute nodes, fat the nodes with twice the memory, and full the whole set of compute nodes. In addition there are the GPU partitions: hopper with H100 GPUs and ada with L40S GPUs. Their state can be checked with the sinfo command.

IMPORTANT: Users who consume computing hours granted by the RES (Red Española de Supercomputación) must use the RES partition.

Example script helloWorld.sh:

#!/bin/env bash

#SBATCH -J helloTest # job name
#SBATCH -o helloTest.o%j # output and error file name (%j expands to jobID)
#SBATCH -N 3 # total number of nodes
#SBATCH --ntasks-per-node=12 # number of cores per node (maximum 24)
#SBATCH -p thin # partition

echo "Hello world, I am running on node $HOSTNAME"
sleep 10
date

To submit the task, it is essential to associate it with its corresponding project:

sbatch --account id_proyecto helloWorld.sh

Otherwise, an error message will be returned:

sbatch: error: QOSMaxSubmitJobPerUserLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

The execution status of the job can be checked with the squeue command.

Example:

[john.diaz@alogin02 slurmTest]$ sbatch --account proyecto_prueba helloWorld.sh
Submitted batch job 1707

[john.diaz@alogin02 slurmTest]$ squeue
	     JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
	      1707      thin helloTes john.dia  R       0:01      3 athin[49-51]

Other useful commands for job management are:

Cancellation

scancel job_id

Detailed information

scontrol show job job_id

IMPORTANT: Jobs have a maximum duration in the system of one week (7 days from job start), so if a job runs for more than one week the queue system will cancel it. To avoid this, the user should split the input data in order to submit jobs with a smaller processing load (for example with a Slurm job array, as sketched below) or parallelise their software.
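
As an illustration (the program name and input files are hypothetical), a Slurm job array can be used to split the input data into independent, shorter jobs:

#!/bin/bash

#SBATCH -J splitTest # job name
#SBATCH -o splitTest.o%A_%a # output file (%A expands to the array job ID, %a to the array index)
#SBATCH -p thin # partition
#SBATCH -N 1
#SBATCH --ntasks-per-node=24
#SBATCH --array=0-9 # 10 independent sub-jobs

# each sub-job processes one chunk of the input data, e.g. input_0.dat ... input_9.dat
./my_program input_${SLURM_ARRAY_TASK_ID}.dat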

Information about job state codes: https://confluence.cscs.ch/display/KB/Meaning+of+Slurm+job+state+codes

Slurm user guide: https://slurm.schedmd.com/quickstart.html

FHI-aims

FHI-aims is an all-electron electronic structure code based on numeric atom-centered orbitals. It enables first-principles simulations with very high numerical accuracy for production calculations, with excellent scalability up to very large system sizes (thousands of atoms) and up to very large, massively parallel supercomputers (ten thousand CPU cores).

Usage

Example script : testFHI.sh

#!/bin/bash

# Request 3 nodes with 2 MPI tasks per node
#SBATCH --job-name=FHI-aims
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=64GB

#SBATCH --partition=thin

ulimit -s unlimited
module purge
module load gnu12/12.2.0
module load openmpi4/4.1.4
module load hwloc/2.7.0
module load fhi-aims/240507
mpirun aims.240507.scalapack.mpi.x

In addition to the script for launching the task, it is necessary to provide the input data in the files control.in and geometry.in

Submit with :

sbatch --account=your_project_ID testFHI.sh

More info :

Gaussian

Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling. Gaussian 16 is licensed for a wide variety of computer systems. All versions of Gaussian 16 contain every scientific/modeling feature, and none imposes any artificial limitations on calculations other than your computing resources and patience.

Usage

Example script : testGaussian.sh

#!/bin/bash

#SBATCH -e TSHX2_boro_NBOs%j.err
#SBATCH -o TSHX2_boro_NBOs%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load gaussian/g16

g16 < TSHX2_boro_NBOs.gjf

Submit with :

sbatch --account=your_project_ID testGaussian.sh
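
The .gjf file is a standard Gaussian input file. A minimal, purely illustrative example (a hypothetical water optimisation, not the TSHX2_boro_NBOs.gjf input used above) looks like the following; note that Gaussian input files must end with a blank line:

Example file : water.gjf

%nprocshared=24
%mem=16GB
#p HF/6-31G(d) Opt

Water geometry optimisation

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200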

Users can also make use of Gaussian 09 for legacy purposes; the corresponding module can be loaded as:

module load gaussian/g09

Usage:

Example script : testGaussian.sh

#!/bin/bash

#SBATCH -e TSHX2_boro_NBOs%j.err
#SBATCH -o TSHX2_boro_NBOs%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load gaussian/g09

g09 < TSHX2_boro_NBOs.gjf

Submit with :

sbatch --account=your_project_ID testGaussian.sh

More info :

Comsol

Create physics-based models and simulation applications with this software platform. The Model Builder enables you to combine multiple physics in any order for simulations of real-world phenomena. The Application Builder gives you the tools to build your own simulation apps. The Model Manager is a modeling and simulation management tool.

Usage

Example script : testComsol.sh

#!/bin/bash
#
#SBATCH -J a00P1TMax80ay220d300 # job name
#SBATCH -o a00P1TMax80ay220d300.o%j # output and error file name (%j expands to  jobID)
#SBATCH --nodes 2
#SBATCH --exclusive

module load comsol/6.1
comsol batch -inputfile a00P1TMax80ay220d300.mph -outputfile outa00P1TMax80ay220d300.mph -batchlog loga00P1TMax80ay220d300.txt

Submit with :

sbatch --account=your_project_ID testComsol.sh

More info :

Orca

ORCA is an ab initio quantum chemistry program package for electronic structure calculations.

Usage

Example script : testOrca.sh

#!/bin/bash 
# 
#SBATCH -J testOrca # job name 
#SBATCH -o testOrca.o%j # output and error file name (%j expands to  jobID)
#SBATCH -e testOrca.e%j # output and error file name (%j expands to  jobID)  

#SBATCH -n 2

module load orca/5.0.4
orca water.inp

Example file : water.inp

!HF DEF2-SVP
* xyz 0 1
O   0.0000   0.0000   0.0626
H  -0.7920   0.0000  -0.4973
H   0.7920   0.0000  -0.4973
*

Example script : testOrca1.sh

#!/bin/bash

#SBATCH --job-name=orca_thin_job 	# job name
#SBATCH --output=orca_thin_job.o%j 	# output file
#SBATCH --error=orca_thin_job.e%j 	# error file
#SBATCH --partition=thin 		# 'thin' partition
#SBATCH --nodes=2
#SBATCH --account=proyecto_prueba 	# project account
#SBATCH --mincpus=48
#SBATCH --ntasks=12
#SBATCH --cpus-per-task=4

export ORCADIR=/opt/ohpc/pub/apps/orca/orca_5_0_4_linux_x86-64_shared_openmpi411
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ohpc/pub/libs/hwloc/lib

module load openmpi/4.1.2-gcc-12.2.0-fosm5wz
module load orca/5.0.4

$ORCADIR/orca input.inp > output.out

Example file : input.inp

! B3LYP def2-SVP TightSCF Opt      # change according to your method and requirements

%pal
  nprocs 12                        # 12 MPI processes in total (matches --ntasks=12 in the batch script)
end

* xyz 0 1                          # geometry of the system (this is just an example)
C    0.000000    0.000000    0.000000
H    0.000000    0.000000    1.089000
H    1.026719    0.000000   -0.363000
H   -0.513360   -0.889165   -0.363000
H   -0.513360    0.889165   -0.363000
*

Submit with :

sbatch --account=your_project_ID Orca_script_name.sh

More info : https://www.faccts.de/orca/

Autodock-Vina

AutoDock Vina is an open-source program for doing molecular docking. It was originally designed and implemented by Dr. Oleg Trott in the Molecular Graphics Lab (now CCSB) at The Scripps Research Institute.

Usage

Example script : testAutodock.sh

#!/bin/bash
#
#SBATCH -J testAutodock # job name
#SBATCH -o testAutodock.o%j # output and error file name (%j expands to  jobID)
#SBATCH -e testAutodock.e%j # output and error file name (%j expands to  jobID)
#SBATCH -n 2

module load vina/1.2.5
vina_1.2.5_linux_x86_64 --config conf.txt --out out-10modes.pdbqt --exhaustiveness 8 --cpu 2

Submit with :

sbatch --account=your_project_ID testAutodock.sh
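
The conf.txt file passed with --config is a standard AutoDock Vina configuration file. A hypothetical minimal example (receptor/ligand names and search-box values are placeholders; exhaustiveness and cpu are already given on the command line above):

Example file : conf.txt

receptor = receptor.pdbqt
ligand = ligand.pdbqt

center_x = 11.5
center_y = 90.5
center_z = 57.5

size_x = 20
size_y = 20
size_z = 20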

More info :

Namd

NAMD, recipient of a 2002 Gordon Bell Award, a 2012 Sidney Fernbach Award, and a 2020 Gordon Bell Prize, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR. NAMD is distributed free of charge with source code.

Usage

Example script : testNamd.sh

#!/bin/bash
#SBATCH --job-name=alanin
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=32G
#SBATCH --time=1-00:00:00
#SBATCH --partition=thin
#

module load namd/3.0b6

namd3 +p ${SLURM_TASKS_PER_NODE} alanin.conf > alanin.log

Example files can be found at : lib/replica/example/ in NAMD_3.0_Linux-x86_64-multicore.tar.gz

Submit with :

sbatch --account=your_project_ID testNamd.sh

More info :

OpenMX

OpenMX (Open source package for Material eXplorer) is a software package for nano-scale material simulations based on density functional theories (DFT) [1], norm-conserving pseudopotentials [32,33,34,35,36], and pseudo-atomic localized basis functions [41]. The methods and algorithms used in OpenMX and their implementation are carefully designed for the realization of large-scale ab initio electronic structure calculations on parallel computers based on MPI or hybrid MPI/OpenMP parallelism.

The efficient implementation of DFT makes it possible to investigate the electronic, magnetic, and geometrical structures of a wide variety of materials such as bulk materials, surfaces, interfaces, liquids, and low-dimensional materials. Systems consisting of 1000 atoms can be treated using the conventional diagonalization method if several hundred cores of a parallel computer are used. Even ab initio electronic structure calculations for systems consisting of more than 10000 atoms are possible with the O(N) methods implemented in OpenMX if several thousand CPU cores are available. Since optimized and well-tested pseudopotentials and basis functions are provided for many elements, users can quickly start their own calculations without having to prepare those data themselves.

Considerable functionality has been implemented for the calculation of physical properties such as magnetic, dielectric, and electric transport properties. Thus, OpenMX is expected to be a useful and powerful theoretical tool for nano-scale material science, leading to a better and deeper understanding of complicated and useful materials based on quantum mechanics. The development of OpenMX was initiated by the Ozaki group in 2000, and since then many developers, listed on the top page of the manual, have contributed to the further development of the open source package. The distribution of the program package and the source code follows the GNU General Public License version 3 (GPLv3) [102], and they can be downloaded from http://www.openmx-square.org/

Usage

Example script : testOpenmx.sh

#!/bin/bash
#SBATCH --job-name=openMx
#SBATCH --output=openMx.o%j
#SBATCH --error=openMx.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=32G
#SBATCH --time=1-00:00:00
#SBATCH --partition=thin

module load openmpi/4.1.5-gcc-12.2.0-vo6j57n
module load openmx/3.9-gcc-12.2.0-yewsx2z

mpirun -np 4 openmx -runtest -nt 1
cp runtest.result runtest.result.4.1

mpirun -np 2 openmx -runtest -nt 2
cp runtest.result runtest.result.2.2

mpirun -np 1 openmx -runtest -nt 4
cp runtest.result runtest.result.1.4

Run from the openmx3.9/work directory inside the OpenMX source tree.

Submit with :

sbatch --account=your_project_ID testOpenmx.sh

More info :

Gimic

This is the GIMIC program for calculating magnetically induced currents in molecules. For this program to produce any kind of useful information, you need to provide it with an AO density matrix and three (effective) magnetically perturbed AO density matrices in the proper format. Currently only recent versions of ACES2 (CFOUR), Turbomole, QChem, LSDalton, FERMION++ and Gaussian can produce these matrices. Dalton is in the works. If you would like to add your favourite program to the list, please use the source, Luke.

Usage

Example script : testGimic.sh

#!/bin/bash
#SBATCH -J testGimic # job name
#SBATCH -N 1 # total number of nodes
#SBATCH --cpus-per-task=2
#SBATCH --output=testGimic-%j.out #output file (%j expands to jobID)
#SBATCH --error=testGimic-%j.err #error file (%j expands to jobID)
#SBATCH --mem-per-cpu=16G
#SBATCH -p thin # partition

module load gimic/2.2.1
gimic > gimic.out

Launch from the gimic-master/examples/benzene/3D directory.

More info :

OpenBabel

OpenBabel is a project to facilitate the interconversion of chemical data from one format to another – including file formats of various types. This is important for the following reasons:

  • Multiple programs are often required in realistic workflows. These may include databases, modeling or computational programs, visualization programs, etc.
  • Many programs have individual data formats, and/or support only a small subset of other file types.
  • Chemical representations often vary considerably:
    • Some programs are 2D. Some are 3D. Some use fractional k-space coordinates.
    • Some programs use bonds and atoms of discrete types. Others use only atoms and electrons.
    • Some programs use symmetric representations. Others do not.
    • Some programs specify all atoms. Others use "residues" or omit hydrogen atoms.
  • Individual implementations of even standardized file formats are often buggy, incomplete or do not completely match published standards.

As a free and open source project, OpenBabel improves by way of helping others. It gains by way of its users, contributors, developers, related projects, and the general chemical community. We must continually strive to support these constituencies.

Usage

Example script : testOpenbabel.sh

#!/bin/bash

#SBATCH -e openbabel_test%j.err
#SBATCH -o openbabel_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load openbabel/3.1.1

obabel --help

Submit with :

sbatch --account=your_project_ID testOpenbabel.sh
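
The script above only prints the help text. As a sketch of real usage (file names are hypothetical), typical conversions look like:

# convert a SMILES file to SDF, generating 3D coordinates
obabel molecule.smi -O molecule.sdf --gen3d

# convert a PDB structure to SMILES
obabel molecule.pdb -O molecule.smi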

More info :

NVIDIA HPC SDK

A Comprehensive Suite of Compilers, Libraries and Tools for HPC. The NVIDIA HPC Software Development Kit (SDK) includes the proven compilers, libraries and software tools essential to maximizing developer productivity and the performance and portability of HPC applications.

IMPORTANT: If you use the GPU queues, it is mandatory to request a minimum number of CPU cores equal to 16 cores per GPU multiplied by the number of GPUs and the number of nodes requested (16 cores/GPU * num. GPUs * num. nodes).

Usage

Example script : testNvidiaHpcSdk.sh

#!/bin/bash

#SBATCH -e nvidiahpcsdk_test%j.err
#SBATCH -o nvidiahpcsdk_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load nvidia-hpc-sdk/24.5

nvcc --help
nvcc --version

Submit with :

sbatch --account=your_project_ID testNvidiaHpcSdk.sh

Example script (Running in ada queue) : sumParallel.sh

#!/bin/bash

#SBATCH -J multi-gpu-test
#SBATCH -e multigpu-test%j.err
#SBATCH -o multigpu-test%j.msg
#SBATCH -p ada # queue (partition)
#SBATCH --nodes=1
#SBATCH --gres=gpu:2 # launch in 2-GPUs
#SBATCH --cpus-per-task=32 # 16 cores por GPU * 2 GPUs * 1 nodo

module load nvidia-hpc-sdk/24.5
echo "Loaded NVIDIA SDK !!!"
nvidia-smi

mkdir -p /fs/agustina/$(whoami)/test-multigpu
export CUDA_SUM_CODE=/fs/agustina/$(whoami)/test-multigpu
nvcc $CUDA_SUM_CODE/sum-array-multigpu.cu -o $CUDA_SUM_CODE/sum-array-multigpu
$CUDA_SUM_CODE/./sum-array-multigpu

Example script (Running in hopper queue) : sumParallel.sh

#!/bin/bash

#SBATCH -J multi-gpu-test
#SBATCH -e multigpu-test%j.err
#SBATCH -o multigpu-test%j.msg
#SBATCH -p hopper # queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:4 # launch in 4-GPUs
#SBATCH --cpus-per-task=64 # 16 cores por GPU * 4 GPUs * 1 nodo

module load nvidia-hpc-sdk/24.5
echo "Loaded NVIDIA SDK !!!"
nvidia-smi

export CUDA_SUM_CODE=/fs/agustina/$(whoami)/test-multigpu
nvcc $CUDA_SUM_CODE/sum-array-multigpu.cu -o $CUDA_SUM_CODE/sum-array-multigpu
$CUDA_SUM_CODE/./sum-array-multigpu

Execution on selected nodes: sumParallel.sh

#!/bin/env bash

#SBATCH -J multgpu-test # job name
#SBATCH -o multi-gpu.o%j # output and error file name (%j expands to jobID)
#SBATCH -p hopper # H100 (partition)
#SBATCH --gres=gpu:4 # gpus per node
#SBATCH --nodelist=agpuh[02-03] # two nodes (nodes 2 and 3)
#SBATCH --ntasks=2 # one task per node
#SBATCH --cpus-per-task=128 # 16 cores por GPU * 4 GPUs * 2 nodos

module load nvidia-hpc-sdk/24.5
export CUDA_SUM_CODE=/fs/agustina/$(whoami)/test-multigpu
nvcc $CUDA_SUM_CODE/sum-array-multigpu.cu -o $CUDA_SUM_CODE/sum-array-multigpu
mpirun $CUDA_SUM_CODE/./sum-array-multigpu

Submit with :

sbatch --account=your_project_ID sumParallel.sh

CUDA file : sum-array-multigpu.cu

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>

// for random initialize
#include <stdlib.h>
#include <time.h>

// for memset
#include <cstring>

void printGpuInfo(int i) {

	cudaDeviceProp prop;
	cudaGetDeviceProperties(&prop, i);
	printf("Device Number: %d\n", i);
	printf("  Device name: %s\n", prop.name);
	printf("  Memory Clock Rate (KHz): %d\n",  prop.memoryClockRate);
	printf("  Memory Bus Width (bits): %d\n", prop.memoryBusWidth);
	printf("  Peak Memory Bandwidth (GB/s): %f\n\n", 2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
}

void compare_arrays(int *a, int *b, int size) {
	for (int i = 0; i < size; i++) {
		if (a[i] != b[i]) {
			printf("%d != %d\n", a[i], b[i]);
			printf("Arrays are different!\n\n");
			return;
		}
	}
	printf("Arrays are the same!\n\n");
}

// CUDA Kernel
__global__ void sum_array_gpu(int *a, int *b, int *c, int size) {
	int gid = blockIdx.x * blockDim.x + threadIdx.x;

	if (gid < size) {
		c[gid] = a[gid] + b[gid];
	}
}

void sum_array_cpu(int *a, int *b, int *c, int size) {
	for (int i = 0; i < size; i++) {
		c[i] = a[i] + b[i];
	}
}

int main() {

	int size = 10000;
	int block_size = 128;
	int nDevices;
	int NO_BYTES = size * sizeof(int);

	// host pointers
	int *h_a, *h_b, *gpu_results, *h_c;

	h_a = (int *)malloc(NO_BYTES);
	h_b = (int *)malloc(NO_BYTES);
	h_c = (int *)malloc(NO_BYTES);

	// initialize host pointer
	time_t t;
	srand((unsigned)time(&t));

	for (int i = 0; i < size; i++) {
		h_a[i] = (int)(rand() & 0xff);
	}

	for (int i = 0; i < size; i++) {
		h_b[i] = (int)(rand() & 0xff);
	}

	sum_array_cpu(h_a, h_b, h_c, size);

	cudaGetDeviceCount(&nDevices);

	// device pointer
	int *d_a, *d_b, *d_c;

	for (int dev = 0; dev < nDevices; dev++) {

		printGpuInfo(dev);
		cudaSetDevice(dev);

		gpu_results = (int *)malloc(NO_BYTES);
		memset(gpu_results, 0 , NO_BYTES);

		cudaMalloc((int **)&d_a, NO_BYTES);
		cudaMalloc((int **)&d_b, NO_BYTES);
		cudaMalloc((int **)&d_c, NO_BYTES);

		cudaMemcpy(d_a, h_a, NO_BYTES, cudaMemcpyHostToDevice);
		cudaMemcpy(d_b, h_b, NO_BYTES, cudaMemcpyHostToDevice);

		// launching the grid
		dim3 block(block_size);
		dim3 grid((size/block.x) + 1);

		sum_array_gpu<<<grid, block>>>(d_a, d_b, d_c, size);
		cudaDeviceSynchronize();

		cudaMemcpy(gpu_results, d_c, NO_BYTES, cudaMemcpyDeviceToHost);

		// array comparison
		compare_arrays(gpu_results, h_c, size);

		cudaFree(d_a);
		cudaFree(d_b);
		cudaFree(d_c);
		free(gpu_results);
	}

	free(h_a);
	free(h_b);
	free(h_c);

	return 0;
}

For a multi-GPU and multi-node launch it is mandatory to use OpenMPI. Here is the same bash script with multi-GPU support:

#!/bin/bash

#SBATCH -J multi-gpu-test
#SBATCH -e multigpu-test%j.err
#SBATCH -o multigpu-test%j.msg
#SBATCH -p ada # queue L40S (partition)
#SBATCH --gres=gpu:4 # gpus per node
#SBATCH --nodes=4 # four nodes
#SBATCH --ntasks=4 # one task per node
#SBATCH --cpus-per-task=256 # 16 cores por GPU * 4 GPUs * 4 nodos

echo "This bash script launches the program on 16 GPUs"

module load nvidia-hpc-sdk/24.5
echo "Loaded NVIDIA SDK !!!"
nvidia-smi

export CUDA_SUM_CODE=/fs/agustina/$(whoami)/test-multigpu
nvcc $CUDA_SUM_CODE/sum-array-multigpu.cu -o $CUDA_SUM_CODE/sum-array-multigpu
mpirun $CUDA_SUM_CODE/./sum-array-multigpu

Submit with :

sbatch --account=your_project_ID sumParallel.sh

OLLAMA execution in Agustina

To use OLLAMA on Agustina it is necessary to obtain the OLLAMA binary release for Linux.

Create the working directories, then download and uncompress the ollama-linux-amd64.tgz file:

$ mkdir -p /fs/agustina/$(whoami)/test-ollama/prompts # prompts folder
$ mkdir -p /fs/agustina/$(whoami)/test-ollama/models-ollama # new folder to download OLLAMA models
$ cd /fs/agustina/$(whoami)/test-ollama
$ wget https://github.com/ollama/ollama/releases/download/v0.3.14/ollama-linux-amd64.tgz
$ tar xvf ollama-linux-amd64.tgz

By default, OLLAMA stores the models under the home directory, specifically in $HOME/.ollama/models, but this path can be changed by setting the OLLAMA_MODELS environment variable.

Here is an example of running an OLLAMA model on the Agustina cluster's H100 GPUs:

bash file : test-ollama.sh

#!/bin/bash

#SBATCH -J ollama-gpu-test
#SBATCH -e ollama-test%j.err
#SBATCH -o ollama-test%j.msg
#SBATCH -p hopper # H100 queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:4 # four GPUs
#SBATCH --cpus-per-task=64 # 16 cores por GPU * 4 GPUs * 1 nodo

module load nvidia-hpc-sdk/24.5
echo "Loaded NVIDIA SDK !!!"

module load python-math/3.11.4

python --version
nvidia-smi

echo "Current path: $(pwd)"

export BASE_OLLAMA_TEST=/fs/agustina/$(whoami)/test-ollama
export PROMPTS_PATH=$BASE_OLLAMA_TEST/prompts
export OLLAMA_BIN=$BASE_OLLAMA_TEST/bin

echo "OLLAMA PATH: $OLLAMA_BIN"

export OLLAMA_NUMPARALLEL=4
export OLLAMA_LOAD_TIMEOUT=900

# change the models download path with this environment variable
export OLLAMA_MODELS=$BASE_OLLAMA_TEST/models-ollama

$OLLAMA_BIN/./ollama serve &
$OLLAMA_BIN/./ollama list

for i in llama3.1:8b-instruct-q2_K llama3.1:8b-instruct-q8_0; do
        touch answer1-$i.txt && >answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "PROMPT:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        cat $PROMPTS_PATH/prompt1.txt >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "ANSWER:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo $(< $PROMPTS_PATH/prompt1.txt) | $OLLAMA_BIN/./ollama run $i >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "--------------------------------" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "PROMPT:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        cat $PROMPTS_PATH/prompt2.txt >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "ANSWER:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo $(< $PROMPTS_PATH/prompt2.txt) | $OLLAMA_BIN/./ollama run $i >> answer1-$i.txt
done

$OLLAMA_BIN/./ollama list

echo "DONE!"

The script takes the input prompts from files $PROMPTS_PATH/prompt1.txt and $PROMPTS_PATH/prompt2.txt.

Submit with :

sbatch --account=your_project_ID test-ollama.sh

More info :

CUDA 12.0

CUDA is a proprietary parallel computing platform and application programming interface that allows software to use certain types of graphics processing units for accelerated general-purpose processing, an approach called general-purpose computing on GPUs.

IMPORTANT: If you use the GPU queues, it is mandatory to request a minimum number of CPU cores equal to 16 cores per GPU multiplied by the number of GPUs and the number of nodes requested (16 cores/GPU * num. GPUs * num. nodes).

Usage

Example script : test-cuda.sh

#!/bin/bash

#SBATCH -J cuda-test # job name
#SBATCH -o cuda-test.o%j # output and error file name (%j expands to jobID)
#SBATCH -p ada # queue L40S (partition)
#SBATCH -N 2 # total number of nodes
#SBATCH --gres=gpu:1 # gpus per node
#SBATCH --cpus-per-task=32 # 16 cores por GPU * 1 GPU * 2 nodos

module load cuda/12.0

echo $CUDA12_HOME
echo $CUDA12_BIN
echo $CUDA12_LIB64
echo $CUDA12_INCLUDE

$CUDA12_BIN/nvcc --version

echo "done!!"

The informative report will be shown in the job output file.

Submit with :

sbatch --account=your_project_ID test-cuda.sh
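
As a sketch of actually compiling and running something with the module (hello.cu is a hypothetical CUDA source file in the submission directory), the nvcc --version call in the script above could be replaced with:

# compile with the CUDA 12 toolchain and run on the allocated GPU node
$CUDA12_BIN/nvcc hello.cu -o hello
./hello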

More info :

MUMAX3-cQED 1.0

Mumax3-cQED: like Mumax3 but for a magnet coupled to a cavity. This is a fork of the open source micromagnetic simulation software mumax3. Mumax3-cQED enhances mumax3 by including the effect of coupling the magnet to an electromagnetic cavity.

IMPORTANT: If you use the GPU queues, it is mandatory to request a minimum number of CPU cores equal to 16 cores per GPU multiplied by the number of GPUs and the number of nodes requested (16 cores/GPU * num. GPUs * num. nodes).

Usage

Example script : test-mumax.sh

#!/bin/bash

#SBATCH -o mumax3-cqed-infos%j.o
#SBATCH -p hopper
#SBATCH -N 1
#SBATCH --gres=gpu:1 # launch in 1-GPUs
#SBATCH --cpus-per-task=16 # 16 cores por GPU * 1 GPU * 1 nodo

module load mumax3-cqed/1.0

echo "GCC version: $(gcc --version)"

echo "GO version: $(go version)"

mumax3 test-script.mx3

echo "done!!"

The informative report will be shown in the output file provided by the job.

Submit with :

sbatch --account=your_project_ID test-mumax.sh
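
The test-script.mx3 file used above is a mumax3 input script. A minimal hypothetical example using standard mumax3 syntax (cQED-specific settings are not shown here) could be:

Example file : test-script.mx3

SetGridSize(64, 64, 1)
SetCellSize(4e-9, 4e-9, 4e-9)

Msat  = 800e3    // saturation magnetisation (A/m)
Aex   = 13e-12   // exchange stiffness (J/m)
alpha = 0.02     // damping

m = Uniform(1, 0, 0)   // initial magnetisation along x
relax()                // relax towards the energy minimum
run(1e-9)              // run 1 ns of dynamics
save(m)                // save the final magnetisation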

More info :

NCCL Test

NCCL provides optimized primitives for inter-GPU communication. NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.

These tests check both the performance and the correctness of NCCL operations.

Usage

Example script : test-nccl.sh

#!/bin/bash

#SBATCH -J nccl-tst # job name
#SBATCH -o nccl-test.o%j # output and error file name (%j expands to jobID)
#SBATCH -p ada # queue L40S (partition)
#SBATCH -N 2 # total number of nodes

module load nccl-test/1.0

# Run 2 MPI processes in 2 GPUs in 2 Nodes
for i in $NCCLBUILD/*_perf; do
        FILENAME=$(basename $i)
        echo ""
        echo "Running test $FILENAME..."
        echo ""
        mpirun -np 2 $NCCLBUILD/./$FILENAME -b 8 -e 8G -f 2 -g 2
done

echo "done!!"

The informative report will be shown in the job output file.

Submit with :

sbatch --account=your_project_ID test-nccl.sh

More info :

XTB

Semiempirical Extended Tight-Binding Program Package.

Usage

Example script : testXtb.sh

#!/bin/bash

#SBATCH -e xtb_test%j.err
#SBATCH -o xtb_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load xtb/6.7.0

xtb --help
xtb --version

Submit with :

sbatch --account=your_project_ID testXtb.sh
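
The script above only prints the help and version. As a sketch of a real calculation (molecule.xyz is a hypothetical input structure), a GFN2-xTB geometry optimisation can be run as:

xtb molecule.xyz --opt --gfn 2 > xtb.out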

More info :

CREST

CREST was developed as a utility and driver program for the semiempirical quantum chemistry package xtb. The program's name originated as an abbreviation for Conformer–Rotamer Ensemble Sampling Tool, as it was developed as a program for conformational sampling at the extended tight-binding level (GFN-xTB). Since then, several functionalities have been added to the code. In its current state, the program provides a variety of sampling procedures, for example for improved thermochemistry or explicit solvation.

Usage

Example script : testCrest.sh

#!/bin/bash

#SBATCH -e crest_test%j.err
#SBATCH -o crest_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load crest/3.0.1

crest --help
crest --version

Submit with :

sbatch --account=your_project_ID testCrest.sh

More info :

CP2K

CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force fields (AMBER, CHARMM, …). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using NEB or dimer method.

Usage

Example script : testCp2k.sh

#!/bin/bash

#SBATCH -e cp2k_test%j.err
#SBATCH -o cp2k_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load cp2k/2024.1

cp2k --help
cp2k --version

Submit with :

sbatch --account=your_project_ID testCp2k.sh

More info :

Julia

The Julia Programming Language.

Usage

Example script : testJulia.sh

#!/bin/bash

#SBATCH -e julia_test%j.err
#SBATCH -o julia_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load julia/1.10.3

julia --help
julia --version

Submit with :

sbatch --account=your_project_ID testJulia.sh

More info :

Custom Python 3.11 environment for science

Custom Python 3.11.4 environment with matplotlib, scipy, numpy and math packages.

Usage

Example script : testPython.sh

#!/bin/bash

#SBATCH -e python_test%j.err
#SBATCH -o python_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load python-math/3.11.4

python --help
python --version

Submit with :

sbatch --account=your_project_ID testPython.sh

More info :

Golang 1.9

Build simple, secure, scalable systems with Go.

  • An open-source programming language supported by Google
  • Easy to learn and great for teams
  • Built-in concurrency and a robust standard library
  • Large ecosystem of partners, communities, and tools

Usage

Example script : testGolang.sh

#!/bin/bash

#SBATCH -e golang_test%j.err
#SBATCH -o golang_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load golang/1.9

go --help
go version

Submit with :

sbatch --account=your_project_ID testGolang.sh

More info :

Qibo

Qibo is an open-source middleware for quantum computing: an end-to-end open source platform for quantum simulation, self-hosted quantum hardware control, calibration and characterization. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers and users quickly deploy quantum-powered applications.

Qibo can run circuits on CPU and GPU backends.

IMPORTANT: If you use the GPU queues, it is mandatory to request a minimum number of CPU cores equal to 16 cores per GPU multiplied by the number of GPUs and the number of nodes requested (16 cores/GPU * num. GPUs * num. nodes).

Usage

Example script : testQibo.sh

#!/bin/bash

#SBATCH -e qibo_test%j.err
#SBATCH -o qibo_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load qibo/0.1.2

python qibo-test.py

Submit with :

sbatch --account=your_project_ID testQibo.sh

Example script : qibo-test.py

import numpy as np
from qibo import gates
from qibo.models import Circuit
import qibo

qibo.set_device("/CPU:0")

# Construct the circuit
c = Circuit(2)

# Add some gates
c.add(gates.H(0))
c.add(gates.H(1))

# Define an initial state (optional - default initial state is |00>)
initial_state = np.ones(4) / 2.0

# Execute the circuit and obtain the final state
result = c(initial_state) # c.execute(initial_state) also works
print(result.state())
# should print `tf.Tensor([1, 0, 0, 0])`

Qibo can also run quantum circuits on NVIDIA GPUs via the CuPy backend:

Example script : testQiboGPU.sh

#!/bin/bash

#SBATCH -e qibogpu-errors%j.err
#SBATCH -o qibogpu-infos%j.msg
#SBATCH -p hopper # H100 queue
#SBATCH -N 2
#SBATCH --gres=gpu:1 # launch in 1-GPUs
#SBATCH --cpus-per-task=32 # 16 cores por GPU * 1 GPU * 2 nodos

module load nvidia-hpc-sdk/24.5
module load qibo/0.2.12-gpu
nvcc --version
python test-qibogpu.py

Submit with :

sbatch --account=your_project_ID testQiboGPU.sh

Example script : test-qibogpu.py

# Minimize energy for the 6-qubits XXZ Heisenberg hamiltonian
# using VQE and Powell optimizer
import numpy as np
from qibo import models, gates, hamiltonians
import qibo

qibo.set_device("/GPU:0") # select GPU backend

nqubits = 6
nlayers = 1

# Create variational circuit
circuit = models.Circuit(nqubits)
for l in range(nlayers):
    circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))
    circuit.add((gates.CZ(q, q+1) for q in range(0, nqubits-1, 2)))
    circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))
    circuit.add((gates.CZ(q, q+1) for q in range(1, nqubits-2, 2)))
    circuit.add(gates.CZ(0, nqubits-1))
circuit.add((gates.RY(q, theta=0) for q in range(nqubits)))

# Create XXZ Hamiltonian
hamiltonian = hamiltonians.XXZ(nqubits=nqubits)
# Create VQE model
vqe = models.VQE(circuit, hamiltonian)

# Optimize starting from a random guess for the variational parameters
initial_parameters = np.random.uniform(0, 2*np.pi, 2*nqubits*nlayers + nqubits)
best, params, extra = vqe.minimize(initial_parameters, method='Powell', compile=False)

print("Best value:", best)
print("Optimized params:", params)
print(extra)

More info :

Qiskit

Qiskit is the world’s most popular software stack for quantum computing, with over 2,000 forks, over 8,000 contributions, and over 3 trillion circuits run.

Usage

Example script : testQiskit.sh

#!/bin/bash

#SBATCH -e qiskit_test%j.err
#SBATCH -o qiskit_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load qiskit/0.45.0

python vqe-qiskit-test.py

Submit with :

sbatch --account=your_project_ID testQiskit.sh

Example script : vqe-qiskit-test.py

# General libraries
import warnings
import numpy as np
from functools import partial
from scipy.optimize import minimize

# Qiskit libraries
import qiskit
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit import Aer
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import Estimator
from qiskit.algorithms.minimum_eigensolvers import VQE

warnings.filterwarnings("ignore")

# Minimize the energy of this 4-qubit Hamiltonian given in Pauli operators
# H = 1.0 * XXII + 0.3 * ZIII + 1.0 * IXXI + 0.3 * IZII + 1.0 * IIXX + 0.3 * IIZI + 1.0 * XIIX + 0.3 * IIIZ

def ising_chain_ham(n, gam):

    # This function returns the Hamiltonian in terms usable by Qiskit's algorithms
    # n = number of spin positions
    # gam = transverse field parameter
    from qiskit.opflow import X, Z, I

    for i in range(n):
        vecX = [I] * n
        vecZ = [I] * n
        vecX[i] = X
        vecZ[i] = Z

        if i == n - 1:
            vecX[0] = X
        else:
            vecX[i+1] = X

        auxX = vecX[0]
        auxZ = vecZ[0]

        for a in vecX[1:n]:
            auxX = auxX ^ a
        for b in vecZ[1:n]:
            auxZ = auxZ ^ b

        if i == 0:
            H = (auxX) + (gam * auxZ)
        else:
            H = H + (auxX) + (gam * auxZ)

    return H

# Hamiltonian definition
n = 4 # number of qubits
gam = 0.3

op_H = ising_chain_ham(n, gam) # Hamiltonian creation

print("Estructura del Hamiltoniano en matrices de Pauli:")
print("--------------------------------------------------\n")
print(op_H)

# ansatz creation
ansatz = TwoLocal(num_qubits=n, rotation_blocks='ry', entanglement_blocks='cx')

seed = 63
np.random.seed(seed) # seed for reproducibility
algorithm_globals.random_seed = seed

# use the L-BFGS-B optimizer through scipy's minimize function
optimizer = partial(minimize, method="L-BFGS-B")

initial_point = np.random.random(ansatz.num_parameters) # initial point

intermediate_info = {
    'nfev': [],
    'parameters': [],
    'mean': [],
}

def callback(nfev, parameters, mean, stddev):
    intermediate_info['nfev'].append(nfev)
    intermediate_info['parameters'].append(parameters)
    intermediate_info['mean'].append(mean)

backend = Aer.get_backend('aer_simulator')

qi = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)

vqe_min = VQE(estimator = Estimator(),
           ansatz=ansatz,
           optimizer=optimizer,
           initial_point=initial_point,
           callback=callback)

vqe_min.quantum_instance = qi

result = vqe_min.compute_minimum_eigenvalue(op_H)

print('\nEigenvalue:', result.eigenvalue)
print('Eigenvalue real part:', result.eigenvalue.real)

print(result, "\n")
print("E_G =", result.optimal_value)

print("Qiskit version:", qiskit.__version__)

More info :

Pennylane

The definitive open-source Python framework for quantum programming. Built by researchers, for research. Suitable for Quantum Machine Learning.

This framework can run on CPU and GPU using the built-in devices. The following example shows the Pennylane framework with the GPU device.

IMPORTANT: If you use the GPU queues, it is mandatory to request a minimum number of CPU cores equal to 16 cores per GPU multiplied by the number of GPUs and the number of nodes requested (16 cores/GPU * num. GPUs * num. nodes).

Usage

Example script : testPennylaneGPU.sh

#!/bin/bash

#SBATCH -e pennylanegpu-errors%j.err
#SBATCH -o pennylanegpu-infos%j.msg
#SBATCH -p hopper # H100 queue
#SBATCH -N 2
#SBATCH --gres=gpu:1 # launch in 1-GPUs
#SBATCH --cpus-per-task=32 # 16 cores por GPU * 1 GPU * 2 nodos

module load nvidia-hpc-sdk/24.5
module load pennylane/0.33.1
nvcc --version
python test-pennylanegpu.py

Submit with :

sbatch --account=your_project_ID testPennylaneGPU.sh

Example script : test-pennylanegpu.py

#!/usr/bin/python

# General libraries
from scipy.ndimage import gaussian_filter
import warnings

# Pennylane libraries
import pennylane as qml
from pennylane import numpy as np

warnings.filterwarnings("ignore")

# Minimize the energy of this 4-qubit Hamiltonian given in Pauli operators
# H = 1.0 * XXII + 0.3 * ZIII + 1.0 * IXXI + 0.3 * IZII + 1.0 * IIXX + 0.3 * IIZI + 1.0 * XIIX + 0.3 * IIIZ

def ising_chain_ham(n, gam, pennylane = False):

    # This function builds the Hamiltonian with Pauli operators
    # n = number of spin positions
    # gam = transverse field parameter
    if pennylane:
        import pennylane as qml
        from pennylane import numpy as np

    from qiskit.opflow import X, Z, I

    for i in range(n):
        vecX = [I] * n
        vecZ = [I] * n
        vecX[i] = X
        vecZ[i] = Z

        if i == n - 1:
            vecX[0] = X
        else:
            vecX[i+1] = X

        auxX = vecX[0]
        auxZ = vecZ[0]

        for a in vecX[1:n]:
            auxX = auxX ^ a
        for b in vecZ[1:n]:
            auxZ = auxZ ^ b

        if i == 0:
            H = (auxX) + (gam * auxZ)
        else:
            H = H + (auxX) + (gam * auxZ)

    if pennylane:
        h_matrix = np.matrix(H.to_matrix_op().primitive.data)
        return qml.pauli.pauli_decompose(h_matrix)

    return H


# Hamiltonian definition
n = 4 # number of qubits
gam = 0.3

H = ising_chain_ham(n, gam, pennylane = True) # create the Hamiltonian

print("Hamiltoniano in Pauli operators:")
print("--------------------------------------------------\n")
print(H)

# choose device GPU or CPU
dev = qml.device("lightning.gpu", wires = n) # pennylane GPU simulator device
#dev = qml.device("lightning.qubit", wires = n) # pennylane CPU simulator device

init_param = (
    np.array(np.random.random(n), requires_grad=True),
    np.array(1.1, requires_grad=True),
    np.array(np.random.random(n), requires_grad=True),
)

rot_weights = np.ones(n)
crot_weights = np.ones(n)

nums_frequency = {
    "rot_param": {(0,): 1, (1,): 1, (2,): 1., (3,): 1.}, # parámetros iniciales para las rotaciones de puertas
    "layer_par": {(): n},
    "crot_param": {(0,): 2, (1,): 2, (2,): 2, (3,): 2},
}

@qml.qnode(dev)
def ansatz(rot_param, layer_par, crot_param, rot_weights = None, crot_weights = None):

    # Ansatz
    for i, par in enumerate(rot_param * rot_weights):
        qml.RY(par, wires = i)

    for _ in list(range(len(dev.wires))):

        qml.CNOT(wires = [0, 1])
        qml.CNOT(wires = [0, 2])
        qml.CNOT(wires = [1, 2])
        qml.CNOT(wires = [0, 3])
        qml.CNOT(wires = [1, 3])
        qml.CNOT(wires = [2, 3])

    # Measure of expected value for hamiltonian
    return qml.expval(H)

max_iterations = 500 # max. iterations for optimizer

# We use the Rotosolve optimizer built-in Pennylane
opt = qml.RotosolveOptimizer(substep_optimizer = "brute", substep_kwargs = {"num_steps": 4})

param = init_param

rot_weights = np.array([0.4, 0.8, 1.0, 1.2], requires_grad=False)
crot_weights = np.array([0.5, 1.0, 1.5, 1.8], requires_grad=False)

cost_rotosolve = []

for n in range(max_iterations):

    param, cost, prev_energy = opt.step_and_cost(
         ansatz,
         *param,
         nums_frequency=nums_frequency,
         spectra = [],
         full_output=True,
         rot_weights=rot_weights,
         crot_weights=crot_weights,
    )

    # Compute energy
    energy = ansatz(*param, rot_weights=rot_weights, crot_weights=crot_weights)

    # Calculate difference between new and old energies
    conv = np.abs(energy - prev_energy)

    if n % 10 == 0:
        print("Iteration = {:},  Energy = {:.15f} Ha,  Convergence parameter = {:.15f} Ha".format(n, energy, np.mean(conv)))

    cost_rotosolve.extend(prev_energy)

print("\n==================================")
print("Number of iterations = ", max_iterations)
print("Last energy value = ", cost_rotosolve[len(cost_rotosolve) - 1])

qml.about()

More info :

R

R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.

Usage

Example script : testR.sh

#!/bin/bash

#SBATCH -e R_test%j.err
#SBATCH -o R_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load R/4.2.1

R --help
R --version

Submit with :

sbatch --account=your_project_ID testR.sh
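
The script above only prints the help and version. To run an actual analysis in batch mode (analysis.R is a hypothetical R script), use Rscript:

Rscript analysis.R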

More info :

NBO/7.0

The Natural Bond Orbital (NBO) program NBO 7.0 is a discovery tool for chemical insights from complex wavefunctions. NBO 7.0 is the current version of the broad suite of 'natural' algorithms for optimally expressing numerical solutions of Schrödinger's wave equation in the chemically intuitive language of Lewis-like bonding patterns and associated resonance-type 'donor-acceptor' interactions.

Usage

Example script : testNBO7.sh

#!/bin/bash

#SBATCH -e nbo7_test%j.err
#SBATCH -o nbo7_test%j.msg
#SBATCH -p thin
#SBATCH -N 4
#SBATCH --ntasks-per-node=24

module load nbo/7

# launch with gaunbo7

Submit with :

sbatch --account=your_project_ID testNBO7.sh

More info :

DIA-NN

DIA-NN - a universal software suite for data-independent acquisition (DIA) proteomics data processing. Conceived at the University of Cambridge, UK, in the laboratory of Kathryn Lilley (Cambridge Centre for Proteomics), DIA-NN opened a new chapter in proteomics, introducing a number of algorithms which enabled reliable, robust and quantitatively accurate large-scale experiments using high-throughput methods. DIA-NN is currently being further developed in the laboratory of Vadim Demichev at the Charité (University Medicine Berlin, Germany).

Installed version: DIA-NN v1.8.1

Usage

Example script : testDIANN.sh

#!/bin/bash

#SBATCH -e diann_test%j.err
#SBATCH -o diann_test%j.msg
#SBATCH -p thin
#SBATCH -N 1
#SBATCH --ntasks-per-node=1

module load diann/1.8.1

$DIANN_HOME/diann-1.8.1 -h

Submit with :

sbatch --account=your_project_ID testDIANN.sh

More info :

Miniconda3

Miniconda is a free, miniature installation of Anaconda Distribution that includes only conda, Python, the packages they both depend on, and a small number of other useful packages.

Miniconda is useful for creating your own conda environment under the /fs/agustina/USER folder. To do this, you can import your own YAML file and create the environment with the following commands:

Usage

module load miniconda3/3.12
mkdir -p /fs/agustina/$(whoami)/conda-env
$MINICONDA3_HOME/bin/conda-env create --prefix /fs/agustina/$(whoami)/conda-env/my-conda-env --file /fs/agustina/$(whoami)/conda-env/my-environment-file.yml
source $MINICONDA3_HOME/bin/activate /fs/agustina/$(whoami)/conda-env/my-conda-env

Once you have the environment ready and activated, you can install more packages with pip inside the environment; then you can create a bash script to use the environment on the Agustina cluster:

Example script : test-miniconda.sh

#!/bin/bash

#SBATCH -o test-miniconda-infos%j.o
#SBATCH -p thin
#SBATCH -N 1

module load miniconda3/3.12

# activate environment
source $MINICONDA3_HOME/bin/activate /fs/agustina/$(whoami)/conda-env/my-conda-env

python --version

echo "done!!"

Submit with :

sbatch --account=your_project_ID test-miniconda.sh

Base file : my-environment-file.yml

name:
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - blas=1.0=mkl
  - bzip2=1.0.8=h5eee18b_6
  - ca-certificates=2024.9.24=h06a4308_0
  - expat=2.6.3=h6a678d5_0
  - intel-openmp=2023.1.0=hdb19cb5_46306
  - ld_impl_linux-64=2.40=h12ee557_0
  - libffi=3.4.4=h6a678d5_1
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libmpdec=4.0.0=h5eee18b_0
  - libstdcxx-ng=11.2.0=h1234567_1
  - libuuid=1.41.5=h5eee18b_0
  - mkl=2023.1.0=h213fc3f_46344
  - mkl-service=2.4.0=py313h5eee18b_1
  - mkl_fft=1.3.11=py313h5eee18b_0
  - mkl_random=1.2.8=py313h06d7b56_0
  - ncurses=6.4=h6a678d5_0
  - numpy=2.1.3=py313hf4aebb8_0
  - numpy-base=2.1.3=py313h3fc9231_0
  - openssl=3.0.15=h5eee18b_0
  - pip=24.2=py313h06a4308_0
  - python=3.13.0=hf623796_100_cp313
  - python_abi=3.13=0_cp313
  - readline=8.2=h5eee18b_0
  - setuptools=72.1.0=py313h06a4308_0
  - sqlite=3.45.3=h5eee18b_0
  - tbb=2021.8.0=hdb19cb5_0
  - tk=8.6.14=h39e8969_0
  - tzdata=2024b=h04d1e81_0
  - xz=5.4.6=h5eee18b_1
  - zlib=1.2.13=h5eee18b_1
  - pip:
      - alabaster==1.0.0
      - babel==2.16.0
      - certifi==2024.8.30
      - charset-normalizer==3.4.0
      - contourpy==1.3.1
      - cycler==0.12.1
      - docutils==0.21.2
      - fonttools==4.55.0
      - idna==3.10
      - imagesize==1.4.1
      - jinja2==3.1.4
      - kiwisolver==1.4.7
      - markupsafe==3.0.2
      - matplotlib==3.9.2
      - packaging==24.2
      - pillow==11.0.0
      - pygments==2.18.0
      - pyparsing==3.2.0
      - python-dateutil==2.9.0.post0
      - requests==2.32.3
      - scipy==1.14.1
      - six==1.16.0
      - snowballstemmer==2.2.0
      - sphinx==8.1.3
      - sphinxcontrib-applehelp==2.0.0
      - sphinxcontrib-devhelp==2.0.0
      - sphinxcontrib-htmlhelp==2.1.0
      - sphinxcontrib-jsmath==1.0.1
      - sphinxcontrib-qthelp==2.0.0
      - sphinxcontrib-serializinghtml==2.0.0
      - urllib3==2.2.3
      - wheel==0.38.1

More info :

VASP

VASP. The Vienna Ab initio Simulation Package: atomic scale materials modelling from first principles. Installed version: VASP 6.4.0.

Usage

Example script : testVasp.sh

#!/bin/bash

#SBATCH -J test-vasp
#SBATCH -o test-vasp_output_%j.out
#SBATCH -e test-vasp_error_%j.err
#SBATCH -p thin
#SBATCH --nodes=1
#SBATCH -n 16
#SBATCH --mem-per-cpu=4G

module load vasp/6.4.0 &>/dev/null
module load mkl/2023.2.0 &>/dev/null
module load openmpi/4.1.5 &>/dev/null

export OMP_NUM_THREADS=1

echo "Starting run at: `date`"

vasp_std > vasp.out

echo "Job finished at: `date`"

Submit with :

sbatch --account=your_project_ID testVasp.sh

More info :

Ollama

Get up and running with large language models. Ollama is a client for running AI models. It is also possible to use DeepSeek models.

Usage

This example uses some Ollama models and a DeepSeek model. The script creates two prompts in Spanish, downloads the models, runs the prompts against each model, and saves the answers to separate files.

Available ollama versions:

  • ollama 0.3.14
  • ollama 0.5.7

Example script : test-ollama-hpc.sh

#!/bin/bash

#SBATCH -J ollama-gpu-test
#SBATCH -e ollama-test%j.err
#SBATCH -o ollama-test%j.msg
#SBATCH -p hopper # queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:4

# load ollama
#module load ollama/0.3.14
module load ollama/0.5.7

python --version

nvidia-smi
echo "Current path: $(pwd)"

# export variables
export BASE_OLLAMA_TEST=/fs/agustina/$(whoami)/test-ollama
export PROMPTS_PATH=$BASE_OLLAMA_TEST/prompts
export OLLAMA_BIN=$OLLAMA_ROOT/bin

echo "OLLAMA PATH: $OLLAMA_BIN"

export OLLAMA_NUMPARALLEL=4
export OLLAMA_LOAD_TIMEOUT=900
export OLLAMA_MODELS=$BASE_OLLAMA_TEST/models-ollama

# create folders for models and prompts
mkdir -p $PROMPTS_PATH
mkdir -p $OLLAMA_MODELS

# prompts creation
echo "Dime a que instituto de UNIZAR corresponden las siglas BiFi. Describe su investigacion reciente." > $PROMPTS_PATH/prompt1.txt
echo "cual es el camino mas interesante de sevilla a barcelona pasando por madrid?" > $PROMPTS_PATH/prompt2.txt

# start ollama
$OLLAMA_BIN/./ollama serve &
$OLLAMA_BIN/./ollama list

# 1 - Download models (also DeepSeek)
# 2 - Pass the prompts to the models
# 3 - Get the answers and save them to a file
for i in llama3.1:8b-instruct-q2_K \
         llama3.1:8b-instruct-q8_0 \
         deepseek-r1:7b; do
        touch answer1-$i.txt && >answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "PROMPT:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        cat $PROMPTS_PATH/prompt1.txt >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "ANSWER:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo $(< $PROMPTS_PATH/prompt1.txt) | $OLLAMA_BIN/./ollama run $i >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "--------------------------------" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "PROMPT:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        cat $PROMPTS_PATH/prompt2.txt >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo "ANSWER:" >> answer1-$i.txt
        echo "" >> answer1-$i.txt
        echo $(< $PROMPTS_PATH/prompt2.txt) | $OLLAMA_BIN/./ollama run $i >> answer1-$i.txt
done

$OLLAMA_BIN/./ollama list

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-ollama-hpc.sh

It is also possible to use other DeepSeek models, such as deepseek-coder-v2:16b, which generates programming code. As an example, we can request a Brainfuck interpreter written in Python:

Example script : test-deepseek-hpc.sh

#!/bin/bash

#SBATCH -J deepseek-gpu-test
#SBATCH -e deepseek-test%j.err
#SBATCH -o deepseek-test%j.msg
#SBATCH -p hopper # queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:1

# load ollama
module load ollama/0.5.7

export BASE_OLLAMA_TEST=/fs/agustina/$(whoami)/deepseek-test
export ANSWERS_PATH=$BASE_OLLAMA_TEST/answers
export OLLAMA_BIN=$OLLAMA_ROOT/bin

echo "OLLAMA PATH: $OLLAMA_BIN"

export OLLAMA_NUMPARALLEL=1
export OLLAMA_LOAD_TIMEOUT=900
export OLLAMA_MODELS=$BASE_OLLAMA_TEST/models

mkdir -p $OLLAMA_MODELS
mkdir -p $ANSWERS_PATH

$OLLAMA_BIN/./ollama serve &

PROMPT="create a brainfuck interpreter made in python"

touch $ANSWERS_PATH/answer.txt
truncate -s 0 $ANSWERS_PATH/answer.txt

echo $PROMPT | $OLLAMA_BIN/./ollama run deepseek-coder-v2:16b >> $ANSWERS_PATH/answer.txt

Submit with :

sbatch --account=your_project_ID test-deepseek-hpc.sh

deepseek-coder-v2:16b response:

Creating a Brainfuck interpreter in Python is an interesting exercise. Below, I'll provide a simple implementation of a Brainfuck interpreter
that can execute Brainfuck commands. This interpreter will handle the basic set of commands defined by Brainfuck, including moving pointers,
modifying values at the pointer location, and looping based on the value at the current cell.

Here's a basic implementation:
def brain_fuck(code):
   tape = [0] * 30000  # Initialize a tape of 30,000 cells with all values set to 0
   pointer = 0  # The current cell of the tape being pointed by the interpreter
   code_pointer = 0  # Position in the Brainfuck code string
   brackets_stack = []  # Stack for keeping track of bracket positions

   while code_pointer < len(code):
       command = code[code_pointer]

       if command == '>':
           pointer += 1
           if pointer >= len(tape):
               raise IndexError("Tape pointer out of bounds.")

       elif command == '<':
           pointer -= 1
           if pointer < 0:
               raise IndexError("Tape pointer out of bounds.")

       elif command == '+':
           tape[pointer] += 1
           if tape[pointer] > 255:
               tape[pointer] = 0

       elif command == '-':
           tape[pointer] -= 1
           if tape[pointer] < 0:
               tape[pointer] = 255

       elif command == '.':
           print(chr(tape[pointer]), end='')

       elif command == ',':
           tape[pointer] = ord(input()[0]) if input() else 0

       elif command == '[':
           if tape[pointer] == 0:
               bracket_nesting = 1
               while bracket_nesting > 0:
                   code_pointer += 1
                   if code[code_pointer] == '[':
                       bracket_nesting += 1
                   elif code[code_pointer] == ']':
                       bracket_nesting -= 1
           else:
               brackets_stack.append(code_pointer)

       elif command == ']':
           if tape[pointer] != 0:
               code_pointer = brackets_stack[-1]
           else:
               brackets_stack.pop()

       code_pointer += 1

# Example usage:
brain_fuck("++++++++++[>+>+++>+++++++>++++++++++<<<<-]>>>++.>+.+++++++..+++.<<++.>+++++++++++++++.>.+++.------.--------.")
This implementation covers the basic commands of Brainfuck and handles input/output as specified in the standard Brainfuck language.
It uses a simple stack to handle loops, jumping over the code when the condition is not met. The tape size is fixed at 30,000 cells,
which can be adjusted based on requirements.

Keep in mind that this implementation does not include error handling for malformed Brainfuck code or
runtime errors (like accessing out of bounds memory). You might want to add checks and exceptions to make the interpreter more
robust and user-friendly.

More info :

Hisat 2.2.1

Hisat2 is a fast and sensitive alignment program for mapping next-generation sequencing reads (both DNA and RNA) to a population of human genomes as well as to a single reference genome. Based on an extension of BWT for graphs (Sirén et al. 2014), we designed and implemented a graph FM index (GFM), an original approach and its first implementation. In addition to using one global GFM index that represents a population of human genomes, HISAT2 uses a large set of small GFM indexes that collectively cover the whole genome. These small indexes (called local indexes), combined with several alignment strategies, enable rapid and accurate alignment of sequencing reads. This new indexing scheme is called a Hierarchical Graph FM index (HGFM).

Usage

Example script : test-hisat.sh

#!/bin/bash

#SBATCH -J hisat-test
#SBATCH -e hisat-test%j.err
#SBATCH -o hisat-test%j.msg
#SBATCH -p thin # queue (partition)

# load hisat
module load hisat/2.2.1

# list hisat root folder
ls -al $HISAT_ROOT

hisat2 --help

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-hisat.sh
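
Beyond hisat2 --help, a typical alignment run first builds an index and then maps paired-end reads against it. A minimal sketch; the reference and read file names below are hypothetical placeholders, adjust them to your own data on scratch:

# build an index from a reference FASTA (produces genome_index.*.ht2 files)
hisat2-build reference.fa genome_index

# align paired-end reads with 8 threads and write SAM output
hisat2 -p 8 -x genome_index -1 sample_R1.fastq -2 sample_R2.fastq -S sample.sam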

More info :

Bedtools 2.31.0

Collectively, the bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks. The most widely-used tools enable genome arithmetic: that is, set theory on the genome. For example, bedtools allows one to intersect, merge, count, complement, and shuffle genomic intervals from multiple files in widely-used genomic file formats such as BAM, BED, GFF/GTF, VCF. While each individual tool is designed to do a relatively simple task (e.g., intersect two interval files), quite sophisticated analyses can be conducted by combining multiple bedtools operations on the UNIX command line.

Bedtools is developed in the Quinlan laboratory at the University of Utah and benefits from fantastic contributions made by scientists worldwide.

Usage

Example script : test-bedtools.sh

#!/bin/bash

#SBATCH -J bedtools-test
#SBATCH -e bedtools-test%j.err
#SBATCH -o bedtools-test%j.msg
#SBATCH -p thin # queue (partition)

# load bedtools
module load bedtools/2.31.0

# list bedtools root folder
ls -al $BEDTOOLS_ROOT

bedtools --help

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-bedtools.sh
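
As an illustration of real use, the most common bedtools operation is intersecting two interval files; a small sketch (the BED file names are hypothetical):

# report the intervals of A that overlap intervals of B
bedtools intersect -a regions_a.bed -b regions_b.bed > overlaps.bed

# merge overlapping intervals of a position-sorted BED file
bedtools merge -i sorted_regions.bed > merged.bed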

More info :

Samtools 1.21

Samtools is a suite of programs for interacting with high-throughput sequencing data. It is used for reading, writing, editing, indexing and viewing files in the SAM/BAM/CRAM formats.

Usage

Example script : test-samtools.sh

#!/bin/bash

#SBATCH -J samtools-test
#SBATCH -e samtools-test%j.err
#SBATCH -o samtools-test%j.msg
#SBATCH -p thin # queue (partition)

# load samtools
module load samtools/1.21

# list samtools root folder
ls -al $SAMTOOLS_ROOT

# list samtools binaries
ls -al $SAMTOOLS_ROOT/compiled/bin/

samtools --help

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-samtools.sh
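
A typical post-alignment workflow converts, sorts and indexes a BAM file. A minimal sketch (file names are hypothetical placeholders):

# convert SAM to BAM using 4 threads
samtools view -@ 4 -b -o sample.bam sample.sam

# sort and index the BAM file
samtools sort -@ 4 -o sample.sorted.bam sample.bam
samtools index sample.sorted.bam

# quick mapping statistics
samtools flagstat sample.sorted.bam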

More info :

llama.cpp

llama.cpp is an open source software library that performs inference on various large language models such as Llama. It is co-developed alongside the GGML project, a general-purpose tensor library. It performs inference of Meta's LLaMA model (and others) in pure C/C++, and command-line tools are included with the library.

Usage

llama.cpp has been compiled with GPU support and is available on the GPU queues ada and hopper.

Available llama.cpp versions:

  • llama.cpp v1 (b4234)
  • llama.cpp v1 (b4706)

Example script : test-llamacpp-hpc.sh

#!/bin/bash

#SBATCH -J llamacpp-gpu-test
#SBATCH -e llamacpp-test%j.err
#SBATCH -o llamacpp-test%j.msg
#SBATCH -p ada # queue (partition)
#SBATCH --nodelist=agpul08
#SBATCH --gres=gpu:1

#module load llama.cpp/b4234
module load llama.cpp/b4706

BASE_PATH=/fs/agustina/$(whoami)/llamacpp
MODELS_PATH=$BASE_PATH/models

mkdir -p $MODELS_PATH

if [ -f "$MODELS_PATH/tiny-vicuna-1b.q5_k_m.gguf" ]; then
  echo "Model tiny-vicuna-1b.q5_k_m.gguf exists"
else
  echo "Model tiny-vicuna-1b.q5_k_m.gguf does not exist, downloading now..."

  # model download
  wget https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF/resolve/main/tiny-vicuna-1b.q5_k_m.gguf -P $MODELS_PATH
fi

mkdir -p $BASE_PATH/answers

PROMPT='I think the meaning of life is'

# use llama.cpp client
$LLAMACPP_BIN/./llama-cli --help
$LLAMACPP_BIN/./llama-cli --version
$LLAMACPP_BIN/./llama-cli --list-devices
$LLAMACPP_BIN/./llama-cli -m $MODELS_PATH/tiny-vicuna-1b.q5_k_m.gguf \
                          -p "$PROMPT" \
                          --n-predict 128 \
                          --n-gpu-layers -1 \
                          -dev CUDA0 > $BASE_PATH/answers/llama-cli-answer.txt

cat $BASE_PATH/answers/llama-cli-answer.txt

# list llama.cpp directory to find out more tools
ls -al $LLAMACPP_BIN

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-llamacpp-hpc.sh
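
Besides llama-cli, llama.cpp builds normally also ship an HTTP server binary (llama-server); listing $LLAMACPP_BIN inside the job shows whether it is present in the loaded module. A hedged sketch of serving the same model on the allocated GPU node and querying it locally (port, paths and timings are arbitrary examples):

# start the server in the background on the compute node
$LLAMACPP_BIN/llama-server -m $MODELS_PATH/tiny-vicuna-1b.q5_k_m.gguf \
                           --host 127.0.0.1 --port 8080 \
                           --n-gpu-layers -1 &
sleep 30  # give the server time to load the model

# send a completion request from the same node
curl -s http://127.0.0.1:8080/completion \
     -d '{"prompt": "I think the meaning of life is", "n_predict": 64}'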

More info :

PHI

PHI is a computer program designed for the calculation of the magnetic properties of paramagnetic coordination complexes. PHI is written in FORTRAN 90/95 and has been tested on Windows, MacOS and Linux. The source code, pre-compiled binaries, the user manual and tutorial material are available for download from the developers' website. Please refer to the User Manual and Installation Guide for further information.

Usage

Example script : test-phi-hpc.sh

#!/bin/bash

#SBATCH -J phi-test
#SBATCH -e phi-test%j.err
#SBATCH -o phi-test%j.msg
#SBATCH -p thin # queue (partition)
#SBATCH --nodelist=athin16

module load phi/3.1.6

$PHI_ROOT/phi_v3.1.6.x --help

echo "DONE!"

Submit with :

sbatch --account=your_project_ID test-phi-hpc.sh

More info :

QuantumEspresso

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

Usage

Example script : test-quantum-espresso.sh

#!/bin/bash

#SBATCH --job-name=qe_test
#SBATCH -o qe_out%j.out
#SBATCH -e qe_err%j.err
#SBATCH -N 4
#SBATCH --ntasks-per-node=4
#SBATCH -p thin

export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4

# loads OpenMPI or MPICH and Quantum Espresso modules
# load one of them
#module load mpich/3.4.3-ucx
module load openmpi4/4.1.4

# load QuantumEspresso module
module load quantum-espresso/7.3.1

# run the application
#mpirun $QE_BIN/./pw.x -i nscf.in > nscf.out
mpirun pw.x -i nscf.in > nscf.out

Submit with :

sbatch --account=your_project_ID test-quantum-espresso.sh

More info :

Basic procedures on Cierzo

Information about management procedures on Cierzo

Logging in to the system

ssh idUsuario@cierzo.bifi.unizar.es

Jobs

Submit a job:

Launching a normal job (using the xhpl binary as an example). Example script:

#!/bin/bash
#
#SBATCH -J linpack # job name
#SBATCH -o linpack.o%j # output and error file name (%j expands to jobID)
#SBATCH -N 12 # total number of nodes
#SBATCH --ntasks-per-node=24 # number of cores per node (maximum 24)
#SBATCH --exclusive
#SBATCH -p bifi # queue (partition)
#SBATCH --distribution=block # fill up the nodes (fillup mode)
module add shared
module load openmpi/intel/1.10.2
module load intel/compiler/64/15.0.6/2015.6.233
ulimit -s unlimited
mpirun -np $SLURM_NPROCS programa

Run the script:

sbatch script.sh

Another example script including notification options and number of cores:

#!/bin/bash
#
#SBATCH -N 3
#SBATCH -c 6 # number of cores
#SBATCH --mem 100 # memory pool for all cores
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -e slurm.%N.%j.err # STDERR
#SBATCH --mail-type=begin # send an email when the job starts
#SBATCH --mail-type=end # send an email when the job finishes
#SBATCH --mail-user=john.diaz@bifi.es # address the notifications are sent to
for i in {1..300000}
do
	echo $RANDOM >> SomeRandomNumbers.txt
done
sort SomeRandomNumbers.txt > SortedRandomNumbers.txt

Submitting to GPUs:

Launching jobs with all the GPUs. Example script. Important: add the command "module load cuda75".

#!/bin/bash
# Example to launch the Linpack test with the GPUs of nodes 81-84
#SBATCH -J linpackiGPU # job name
#SBATCH -o linpack.o%j # output and error file name (%j expands to jobID)
#SBATCH -N 4 # total number of nodes
#SBATCH --ntasks-per-node=2 # number of cores per node (maximum 24)
#SBATCH --exclusive
#SBATCH -p gpu # queue (partition)
#SBATCH --gres=gpu:2
#SBATCH -t 24:00:00 # run time (hh:mm:ss)
#SBATCH --distribution=block # fill up the nodes (fillup mode)

export LD_LIBRARY_PATH=../hpl-2.0_FERMI_v15/src/cuda/:$LD_LIBRARY_PATH
module add shared
module load cuda75
export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1

# Linpack-specific
export CUDA_DGEMM_SPLIT=.999
export CUDA_DTRSM_SPLIT=.999
export MKL_CBWR=AVX2
export I_MPI_PIN_DOMAIN=socket

mpirun -np 8 ./xhpl >& xhpl.log

Run the script:

sbatch script.sh

Submitting to Xeon Phi:

Example script:

#!/bin/bash
#SBATCH -n 24 # total number of cores requested
#SBATCH --ntasks-per-node=24 # number of cores per node (maximum 24)
#SBATCH --exclusive
#SBATCH -p phi # queue (partition)
#SBATCH -t 12:00:00 # run time (hh:mm:ss)
module add shared
module load intel-cluster-runtime/intel64/3.7
ulimit -s unlimited
xhpl_offload_intel64

Run the script:

sbatch script.sh

Cancel a job:

scancel job_id

Information about the queues:

sinfo

View queued jobs:

squeue

Detailed information about the queues:

scontrol show partition

View detailed information about a job:

scontrol show job identificadorJob

Estimated start time of a queued job:

squeue --start

Job accounting information:

sacct
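
In day-to-day use these commands are usually narrowed down to your own jobs. A short sketch (the job ID is a placeholder):

# only my jobs in the queue
squeue -u $USER

# accounting summary for a single job
sacct -j 123456 --format=JobID,JobName,Partition,State,Elapsed,MaxRSS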

Job state codes:

https://confluence.cscs.ch/display/KB/Meaning+of+Slurm+job+state+codes

Useful links:

User guide: https://slurm.schedmd.com/quickstart.html

Modules

The Modules package is a tool that simplifies shell initialization and lets users easily modify their environment during the session using modulefiles.

Each modulefile contains the information needed to configure the shell for an application. Once the Modules package has been initialized, the environment can be modified with the module command, which interprets those modulefiles.

List available modules:

module avail

List loaded modules:

module list

Load a module:

module load nombreModulo

Unload a module:

module unload nombreModulo

GAUSSIAN

Submitting a Gaussian job:

Starting from the file:

TSHX2_boro_NBOs.gjf

Example script:

#!/bin/bash

#SBATCH -e /home/jdiazlag/testGaussian/TSHX2_boro_NBOs%j.err
#SBATCH -o /home/jdiazlag/testGaussian/TSHX2_boro_NBOs%j.msg
#SBATCH -p bifi
#SBATCH -N 2
#SBATCH --ntasks-per-node=24

module load gaussian/g09_D01

g09 < /home/jdiazlag/testGaussian/TSHX2_boro_NBOs.gjf  

Run the script:

sbatch script.sh

GROMACS

Submitting a GROMACS job:

Starting from the files:

folded-equil.ndx  folded-prod.gro  folded-test-protein.sh  folded.top  test-md-vv_prod_nvt.mdp

Example script:

#!/bin/sh
#SBATCH -J bar-nvt-vv-fol
#SBATCH -o bar-nvt-vv-fol-%j.out
#SBATCH -p bifi                    # queue (partition)
#SBATCH -N 4                       # total number of nodes
#SBATCH --ntasks-per-node=24       # number of cores per node (maximum 24)


#### Change according to the modules needed to run the GROMACS version you want to test
module add shared
module load openmpi/intel/1.10.2
module load intel/compiler/64/15.0.6/2015.6.233
module load gromacs/2018.4
ulimit -s unlimited
###################


COMMAND="mpirun -np $SLURM_NPROCS gmx_mpi mdrun"
NAME=folded

################## MAIN BODY #######################

echo -n "The simulation is running in: " > RUNNING_INFORMATION.info
echo $HOSTNAME >> RUNNING_INFORMATION.info
echo "Initial time: " >> RUNNING_INFORMATION.info
date >> RUNNING_INFORMATION.info

## Production step
gmx_mpi grompp -v -f test-md-vv_prod_nvt.mdp -po mdp_out.mdp -c ${NAME}-prod.gro -r ${NAME}-prod.gro -p ${NAME}.top -n ${NAME}-equil.ndx -o ${NAME}-nvt-vv.tpr -maxwarn 10

$COMMAND -s ${NAME}-nvt-vv.tpr -x ${NAME}-nvt-vv.xtc -c ${NAME}-nvt-vv.gro -o ${NAME}-nvt-vv.trr -e ${NAME}-nvt-vv.edr -nice 19 -v

gmx_mpi trjconv -f ${NAME}-nvt-vv.xtc -s ${NAME}-nvt-vv.tpr -o ${NAME}-nvt-vv_proc.xtc -pbc mol -ur compact -center <<EOF
1
1
EOF

gmx_mpi trjconv -f ${NAME}-nvt-vv_proc.xtc -s ${NAME}-nvt-vv.tpr -o ${NAME}-rot+trans_prot.xtc -fit rot+trans <<EOF
4
1
EOF

gmx_mpi covar -f ${NAME}-rot+trans_prot.xtc -s ${NAME}-nvt-vv.tpr -o eigenval.xvg -v eigenvec.trr -ascii covar.dat -xpm covar.xpm -b 0 -e 2000  <<EOF
4
1
EOF

echo "Final time: " >> RUNNING_INFORMATION.info
date >> RUNNING_INFORMATION.info

exit

Run the script:

sbatch script.sh

QUANTUM ESPRESSO

Submitting a Quantum Espresso job:

Starting from the file:

file.in

Example script:

#!/bin/bash

#SBATCH --job-name=qe_test
#SBATCH -o qe_out%j.out
#SBATCH -e qe_err%j.err
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -p bifi

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export MKL_NUM_THREADS=$SLURM_CPUS_PER_TASK

echo -e '\n submitted Quantum Espresso job'
echo 'hostname'
hostname

# load the Intel MKL and MPICH modules
module load intel/mkl/64/17.0.1/2017
module load mpich/intel/3.2

# run Quantum Espresso using mpirun
# results are written to the Slurm output file
mpirun /cm/shared/apps/qexpresso/6.2.1/bin/pw.x -i file.in

Run the script:

sbatch script.sh

Modules

Most software packages are made available via Linux Environment Modules. These modules allow us to maintain an enormous software library without users having to worry about details such as paths to different software versions or libraries; modules will set or unset the right paths and environment variables for you.

Each module contains the information needed to configure the shell and environment for a specific application.

Usage

You can see a list of these by running the module avail command.

-------------------------------------------------------------------------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12-openmpi4 ---------------------------------------------------------------------------------------------------------------------
   adios/1.13.1         extrae/3.8.3        hypre/2.18.1    mumps/5.2.1             netcdf/4.9.0           petsc/3.18.1      ptscotch/7.0.1      scalapack/2.2.0    sionlib/1.7.7         tau/2.31.1
   boost/1.80.0  (D)    fftw/3.3.10  (D)    imb/2021.3      netcdf-cxx/4.3.1        omb/6.1                phdf5/1.10.8      py3-mpi4py/3.1.3    scalasca/2.5       slepc/3.18.0          trilinos/13.4.0
   dimemas/5.4.2        geopm/1.1.0         mfem/4.4        netcdf-fortran/4.6.0    opencoarrays/2.10.0    pnetcdf/1.12.3    py3-scipy/1.5.4     scorep/7.1         superlu_dist/6.4.0

------------------------------------------------------------------------------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12 -------------------------------------------------------------------------------------------------------------------------
   R/4.2.1      hdf5/1.10.8       likwid/5.2.2        mpich/3.4.3-ofi        mvapich2/2.3.7     openmpi4/4.1.4   (L)    plasma/21.8.29      scotch/6.0.6
   gsl/2.7.1    impi/2021.10.0    metis/5.1.0  (D)    mpich/3.4.3-ucx (D)    openblas/0.3.21    pdtoolkit/3.25.1        py3-numpy/1.19.5    superlu/5.2.1

--------------------------------------------------------------------------------------------------------------------------- /opt/ohpc/pub/modulefiles ----------------------------------------------------------------------------------------------------------------------------
   EasyBuild/4.6.2          charliecloud/0.15    crest/3.0.1        gnu12/12.2.0   (L)    libfabric/1.13.0    (L)    ohpc            (L)    papi/6.0.0                qibo/0.1.12          ucx/1.11.2      (L)    xtb/6.7.0 (D)
   SqueezeMeta/1.6.3        cmake/3.24.2         fhi-aims/240507    hwloc/2.7.0           miniconda3/3.12            openbabel/3.1.1        pennylane/0.33.1          qiskit/0.45.0        valgrind/3.19.0
   amber/24                 comsol/6.1           gaussian/g16       intel/2023.2.1        namd/3.0b6          (L)    orca/5.0.4             prun/2.2           (L)    qiskit/1.1.0  (D)    vasp/6.4.0
   autotools         (L)    cp2k/2024.1          gimic/2.2.1        julia/1.10.3          nvidia-hpc-sdk/24.5        os                     python-math/3.11.4        rosetta/3.7.1        vina/1.2.5      (L)

-------------------------------------------------------------------------------------------------------------- /opt/amd/spack/share/spack/modules/linux-rocky8-zen3 --------------------------------------------------------------------------------------------------------------
   adios2/2.9.1-aocc-4.1.0-zvzmkdi                                flex/2.6.3-gcc-12.2.0-37z4p5d                            libidn2/2.3.4-aocc-4.1.0-f4g5gbq                 mkfontscale/1.2.2-gcc-12.2.0-2yzm2ne               py-six/1.16.0-gcc-12.2.0-wbggnht
   amd-aocl/4.1-aocc-4.1.0-eddrifi                                flex/2.6.4-aocc-4.1.0-5uxrqpu                   (D)      libidn2/2.3.4-gcc-12.2.0-zx6hiu3        (D)      molden/6.7-gcc-12.2.0-oymp2os                      py-wheel/0.37.1-aocc-4.1.0-dnkz37i
   amdblis/4.1-aocc-4.1.0-mo3sjch                                 font-util/1.4.0-gcc-12.2.0-nk2uv26                       libint/2.6.0-aocc-4.1.0-ccz5tw6                  mpfr/4.2.0-aocc-4.1.0-ttzi577                      py-wheel/0.37.1-gcc-12.2.0-qvbbw7a        (D)
   amdblis/4.1-aocc-4.1.0-ngsxfub                        (D)      fontconfig/2.14.2-gcc-12.2.0-kas3xzk                     libjpeg-turbo/3.0.0-gcc-12.2.0-ba3ktnw           mpfr/4.2.0-gcc-12.2.0-ah54r3p             (D)      python/3.10.12-aocc-4.1.0-aqsl3it
   amdfftw/4.1-aocc-4.1.0-cp7genw                                 fontsproto/2.1.3-gcc-12.2.0-626xa7o                      libmd/1.0.4-aocc-4.1.0-hb27dor                   nasm/2.15.05-gcc-12.2.0-kes6giu                    python/3.10.12-gcc-12.2.0-ynvxsel         (D)
   amdfftw/4.1-aocc-4.1.0-mok4ghe                                 freetype/2.11.1-gcc-12.2.0-xdcwdnh                       libmd/1.0.4-gcc-12.2.0-kjrfbts          (D)      ncurses/6.4-aocc-4.1.0-whgpitb                     quantum-espresso/7.2-aocc-4.1.0-42rnxdd
   amdfftw/4.1-aocc-4.1.0-suvpj3m                        (D)      fribidi/1.0.12-gcc-12.2.0-zaofwfs                        libpciaccess/0.17-aocc-4.1.0-o5mx7uk             ncurses/6.4-gcc-12.2.0-jhqnvn5            (L,D)    randrproto/1.5.0-gcc-12.2.0-c543i4d
   amdlibflame/4.1-aocc-4.1.0-ddal2dl                             gdbm/1.23-aocc-4.1.0-qpy5h7a                             libpciaccess/0.17-gcc-12.2.0-7hfodbw    (L,D)    netlib-scalapack/2.2.0-gcc-12.2.0-dwhjns4 (L)      re2c/2.2-gcc-12.2.0-ozg7r26
   amdlibflame/4.1-aocc-4.1.0-4bh4zio                    (D)      gdbm/1.23-gcc-12.2.0-giacys7                    (D)      libpng/1.6.39-aocc-4.1.0-6ko3jub                 nettle/3.9.1-gcc-12.2.0-7dj6cx5                    readline/8.2-aocc-4.1.0-o2mfebq
   amdlibm/4.1-aocc-4.1.0-my4wori                                 gettext/0.21.1-aocc-4.1.0-qnfns7q                        libpng/1.6.39-gcc-12.2.0-igvrmd7        (D)      nghttp2/1.48.0-gcc-12.2.0-wa3stfg                  readline/8.2-gcc-12.2.0-cscczq2           (D)
   amdscalapack/4.1-aocc-4.1.0-egeizcg                            gettext/0.21.1-gcc-12.2.0-opw3wpj               (L,D)    libpthread-stubs/0.4-gcc-12.2.0-gcifh7g          nghttp2/1.52.0-aocc-4.1.0-7orbvet                  renderproto/0.11.1-gcc-12.2.0-m4mjmcd
   amdscalapack/4.1-aocc-4.1.0-josz6jn                   (D)      git/2.41.0-aocc-4.1.0-teucojb                            libsigsegv/2.14-aocc-4.1.0-732wcft               nghttp2/1.52.0-gcc-12.2.0-p6tlqiw         (D)      rsync/3.2.7-gcc-12.2.0-iekndxs
   aocc/4.1.0-gcc-12.2.0-py6hbh6                                  glib/2.76.4-gcc-12.2.0-7wqva7h                           libsigsegv/2.14-gcc-12.2.0-wqdemln      (D)      ninja/1.11.1-gcc-12.2.0-peyhbvn                    rust-bootstrap/1.70.0-gcc-12.2.0-3nahfp5
   aocl-sparse/4.1-aocc-4.1.0-kg3h6v2                             glproto/1.4.17-gcc-12.2.0-wgcdvqm                        libsm/1.2.3-gcc-12.2.0-vbycdbn                   numactl/2.0.14-aocc-4.1.0-lrvf6fd                  rust/1.70.0-gcc-12.2.0-7ev4du4
   aocl-utils/4.1-aocc-4.1.0-ejmeoo5                              glx/1.4-gcc-12.2.0-ij5zwgo                               libssh2/1.10.0-gcc-12.2.0-sjmm3e2                numactl/2.0.14-gcc-12.2.0-3n6dozi         (L,D)    scons/4.5.2-aocc-4.1.0-kpjfvxd
   autoconf-archive/2023.02.20-aocc-4.1.0-bsdhizg                 gmake/4.4.1-aocc-4.1.0-wvvw6b2                           libtiff/4.5.1-gcc-12.2.0-5mm636l                 nwchem/7.2.0-aocc-4.1.0-oxffn65                    scotch/7.0.3-aocc-4.1.0-ehtnrdo           (D)
   autoconf-archive/2023.02.20-gcc-12.2.0-7q2ibvd        (D)      gmake/4.4.1-gcc-12.2.0-x3ya7i2                  (D)      libtool/2.4.7-aocc-4.1.0-hfnvkuo                 openblas/0.3.23-gcc-12.2.0-nrvtegk                 sed/4.9-aocc-4.1.0-pftfzxn
   autoconf/2.69-aocc-4.1.0-q3jpzdv                               gmp/6.2.1-aocc-4.1.0-32gzj3k                             libtool/2.4.7-gcc-12.2.0-37qrebs        (D)      openblas/0.3.23-gcc-12.2.0-u5t6fcp        (L)      snappy/1.1.10-aocc-4.1.0-xz5us7u
   autoconf/2.69-gcc-12.2.0-5uoofqo                      (D)      gmp/6.2.1-gcc-12.2.0-vydqxo6                    (D)      libunistring/1.1-aocc-4.1.0-gb7d3dk              openblas/0.3.23-gcc-12.2.0-wxtyym3        (D)      sqlite/3.42.0-aocc-4.1.0-rmhj4lr
   automake/1.16.5-aocc-4.1.0-dqjmmo2                             gnuplot/5.4.3-gcc-12.2.0-d5u375y                         libunistring/1.1-gcc-12.2.0-qlu3ynd     (D)      openfoam/2306-aocc-4.1.0-ai2xyp7                   sqlite/3.42.0-gcc-12.2.0-uriaxpa          (D)
   automake/1.16.5-gcc-12.2.0-3or6f4d                    (D)      gnutls/3.7.8-gcc-12.2.0-jviytgv                          libunwind/1.6.2-gcc-12.2.0-xt7xp3q               openlibm/0.8.1-gcc-12.2.0-3bq24ng                  stream/5.10-aocc-4.1.0-3jgyitd
   bc/1.07.1-gcc-12.2.0-qmpyq4l                                   gobject-introspection/1.76.1-gcc-12.2.0-lxalmn7          libuv-julia/1.44.2-gcc-12.2.0-qror4x2            openmolcas/23.06-gcc-12.2.0-fwmhgfh                suite-sparse/5.13.0-gcc-12.2.0-5o4l2un
   bdftopcf/1.1-gcc-12.2.0-lvhyk63                                gperf/3.1-gcc-12.2.0-yaq7z2u                             libwhich/1.1.0-gcc-12.2.0-fechf4c                openmpi/4.1.2-gcc-12.2.0-fosm5wz                   swig/4.0.2-fortran-gcc-12.2.0-fwwd6vy
   berkeley-db/18.1.40-aocc-4.1.0-wkgac3r                         gromacs/2023.1-aocc-4.1.0-q64smwb                        libx11/1.8.4-gcc-12.2.0-l67amdb                  openmpi/4.1.5-aocc-4.1.0-3shbrq7                   sz/2.1.12.5-aocc-4.1.0-5tsprbv
   berkeley-db/18.1.40-gcc-12.2.0-4k5x4o2                (D)      gzip/1.12-gcc-12.2.0-wrpdaac                             libxau/1.0.8-gcc-12.2.0-zs7tyzs                  openmpi/4.1.5-aocc-4.1.0-7skcnum                   tar/1.34-aocc-4.1.0-qtq366h
   binutils/2.40-aocc-4.1.0-e4g3ygq                               harfbuzz/7.3.0-gcc-12.2.0-cck2ptk                        libxc/6.2.2-aocc-4.1.0-2fpwojt                   openmpi/4.1.5-gcc-12.2.0-vo6j57n          (L,D)    tar/1.34-gcc-12.2.0-3yixdgo               (L,D)
   binutils/2.40-aocc-4.1.0-u55wd4p                               hdf5/1.14.2-aocc-4.1.0-fvcc7jr                           libxcb/1.14-gcc-12.2.0-q2pnczm                   openmx/3.9-gcc-12.2.0-yewsx2z             (L)      tblite/0.3.0-gcc-12.2.0-7vicqa5
   binutils/2.40-gcc-12.2.0-jmniree                      (D)      hdf5/1.14.2-gcc-12.2.0-awlmxmw                  (D)      libxcrypt/4.4.35-aocc-4.1.0-df3srd5              openssh/9.3p1-aocc-4.1.0-rzlxlsb                   tcl/8.6.12-gcc-12.2.0-5h4lm37
   bison/3.8.2-aocc-4.1.0-5w276xj                                 help2man/1.49.3-aocc-4.1.0-wzir3h3                       libxcrypt/4.4.35-gcc-12.2.0-qg3btfg     (L,D)    openssh/9.3p1-gcc-12.2.0-dl2ax6z          (L,D)    texinfo/7.0.3-aocc-4.1.0-yd5yqhy
   bison/3.8.2-gcc-12.2.0-iz3pbvn                        (D)      hpcg/3.1-aocc-4.1.0-gklvmnx                              libxdmcp/1.1.4-gcc-12.2.0-s3q2pwr                openssl/3.1.2-aocc-4.1.0-s2rjfac                   texinfo/7.0.3-gcc-12.2.0-hgw4wmn          (D)
   boost/1.83.0-aocc-4.1.0-nh2772c                                hpl/2.3-aocc-4.1.0-ih7qagb                               libxext/1.3.3-gcc-12.2.0-tpnvv63                 openssl/3.1.2-gcc-12.2.0-g5zqvdc          (L,D)    ucx/1.14.1-aocc-4.1.0-fziwyqj             (D)
   bzip2/1.0.8-aocc-4.1.0-ossky43                                 hwloc/2.9.1-aocc-4.1.0-z6b6is7                           libxfont/1.5.4-gcc-12.2.0-2bfu5qb                p7zip/17.05-gcc-12.2.0-5ovk53v                     unzip/6.0-gcc-12.2.0-tt4uyp5
   bzip2/1.0.8-gcc-12.2.0-t444f4w                        (L,D)    hwloc/2.9.1-gcc-12.2.0-nn43cce                  (L,D)    libxml2/2.10.3-aocc-4.1.0-yo4awgg                pango/1.50.13-gcc-12.2.0-7icqyer                   utf8proc/2.8.0-gcc-12.2.0-lvh424d
   c-blosc2/2.10.2-aocc-4.1.0-2csq4ov                             icu4c/67.1-gcc-12.2.0-dlql6hn                            libxml2/2.10.3-gcc-12.2.0-ymewhot       (L,D)    patchelf/0.18.0-gcc-12.2.0-ixbyznc                 util-linux-uuid/2.38.1-aocc-4.1.0-gapx3ew
   ca-certificates-mozilla/2023-05-30-aocc-4.1.0-uscfebu          inputproto/2.3.2-gcc-12.2.0-rdh6c22                      libxmu/1.1.4-gcc-12.2.0-4o4yoot                  pcre/8.45-gcc-12.2.0-q2g45cb                       util-linux-uuid/2.38.1-gcc-12.2.0-4hsqihx (D)
   ca-certificates-mozilla/2023-05-30-gcc-12.2.0-nyok33z (D)      json-glib/1.6.6-gcc-12.2.0-727dve2                       libxpm/3.5.12-gcc-12.2.0-qy5l55t                 pcre2/10.42-aocc-4.1.0-ncmzz7w                     util-macros/1.19.3-aocc-4.1.0-kpu3ga3
   cairo/1.16.0-gcc-12.2.0-7g4e3qg                                kbproto/1.0.7-gcc-12.2.0-a2nh2z2                         libxrandr/1.5.3-gcc-12.2.0-jsrrrx5               pcre2/10.42-gcc-12.2.0-ksputr6            (D)      util-macros/1.19.3-gcc-12.2.0-xpecyvu     (D)
   cgal/4.13-aocc-4.1.0-7n7m5ph                                   krb5/1.20.1-aocc-4.1.0-7zm2qsm                           libxrender/0.9.10-gcc-12.2.0-e4dmj36             perl-data-dumper/2.173-gcc-12.2.0-dxqzg2c          vim/9.0.0045-gcc-12.2.0-lgfvv7h
   charmpp/6.10.2-gcc-12.2.0-3rctwa3                              krb5/1.20.1-gcc-12.2.0-j5m5mgt                  (L,D)    libxsmm/1.17-aocc-4.1.0-wisqwdt                  perl/5.38.0-aocc-4.1.0-hqse4iz                     which/2.21-gcc-12.2.0-2rnrm2t
   cmake/3.27.4-aocc-4.1.0-jrfow2u                                libarchive/3.7.1-aocc-4.1.0-c34azkd                      libxt/1.1.5-gcc-12.2.0-yzxyqrr                   perl/5.38.0-gcc-12.2.0-l2tpepz            (D)      xcb-proto/1.15.2-gcc-12.2.0-vibawzi
   cmake/3.27.4-gcc-12.2.0-q2bvw3t                                libblastrampoline/5.8.0-gcc-12.2.0-orgt3d7               lizard/1.0-aocc-4.1.0-pl7kpq4                    pigz/2.7-aocc-4.1.0-cdcuqi3                        xextproto/7.3.0-gcc-12.2.0-lm5dhc3
   cmake/3.27.4-gcc-12.2.0-slwphxg                       (D)      libbsd/0.11.7-aocc-4.1.0-sr52wjf                         llvm/14.0.6-gcc-12.2.0-qaplq7j                   pigz/2.7-gcc-12.2.0-kuefe4c               (L,D)    xproto/7.0.31-gcc-12.2.0-zjztwbu
   curl/8.1.2-aocc-4.1.0-t5nflbt                                  libbsd/0.11.7-gcc-12.2.0-4dd6bdg                (D)      lua/5.3.6-gcc-12.2.0-ktdrbid                     pixman/0.42.2-gcc-12.2.0-p5jjljm                   xrandr/1.5.0-gcc-12.2.0-xplrtnx
   curl/8.1.2-gcc-12.2.0-hudhko7                                  libcatalyst/2.0.0-rc3-aocc-4.1.0-5f6p4r6                 lua/5.3.6-gcc-12.2.0-5p6wjj7            (D)      pkgconf/1.9.5-aocc-4.1.0-ifteckx                   xtb/6.6.0-gcc-12.2.0-cmfhju3
   curl/8.1.2-gcc-12.2.0-ihpwqs7                         (D)      libcerf/1.3-gcc-12.2.0-mcvvbrl                           lz4/1.9.4-aocc-4.1.0-d4ujqy3                     pkgconf/1.9.5-gcc-12.2.0-3ycmepj          (D)      xtrans/1.4.0-gcc-12.2.0-cdynl7g
   diffutils/3.9-aocc-4.1.0-3c57xx7                               libedit/3.1-20210216-aocc-4.1.0-udqpr3r                  lz4/1.9.4-gcc-12.2.0-canqgmg            (D)      pmix/4.2.4-aocc-4.1.0-4tiisex                      xxhash/0.8.1-gcc-12.2.0-x6meh3q
   diffutils/3.9-gcc-12.2.0-5ysg3hh                      (D)      libedit/3.1-20210216-gcc-12.2.0-tnzddsx         (L,D)    lzma/4.32.7-gcc-12.2.0-wdos4on          (L)      pmix/4.2.4-gcc-12.2.0-dvyzksa             (L,D)    xz/5.4.1-aocc-4.1.0-ruuam4e
   dsfmt/2.2.5-gcc-12.2.0-lx454m3                                 libevent/2.1.12-aocc-4.1.0-y5a6igm                       lzo/2.10-aocc-4.1.0-travabd                      popt/1.16-gcc-12.2.0-vm6zya4                       xz/5.4.1-gcc-12.2.0-jrhrwix               (L,D)
   ed/1.4-gcc-12.2.0-snqqfmb                                      libevent/2.1.12-gcc-12.2.0-2ww5546              (L,D)    m4/1.4.19-aocc-4.1.0-s43xmnv                     protobuf/3.21.12-aocc-4.1.0-j2ezggw                yaml-cpp/0.7.0-aocc-4.1.0-uapjv53
   eigen/3.4.0-aocc-4.1.0-glbn5ts                                 libfabric/1.18.1-aocc-4.1.0-hdl2u5r             (D)      m4/1.4.19-gcc-12.2.0-quj6b4c            (D)      py-flit-core/3.9.0-gcc-12.2.0-vg6rbwj              zfp/0.5.5-aocc-4.1.0-e2ft2wo
   elfutils/0.189-gcc-12.2.0-n6tcwy3                              libffi/3.4.4-aocc-4.1.0-llpsodo                          makedepend/1.0.8-gcc-12.2.0-3bxd3h7              py-fypp/3.1-aocc-4.1.0-pvyqsy5                     zlib-ng/2.1.3-aocc-4.1.0-lr52tqo
   elpa/2021.11.001-aocc-4.1.0-5hnbxob                            libffi/3.4.4-gcc-12.2.0-vuvuepl                 (D)      mbedtls/2.28.2-gcc-12.2.0-nyt5z2t                py-mako/1.2.4-gcc-12.2.0-aidws7u                   zlib-ng/2.1.3-gcc-12.2.0-h4oi2el          (L,D)
   emacs/29.1-gcc-12.2.0-hddpgux                                  libfontenc/1.1.7-gcc-12.2.0-oekdn7p                      mesa-glu/9.0.2-gcc-12.2.0-237xkco                py-markupsafe/2.1.3-gcc-12.2.0-qhvx2m5             zstd/1.5.5-aocc-4.1.0-4nxilqy
   expat/2.5.0-aocc-4.1.0-stzr7gl                                 libgd/2.3.3-gcc-12.2.0-6g7ndns                           mesa/23.0.3-gcc-12.2.0-xiuk22a                   py-pip/23.1.2-aocc-4.1.0-43et5ls                   zstd/1.5.5-gcc-12.2.0-bigvwlv             (L,D)
   expat/2.5.0-gcc-12.2.0-ur77lsf                        (D)      libgit2/1.5.0-gcc-12.2.0-gpbyrmb                         meson/1.2.0-gcc-12.2.0-p3im7ei                   py-pip/23.1.2-gcc-12.2.0-pave2pr          (D)
   fftw/3.3.10-gcc-12.2.0-b2rzfr4                        (L)      libice/1.0.9-gcc-12.2.0-7j63mo6                          metis/5.1.0-gcc-12.2.0-mauvuef                   py-pyparsing/3.0.9-gcc-12.2.0-f5t72qo
   findutils/4.9.0-aocc-4.1.0-te5ureg                             libiconv/1.17-aocc-4.1.0-2boehfu                         mgard/2023-03-31-aocc-4.1.0-vfntqe6              py-setuptools/68.0.0-aocc-4.1.0-rdyt3ic
   findutils/4.9.0-gcc-12.2.0-qo3c6ku                    (D)      libiconv/1.17-gcc-12.2.0-k6gski3                (L,D)    mkfontdir/1.0.7-gcc-12.2.0-icaf6hl               py-setuptools/68.0.0-gcc-12.2.0-qe67njw   (D)

  Where:
   D:  Default Module
   L:  Module is loaded

If the avail list is too long consider trying:

"module --default avail" or "ml -d av" to just list the default modules.
"module overview" or "ml ov" to display the number of modules for each name.

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

We can obtain the list of loaded modules with the command module list.

    Currently Loaded Modules:
      1) autotools          6) openmpi4/4.1.4                  11) libpciaccess/0.17-gcc-12.2.0-7hfodbw  16) ncurses/6.4-gcc-12.2.0-jhqnvn5     21) zstd/1.5.5-gcc-12.2.0-bigvwlv      26) libedit/3.1-20210216-gcc-12.2.0-tnzddsx  31) openmpi/4.1.5-gcc-12.2.0-vo6j57n
      2) prun/2.2           7) ohpc                            12) libiconv/1.17-gcc-12.2.0-k6gski3      17) hwloc/2.9.1-gcc-12.2.0-nn43cce     22) tar/1.34-gcc-12.2.0-3yixdgo        27) libxcrypt/4.4.35-gcc-12.2.0-qg3btfg      32) fftw/3.3.10-gcc-12.2.0-b2rzfr4
      3) gnu12/12.2.0       8) vina/1.2.5                      13) xz/5.4.1-gcc-12.2.0-jrhrwix           18) numactl/2.0.14-gcc-12.2.0-3n6dozi  23) gettext/0.21.1-gcc-12.2.0-opw3wpj  28) openssh/9.3p1-gcc-12.2.0-dl2ax6z         33) openblas/0.3.23-gcc-12.2.0-u5t6fcp
      4) ucx/1.11.2         9) namd/3.0b6                      14) zlib-ng/2.1.3-gcc-12.2.0-h4oi2el      19) bzip2/1.0.8-gcc-12.2.0-t444f4w     24) openssl/3.1.2-gcc-12.2.0-g5zqvdc   29) libevent/2.1.12-gcc-12.2.0-2ww5546       34) netlib-scalapack/2.2.0-gcc-12.2.0-dwhjns4
      5) libfabric/1.13.0  10) lzma/4.32.7-gcc-12.2.0-wdos4on  15) libxml2/2.10.3-gcc-12.2.0-ymewhot     20) pigz/2.7-gcc-12.2.0-kuefe4c        25) krb5/1.20.1-gcc-12.2.0-j5m5mgt     30) pmix/4.2.4-gcc-12.2.0-dvyzksa            35) openmx/3.9-gcc-12.2.0-yewsx2z

If you need to search for a specific module, but do not know the hierarchy of modules that it depends on, use the module spider command.
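
For example, a short sketch of the spider workflow (the module name is just an example taken from this document):

# search every module tree for anything matching the name
module spider quantum-espresso

# show exactly what has to be loaded first for a specific version
module spider quantum-espresso/7.3.1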

You can load a module by running the following: module load modulename, e.g. module load julia/1.10.3

Unload a module using : module unload modulename

Or unload all modules with : module purge

More info : https://modules.readthedocs.io/en/latest/index.html https://docs.rcc.fsu.edu/hpc/environment-modules/#searching-for-modules

Basic procedures on Bridge

Information about management procedures on Bridge

Usage

Bridge is a platform through which to log in to the Cierzo system. Its purpose is to serve as a gateway for connecting from machines that do not have a fixed IP address [1] that can be added to the firewall (working from different centres, Internet providers that do not offer that feature...).
It is not meant to be a work or storage system, so the user disk quota is minimal (5 MB), although it is enough to log in or to copy a public key that makes access more convenient.

If this is your case, you can request an account by writing to cierzo@bifi.es explaining your situation, using "Petición cuenta Bridge" (Bridge account request) as the subject.

Connecting in a single step:

If you have a Cierzo account, you can log in to it through Bridge in a single step with the command:

ssh -X -J idUsuario@bridge.bifi.unizar.es idUsuario@cierzo.bifi.unizar.es

You will have to enter the Bridge and Cierzo passwords.

In case of problems related to rsa/dss:

Unable to negotiate with UNKNOWN port 65535: no matching host key type found. Their offer: ssh-rsa,ssh-dss

Add the following option to the command:

-o HostKeyAlgorithms=ssh-rsa,ssh-dss

Copying files through Bridge

You can copy files from Cierzo to your local machine through Bridge by adapting the following command to your needs:

scp -oHostKeyAlgorithms=ssh-rsa,ssh-dss -oProxyCommand="ssh -W %h:%p usuario@bridge.bifi.unizar.es" usuario@cierzo.bifi.unizar.es:/trayectoria/origen/archivo /trayectoria/destino

To copy from your local machine to the Agustina scratch directory:

scp -oHostKeyAlgorithms=ssh-rsa,ssh-dss -oProxyCommand="ssh -W %h:%p usuario@bridge.bifi.unizar.es" archivo_local_a_copiar  usuario@agustina.bifi.unizar.es:/fs/agustina/usuario
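
If you go through Bridge often, the jump can be stored once in ~/.ssh/config so that plain ssh and scp commands work transparently. A sketch, assuming OpenSSH 7.3 or newer (replace usuario with your own account; the HostKeyAlgorithms line is only needed if you hit the rsa/dss problem described above):

# ~/.ssh/config
Host bridge
    HostName bridge.bifi.unizar.es
    User usuario

Host cierzo
    HostName cierzo.bifi.unizar.es
    User usuario
    ProxyJump bridge
    HostKeyAlgorithms +ssh-rsa,ssh-dss

Host agustina
    HostName agustina.bifi.unizar.es
    User usuario
    ProxyJump bridge

With this in place, ssh cierzo logs in through Bridge automatically, and the scp commands above no longer need the -oProxyCommand option.
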
[1]

A fixed IP is a unique identifier that gives you an exclusive, recognizable address on the Internet, much like a telephone number. You will probably need one to set up a web server, a mail server, etc.

Incident reporting procedures

Good practices and how to report an incident correctly.

Procedures

Good practices

Since we work with complex systems, incidents are common during everyday use, for example:

  • Expired passwords.
  • Locked accounts.
  • Insufficient disk quotas.
  • Software failures.
  • Etc.

Some of these circumstances are easily avoidable with very simple measures, such as a sensible password policy (storing passwords safely, renewing them...) or keeping files and their sizes within reasonable limits (deleting everything that does not need to be stored). Applying these basic measures benefits all users as well as the Systems Department, so responsible management of user accounts is strongly recommended.

Reporting incidents

If an incident occurs, there are two email addresses where it can be reported:

cierzo@bifi.es agustina@bifi.es

each one specific to one of the systems of the computing centre.

Solving a problem requires context, so reports should follow a certain format.

Example:

Incorrect way to report an incident:

Hi. I have no memory.

Correct way to report an incident:

Good morning.

My name is XXX, with username YYY on system ZZZ.

Description of the problem (software involved, circumstances, tools used, the IP address you connect from if applicable, etc.). In general, all the data that clarifies the incident.

Output of the commands (error codes, reports, screenshots, logs or whatever applies).

Keep in mind that the more data you provide, the easier the resolution will be.

Once a report is received, it is queued with an assigned priority. Resolution times naturally vary a lot and there are usually several kinds of incidents open at the same time, so please give the Systems Department a reasonable amount of time before asking for updates on a particular issue.

If no reply is received after a few days, contact the addresses above again or reach out directly to the members of the Systems Department.

S3 object storage

The S3 storage service is intended for cold data, that is, for large amounts of very infrequently used information that must stay frozen for long periods of time. S3 cannot be used as scratch space or to work regularly with the stored files, since files cannot be modified once saved. To do so, you have to download them, make the changes and upload them to the store again.

CLIENT CONFIGURATION

Although other alternatives exist, this documentation is based on the use of the AWS CLI, which can be downloaded from: https://docs.aws.amazon.com/es_es/cli/latest/userguide/getting-started-install.html

Once the credentials have been handed over to the user, configure the client tool with:

aws configure --profile=nombre_perfil

Working with profiles is recommended, since it allows you to configure different sets of credentials to access different S3 storages.

Providing:

AWS Access Key ID
AWS Secret Access Key

and leaving the default values for:

Default region name [None]:
Default output format [None]:
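
A sketch of the interactive dialogue (the key values shown are placeholders; aws configure stores them in ~/.aws/credentials under the chosen profile):

$ aws configure --profile=nombre_perfil
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]:
Default output format [None]: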

BASIC COMMANDS

List buckets:

aws s3 ls --endpoint=http://url_del_endpoint --profile=nombre_perfil

List the contents of a bucket:

aws s3 ls nombre_bucket --endpoint=http://url_del_endpoint --profile=nombre_perfil

Upload a file to a bucket:

aws s3 cp nombre_archivo s3://nombre_bucket/ --endpoint=http://url_del_endpoint --profile=nombre_perfil

IMPORTANT NOTE: If the file size is larger than 50 GB, you must include the parameter

--expected-size num_bytes

otherwise the upload process may fail. https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
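
For example, for a file of roughly 100 GiB (the size is given in bytes; the file name is a placeholder):

aws s3 cp archivo_grande s3://nombre_bucket/ --expected-size 107374182400 --endpoint=http://url_del_endpoint --profile=nombre_perfil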

Download a file from the bucket to your machine:

aws s3 cp s3://nombre_bucket/nombre_archivo carpeta_local --endpoint=http://url_del_endpoint --profile=nombre_perfil

Delete an object from a bucket:

aws s3 rm s3://nombre_bucket/nombre_archivo --endpoint=http://url_del_endpoint --profile=nombre_perfil

ADVANCED COMMANDS

See: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html

Upload a file providing its md5 for verification:

1. Compute the md5

openssl md5 nombre_archivo

2. Compute the md5 in Base64 format

openssl md5 -binary nombre_archivo | base64

3. Upload the file providing the value for verification. The system will check that the md5 value at the end of the upload matches the one provided:

aws s3api put-object --bucket nombre_bucket --key nombre_archivo --body nombre_archivo --content-md5 "md5_formato_base64" --endpoint=https://url_del_endpoint --profile=nombre_perfil

If an error occurs, the system will report it; otherwise it will show information about the file's md5, although not in Base64. It will match the value computed in step 1.

    {
        "ETag": "\"valor_calculado_en_paso_uno\""
    }

Get the md5sums of the files in a bucket:

aws s3api list-objects --bucket nombre_del_bucket --endpoint=http://url_del_endpoint --profile=nombre_perfil

Get the md5sum of a file:

aws s3api head-object --bucket nombre_del_bucket --key nombre_archivo --query ETag --output text --endpoint=http://url_del_endpoint --profile=nombre_perfil

NOTE: The ETag value differs from the original file if it is uploaded with aws put. To use md5 verification codes, aws s3api put-object should be used.
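
Putting the three steps together, a small hedged sketch (bucket, file and endpoint names are the usual placeholders):

#!/bin/bash
FILE=nombre_archivo
BUCKET=nombre_bucket

# Base64-encoded binary MD5, the format expected by --content-md5
MD5_B64=$(openssl md5 -binary "$FILE" | base64)

# upload; the service rejects the object if the MD5 it receives does not match
aws s3api put-object --bucket "$BUCKET" --key "$FILE" --body "$FILE" \
    --content-md5 "$MD5_B64" \
    --endpoint=https://url_del_endpoint --profile=nombre_perfil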

User guide for the access to Aragon Supercomputing Center (CESAR) Services

Introduction

This document explains how to proceed to apply for the creation, activation or vinculation of projects and users in the Aragon Supercomputing Center (CESAR) located in the Institute for Biocomputing and Physics of Complex Systems (BIFI) of the University of Zaragoza. This information is suitable for Principal Investigators (PI) of any research project or group, in or outside the University of Zaragoza, and users related to them. The corresponding application forms are available at:

https://soporte.bifi.unizar.es/forms/form.php

There are four types of application forms corresponding to different objectives:

  • Project creation form (to be filled by Principal Investigators)
  • Account creation form (to be filled by users in general)
  • Account activation form (to be filled by users in general)
  • Account - project vinculation form (to be filled by Principal Investigators)

This document will guide you through the process of filling in each of them. The main concepts are the following. Any person (in general, a researcher from a research center or company) willing to use CESAR services will need a user account. A typical service is the use of the computing infrastructure, but these user accounts can also grant access to data storage or to some web services provided by the datacenter, among others.

However, the use of these services has to be somehow organized and, in some cases, paid for. In this sense, the Project creation form, when filled, provides the necessary information about the objectives of the research as well as about the person (PI) that takes the responsibility for the correct use of the service and/or infrastructure (that will also be assumed by the users) and for the corresponding payment when necessary.

1.- Project creation form

https://soporte.bifi.unizar.es/forms/form.php?idForm=1

This form is intended to be filled in by the Principal Investigators. When approved, a new project will be registered in the CESAR system. The fields of this form are:

  • Researcher e-mail: Contact email of the PI or main responsible of the project/group.
  • Telephone: Main contact telephone.
  • Project title: Title of the project to be registered.
  • Project ID: Unique identifier of the project (it can be chosen by the PI, but the system will verify that it does not exist yet)
  • Research project reference: Reference given by an official institution (it can be the reference provided by a granting entity or an internal code of the University, for example; please inform which is the case)
  • Research proposal: Main explanation, goals and research track that will tackle the project, why the need or wish to use CESAR services...
  • Comments: Further comments complementing the previous information: software needed or any other peculiarity to be taken into account.

After these fields are filled, the Principal Investigator will download the agreement document by clicking the blue button "Download document for signing", then s/he has to sign this document, upload it and press the "Request project" button.

The lack of information in any field could invalidate the request; please make sure you send all the information needed.

The CESAR access committee will manually review the request and they will give an answer according to the provided information.

2.- User account creation form

https://soporte.bifi.unizar.es/forms/form.php?idForm=2

This form can be used by anyone that needs a new account to access CESAR services. The fields for this request are:

  • Name: Name of the person making the request.
  • Email: Contact email used to communicate the final response.
  • Telephone: Contact phone of the person making the request.
  • Desired username: Username to access the CESAR system (computing and storing infrastructures, etc.). This username must be plain text without special characters (like @ # ~ $).

After these fields are filled, the user will download the agreement document by clicking the blue button "Download document for signing", then s/he has to sign this document, upload it and press the "Request creation" button.

The lack of information in any field could invalidate the request; please make sure you send all the information needed.

The IT team will contact the user with the response to the request at the given email address.

The creation of a user account does not automatically allow the user to access CESAR services, as some of them (in particular those subject to payment) will require their vinculation to a Project.

3.- User account to project vinculation form

https://soporte.bifi.unizar.es/forms/form.php?idForm=4

This request has to be performed by the Principal Investigator of a Project already existing in the CESAR system. It is intended to add a new member (user) to the Project, which will grant them access to the CESAR services associated with said Project.

  • Principal researcher name: Principal Investigator of the research Project already existing in the CESAR system.
  • Email: Contact email of the PI.
  • Username to vinculate: Username to be vinculated to the Project.
  • Project ID to vinculate to: ProjectID of the Project.
  • Project title: Title of the Project.

After these fields are filled, the Principal Investigator will download the agreement document by clicking the blue button "Download document for signing", then s/he has to sign this document, upload it and press the "Request vinculation" button.

With this vinculation, the PI will take the responsibility for the correct use of the service and/or infrastructure (that will also be assumed by the users) and for the corresponding payment when necessary.

The lack of information in any field could invalidate the request; please make sure you send all the information needed.

The CESAR team will review the request and they will give an answer according to the provided information.

This document is subject to change. We are committed to updating it in case the registration process varies.

4.- Account activation (OPTIONAL)

https://soporte.bifi.unizar.es/forms/form.php?idForm=3

The following form provides the information needed to activate an account that is inactive (for different reasons: the passing of time, existing accounts that were never activated, etc.). Bear in mind that accounts will be temporarily suspended after a long period of inactivity.

  • Name: Name of the person that makes the request.
  • Email: Contact email to dispatch the final response over this request.
  • Username: Currently suspended username that should be activated.

After these fields are filled, the user will download the agreement document by clicking the blue button "Download document for signing", then s/he has to sign this document, upload it and press the "Request activation" button.

The lack of information in any field could invalidate the request; please make sure you send all the information needed.

The IT team will contact the user with the response to the request at the given email address.

Fees for computing services on the Agustina infrastructure

In 2018 the University of Zaragoza approved the fees for the use of the high-performance computing services at the Aragon Supercomputing Center (CESAR), the data processing centre maintained and managed by the BIFI Institute. The fees in force for the 2024-2025 academic year are published at the following link:

https://vgeconomica.unizar.es/sites/vgeconomica/files/archivos/PCC/precios_publicos/2024-2025/BIFI-CESAR%202024.pdf

The fees are defined as follows.

Fees
SERVICE | INTERNAL FEE | OPI FEE | EXTERNAL FEE
Computing units (includes computing and the local storage needed for the calculation) | 0,014 €/(core*hour) | 0,028 €/(core*hour) | 0,045 €/(core*hour)

These fees may be discounted, as detailed in the service regulation document itself.

As shown in the table above, the base fee for Unizar researchers is 0,014 € per core and hour. Although this fee can be considered very competitive compared with any external fee, there are research projects and simulations that require a very large number of computing hours, for which researchers could hardly afford even that base fee. In order to encourage (or at least not discourage) the use of an infrastructure conceived precisely so that researchers use it and get the most out of it, discounts have been defined that reduce the unit price when the consumption of computing hours is very high. They apply to all UNIZAR users and joint centres that have supported the acquisition of CESAR infrastructure, since this support is considered an in-kind contribution to the centre's funding.

To this end, tiers in the number of hours of use have been defined so that, on the one hand, use is free up to a certain number of hours (which can be considered a proof of concept) and, beyond a certain number of hours, the fee gets a considerable discount, so that the infrastructure is not left underused because the cost is unaffordable for researchers. The tiers are defined as follows:

Base fee (€/(core*hour)): 0,014

Tier   | Tier start (CPU hours) | Tier end (CPU hours) | Reduction factor | Max cost (€)
Tier 1 | 1                      | 10000                | 0                | 0
Tier 2 | 10001                  | 100000               | 1                | 1260
Tier 3 | 100001                 | 1000000              | 0,2              | 2520
Tier 4 | 1000001                | 10000000             | 0,1              | 12600

Research groups that make other kinds of contributions can obtain additional discounts. In that case, contact agustina@bifi.es.

Agustina GPU fees

To calculate the price of GPU use, an equivalence is made with the proportional share of CPU cores present in each node:

  • One L40S GPU hour is equivalent to one hour of 16 CPU cores
  • One H100 GPU hour is equivalent to three hours of 16 CPU cores

Therefore, when a user submits a job to one of the GPU queues, they must reserve 16 CPU cores multiplied by the number of GPUs they want to use.

Example bash script header using 4 L40S GPUs (requesting a full GPU node):

#SBATCH -N 1 # we use one node; each node has 4 GPUs
#SBATCH --ntasks-per-node=1 # one task per node
#SBATCH --gres=gpu:4 # select the GPUs to use, 4 GPUs (a full node)
#SBATCH --cpus-per-task=64 # 16 cores per GPU * 4 GPUs
#SBATCH -p ada # name of the L40S GPU queue
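
By the same rule, a job that uses a single H100 GPU on the hopper partition would reserve 16 CPU cores. A sketch:

#SBATCH -N 1                   # one node
#SBATCH --ntasks-per-node=1    # one task per node
#SBATCH --gres=gpu:1           # one H100 GPU
#SBATCH --cpus-per-task=16     # 16 cores per GPU * 1 GPU
#SBATCH -p hopper              # H100 GPU partition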

Pimp my prompt

The default bash prompt style is boring and its green colour is also old-fashioned; you can change the style of your bash prompt with a couple of lines.

Bash prompt styling with one line

The PS1 variable changes the style of the bash prompt; for instance, you can set the style of your prompt by setting the variable as follows in your terminal:

export PS1="\[\e[31m\][\[\e[m\]\[\e[33m\]\u\[\e[m\]@\[\e[36m\]\h\[\e[m\]\[\e[31m\]]\[\e[m\]:\[\e[33;40m\]\w\[\e[m\]\[\e[40m\] \[\e[m\]\[\e[32;40m\]\\$\[\e[m\] "

Add the line above at the end of the .bashrc file in your home directory to make the style permanent every time you log in. Reload the configuration file with:

source .bashrc

This is just an example that fits nicely with the console's black background, but you can create your own bash prompt style using this website:

https://robotmoon.com/bash-prompt-generator/

Enhancing output colours when listing files and folders

By default the cluster provides a plain colour style for the output of the ls command; you can set a more colourful style by using this project:

https://github.com/trapd00r/LS_COLORS

To get the new colour style just clone the project inside your home directory:

git clone https://github.com/trapd00r/LS_COLORS.git

Then add the following line at the end of the .bashrc file in your home directory:

source "$HOME/LS_COLORS/lscolors.sh"

Save the changes and reload the configuration file with:

source .bashrc