Supported Software

C++ and Fortran

Compilers are managed via modules. Always load the same compiler module at run time that you used at compile time, so that the runtime libraries (libstdc++, libgfortran, libgomp) match your binary.

# Load the GCC toolchain (includes gcc, g++, gfortran)
module load gcc/13

# Compile a C++ program with optimization
g++ -O3 -march=native -std=c++17 -fopenmp mycode.cpp -o myprogram

# Compile a Fortran program
gfortran -O3 -march=native mycode.f90 -o myprogram_f

# Disassemble the first few instructions (sanity check that the build succeeded)
objdump -d myprogram | head

💡 On optimization flags.

-O3 -march=native enables aggressive optimization for the specific CPU architecture of the compute node where you compile. This can give significant speedups for numerically intensive calculations, but be aware that the resulting binary may not run on nodes with a different CPU generation. If you need portability across all cluster nodes, use -O2 -march=x86-64 instead.
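What -march=native actually targets depends on the node you compile on. A quick way to check, as a sketch (standard Linux commands, nothing cluster-specific assumed):

```shell
# CPU model of the current node: this is what -march=native optimizes for
grep -m1 'model name' /proc/cpuinfo || true

# Machine architecture (x86_64 on Intel/AMD nodes)
uname -m
```

If the CPU model differs between your compile node and the nodes your jobs land on, prefer the portable -O2 -march=x86-64 flags described above.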

Python

Rather than relying on a system-wide Python installation or a conda-based distribution, we recommend uv — a fast, self-contained Python package and project manager. Every user installs and manages their own Python environment entirely within their home directory, with no module loading required and no administrator intervention.

uv replaces pip, venv, pyenv, and conda in a single tool. It is meaningfully faster than all of them, handles Python version management itself, and produces fully reproducible environments via lockfiles.

One-Time Setup — Install uv

This is a user-space install. Run it once in an salloc session or as a short batch job — it is a small download and requires no compilation.

# Install uv into ~/.local/bin  (no sudo, no modules required)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Reload your shell environment so ~/.local/bin is on your PATH
source $HOME/.local/bin/env

# Verify the installation
uv --version

Add the following to your ~/.bashrc so uv is available in every future session and in your batch jobs:

# ~/.bashrc  — add this block
export PATH="$HOME/.local/bin:$PATH"
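To verify in a fresh shell that the change took effect, a quick POSIX check (a small sketch; the directory matches the installer default above):

```shell
# Succeeds quietly if ~/.local/bin is on PATH, warns otherwise
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "uv directory is on PATH" ;;
  *) echo "WARNING: ~/.local/bin is not on PATH" ;;
esac
```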

Managing Python Versions

uv downloads and manages Python interpreters entirely in user space. You do not need to request a specific Python version from the sysadmin team.

# Install a specific Python version
uv python install 3.12

# List all Python versions uv has installed
uv python list

Creating and Using a Project Environment

The recommended workflow is one virtual environment per project, created with uv and stored under your project directory or in /home.

# Navigate to your project
cd /home/your_username/qcd_analysis

# Create a virtual environment using a specific Python version
uv venv --python 3.12 .venv

# Activate the environment
source .venv/bin/activate

# Install packages — uv resolves and installs significantly faster than pip
uv pip install numpy scipy matplotlib h5py iminuit

# Or, if your project has a pyproject.toml, simply:
uv sync

# Deactivate when done interactively
deactivate

Using Your Environment in a Batch Job

Because uv installs everything in user space, no module load is needed. You only need to activate the environment and ensure ~/.local/bin is on your PATH.

#!/bin/bash
#SBATCH --job-name=pdf_analysis
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00
#SBATCH --output=logs/python_%j.out
#SBATCH --error=logs/python_%j.err

# uv lives in user space — no module load required
export PATH="$HOME/.local/bin:$PATH"

# Activate the project environment
source /home/your_username/qcd_analysis/.venv/bin/activate

# Confirm the interpreter in use (useful for debugging)
echo "Python: $(which python) ($(python --version))"

# Run your analysis
python compute_distributions.py \
    --config configs/run_nlo.yaml \
    --output /scratch/your_username/pdf_results/

⚠️ Installing packages with compiled extensions requires a compute node.

Most common scientific packages (numpy, scipy, matplotlib, h5py) ship pre-built wheels and install in seconds via uv pip install, with no compilation at all. However, if a package has no wheel for your platform and must be compiled from source (lhapdf is a common example in this field), that installation step belongs on a compute node: use an salloc session. When in doubt, start the install on the login node and cancel it (Ctrl-C) as soon as you see compiler activity.
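Before starting a source build, it can help to confirm that a compiler is visible in your session at all. A small sketch (load a toolchain with module load gcc/13 first if needed):

```shell
# Check whether a C/C++ toolchain is on PATH before a source build
if command -v gcc >/dev/null 2>&1; then
    echo "compiler available: gcc $(gcc -dumpversion)"
else
    echo "no compiler on PATH (try: module load gcc/13)"
fi
```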

Reproducibility — Locking Your Environment

For research code that must be re-runnable months or years later — or shared with collaborators — pin your exact dependency versions with a lockfile:

# Export a fully pinned, platform-specific lockfile
uv pip freeze > requirements.lock

# Recreate the identical environment from the lockfile on any node
uv pip install -r requirements.lock

For more structured projects, uv also supports pyproject.toml with automatic lockfile management via uv lock and uv sync — the recommended approach for any code that will be published or shared.
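As a sketch of that structured layout (the project name and version pins here are illustrative, not prescribed):

```toml
# pyproject.toml (minimal sketch; adjust name and dependencies to your project)
[project]
name = "qcd-analysis"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "numpy>=1.26",
    "scipy>=1.11",
    "h5py>=3.10",
]
```

With a file like this in place, uv lock writes a uv.lock file and uv sync recreates the environment exactly on any node.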

Mathematica

Mathematica is available for both interactive and batch use.

#!/bin/bash
#SBATCH --job-name=feynman_integrals
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00
#SBATCH --output=logs/mathematica_%j.out

module purge

# Run a Mathematica script non-interactively
math -noprompt -script /home/your_username/calc/integral_evaluation.wl

# Alternatively, using wolframscript:
wolframscript -file /home/your_username/calc/integral_evaluation.wl

Your .wl script should write results to a file explicitly:

(* integral_evaluation.wl *)
Print["Starting calculation at: ", DateString[]];

result = NIntegrate[
  Exp[-x^2] BesselJ[0, x],
  {x, 0, Infinity},
  PrecisionGoal -> 12,
  Method -> "GaussKronrodRule"
];

Export["/home/your_username/results/integral_result.csv",
  {{"result", result}}, "CSV"];

Print["Done. Result = ", result];

Interactive Mathematica (on a compute node)

# Request an interactive shell on a compute node
srun --ntasks=1 --cpus-per-task=4 --mem=16G --time=01:00:00 --pty bash

# Launch the Mathematica terminal interface
math

# when done
exit