User Submitted Documentation

This is user-submitted documentation. It will eventually contain tips and tricks from users.

VPN alternatives

While the official way to connect to Rāpoi is using the Cisco AnyConnect client, there are some alternatives if that is causing problems.

ECS ssh bastions

ECS users can come in via the ECS ssh bastions greta-pt.ecs.vuw.ac.nz or barretts.ecs.vuw.ac.nz. Note that this will only work for users with valid ECS credentials.

The best way to do this is with a ProxyJump, either directly in the ssh command or via your ssh_config file.

Directly

ssh -J <ecs user>@barretts.ecs.vuw.ac.nz <raapoi user>@raapoi.vuw.ac.nz

Or via your ssh_config file:

Host raapoi
    HostName raapoi.vuw.ac.nz
    ProxyJump <ECS user>@greta-pt.ecs.vuw.ac.nz
    User <Raapoi user>

OpenConnect

OpenConnect is an open-source implementation of the Cisco AnyConnect client. Some users find this easier to use than the official client.

As all VPN connections currently require 2FA, you need to use the openconnect-sso Python package. This has only been tested on Linux; it should also be possible to make it work in a Windows WSL2 terminal and on macOS, but that may require modifications.

Step 0: make sure you've already set up the Microsoft Authenticator app on your phone or set up SMS as your 2FA method.

sudo apt install pipx
pipx install "openconnect-sso[full]"  # can just use "openconnect-sso" if Qt 5.x already installed
pipx ensurepath

The last line is required because of the 'Note' that displays when you run pipx install "openconnect-sso[full]". If you already have Qt 5.x installed, you can skip the [full] extra and instead run pipx install openconnect-sso.

Now you can connect using CLI:

openconnect-sso --server vpn.vuw.ac.nz --user <firstname.lastname>@vuw.ac.nz

It remembers the last used server so after the first time you can reconnect with just

openconnect-sso

You then just leave a tab of your command line open on this, and connect to Rāpoi in a different tab.

If this doesn't work for you, often due to the difficulty of resolving the Qt 5 dependencies on Apple silicon macOS, you could try Alternative Openconnect-sso.


ParaView (via OpenFOAM)

Friendly Reminder

HPC is built to serve powerful computational work, which generally happens before the visualisation stage; it is not really meant to fulfil visualisation needs, as discussed in the FAQ section: Visualisation. Please proceed only if doing your visualisation/plotting locally is not an option. Kindly note that these instructions are meant for ParaView usage alongside OpenFOAM.

To connect ParaView to Rāpoi, you will need 3 terminal windows: two to extend the virtual handshake from Rāpoi and from your local computer, and one to open ParaView.

Terminal window 1:

  1. Log in to Rāpoi.

  2. Initiate an interactive job to avoid putting strain on the login node (request the desired CPUs and memory).

  3. Run command hostname -I | awk '{print $1}' to identify the IP address of the compute node.

  4. Run command ./pvserver
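A note on step 3: hostname -I can print several space-separated addresses, and the awk filter keeps only the first one. For example (with made-up addresses):

```shell
# hostname -I may print several space-separated addresses;
# awk '{print $1}' keeps only the first one
echo "130.195.1.23 172.17.0.1" | awk '{print $1}'
# prints 130.195.1.23
```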

Terminal window 2:

  1. Run command ssh -N -L 11111:<IP address>:11111 <username>@raapoi.vuw.ac.nz (make sure to enter the IP address of the compute node you identified earlier).

Terminal window 3:

  1. Source OpenFOAM on your local computer (make sure the OpenFOAM version you source matches the version installed on Rāpoi)

  2. Run command paraFoam to open ParaView.

  3. On ParaView, select File -> Connect...

  4. Highlight the desired server, and click Connect


Gview

Firstly, see the FAQ entry on visualisation (i.e. consider if this is something you really need to do remotely). If doing your visualisation/plotting locally is not an option, proceed.

Begin by logging into Rāpoi with the -X flag, which lets graphical applications display on your local machine.

ssh -X <username>@raapoi.vuw.ac.nz

Then, from the login node, get an interactive session by passing the --x11 flag.

<username>@raapoi-login:~$ srun --x11 --pty bash

Once a compute node has been allocated, load the Gaussview module

<username>@amd01n01:~$ module load Gaussview/6.1

Launch gview and it should open graphical windows on your local device.

<username>@amd01n01:~$ gview

Ray

Ray is a powerful distributed computing framework that can be used to accelerate High Performance Computing (HPC) workloads.

For this exercise, you'll need two terminal windows and a browser.

# Terminal 1
# Start by requesting an interactive session
srun --cpus-per-task=4 --mem=8G --time=0-00:59:00 --pty bash

# Begin with a clear environment
module purge

# Create a python environment
module load gompi/2022a Python/3.10.4-bare
python -m venv env
source env/bin/activate

# Install Ray
pip install 'ray[default]'

# Verify installation
python -c "import ray;print(ray.__version__)";

# Start the Ray head node by defining the port, object_store_memory (OSM),
# ncpus and dashboard host; OSM should be at most 80% of the memory
# requested in the srun command. Here we use just 20%, i.e. 1.6G of 8G
PORT="6379"
OSM="1600000000"
NCPUS="4"
DBHOST="0.0.0.0"

ray start --head --port $PORT --object-store-memory $OSM --num-cpus $NCPUS --dashboard-host=$DBHOST

# A node name should be printed and
# Ray runtime started 
# with address to the dashboard
# In my case it was: 130.195.XX.XX:8265
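As a sanity check on the numbers above: object_store_memory is given in bytes, so 20% of the 8G requested from srun is 1.6G, i.e. 1600000000. A small shell sketch of the arithmetic (MEM_GB is an assumed helper variable matching your --mem request):

```shell
# object store memory in bytes, as a fraction of the requested memory
MEM_GB=8      # matches --mem=8G in the srun request above
PCT=20        # this example uses 20%; up to 80% is suggested
OSM=$(( MEM_GB * 1000000000 * PCT / 100 ))
echo "$OSM"   # 1600000000
```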

# Terminal 2
# Leave the first terminal running and
# open a new terminal to start a tunnel
# to that IP and port
ssh -L 8265:130.195.XX.XX:8265 USERNAME@raapoi.vuw.ac.nz

# You should now open a browser to view your dashboard at
http://127.0.0.1:8265

# To submit a job
RAY_ADDRESS='http://130.195.XX.XX:8265' ray job submit --working-dir . -- python my_script.py

# To stop Ray
# Go back to terminal 1 and type
ray stop

VSCode

Tip

Windows users are recommended to use Git Bash for the following instructions to work.

The instructions below should let users run their VSCode session on a compute node.

Pre-requisite installations:

Step 1. On your local machine, create an ssh key pair (existing ssh keys can also be used - no need to create new ones)

user@local:~$ ssh-keygen

Follow the prompts in the terminal to generate the ssh key pair, and note the directory where the keys are saved.

Step 2. Send the public key to Rāpoi

user@local:~$ ssh-copy-id -i ~/path/to/public/key RAAPOI_USERNAME@raapoi.vuw.ac.nz

The path ~/path/to/public/key should be the same as the one displayed when generating the ssh key - ~/.ssh/id_rsa.pub in some cases.

Step 3. Test the new keys

user@local:~$ ssh -i ~/path/to/private/key RAAPOI_USERNAME@raapoi.vuw.ac.nz

Note that ssh -i takes the private key, whereas ssh-copy-id -i accepts the public key.

Step 4. On your local machine, update ssh config file

Create ~/.ssh/config file if it does not exist. Add hostname details to it:

Host VSCode_Compute
    User <YOUR_RAAPOI_USERNAME>
    HostName amd01n01
    ProxyJump raapoi_login

Host raapoi_login
    HostName raapoi.vuw.ac.nz
    User <YOUR_RAAPOI_USERNAME>

Host *
    ForwardAgent yes
    ForwardX11 yes
    ForwardX11Trusted yes
    # Use your own private key path here
    IdentityFile ~/.ssh/id_rsa
    AddKeysToAgent yes
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Step 5. On your local machine, open a terminal window and login to Rāpoi normally

user@local:~$ ssh raapoi_login

Once logged in, allocate resources for the VSCode session

RAAPOI_USERNAME@raapoi-login:~$ srun -t0-01:00:00 -wamd01n01 --mem=4G --pty bash

Tip

Extend the time to a maximum of 5 hours with -t0-04:59:59

Step 6. Connect VSCode session

Open a VSCode window, click Open a Remote Window in the bottom-left corner, then choose Connect to Host and select VSCode_Compute as the host.

Once a connection is established, your VSCode session should be running on a compute node now.

Tip

To speed up VSCode, there are steps in the official VSCode docs; below is just a part of them:

Once connected, update VSCode's /nfs/home/$USER/.vscode-server/data/Machine/settings.json and add the following lines:

{
    "files.watcherExclude": {
        "**": true
    },
    "files.exclude": {
        "**/.*": true
    },
    "search.followSymlinks": false,
    "search.exclude": {
        "**": true
    },
    "terminal.integrated.inheritEnv": false
}

Step 7. To close the VSCode session

Go to File > Close Remote Connection


RStudio-Server

Pre-requisites:

This tutorial assumes an interactive session requested via: srun --mem=64G --cpus-per-task=8 -wamd01n04 --pty bash

These instructions for running RStudio-Server on the cluster are adapted from [1] and [2].

Step 1. Create appropriate directories and pull singularity image to run RStudio-Server:

[user@amd01n04 ~]$ module purge; module load GCC/10.2.0 OpenMPI/4.0.5 Singularity/3.10.2
[user@amd01n04 ~]$ mkdir -p "$HOME/singularity-images"
[user@amd01n04 ~]$ singularity pull --dir="$HOME/singularity-images" --name=rstudio-server.sif docker://rocker/rstudio

Step 2. Create bind-mounts to later use inside the container.

[user@amd01n04 ~]$ workdir=$HOME/rstudio-server
[user@amd01n04 ~]$ mkdir -p "${workdir}"/{run,tmp,var/lib/rstudio-server}
[user@amd01n04 ~]$ chmod 700 "${workdir}"
[user@amd01n04 ~]$ cat > "${workdir}"/database.conf <<END
provider=sqlite
directory=/var/lib/rstudio-server
END
[user@amd01n04 ~]$ cat > "${workdir}"/rsession.sh <<END
#! /bin/sh
export OMP_NUM_THREADS=${SLURM_JOB_CPUS_PER_NODE}
export R_LIBS_USER=~/R/%p-library/%v-rocker-rstudio
exec /usr/lib/rstudio-server/bin/rsession "\${@}"
END
[user@amd01n04 ~]$ chmod +x "${workdir}"/rsession.sh
[user@amd01n04 ~]$ export SINGULARITY_BIND="${workdir}/run:/run,${workdir}/tmp:/tmp,${workdir}/database.conf:/etc/rstudio/database.conf,${workdir}/rsession.sh:/etc/rstudio/rsession.sh,${workdir}/var/lib/rstudio-server:/var/lib/rstudio-server"
[user@amd01n04 ~]$ export SINGULARITYENV_RSTUDIO_SESSION_TIMEOUT=0
[user@amd01n04 ~]$ export SINGULARITYENV_USER=$(id -un)
[user@amd01n04 ~]$ export SINGULARITYENV_PASSWORD=$(openssl rand -base64 15)
[user@amd01n04 ~]$ export PORT=$(/usr/bin/python3 -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
[user@amd01n04 ~]$ export IP=$(hostname -i)
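An aside on the PORT line above: binding a socket to port 0 asks the OS for any unused ephemeral port, which is then reused for rserver and the ssh tunnel. The one-liner, spelled out in plain Python:

```python
import socket

# Binding to port 0 makes the OS pick an unused ephemeral port for us
s = socket.socket()
s.bind(("", 0))
port = s.getsockname()[1]
s.close()
print(port)
```

There is a tiny race condition: the port is released before rserver binds it, so another process could grab it in between (the batch script version below notes the same caveat).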

Step 3. Run this to get instructions to connect to the RStudio-Server later.

[user@amd01n04 ~]$ cat 1>&2 <<END

A new instance of the RStudio Server was just launched. To access it,

1. SSH tunnel from your workstation using the following command from a terminal on your local workstation:

   IP="${IP}"; PORT="${PORT}"; ssh -L ${PORT}:${IP}:${PORT} ${SINGULARITYENV_USER}@raapoi.vuw.ac.nz

   and point your local web browser to <http://localhost:${PORT}>

2. Log in to RStudio Server using the following credentials:

   user: ${SINGULARITYENV_USER}
   password: ${SINGULARITYENV_PASSWORD}

When done, make sure to terminate everything by:

1. Exit the RStudio Session ("power" button in the top right corner of the RStudio window)

2. Cancel the job by issuing the following command on the login node:

      scancel -f ${SLURM_JOB_ID}

END

Step 4. Finally, start RStudio-Server

[user@amd01n04 ~]$ singularity exec --cleanenv --scratch /run,/tmp,/var/lib/rstudio-server --workdir ${workdir} ${HOME}/singularity-images/rstudio-server.sif rserver --www-port ${PORT} --auth-none=0 --auth-pam-helper-path=pam-helper --auth-stay-signed-in-days=30 --auth-timeout-minutes=0 --server-user=$(whoami) --rsession-path=/etc/rstudio/rsession.sh

Step 5. Inside a new terminal on your personal device, create a tunnel following the instructions of the above command.

[user@personal-device:~] IP="${IP}"; PORT="${PORT}"; ssh -L ${PORT}:${IP}:${PORT} ${SINGULARITYENV_USER}@raapoi.vuw.ac.nz

Note

Remember to note down the output of the environment variables above.

Once the tunnel is set up successfully, open your browser and go to the following address, using the port from the above output:

http://localhost:${PORT}/

To run this inside a batch script, submit the following via sbatch.

submit.sl

#! /bin/bash
#SBATCH --time=00-01:00:00
#SBATCH --ntasks=2
#SBATCH --mem=8G
#SBATCH --output=rstudio-server.%j
#SBATCH --error=rstudio-server.%j.err
#SBATCH --export=NONE
# customize --output path as appropriate (to a directory readable only by the user!)

# Load Singularity module
module purge
module load GCC/10.2.0 OpenMPI/4.0.5 Singularity/3.10.2

# Create temporary directory to be populated with directories to bind-mount in the container
# where writable file systems are necessary. Adjust path as appropriate for your computing environment.
workdir=$HOME/rstudio-server

mkdir -p "${workdir}"/{run,tmp,var/lib/rstudio-server}
chmod 700 "${workdir}"
cat > "${workdir}"/database.conf <<END
provider=sqlite
directory=/var/lib/rstudio-server
END

# Set OMP_NUM_THREADS to prevent OpenBLAS (and any other OpenMP-enhanced
# libraries used by R) from spawning more threads than the number of processors
# allocated to the job.
#
# Set R_LIBS_USER to a path specific to Rocker/RStudio to avoid conflicts with
# personal libraries from any R installation in the host environment

cat > "${workdir}"/rsession.sh <<END
#! /bin/sh
export OMP_NUM_THREADS=${SLURM_JOB_CPUS_PER_NODE}
export R_LIBS_USER=~/R/%p-library/%v-rocker-rstudio
exec /usr/lib/rstudio-server/bin/rsession "\${@}"
END

chmod +x "${workdir}"/rsession.sh

export SINGULARITY_BIND="${workdir}/run:/run,${workdir}/tmp:/tmp,${workdir}/database.conf:/etc/rstudio/database.conf,${workdir}/rsession.sh:/etc/rstudio/rsession.sh,${workdir}/var/lib/rstudio-server:/var/lib/rstudio-server"

# Do not suspend idle sessions.
# Alternative to setting session-timeout-minutes=0 in /etc/rstudio/rsession.conf
# https://github.com/rstudio/rstudio/blob/v1.4.1106/src/cpp/server/ServerSessionManager.cpp#L126
export SINGULARITYENV_RSTUDIO_SESSION_TIMEOUT=0

export SINGULARITYENV_USER=$(id -un)
export SINGULARITYENV_PASSWORD=$(openssl rand -base64 15)
# Get unused socket per https://unix.stackexchange.com/a/132524
# Tiny race condition between the python & singularity commands
export PORT=$(/usr/bin/python3 -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
export IP=$(hostname -i)
cat 1>&2 <<END

A new instance of the RStudio Server was just launched. To access it,

1. SSH tunnel from your workstation using the following command from a terminal on your local workstation:

   IP="${IP}"; PORT="${PORT}"; ssh -L ${PORT}:${IP}:${PORT} ${SINGULARITYENV_USER}@raapoi.vuw.ac.nz

   and point your local web browser to <http://localhost:${PORT}>

2. Log in to RStudio Server using the following credentials:

   user: ${SINGULARITYENV_USER}
   password: ${SINGULARITYENV_PASSWORD}

When done, make sure to terminate everything by:

1. Exit the RStudio Session ("power" button in the top right corner of the RStudio window)

2. Cancel the job by issuing the following command on the login node:

      scancel -f ${SLURM_JOB_ID}

END

singularity exec "$HOME/singularity-images/rstudio-server.sif" \
    rserver --www-port "$PORT" \
            --auth-none=0 \
            --auth-pam-helper-path=pam-helper \
            --auth-stay-signed-in-days=30 \
            --auth-timeout-minutes=0 \
            --rsession-path=/etc/rstudio/rsession.sh \
            --server-user="$USER"
printf 'RStudio Server exited\n' 1>&2

Step 2. Once your job starts, note the JOBID and read the output file for instructions to connect to the running RStudio-Server.

[user@raapoi-login:~]$ sbatch submit.sl; vuw-myjobs
Submitted batch job <job_id> 
[user@raapoi-login:~]$ cat rstudio-server.<job_id>  # substitute the job id printed by sbatch

For any help, please contact one of our support team members: Support


MATLAB GUI via X-Forwarding

Friendly Reminder

HPC is built to serve powerful computational work, largely via a command-line interface; kindly read our FAQ section: Visualisation. Please proceed only if you think non-GUI MATLAB is not an option. Kindly make sure your personal device has an X server installed and running.

Installing and running an X Server on Windows

This tutorial explains how to install an X server on Windows. We will use VcXsrv, a free X server, for this purpose.

Steps:

  • Download the installer from here: vcxsrv

  • Run the installer.

    • Select Full under Installation Options and click Next
    • Select a target folder

To Run the Server:

  • Open the XLaunch program (most likely on your desktop)

  • Select Multiple Windows and click Next

  • Select Start no client and click Next

  • On the Extra settings window, click Next

  • On the Finish configuration page click Finish

You have now started your X Server.

Set up your console

In the Git Bash or Windows command line (cmd) terminal, before you connect to an ssh server, you have to set the display to use. Under normal circumstances, VcXsrv will start the X server as display 0.0. If for some reason the remote graphical user interface does not start later on, you can check the actual display by right-clicking the tray icon of the X server and selecting Show log. Search for DISPLAY in the log file, and you will find something like:

DISPLAY=127.0.0.1:0.0

In your terminal enter (this is cmd syntax; in Git Bash use export DISPLAY=127.0.0.1:0.0 instead):

set DISPLAY=127.0.0.1:0.0

Now you are set up to connect to the server of your choice via:

ssh -Y <username>@raapoi.vuw.ac.nz

Note that on Windows you will likely need the -Y flag for X server connections, since -X does not normally work.

On Rāpoi

Once logged in, allocate resources for an interactive session

RAAPOI_USERNAME@raapoi-login:~$ srun --x11 -t0-01:00:00 -wamd01n01 --ntasks=8 --mem=32G --pty bash

Tip

Extend the time to a maximum of 5 hours with -t0-04:59:59

After your job starts and the prompt changes, run matlab and you'll see the MATLAB GUI window on your personal device.

RAAPOI_USERNAME@amd01n01:~$ module use /home/software/tools/eb_modulefiles/all/Core
RAAPOI_USERNAME@amd01n01:~$ module load MATLAB/2024a
RAAPOI_USERNAME@amd01n01:~$ module load fosscuda/2020b
RAAPOI_USERNAME@amd01n01:~$ matlab -softwareopengl

For any help, please contact one of our support team members: Support