Commit c8380b12 authored by David Hoese

Add updated and simplified version of the scripts and README

parent 3a3bc10b
# jupyter_lab_slurm
Scripts and tools for working with jupyter lab and dask on the iris cluster
using slurm. The included scripts and instructions are a simplified version of
the HPC setup guide from the Pangeo project:
http://pangeo.io/setup_guides/hpc.html
## To install
If you have SSH keys set up with gitlab:
```bash
ssh -A iris
git clone git@gitlab.ssec.wisc.edu:davidh/jupyter_lab_slurm.git
```
Otherwise:
```bash
ssh iris
git clone https://gitlab.ssec.wisc.edu/davidh/jupyter_lab_slurm.git
```
### Create password for jupyter lab session
```bash
/home/davidh/miniconda3/envs/pangeo/bin/jupyter notebook password
```
## Run jupyter lab on iris
After exiting your iris SSH session, so that you are back on your local
machine with a direct connection to iris, run:
eval "$(ssh -A iris jupyter_lab_slurm/start_jupyter_lab.sh)"
This will:
1. Submit a job to slurm to run jupyter lab on a compute node.
2. Print an `ssh` command that forwards the ports needed for communication.
3. Pass the `ssh` command from step 2 to `eval` to run it and start forwarding the ports.
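
For reference, the `ssh` command written by the job (and executed by `eval`)
has the form below, taken from `start_jupyter_lab.sbatch`; `some-node` is a
placeholder for whichever compute node slurm assigns:

```bash
# written by the slurm job to ~/.jupyter_lab_connect.txt and run by eval;
# "some-node" stands in for the hostname of the assigned compute node
ssh -N -L 8888:some-node:8888 -L 8787:some-node:8787 $USER@iris
```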
## Open connection to jupyter lab
Open your browser to `localhost:8888` and use the password set up during
installation.
## Notes
* The "start_jupyter_lab.sbatch" script will use port 8888 for the jupyter lab
server and 8787 for dask clients by default.
* Jupyter lab does not currently allow connecting to unix sockets (sockets
with a pathname). See https://github.com/jupyter/notebook/issues/2503.
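
To check on or stop the jupyter lab job, the standard slurm commands can be
used from an iris login node (these are not part of the included scripts):

```bash
# list your running jobs; the jupyter lab job appears here once it is submitted
squeue -u $USER
# cancel the jupyter lab job when you are done, using the job ID from squeue
scancel <jobid>
```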

Changes to `start_jupyter_lab.sbatch` in this commit:

```diff
@@ -6,7 +6,7 @@
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=1
 #SBATCH --mem-per-cpu=16384
-#SBATCH --output=/odyssey/isis/tmp/%u/job_logs/start_jupyter_lab_job_%A.txt
+#SBATCH --output=/home/%u/start_jupyter_lab_job_%A.txt
 module purge
 oops() {
@@ -14,10 +14,9 @@ oops() {
   exit 1
 }
-export HDF5_USE_FILE_LOCKING="FALSE"
-export PYTROLL_CHUNK_SIZE="2048"
+#export HDF5_USE_FILE_LOCKING="FALSE"
+#export PYTROLL_CHUNK_SIZE="2048"
 CONDA_ENV=/home/davidh/miniconda3
 WORK_DIR=/odyssey/isis/tmp/$USER/job_logs
 source $CONDA_ENV/bin/activate pangeo
 echo "ssh -N -L 8888:`hostname`:8888 -L 8787:`hostname`:8787 $USER@iris" | tee -a ${HOME}/.jupyter_lab_connect.txt
```
Changes to `start_jupyter_lab.sh` in this commit:

```diff
 #!/usr/bin/env bash
 SCRIPT_HOME="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 # exit on any errors
 set -e
 WORK_DIR=/odyssey/isis/tmp/$USER/job_logs
 # Reset file specifying the current hostname of the jupyter lab session
 CONNECT_FILE="${HOME}/.jupyter_lab_connect.txt"
 rm -f $CONNECT_FILE
-sbatch start_jupyter_lab.sbatch
+# Tell slurm to run jupyter lab and write ssh command to CONNECT_FILE
+sbatch --quiet "${SCRIPT_HOME}/start_jupyter_lab.sbatch"
 # Wait for slurm to start the job on a compute node
 while [ ! -f $CONNECT_FILE ]; do
-    echo "Waiting for job to start..."
+    echo "Waiting for job to start..." >&2
     sleep 5
 done
-echo "Connect to Jupyter Lab by SSHing using:"
+# Wait for jupyter lab to completely start
+echo "Job started, waiting 5 seconds for jupyter lab to start..."
+sleep 5
+# Now that we know it has started print out the SSH command the user should
+# use to forward appropriate ports
+echo "Connect to Jupyter Lab by SSHing using:" >&2
 tail -n 1 $CONNECT_FILE
```