The BIL Analysis Ecosystem

How to connect to the BIL Analysis Ecosystem

Connect via Terminal

SSH

You can connect to BIL Analysis Ecosystem resources with an SSH client from your local machine using your PSC credentials.

You must install SSH client software on your local machine; free SSH clients are available for Mac, Windows, and Unix. Popular GUI clients include PuTTY for Windows and Cyberduck for Macs. A command-line version of ssh is installed on Macs by default, and you can use the Terminal application if you prefer. You can also check with your university to see if there is an SSH client that they recommend.

To use ssh to connect to the BIL Analysis Ecosystem:

ssh userid@login.brainimagelibrary.org

Everyone who requests a BIL account has access to the login node using their BIL Username and Password. The login node is not intended for heavy computation or visualization, but rather to provide convenient shell access to BIL data and enable interactive and batch use of the BIL Analysis Ecosystem. The BIL computational cluster provides a suitable resource for both computation and visualization.

Workshop VM

For workshop purposes, open a terminal and run the following command, entering your PSC credentials when prompted.

ssh userid@workshop.brainimagelibrary.org

For example,

ssh icaoberg@workshop.brainimagelibrary.org
icaoberg@workshop.brainimagelibrary.org's password:

Last login: Mon Jan 24 10:46:38 2022 from pool-71-162-2-190.pitbpa.fios.verizon.net
********************************* W A R N I N G ********************************
You have connected to workshop.brainimagelibrary.org

This computing resource is the property of the Pittsburgh Supercomputing Center.
It is for authorized use only.  By using this system, all users acknowledge
notice of, and agree to comply with, PSC polices including the Resource Use
Policy, available at http://www.psc.edu/index.php/policies. Unauthorized or
improper use of this system may result in administrative disciplinary action,
civil charges/criminal penalties, and/or other sanctions as set forth in PSC
policies. By continuing to use this system you indicate your awareness of and
consent to these terms and conditions of use.

LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning



Please contact support@psc.edu with any comments/concerns.

********************************* W A R N I N G ********************************
            

If you can see the message above when you connect, then you should be ready to start using the resources.

Remote Desktop: X2Go

You can access the BIL Analysis Ecosystem using a Virtual Machine system through the remote-desktop tool X2Go. This gives you remote access to the system via a graphical user interface.

Using X2Go

  • Download and install the appropriate X2Go client (Windows, Linux, or Mac) from here.
  • Start the X2Go client.

  • Under the Session menu, select New Session
  • Enter your VM name for the hostname
  • Enter your PSC login name (e.g., ropelews or icaoberg)
  • Under session type select MATE

  • On the right side of the screen you should see a box called New Session.

  • Click on the words New Session with the left mouse button.
  • Log in using your username and password.
  • A new window will appear (usually within 10 seconds). If you click the left mouse button in this new window, a submenu will appear. Select xterm to start a terminal.

  • From the xterm window, start an application with graphical output, such as vaa3D or Fiji. The application will appear in the window.
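
For example, to start Fiji from the xterm window:

module load fiji
fiji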

Interactive Apps

OpenOnDemand

The Open OnDemand (OOD) interface allows you to connect to the BIL Analysis Ecosystem through a web browser. You can submit and manage jobs, move or create files, and use apps such as Jupyter Notebook.

To connect to the BIL Analysis Ecosystem via Open OnDemand

  1. In your web browser, go to https://ondemand.bil.psc.edu

  2. Enter your PSC username and password

  3. From the OOD dashboard, pictured below, you will see the top menu bar where you can manage files and jobs, or select apps to use.

Manage files

From the OOD dashboard, you can create, edit, or move files. Select the Files option from the top menu bar, then Home, which navigates to your home directory under /bil/users/. From here, you will have the following options:

  • Go To…: Navigate to a specified folder
  • Open in Terminal: Opens the active folder in a terminal session (new tab)
  • New File: Creates a new file in the active folder
  • New Dir: Creates a new folder in the active folder
  • Upload: Select files from your local machine to upload to the active folder
  • Show Dotfiles: Reveals hidden files
  • Show Owner/Mode: Shows ownership and permission information
  • View: Shows file contents inside the current tab
  • Edit: Opens a file editor in a new tab
  • Rename/Move: Gives a file a new path and/or name
  • Download: Downloads the file or folder to your local machine
  • Copy: Copies selected files to the clipboard
  • Paste: Pastes files from the clipboard
  • (Un)Select All: Select or unselect all files/folders
  • Delete: Deletes selected files/folders

Manage Jobs

From the OOD dashboard, you can create, submit, and edit jobs. Select the Jobs option from the top menu bar, then Job Composer, which will take you to the Job Composer dashboard. From here, you will have the following options:

  • Create a new job script
  • Edit job scripts
  • Submit a job

When you first visit this page, you'll go through a helpful tutorial. The buttons do the following:

  • New Job: Create a new job
    • From Default Template: Uses system defaults to create a job that you can then edit (a minimal example script appears after this list).
    • From Specified Path: Creates a job from a specific job script.
    • From Selected Job: Creates a new job that is a copy of the selected job.
  • Edit Files: Opens the project folder in a new File Explorer tab and allows you to edit the files within it.
  • Job Options: Allows you to edit the Name, Cluster, Job Script, and Account of a job.
  • Open Terminal: Opens a terminal session in a new tab at the project folder.
  • Submit: Submits the selected job.
  • Stop: Stops the selected job if it was already submitted.
  • Delete: Delete the selected job.
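
For reference, here is a minimal sketch of a job script of the kind you could create from the default template and then edit; the partition is the real compute partition, while the time limit and memory values are placeholders to adjust for your workload:

#!/bin/bash
#SBATCH -p compute        # the only partition on BIL
#SBATCH -t 00:10:00       # run time limit (hh:mm:ss)
#SBATCH --mem=1Gb         # requested memory

echo "Hello from $(hostname)"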

Jupyter Notebook

From the OOD dashboard you can launch a Jupyter Notebook server on one or more nodes.

  • Select the Apps option from the top menu bar, then Jupyter Notebook, which will take you to the page to set up your Jupyter Notebook session.
  • From here, you will specify the time limit and the number of nodes to use. Use the Extra Slurm Args field to specify the number of cores or GPUs you want, and the Extra Jupyter Args field to pass arguments to your Jupyter notebook (see the example after this list).
  • Select Launch to begin your Jupyter Notebook session.
  • You will be taken to the dashboard for your interactive sessions. It may take some time for your resources to become available. When your session starts, select the blue "Connect to Jupyter" button.
  • A new window running JupyterHub will open where you can begin your Jupyter Notebook session.
  • For instructions on using Jupyter Notebook, see the official documentation.
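
For example, hypothetical values for the Extra Slurm Args field could look like

--ntasks-per-node=8 --gres=gpu:1

which would request eight tasks per node and one GPU.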

Software Modules

LMOD

Lmod is a Lua-based module system that easily handles the hierarchical MODULEPATH problem. Environment modules provide a convenient way to dynamically change a user's environment through modulefiles.

In a nutshell, we use Lmod to manage the software that can be used on the VM as well as on the large-memory nodes. Software available as modules should be accessible on both resources.

This document only lists a few commands. For complete documentation click here.

:bulb: If you want us to install a piece of software on our resources, then please remember to submit software installation requests to bil-support@psc.edu.

Available software modules

To list all available software modules use the command

module avail

For example

module avail

-------------- /bil/modulefiles ---------------
anaconda/3.2019.7
anaconda3/4.10.1
aspera/3.9.6(default)
bcftools/1.9(default)
bioformats/6.0.1
bioformats/6.1.1
bioformats/6.4.0
bioformats/6.5.1
bioformats/6.6.1(default)
bioformats2raw/0.2.4(default)
c-blosc/1.19.0(default)
dust/0.5.4
ffmpeg/20210611

The command above will list all available software.

:envelope: Cannot find the software you need to explore the collections? Please send a request to bil-support@psc.edu.

Software versions

To list information about specific modules use the command

module avail <package-name>

For example, you can view the two available versions of MATLAB

module avail matlab

-------------- /bil/modulefiles ---------------
matlab/2019a matlab/2021a

Listing useful information

To list useful info about a module use the command

module help <package-name>

For example,

module help matlab

----------- Module Specific Help for 'matlab/2021a' ---------------

Matlab 2021a
------------

To enable, first load the following required modules (via module load command):

	module load matlab/2021a

For a full list of binaries included in this module, type

	module whatis matlab/2021a

Loading modules

To load a module use the command

module load <package-name>

For example,

module load matlab/2021a

Running the command above makes the matlab binary available in the current session:

which matlab

/bil/packages/matlab/R2021a/bin/matlab

In this example, you can simply type matlab to start MATLAB

matlab -nodesktop
MATLAB is selecting SOFTWARE OPENGL rendering.

                < M A T L A B (R) >
      Copyright 1984-2021 The MathWorks, Inc.
 R2021a Update 5 (9.10.0.1739362) 64-bit (glnxa64)
                  August 9, 2021


To get started, type doc.
For product information, visit www.mathworks.com.

>>

Loading a specific version of a module

There are times when there are multiple versions of the same software available.

For example,

module avail bioformats

---------------------- /bil/modulefiles ----------------------
bioformats/6.0.1              bioformats/6.7.0
bioformats/6.1.1              bioformats/6.8.0(default)
bioformats/6.4.0              bioformats2raw/0.2.4
bioformats/6.5.1              bioformats2raw/0.3.0(default)
bioformats/6.6.1

If you wish to load a specific version of a package use the command

module load <package>/<version>

For example,

module load bioformats/6.4.0

Listing loaded modules

To list the loaded modules use the command

module list

For example,

module list

Currently Loaded Modulefiles:
  1) matlab/2021a

Unload module

To unload a module use the command

module unload <package-name>

For example,

module unload matlab/2021a
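
To unload all loaded modules at once, run

module purge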

Using modules in scripts

When building scripts that use more than one tool available as a module, simply add a module load command for each tool

#!/bin/bash
module load matlab/2021a
module load bioformats

Job management

SLURM

Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.

This document only lists a few commands. For complete documentation click here.

sinfo

sinfo - View information about Slurm nodes and partitions.


SYNOPSIS
sinfo [OPTIONS...]

For example

sinfo -p compute

squeue

squeue - view information about jobs located in the Slurm scheduling queue.


SYNOPSIS
squeue [OPTIONS...]

For example

squeue -u icaoberg

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
14243   compute script.s icaoberg  R      15:34      1 l001

scontrol

scontrol - view or modify Slurm configuration and state.


SYNOPSIS
scontrol [OPTIONS...] [COMMAND...]

As a regular user you can view information about the nodes and jobs but won't be able to modify them.

Check available memory

To view information about the nodes, including information about memory, use the command

scontrol show nodes

To view information about a specific node, add the node name to the command. For example

scontrol show nodes l002

NodeName=l002 Arch=x86_64 CoresPerSocket=20
   CPUAlloc=0 CPUTot=80 CPULoad=0.03
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=(null)
   NodeAddr=l002 NodeHostName=l002 Version=18.08
   OS=Linux 4.18.0-305.7.1.el8_4.x86_64 #1 SMP Tue Jun 29 21:55:12 UTC 2021
   RealMemory=3000000 AllocMem=0 FreeMem=3090695 Sockets=4 Boards=1
   State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=compute
   BootTime=2021-07-16T15:47:48 SlurmdStartTime=2021-08-03T20:58:25
   CfgTRES=cpu=80,mem=3000000M,billing=80
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

Because there is only one partition, you can run sinfo or sinfo -p compute to gather basic information about it.

For example

sinfo -p compute

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
compute*     up   infinite      8   idle l[001-008]

sbatch

sbatch - Submit a batch script to Slurm.


SYNOPSIS
sbatch [OPTIONS(0)...] [ : [OPTIONS(N)...]] script(0) [args(0)...]
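
For example, to submit a batch script to the compute partition with a memory request (the same pattern used in Exercise 2 below):

sbatch -p compute --mem=64Gb script.sh

Slurm replies with the job ID, e.g. Submitted batch job 14243, which you can then monitor with squeue.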

interact

The interact command is an in-house script for starting interactive sessions.

interact -h

Usage:  interact [OPTIONS]
  -d                    Turn on debugging information
    --debug
  --noconfig            Do not process config files
  -gpu                  Allocate 1 gpu in the GPU-shared partition
    --gpu
  --gres=<list>         Specifies a comma delimited list of generic consumable
                          resources. e.g.:    --gres=gpu:1
  --mem=<MB>            Real memory required per node in MegaBytes
  -N Nodes              Number of nodes
    --nodes
  -n NTasks             Number of tasks (spread over all Nodes)
  --ntasks-per-node=<ntasks>    Number of tasks, 1 per core per node.
  -p Partition          Partition/queue you would like to run on
    --partition
  -R Reservation        Reservation you would like to run on
    --reservation
  -t Time               Set a limit on the total run time. Format include
                          mins, mins:secs, hours:mins:secs.  e.g. 1:30:00
    --time
  -h                    Print this help message
    -?
  • At the moment, there is only one partition, named compute, so running

interact

or

interact -p compute

is equivalent.
  • To specify the amount of memory, use the option --mem=<MB>. For example, interact --mem=1Tb.
  • This is a shared partition. If you wish to get all the resources on a compute node, use the option --nodes. For example, interact -N 1. Since this is a shared resource, please be considerate when using it.

scancel

scancel - Used to signal jobs or job steps that are under the control of Slurm.


SYNOPSIS
scancel [OPTIONS...] [job_id[_array_id][.step_id]] [job_id[_array_id][.step_id]...]

  • To cancel a specific job use the command scancel <job_id>. For example, scancel 00001.
  • To cancel all your running jobs use the command scancel -u <username>. For example, scancel -u icaoberg.

Docker

Docker is not supported and it is not expected to be supported in the near future.

uDocker

If you want to run a program/tool and not a service, then uDocker might be an option for you.

To install uDocker

module load anaconda3/4.11.0
pip install --user udocker

or

module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install udocker

For example,

udocker pull jtduda/python-itk-sitk-ants:0.1.0
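
After pulling an image you can run it with udocker run. A sketch, assuming the image provides a python3 binary (the exact command depends on how the image was built):

udocker run jtduda/python-itk-sitk-ants:0.1.0 python3 --version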

Singularity

Building containers

If you have elevated privileges to run Singularity

Make sure to run your singularity build commands with sudo.

:warning: If you are constantly building containers, then run

singularity cache clean

often to clear the cache.

If you don't have elevated privileges

This applies to all regular users including researchers. If you do not have elevated privileges, you can still build the images remotely.

Follow these steps to do so

  • Create an account on SyLabs.io.
  • Click Access Tokens on the top-right menu

  • Click Create a New Access Token

  • Click Copy token to Clipboard

  • Login to the workshop VM and run the command
singularity remote login
  • Paste the token and press Enter.

Just make sure to use the --remote flag when running singularity build.

Check the rbuild.sh scripts in each repo for working examples.

:warning: If you are constantly building containers remotely, make sure to erase them from your account to avoid running out of space.

To see a list of vetted containers built by PSC, click here.

Example

You can find a Singularity definition file here. To build this image remotely run

git clone https://github.com/pscedu/singularity-cowsay.git
cd singularity-cowsay/3.04
singularity build --remote cowsay.sif Singularity

after getting a token from SyLabs.io.

Other

Installing Miniconda


There is nothing preventing you from installing a Conda distribution in your home directory, though this is not advised.

However, if you need to, you might want to start with a Miniconda distribution

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash ./Miniconda3-latest-Linux-x86_64.sh

and follow the instructions on screen.

:pencil: If you use the default values in the Conda install, it will install into your home directory under /bil/.
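
Once Miniconda is installed, you can create and activate isolated environments. A minimal sketch with a hypothetical environment name:

conda create -y -n myenv python=3.9
conda activate myenv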

Using Jupyter Notebooks

Loading the proper module

Click the black terminal icon in the top left corner of your screen

When the terminal opens, type the following commands to start Jupyter Lab.

module load anaconda3
jupyter lab

Running the commands above will open Jupyter Lab in a browser.

The Anaconda3 installation available on BIL infrastructure includes commonly used data-analytics packages as well as packages commonly used in neuroscience, such as

  • allensdk
  • biopython
  • bokeh
  • Dask
  • napari
  • nilearn
  • neuron
  • pandas
  • pytorch
  • scikit-image
  • scikit-learn
  • starfish
  • Theano

To see a full list of packages, run this command in the terminal

pip list

:bulb: Nothing prevents you from downloading and installing your own Anaconda/Miniconda distro in your home directory. However, user support is limited if you choose to do so.

For more info, click here.

Using Jupyter Notebooks on the large memory nodes

This document explains how to run a Jupyter notebook on a BIL compute node using a browser on the workshop VM.

Follow the steps below to do so.

  1. Login to the workshop VM using x2go.
  2. Load an anaconda module to put the latest version of anaconda and Jupyter in your path.
module load anaconda3
  3. Get a BIL compute node allocated for you by using the interact command. For example,
interact -n 10 --mem=64Gb
A command prompt will appear when your session begins
"Ctrl+d" or "exit" will end your session
  4. Find the hostname of the node you are running on
    You will need the hostname when you are mapping a port on your local machine to the port on BIL compute node. Find the hostname of the node you are on from the prompt, or type the hostname command.
04:37:12 icaoberg@l001 ~ → hostname

l001.pvt.bil.psc.edu
  5. Start a Jupyter notebook and find the port number and token
    Start a Jupyter notebook. From the output of that command, find the port that you are running on and your token. Pay attention to the port number you are given. You will need it to make the connection between the compute node and the workshop VM.

The port number Jupyter uses on the compute node can be different each time a notebook is started. Jupyter will attempt first to use port 8888, but if it is taken - by a different Jupyter user for example - it increases the port number by one and tries again, and repeats this until it finds a free port. The one it settles on is the one it will report.

jupyter notebook --no-browser --ip=0.0.0.0
[W 2022-03-22 16:45:13.993 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2022-03-22 16:45:13.994 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2022-03-22 16:45:13.994 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[I 2022-03-22 16:45:14.013 LabApp] JupyterLab extension loaded from /bil/packages/anaconda3/4.11.0/lib/python3.9/site-packages/jupyterlab
[I 2022-03-22 16:45:14.013 LabApp] JupyterLab application directory is /bil/packages/anaconda3/4.11.0/share/jupyter/lab
[I 16:45:14.024 NotebookApp] Serving notebooks from local directory: /bil/users/icaoberg
[I 16:45:14.024 NotebookApp] Jupyter Notebook 6.4.5 is running at:
[I 16:45:14.025 NotebookApp] http://l001.pvt.bil.psc.edu:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
[I 16:45:14.025 NotebookApp]  or http://127.0.0.1:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
[I 16:45:14.025 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 16:45:14.039 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///bil/users/icaoberg/.local/share/jupyter/runtime/nbserver-1022410-open.html
    Or copy and paste one of these URLs:
        http://l001.pvt.bil.psc.edu:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
     or http://127.0.0.1:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
  6. Map a port on your local machine to the port Jupyter is using on the BIL compute node. Open another terminal to map port 8888 on the workshop VM to the port you are using (8888 in this example) on the compute node. This is a bit of a chicken-and-egg situation: if you knew which node and port you would end up on, you could have done the mapping on the first connection, but there is no way to know that a priori.

In the new terminal type

ssh -L <local-port>:<compute-host-name>:<compute-node-port> workshop.bil.psc.edu -l <username>

You must use the correct compute node name and port that you have been allocated. In this case, because you were connected to port 8888 on compute node l001, that command would look like

ssh -L 8888:l001.pvt.bil.psc.edu:8888 workshop.bil.psc.edu -l icaoberg

Here the localhost port is 8888. After the first : comes the long name of the compute node, a colon, and the port where Jupyter is running. Here, that string is l001.pvt.bil.psc.edu:8888.

  7. Open a browser window to connect to the Jupyter server. On the workshop VM, open a browser and point it to http://localhost:8888.

You will be prompted to enter a token to make the connection. Use the token given to you when you started the Jupyter server on the BIL compute node (step 5 above).

  8. When you are done, close your interactive session on BIL.

Using Matlab

If you wish to use Matlab, then please request permission to use it first by filling out this form.

After getting access you can use it with

module load matlab/2021a
matlab -nosplash
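
MATLAB can also run non-interactively, which is useful inside batch jobs. A minimal sketch with a hypothetical command string:

module load matlab/2021a
matlab -nodesktop -nosplash -r "disp('Hello, World!'); exit"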

Using ITK-SNAP

ITK-SNAP is a software application used to segment structures in 3D medical images.

module load itksnap/3.8.0
itksnap
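
You can also open an image directly from the command line; the -g flag loads the main greyscale image. A sketch with a hypothetical file path:

module load itksnap/3.8.0
itksnap -g /path/to/image.nii.gz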

Installing and using SimpleITK

module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install numpy scipy SimpleITK

For example, to load an image as a numpy array

import SimpleITK as sitk
import numpy as np

file = '/bil/data/hackathon/2022_GYBS/data/subject/201606_red_mm_RSA.nii.gz'
image = sitk.ReadImage(file)
arr = sitk.GetArrayFromImage(image)

If you are using IPython you can confirm this

In [2]: whos
Variable   Type       Data/Info
-------------------------------
arr        ndarray    1090x942x997: 1023699660 elems, type `uint8`, 1023699660 bytes (976.2760734558105 Mb)
file       str        /bil/data/hackathon/2022_<...>/201606_red_mm_RSA.nii.gz
image      Image      Image (0x55faa4652f50)\n <...>   Capacity: 1023699660\n
np         module     <module 'numpy' from '/bi<...>kages/numpy/__init__.py'>
sitk       module     <module 'SimpleITK' from <...>s/SimpleITK/__init__.py'>

:bulb: If you are working in a virtual environment you might also want to install other useful libraries like

pip install matplotlib scipy pandas

Installing ITK

module load anaconda3/4.11.0
pip install --user itk

or

module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install itk

For a quick guide to ITK please visit here.

For example, to read an image

import itk
import numpy as np

# Read input image
file = '/bil/data/hackathon/2022_GYBS/data/subject/201606_red_mm_RSA.nii.gz'
itk_image = itk.imread(file)

# View of the itk.Image; pixel data is not copied
np_view = itk.array_view_from_image(itk_image)

If you are using IPython you can confirm this

Variable    Type              Data/Info
---------------------------------------
file        str               /bil/data/hackathon/2022_<...>/201606_red_mm_RSA.nii.gz
itk         LazyITKModule     <module 'itk' from '/bil/<...>ackages/itk/__init__.py'>
itk_image   itkImageUC3       Image (0x5653b5da85f0)\n <...>   Capacity: 1023699660\n
np          module            <module 'numpy' from '/bi<...>kages/numpy/__init__.py'>
np_view     NDArrayITKBase    [[[0 0 0 ... 0 0 0]\n  [0<...>0]\n  [0 0 0 ... 0 0 0]]]

:bulb: If you are working in a virtual environment you might also want to install other useful libraries like

pip install matplotlib scipy pandas

Installing nibabel



nibabel provides read/write access to some common neuroimaging file formats.

module load anaconda3/4.11.0
pip install --user nibabel

or

module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install nibabel
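
To confirm the installation worked, print the installed version from the terminal:

python -c "import nibabel; print(nibabel.__version__)"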

Installing cwltool

module load anaconda3/4.11.0
pip install --user cwltool cwlref-runner

or

module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install cwltool cwlref-runner
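
To confirm the installation worked, print the version:

cwltool --version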

Installing spyder

Spyder is a free and open source scientific environment written in Python, for Python, and designed by and for scientists, engineers and data analysts.

export QT_XCB_GL_INTEGRATION=none
module load anaconda3/4.11.0
python3 -m venv .
source ./bin/activate
pip install spyder
spyder

Exercises

Exercise 0a. Submit a job using Open OnDemand

Exercise 0b. Hello, World! on Jupyter Lab

Exercise 1. Load and combine images in Fiji

Loading the proper module

Click the black terminal icon in the top left corner of your screen

When the terminal opens, type the following commands to load Fiji into your workspace.

module load fiji
fiji

The first time you run these commands, the system will install Fiji in your home directory.

:bulb: If the font size in your terminal is too small, you can press CTRL and + to increase the font size.

After running the commands above, a toolbar should appear on your screen, similar to the picture below

Update Fiji (optional)

If prompted, update Fiji and then update its plugins.

Loading the first channel

On the "(Fiji Is Just) ImageJ" window select menu

[PLUGINS]->[BIOFORMATS]->[BIOFORMATS-IMPORTER]

Click on FILE SYSTEM in the PLACES sidebar and navigate to

/bil/workshops/2021/data_submission/data/fiji/stitchedImage_ch1

then click on

StitchedImage_Z001_L001.jp2

On the "Bio-Formats Input Options" popup select

-View Stack with Hyperstack
-Group files with similar names
-Color mode: Custom
-Click [OK]

On the "Bio-Formats File Stitching" popup select

-Axis 1 number of images enter 5
-Axis 1 axis first image enter 1
-Axis 1 axis increment enter 54
-Click [OK]

On the "Bio-Formats Series Options" popup select

-Series 1 (8557x11377)
-Click [OK]

On the "Bio-Formats Custom Colorization" popup select

-Series 0 channel 0 Red 255
-Click [OK]

Loading the second channel

We will follow a similar procedure to the one used for the first channel.

On the "(Fiji Is Just) ImageJ" window select

[PLUGINS]->[BIOFORMATS]->[BIOFORMATS-IMPORTER]

Click on FILE SYSTEM in the PLACES sidebar and navigate to

/bil/workshops/2021/data_submission/data/fiji/stitchedImage_ch2

then click on

StitchedImage_Z001_L001.jp2

On the "Bio-Formats Input Options" popup select

-View Stack with Hyperstack
-Group files with similar names
-Do Not Use virtual stack
-Click [OK]

On the "Bio-Formats File Stitching" popup select

-Axis 1 number of images enter 5
-Axis 1 axis first image enter 1
-Axis 1 axis increment enter 54
-Click [OK]

On the "Bio-Formats Series Options" popup select

-Series 1 (8557x11377)
-Click [OK]

On the "Bio-Formats Custom Colorization" popup select

-Series 0 channel 0 Red 255
-Click [OK]

Merge 2 channels into one

On the "(Fiji Is Just) ImageJ" window select

[IMAGE]->[COLOR]->[MERGE-CHANNELS]

-For C1 (red) select the first item
-For C2 (green) select the second item
-Click [OK]

Adjust Brightness/Contrast

On the "(Fiji Is Just) ImageJ" window select

[IMAGE]->[ADJUST]->[BRIGHTNESS/CONTRAST]

On the "B&C" popup window

-Set brightness to max
-Set contrast to max
-Click on [Set] button

On "Set Display Range" Popup window

-Set min=0
-Set max=1500
-Check propagate to all other open windows
-Click [OK]

View Composite Z stack and zoom

On the "Composite" window:

-Move the "Z" slider slowly to the right/left to view the Z stack.
-Move the mouse cursor (+) to the top-left of an area that is interesting.
-While clicking the left mouse button drag the selection box to the right and down.
-Move the mouse cursor to the center of the box.
-Press the plus key to zoom in, the minus key to zoom out
-To get back to the original resolution, move the mouse cursor to outside the selection box. Click on the right mouse button. Select "Original Scale"

To save the combined-channel images

On the "(Fiji Is Just) ImageJ" window select

[FILE]->[SAVE AS]->[IMAGE SEQUENCE]

On the "Save Image Sequence" popup set values


-Format: TIFF
-Click [OK]

On the "Save Image Sequence" popup set values

-Set DIR - to someplace where you can save (e.g. /bil/home/$USER or your Desktop)
-Set the name
-Click [OK]

To make an animated thumbnail

On the "(Fiji Is Just) ImageJ" window select

[Image]->[Type]->[RGB Color]

On the "Convert to RGB" window select

-Slices(5)
-Keep Source
-Click [OK]

The image now needs to be downsized.

On the "(Fiji Is Just) ImageJ" window select

[IMAGE]->[SCALE]

-Delete the "Width (pixels)" value and replace with the value 480.
-The Height should automatically be set at 638.
-Set the Title to be "Composite-small"
-Click [OK]

The next step is to save the reduced size animated thumbnail.

On the "(Fiji Is Just) ImageJ" window select

[IMAGE]->[STACKS]->[ANIMATIONS]->[ANIMATION OPTIONS]

On the "Animation Options" popup set


-Speed to 2
-Click [OK]

On the "(Fiji Is Just) ImageJ" window select

[FILE]->[SAVE AS]->[GIF]

On the "Save as GIF" popup, select a directory and filename then click save

If you want to check out the saved gif file, then double click on the file in your desktop to open it with the default viewer.

Close Fiji.

Exercise 2. Contrast-stretching with ImageMagick

This exercise ties together all the concepts discussed in this workshop.

Imagine we are interested in collection 84c11fe5e4550ca0, which I found in the portal.

:bulb: There is no need to download the data locally because the data is available when you use our resources.


I can navigate to /bil/data/84/c1/84c11fe5e4550ca0/ to see the contents of the collection.

Unfortunately, it is difficult to visually inspect the images because they are not contrast stretched.

Fortunately, there are tools like Fiji that can contrast-stretch the images. However, I want to do this in batch mode as a job, since this process can be automated.


ImageMagick is a robust library for image manipulation. The convert tool in this library has an option for contrast stretching.

The format is

convert <input-file> -contrast-stretch <value> <output-file>
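
For example, with hypothetical file names and the 15% threshold used in the script below:

convert input.tif -contrast-stretch 15% output.tif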

Next I will create a file called script.sh and will place it in a folder on my Desktop.

#!/bin/bash

#this line is needed to be able to use modules on the compute nodes
source /etc/profile.d/modules.sh

#this command loads the ImageMagick library
module load ImageMagick/7.1.0-2

#this for loop finds all the images in the sample folder and contrast-stretches them
for FILE in /bil/data/84/c1/84c11fe5e4550ca0/SW170711-04A/*tif
do
	convert "$FILE" -contrast-stretch 15% $(basename "$FILE")
done

:bulb: For simplicity, you can find the script above in

/bil/workshops/2022/data_submission

To copy the script to your Desktop, run this command in the terminal

cp /bil/workshops/2022/data_submission/script.sh ~/Desktop/

Next I can submit my script using the command

sbatch -p compute --mem=64Gb script.sh

Since I am processing the images serially I don't need much memory, but if I were to do this in parallel I might.

To monitor your job progress use the command squeue -u <username>. For example,

squeue -u icaoberg
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             14243   compute script.s icaoberg  R      15:34      1 l001

This produces contrast-stretched copies of the images that can now be visually inspected.

Exercise 3. vaa3D

Finding available tools

To list all available tools, run the command

module avail

----------------------------------------------------------- /bil/modulefiles -----------------------------------------------------------
anaconda/3.2019.7             c-blosc/1.19.0(default)       knime/4.3.2                   raw2ometiff/0.2.6(default)
anaconda3/4.9.2               fiji/1.53h                    lazygit/0.22.9                samtools/1.9(default)
aspera/3.9.6(default)         htslib/1.9(default)           md5deep/4.4                   scala/2.13.5
bcftools/1.9(default)         ilastik/1.3.3                 openjpeg/2.3.0(default)       singularity/3.7.0
bioformats/6.0.1              imagej-fiji/1.52p             openslide/3.4.1               vaa3d/3.601
bioformats/6.1.1              java/jdk8u201                 p7zip/16.02                   xxhash/0.8.0
bioformats/6.4.0              java/jdk8u211                 picard/2.20.2(default)
bioformats/6.5.1(default)     java/jdk8u241(default)        R/3.5.1
bioformats2raw/0.2.4(default) julia/1.0.5                   R/3.6.3

To see all the installed versions of a specific package, e.g. java, run the command

module avail java

----------------------------------------------------------- /bil/modulefiles -----------------------------------------------------------
java/jdk8u201          java/jdk8u211          java/jdk8u241(default)

Running the module load command without specifying a version will load the default version of the software. For example, running

module load java

will load Java JDK8u241.

Loading the proper module

Click the black terminal icon in the top left corner of your screen

When the terminal opens, type the following commands to load Vaa3D into your workspace.

module load vaa3d
vaa3d

Running the commands above will start the tool.

For exploration, you can find some example data in

/bil/workshops/2021/data_submission/data/vaa3d

Exercise 4. Building a Singularity container

:bulb: You can find a vetted list of Singularity containers maintained by PSC, here.

In this example we will build a Singularity container using the remote builder on SyLabs.io.

Choose a location in your home directory, and run the following commands

:warning: You might need to set up your SSH key or GitHub personal access token to clone the repo below. For more information, click here.

git clone git@github.com:pscedu/singularity-bioformats2raw.git
cd singularity-bioformats2raw/3.0.0
singularity build --remote bioformats2raw.sif Singularity

This will create a local Singularity image file named bioformats2raw.sif.

Since you have access to the definition file

Bootstrap: docker
From: debian:stretch

%labels
    AUTHOR icaoberg
    EMAIL icaoberg@psc.edu
    SUPPORT help@psc.edu
    WEBSITE http://github.com/icaoberg/singularity-bioformats2raw
    COPYRIGHT Copyright © 2021 Pittsburgh Supercomputing Center. All Rights Reserved.
    VERSION 3.0.0

%post
    apt update
    apt install -y libblosc1 wget unzip default-jdk
    cd /opt/
    wget -nc https://github.com/glencoesoftware/bioformats2raw/releases/download/v0.3.0/bioformats2raw-0.3.0.zip
    unzip bioformats2raw-0.3.0.zip && rm -f bioformats2raw-0.3.0.zip
    ln -s /opt/bioformats2raw-0.3.0/bin/bioformats2raw /usr/local/bin/bioformats2raw
    apt remove -y wget unzip
    apt clean

you can tell the binary bioformats2raw is available in the container.

To use it, run

module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw --help

Missing required parameters: '<inputPath>', '<outputLocation>'
Usage: <main class> [-p] [--no-hcs] [--[no-]nested] [--no-ome-meta-export]
                    [--no-root-group] [--overwrite] [--use-existing-resolutions]
                    [--version] [--debug[=<logLevel>]]
                    [--extra-readers[=<extraReaders>[,<extraReaders>...]]]...
                    [--options[=<readerOptions>[,<readerOptions>...]]]...
                    [-s[=<seriesList>[,<seriesList>...]]]...
                    [--additional-scale-format-string-args=<additionalScaleFormatStringArgsCsv>]
                    [-c=<compressionType>] [--dimension-order=<dimensionOrder>]
                    [--downsample-type=<downsampling>] [--fill-value=<fillValue>]
                    [-h=<tileHeight>] [--max_cached_tiles=<maxCachedTiles>]
                    [--max_workers=<maxWorkers>] [--memo-directory=<memoDirectory>]
                    [--pixel-type=<outputPixelType>] [--pyramid-name=<pyramidName>]
                    [-r=<pyramidResolutions>] [--scale-format-string=<scaleFormatString>]
                    [-w=<tileWidth>] [-z=<chunkDepth>]
                    [--compression-properties=<String=Object>]...
                    [--output-options=<String=String>[|<String=String>...]]...
                    <inputPath> <outputLocation>
      <inputPath>           file to convert
      <outputLocation>      path to the output pyramid directory. The given path
                            can also be a URI (containing ://) which will activate
                            **experimental** support for Filesystems. For example,
                            if the output path given is 's3://my-bucket/some-path'
                            *and* you have an S3FileSystem implementation in your
                            classpath, then all files will be written to S3.
      --additional-scale-format-string-args=<additionalScaleFormatStringArgsCsv>
                            Additional format string argument CSV file (without
                            header row). Arguments will be added to the end of the
                            scale format string mapping at the corresponding CSV
                            row index. It is expected that the CSV file contain
                            exactly the same number of rows as the input file has
                            series
  -c, --compression=<compressionType>
                            Compression type for Zarr (null, zlib, blosc;
                            default: blosc)
      --compression-properties=<String=Object>
                            Properties for the chosen compression (see
                            https://jzarr.readthedocs.io/en/latest/tutorial.html#compressors)
      --debug, --log-level[=<logLevel>]
                            Change logging level; valid values are OFF, ERROR,
                            WARN, INFO, DEBUG, TRACE and ALL. (default: WARN)
      --dimension-order=<dimensionOrder>
                            Override the input file dimension order in the output
                            file [Can break compatibility with raw2ometiff]
                            (XYZCT, XYZTC, XYCTZ, XYCZT, XYTCZ, XYTZC)
      --downsample-type=<downsampling>
                            Tile downsampling algorithm (SIMPLE, GAUSSIAN, AREA,
                            LINEAR, CUBIC, LANCZOS)
      --extra-readers[=<extraReaders>[,<extraReaders>...]]
                            Separate set of readers to include; (default: [class
                            com.glencoesoftware.bioformats2raw.PyramidTiffReader,
                            class com.glencoesoftware.bioformats2raw.MiraxReader])
      --fill-value=<fillValue>
                            Default value to fill in for missing tiles (0-255)
                            (currently .mrxs only)
  -h, --tile_height=<tileHeight>
                            Maximum tile height to read (default: 1024)
      --max_cached_tiles=<maxCachedTiles>
                            Maximum number of tiles that will be cached across
                            all workers (default: 64)
      --max_workers=<maxWorkers>
                            Maximum number of workers (default: 4)
      --memo-directory=<memoDirectory>
                            Directory used to store .bfmemo cache files
      --no-hcs              Turn off HCS writing
      --[no-]nested         Whether to use '/' as the chunk path separator (true
                            by default)
      --no-ome-meta-export  Turn off OME metadata exporting [Will break
                            compatibility with raw2ometiff]
      --no-root-group       Turn off creation of root group and corresponding
                            metadata [Will break compatibility with raw2ometiff]
      --options[=<readerOptions>[,<readerOptions>...]]
                            Reader-specific options, in format
                            key=value[,key2=value2]
      --output-options=<String=String>[|<String=String>...]
                            |-separated list of key-value pairs to be used as an
                            additional argument to Filesystem implementations if
                            used. For example,
                            --output-options=s3fs_path_style_access=true|...
                            might be useful for connecting to minio.
      --overwrite           Overwrite the output directory if it exists
      --pixel-type=<outputPixelType>
                            Pixel type to write if input data is float or double
                            (int8, int16, int32, uint8, uint16, uint32, float,
                            double, complex, double-complex, bit)
      --pyramid-name=<pyramidName>
                            Name of pyramid (default: null) [Can break
                            compatibility with raw2ometiff]
  -r, --resolutions=<pyramidResolutions>
                            Number of pyramid resolutions to generate
  -s, --series[=<seriesList>[,<seriesList>...]]
                            Comma-separated list of series indexes to convert
      --scale-format-string=<scaleFormatString>
                            Format string for scale paths; the first two
                            arguments will always be series and resolution
                            followed by any additional arguments brought in from
                            `--additional-scale-format-string-args` [Can break
                            compatibility with raw2ometiff] (default: %d/%d)
      --use-existing-resolutions
                            Use existing sub resolutions from original input
                            format [Will break compatibility with raw2ometiff]
  -w, --tile_width=<tileWidth>
                            Maximum tile width to read (default: 1024)
  -z, --chunk_depth=<chunkDepth>
                            Maximum chunk depth to read (default: 1)
  -p, --progress            Print progress bars during conversion
      --version             Print version information and exit

:bulb: The flag -B /bil is important, please use it every time you run a container on Brain Image Library systems.

To run the application in the container simply run

module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw /bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz raw/

also try

module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw /bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz raw2/ --resolutions 6

These commands will convert the image /bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz to the Zarr file format in the folders raw/ and raw2/, respectively.

Exercise 5. Trying the Napari BIL Data Viewer

This plugin enables viewing of datasets archived in the Brain Image Library.

:warning: This plugin is under early development. Currently, only a subset of single color, fMOST datasets which include projections are available to view.

Though there are several ways to deploy Napari on your laptop/desktop, at the moment we recommend you install it locally using Anaconda.

Installing Napari

Installing Napari is easiest using conda-forge.

To install Napari, run the command

conda install -c conda-forge napari

If the command above fails or Napari fails to start, you can also try

conda create -y -n napari-env -c conda-forge python=3.8
conda activate napari-env
pip install 'napari[all]'

Follow the official documentation to install Anaconda or Miniconda on your local system.

Installing the napari-bil-data-viewer

To install the plugin, we need to start Napari first by running the command

napari

This will open the Napari window

I am running Napari on macOS, but if you are running Windows or Linux you should see a similar menu.

Clicking Install/Uninstall Plugins... should open the window below

Search for napari-bil-data-viewer and click Install

If installed properly, the plugin will be listed as Installed

Close the window and go back to the main window.

:pencil: If you are familiar with terminal, you can run the following commands instead of following the steps above

conda create -y -n bil-viewer python=3.8
conda activate bil-viewer

# Install napari-bil-data-viewer
pip install napari-bil-data-viewer

Using the plugin

On the top menu bar, open the Plugins menu and select the napari-bil-data-viewer plugin.

This should open a panel on the right

Use the drop-down list to choose the images you want to explore with Napari.

If you are curious about the status of your session in Napari, you can monitor the logs in the terminal you opened the application in.


Gentle intro to cwltool

Before we begin

Watch the video above for a gentle intro.

Benefits

There are many benefits to using workflows, the main one being portability. If built flexibly, a workflow can be deployed locally (e.g., on a laptop), on an HPC cluster (e.g., Brain Image Library or Bridges-2), and in the cloud (e.g., AWS).

The second benefit is that workflows are very good at connecting tools, programs, and scripts written in different languages.

Third and last, workflows can use containers that can be pushed to repositories like DockerHub, making them truly portable and flexible.

Using workflows is just one way of running pipelines on Brain Image Library hardware; users can also use traditional approaches like bash or Python scripts.

Introduction

Installing cwltool

module load anaconda3/4.11.0
pip install --user cwltool cwlref-runner

cowsay is a tool that pretty-prints a cow.


Traditionally we would install the tool locally either using a repository or pip.

For example,

pip install cowsay

will install the binary in our system.

Since the only input to cowsay is a string, a basic CWL workflow document looks like this

#!/usr/bin/env cwl-runner

cwlVersion: v1.0
class: CommandLineTool
baseCommand: cowsay
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []

CWL documents are written in either YAML or JSON. For example, we can create the input file message.cwl

message: Hello world!

and use it as input for the workflow

cwltool cowsay.cwl message.cwl

INFO /bil/packages/anaconda3/4.11.0/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay.cwl'
INFO [job cowsay.cwl] /tmp/l7knmpt3$ cowsay \
    'Hello world!'
 ______________
< Hello world! >
 --------------
            \
             \
               ^__^
               (oo)\_______
               (__)\       )\/\
                   ||----w |
                   ||     ||
INFO [job cowsay.cwl] completed success
{}
INFO Final process status is success

cowsay on Docker

This step cannot run on Brain Image Library hardware since we do not support Docker.

Consider the following Dockerfile

FROM ubuntu:latest

RUN apt-get update && apt-get install -y cowsay --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV PATH $PATH:/usr/games

CMD ["cowsay"]

The file above creates a container with the cowsay binary. It can be built using the command

docker build -t icaoberg/cowsay .

and pushed to DockerHub using the command

docker push icaoberg/cowsay

This is a dummy example, but technically there now exists a container with my tool. Now, I can recycle the CWL workflow from before and tell it to get the container from the repo by adding the lines

hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay

The document now looks like

#!/usr/bin/env cwl-runner

cwlVersion: v1.0
class: CommandLineTool

requirements:
  SubworkflowFeatureRequirement: {}

hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay

baseCommand: cowsay
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []

and running it will produce the same results as the previous example

cwltool cowsay2.cwl message.cwl
INFO /Users/icaoberg/opt/anaconda3/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay2.cwl' to 'file:///Users/icaoberg/Documents/code/singularity-cowsay/3.04/cowsay2.cwl'
INFO [job cowsay2.cwl] /private/tmp/docker_tmpr_rjhbrj$ docker \
    run \
    -i \
    --mount=type=bind,source=/private/tmp/docker_tmpr_rjhbrj,target=/xJHVRn \
    --mount=type=bind,source=/private/tmp/docker_tmpzpu0ulbd,target=/tmp \
    --workdir=/xJHVRn \
    --read-only=true \
    --user=501:20 \
    --rm \
    --cidfile=/private/tmp/docker_tmp07wk4ale/20220309145946-550721.cid \
    --env=TMPDIR=/tmp \
    --env=HOME=/xJHVRn \
    icaoberg/cowsay \
    cowsay \
    'Hello world!'
 ______________
< Hello world! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
INFO [job cowsay2.cwl] Max memory used: 0MiB
INFO [job cowsay2.cwl] completed success
{}
INFO Final process status is success

Even though we do not support Docker, you can try installing uDocker.

cowsay on Singularity

The main issue is that most HPC clusters do not support Docker and prefer Singularity or Apptainer. However, if the Docker image in DockerHub has proper entrypoints, then you can simply use the --singularity option to ask cwltool to convert the Docker image to Singularity.

If the Docker image does not have a proper entry point, this step might fail if you are not aware of how the image was built.

Only use vetted images or public images whose Dockerfile you have seen and trust.

Using the option

cwltool --singularity cowsay2.cwl message.cwl

will run the workflow

cwltool --singularity cowsay2.cwl message.cwl
INFO /bil/packages/anaconda3/4.11.0/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay2.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay2.cwl'
INFO ['singularity', 'pull', '--force', '--name', 'icaoberg_cowsay.sif', 'docker://icaoberg/cowsay']
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 7c3b88808835 done
Copying blob 6b7a6ea66907 done
Copying config 063d227371 done
Writing manifest to image destination
Storing signatures
2022/03/09 15:17:27  info unpack layer: sha256:7c3b88808835aa80f1ef7f03083c5ae781d0f44e644537cd72de4ce6c5e62e00
2022/03/09 15:17:28  info unpack layer: sha256:6b7a6ea669076a74f122534da10e4e459f36777854e8e1529564d31c685fd9ea
INFO:    Creating SIF file...
INFO [job cowsay2.cwl] /tmp/ceztlkix$ singularity \
    --quiet \
    exec \
    --contain \
    --ipc \
    --cleanenv \
    --pid \
    --home \
    /tmp/ceztlkix:/qxOGcV \
    --bind \
    /tmp/o9hfm5tx:/tmp \
    --pwd \
    /qxOGcV \
    /bil/users/icaoberg/code/singularity-cowsay/3.04/icaoberg_cowsay.sif \
    cowsay \
    'Hello world!'
 ______________
< Hello world! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
INFO [job cowsay2.cwl] completed success
{}
INFO Final process status is success

but will create a Singularity image file on disk.

Adding more options

cowsay has more options than just the input string.

cowsay(6)                                  Games Manual                                  cowsay(6)

NAME
       cowsay/cowthink - configurable speaking/thinking cow (and a bit more)

SYNOPSIS
       cowsay  [-e eye_string] [-f cowfile] [-h] [-l] [-n] [-T tongue_string] [-W column]
       [-bdgpstwy]

A cowfile is used to change the picture. For example, running the command

cowsay -f flaming-sheep "Hello World\!"
 ______________
< Hello World! >
 --------------
  \            .    .     .
   \      .  . .     `  ,
    \    .; .  : .' :  :  : .
     \   i..`: i` i.i.,i  i .
      \   `,--.|i |i|ii|ii|i:
           UooU\.'@@@@@@`.||'
           \__/(@@@@@@@@@@)'
                (@@@@@@@@)
                `YY~~~~YY'
                 ||    ||

will print a flaming sheep.

In this example, we will expose the [-f cowfile] argument by adding the lines

  format:
    type: string
    inputBinding:
      position: 1
      prefix: -f
    default: "flaming-sheep"

to the input block, making the workflow look like

#!/usr/bin/env cwl-runner

cwlVersion: v1.0
class: CommandLineTool

requirements:
  SubworkflowFeatureRequirement: {}

hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay

baseCommand: "cowsay"
inputs:
  message:
    type: string
    inputBinding:
      position: 2

  format:
    type: string
    inputBinding:
      position: 1
      prefix: -f
    default: "flaming-sheep"

outputs: []

Then you can run it

05:03:16 icaoberg@workshop 3.04 ±|master ✗|→ cwltool --singularity cowsay3.cwl message3.cwl

INFO /bil/users/icaoberg/.local/bin/cwltool 3.1.20220224085855
INFO Resolved 'cowsay3.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay3.cwl'
INFO Using local copy of Singularity image found in /bil/users/icaoberg/code/singularity-cowsay/3.04
INFO [job cowsay3.cwl] /tmp/g2zknke0$ singularity \
    --quiet \
    exec \
    --contain \
    --ipc \
    --cleanenv \
    --pid \
    --home \
    /tmp/g2zknke0:/lzqerl \
    --bind \
    /tmp/fkb0phq4:/tmp \
    --pwd \
    /lzqerl \
    /bil/users/icaoberg/code/singularity-cowsay/3.04/icaoberg_cowsay.sif \
    cowsay \
    -f \
    flaming-sheep \
    'Hello world!'
 ______________
< Hello world! >
 --------------
  \            .    .     .
   \      .  . .     `  ,
    \    .; .  : .' :  :  : .
     \   i..`: i` i.i.,i  i .
      \   `,--.|i |i|ii|ii|i:
           UooU\.'@@@@@@`.||'
           \__/(@@@@@@@@@@)'
                (@@@@@@@@)
                `YY~~~~YY'
                 ||    ||
INFO [job cowsay3.cwl] completed success
{}
INFO Final process status is success

Keep in mind your input file message3.cwl now looks like this

message: Hello world!
format: flaming-sheep

You can choose to expose as many input arguments as you want or set default values.

Mixing and matching

Consider the following workflow, fortune.cwl

#!/usr/bin/env cwl-runner

cwlVersion: v1.0
class: CommandLineTool

requirements:
  SubworkflowFeatureRequirement: {}

hints:
  DockerRequirement:
    dockerPull: grycap/cowsay

baseCommand: /usr/games/fortune
inputs: []
outputs: []

which does something like

cwltool --singularity fortune.cwl

INFO /bil/users/icaoberg/.local/bin/cwltool 3.1.20220224085855
INFO Resolved 'fortune.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/fortune.cwl'
INFO Using local copy of Singularity image found in /bil/users/icaoberg/code/singularity-cowsay/3.04
INFO [job fortune.cwl] /tmp/lppo005u$ singularity \
    --quiet \
    exec \
    --contain \
    --ipc \
    --cleanenv \
    --pid \
    --home \
    /tmp/lppo005u:/jzrcyZ \
    --bind \
    /tmp/l0squr2a:/tmp \
    --pwd \
    /jzrcyZ \
    /bil/users/icaoberg/code/singularity-cowsay/3.04/grycap_cowsay.sif \
    /usr/games/fortune
Q:	How many lawyers does it take to change a light bulb?
A:	You won't find a lawyer who can change a light bulb.  Now, if
	you're looking for a lawyer to screw a light bulb...
INFO [job fortune.cwl] completed success
{}
INFO Final process status is success