CryoSPARC 4 available on BioHPC -- 01122023

CryoSPARC v4.1.1, patch 23110, is available on BioHPC ($ module load cryosparc/4.1.1-singularity). CryoSPARC v4.0+ relies on MongoDB v3.6, which is newer than the MongoDB v3.4 used by CryoSPARC v3.3. Therefore, the first time you run CryoSPARC v4 on BioHPC, a new folder ~/cryosparc-v4 will be created as the location of the MongoDB database (~/cryosparc-v4/cryosparc_database).

If you want to migrate the cryosparc-v3 database to cryosparc-v4 (it is best to do this before running any real project in cryosparc-v4), follow these steps:

1. Stop and close any running CryoSPARC v4 session.

2. Delete the ~/cryosparc-v4/cryosparc_database folder with $ rm -rv ~/cryosparc-v4/cryosparc_database (back up the database first if needed, e.g. if you have already run some projects in v4). If there is no ~/cryosparc-v4 folder under your home directory (meaning you have not run any CryoSPARC v4 session yet), create the folder manually with $ mkdir ~/cryosparc-v4.

3. Copy the cryosparc-v3 database to cryosparc-v4: $ cp -ra ~/cryosparc-v3/cryosparc_database ~/cryosparc-v4

4. Start a CryoSPARC v4 session.
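The migration steps above can be sketched as a short shell snippet. This is a sketch assuming the default BioHPC paths ~/cryosparc-v3 and ~/cryosparc-v4; instead of deleting an existing v4 database outright, it moves it aside under an illustrative backup name:

```shell
#!/bin/bash
# Sketch: migrate the CryoSPARC v3 database to v4 (default BioHPC paths assumed).
V3_DB="$HOME/cryosparc-v3/cryosparc_database"
V4_DIR="$HOME/cryosparc-v4"

# Step 2: back up any existing v4 database rather than deleting it,
# then make sure ~/cryosparc-v4 exists.
if [ -d "$V4_DIR/cryosparc_database" ]; then
    mv "$V4_DIR/cryosparc_database" "$V4_DIR/cryosparc_database.bak.$(date +%Y%m%d)"
fi
mkdir -p "$V4_DIR"

# Step 3: copy the v3 database, preserving permissions and timestamps (-a).
if [ -d "$V3_DB" ]; then
    cp -ra "$V3_DB" "$V4_DIR/"
else
    echo "No v3 database found at $V3_DB; nothing to migrate."
fi
```

Remember to stop any running CryoSPARC v4 session before running the copy (step 1), and start v4 again only afterwards (step 4).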



Using the Command Line Interface

Setup of CryoSPARC Live on CEMF/BioHPC GPUs will be part of the training. Please request a free academic license before your training.
Once you have obtained your license, add it to the end of your .bashrc file in the format given below:
export CRYOSPARC_LICENSE_ID="????????-????-????-????-????????????"      (use your license ID instead of the question marks)
If you are not familiar with doing that, please email CEMF and CC Murat Atis of BioHPC so we can coordinate adding the license to your .bashrc file.
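If you prefer to add the line yourself, a one-off snippet like the following works. This is a sketch: replace the placeholder with the ID from your license e-mail, and the grep guard simply avoids adding the line twice if you rerun it:

```shell
#!/bin/bash
# Append the CryoSPARC license export to ~/.bashrc (run once).
LICENSE_ID="????????-????-????-????-????????????"   # placeholder: use your real ID

touch ~/.bashrc
# Only add the line if it is not already present.
if ! grep -q CRYOSPARC_LICENSE_ID ~/.bashrc; then
    echo "export CRYOSPARC_LICENSE_ID=\"$LICENSE_ID\"" >> ~/.bashrc
fi
```

Open a new shell (or run `source ~/.bashrc`) for the variable to take effect.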

1.  Log in to the CEMF user PC with your UTSW account and password.

2.  Use any SSH client (such as PuTTY or OpenSSH) to connect to the login node.

3.  Create a Slurm file similar to the one given below. You can change parameters such as the partition and the time limit if you want. The time limit for CryoSPARC jobs is 5 days for a CLI allocation.
#!/bin/bash
#SBATCH --job-name="Cryosparc3"
#SBATCH --partition=GPUp4
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=2-02:00:00
#SBATCH --output="logs.cryosparc3.%j.%N.txt"
#SBATCH --error="errors.cryosparc3.%j.%N.txt"
#SBATCH --gres=gpu:1

module load cryosparc/3.1.0-singularity
export no_proxy="localhost"
cryosparc start
tail -f ~/cryosparc-v3/run/command_core.log

4.  Create a file named ~/.cryopwd and write your CryoSPARC password on the first line without any spaces. For example:
$ cat ~/.cryopwd
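One way to create the file (the password shown is a placeholder, not a real one; chmod 600 keeps the plain-text file readable only by you):

```shell
#!/bin/bash
# Create ~/.cryopwd with the password on the first line, no spaces.
printf '%s\n' 'MyCryoSparcPassword' > ~/.cryopwd   # placeholder password
chmod 600 ~/.cryopwd   # the file is plain text, so restrict it to your user
```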

5.  Submit the job using “sbatch slurm.cryosparc” (use your Slurm file name instead).

6.  Go to the GPU node desktop and connect to the web interface at 172.18.227.XXX:39000 using an internet browser. XXX is the number of the node you were allocated. You can find the allocated node name, your account name, and your password by checking the file “~/cryosparc-v3/cryosparc.log”.
For example:
$ grep "Creating user " ~/cryosparc-v3/cryosparc.log
Creating user s178722 with email: password: kcof0SwV and name: Murat Atis
$ grep "Nucleus" ~/cryosparc-v3/cryosparc.log

For this example you can use “”, because 48 is the node number in the C group nodes.
On the CryoSPARC interface, create a new project and enter the dataset path.
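The address can be derived from the allocated node name. The helper below is a sketch: node_to_url is a hypothetical function, and the 172.18.227.XXX:39000 mapping is taken from the example above, so confirm it for your node group before relying on it:

```shell
#!/bin/bash
# Sketch: map an allocated node name (e.g. NucleusC048) to the CryoSPARC
# web address, assuming the 172.18.227.<node number>:39000 convention above.
node_to_url() {
    local num
    # Take the trailing digits of the node name and strip leading zeros.
    num=$(printf '%s' "$1" | grep -oE '[0-9]+$' | sed 's/^0*//')
    printf 'http://172.18.227.%s:39000\n' "$num"
}

# The node name can be pulled from the log, e.g.:
#   grep "Nucleus" ~/cryosparc-v3/cryosparc.log
node_to_url NucleusC048   # prints http://172.18.227.48:39000
```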

7.  Connect to another web interface with port number: using an internet browser.

8.  Finally, you can use a campus PC internet browser to view and manage the running CryoSPARC Live session.

9.  The most important step is stopping CryoSPARC after you have completed your processing. If you do not stop CryoSPARC properly, you will have trouble starting it the next time. To stop it properly, log in to the allocated node (NucleusC048 in this example) and stop CryoSPARC as shown below:
$ ssh Nucleus005
$ module load cryosparc/3.1.0-singularity

$ squeue -u s178722     (!!! Use your username instead of s178722)
           2322175       GPU webGPU_c  s178722  R    2:35:47      1 NucleusC048

# cryosparc_canceljob <JOBID>, e.g.:
$ cryosparc_canceljob 2322175



$ ssh NucleusC048
$ module load cryosparc/3.1.0-singularity
$ cryosparc stop
$ exit
$ squeue -u s178722     (!!! Use your username instead of s178722)
           2322175       GPU webGPU_c  s178722  R    2:35:47      1 NucleusC048
$ scancel 2322175

Using Portal interface

First, log in to the portal, pick the “BioHPC OnDemand” services, and pick “OnDemand Cryosparc”.

Pick “Cryosparc3GPU”

Wait 2-3 minutes for CryoSPARC to start on the compute node. After it has started, the portal interface will give you three addresses and account/password information, as seen below. The first address is the CryoSPARC interface for managing CryoSPARC. The other two are CryoSPARC Live and CryoSPARC Live Legacy.

Click the first address and use the account information to log in to the CryoSPARC interface.

(Note: the account is <your_BioHPC_account>, and the password is the CryoSPARC OnDemand session password.)

You can manage CryoSPARC 3 using this interface. For example, you must create projects through this interface to process your data with CryoSPARC 3 Live.

After creating the project, click the second address in the portal interface to log in to CryoSPARC Live. Use the same account information shown on the portal.

You can start and stop CryoSPARC Live using the interface seen below.


If you finish the job early, you can cancel your CryoSPARC OnDemand session.


Troubleshooting:

1.  The on-demand session cannot provide the correct link for CryoSPARC Live (either the VNC session of CryoSPARC is not generated, or the VNC session cannot be opened).


  1. Use the address ****6/ (eg: 30026) for CryoSPARC Live.
  2. Log in to the BioHPC login node via ssh <your_BioHPC_username>, then check the allocated node with the command $ squeue -u <your_BioHPC_username>. You may also check that the CryoSPARC session password was set for the on-demand session.
  3. Alternatively, you can use the command $ grep "Nucleus" ~/cryosparc-v3/cryosparc.log to find the allocated node. For example, if either way shows the allocated node is NucleusC002, you can use the address: to log in to the CryoSPARC Live session.