BioHPC Staff will add the answers to frequent questions from users here. Click a category name below to jump to questions and answers on that topic:
Please use the following statement when acknowledging support from BioHPC.
This research was supported in part by the computational resources provided by the BioHPC supercomputing facility located in the Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, TX. URL: https://portal.biohpc.swmed.edu
How can I create a BioHPC account, and access all of the services?
Check with your department or center that you are eligible for BioHPC membership. If you are not certain, please check this page or contact us. Once eligibility is confirmed, register a BioHPC account. Your account will not have full access to BioHPC services until you attend the mandatory new user training session, held on the first Wednesday of each month from 10:30 AM - 12:00 PM. We activate user accounts according to the attendance list; within two weeks, you will receive an e-mail notifying you that your BioHPC account has been activated.
Do I need to register for a training session before I attend?
No, you don't need to register separately for the training sessions. Once you register your BioHPC account, you should be added to our BioHPC training mailing list automatically. Let us know if you do not receive the training announcement sent each Monday.
The system says my account has expired, what can I do?
This is because your password has expired. According to UTSW policy, you need to change your password annually here.
I suddenly cannot get access to the BioHPC system, why is that?
In most cases, it is because your password has expired. Please refer to the question above for the solution.
The situations you may experience with an expired password are:
How can I create a shared folder between two departments to share data?
Please provide us with the information below:
My /project storage space is over quota, what can I do?
1) Decrease disk usage
You may want to move files to your /archive directory. Usage on /archive is counted as only 2/3 of the actual usage. The archive directory structure is laid out similarly to /project. The default quota is 5 TB per lab, but it can be increased.
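For example, a large dataset could be moved with a command along these lines (a sketch; the paths are placeholders, so adjust them to your lab's actual layout under /project and /archive):
mv /project/department/lab/user/big_dataset /archive/department/lab/user/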
2) Increase quota
Your PI can ask the department chair for approval to increase the lab quota on /project. We will increase the quota with the department chair's email approval.
Submit jobs using a SECONDARY GROUP for accounting purposes when working with multiple projects/groups.
By default, when a user submits a job using sbatch, the system charges the CPU usage to the user's primary group. A user who belongs to several secondary groups and would like the usage of a job to be accounted to one of them can submit the job with the --gid argument (sbatch --gid=<the secondary group id> slurm_batch_script.sh), so that usage is charged to the appropriate collaborating group/project.
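For example, you can list your primary and secondary groups with the id command, then pass the chosen group to sbatch:
# Show your user ID, primary group, and all secondary groups
id
# Charge the job's usage to a secondary group instead of the primary one
sbatch --gid=<the secondary group id> slurm_batch_script.sh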
My /home2 is over quota, but I cannot find where those large files are located.
du -hs $(ls -A)
command lists all directories and files, including hidden ones. Besides, files under ~/.cache/*
and ~/.conda/pkgs/*
can be safely removed to free up some space under your /home2
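For example, from your home directory (a sketch; the rm line removes only the caches identified above as safe to delete):
cd ~
# List the size of every entry, including hidden ones, smallest first
du -hs $(ls -A) | sort -h
# Remove package caches to free up space
rm -rf ~/.cache/* ~/.conda/pkgs/*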
How many GPU nodes can be allocated to me at any one time?
Due to limited availability, at most 4 GPU nodes can be allocated to you at any one time; during periods of high usage, users should reasonably expect to run jobs on 1-2 GPU nodes simultaneously.
How do I use the Dual Tesla P100 and V100 nodes?
There are 12 Nvidia Tesla P100 GPU nodes and 2 Tesla V100 nodes. Each node has two GPU cards.
To run a job on the new GPU nodes, please specify GPUp100 or GPUv100s as the partition in your SLURM script.
# In your SLURM script, choose the new GPU partition
#SBATCH -p GPUp100
# In your script, request the use of 2 GPU cards
#SBATCH --gres=gpu:2
If your program does not seem to see both GPUs, you can try setting the CUDA_VISIBLE_DEVICES environment variable. Use the command:
export CUDA_VISIBLE_DEVICES=0,1
... before you run your program. This should not be necessary as long as you use the gres line in your batch scripts.
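Putting it together, a minimal GPU batch script might look like the sketch below (the job name, time limit, module, and program are placeholders):
#!/bin/bash
#SBATCH --job-name=gpu_test       # placeholder job name
#SBATCH -p GPUp100                # or GPUv100s for the V100 nodes
#SBATCH --gres=gpu:2              # request both GPU cards on the node
#SBATCH -t 12:00:00               # placeholder time limit
module load cuda                  # assumption: load whichever CUDA module your program needs
./my_gpu_program                  # placeholder for your executable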
I'm using a web service from R (e.g. biomaRt) and it complains about a 'redirection'. How can I make it work properly?
The UTSW web proxy can confuse some R packages that interact with web services. Before you use the web service, set download options in your R script:
options(RCurlOptions=list(followlocation=TRUE, postredir=2L))
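For example, after setting the option, a biomaRt query might look like the sketch below (assumes the biomaRt package is already installed; the dataset and gene symbol are placeholders):
library(biomaRt)
# Connect to the Ensembl mart for human genes
mart <- useMart("ensembl", dataset="hsapiens_gene_ensembl")
# Query gene IDs and symbols for a placeholder gene
genes <- getBM(attributes=c("ensembl_gene_id", "hgnc_symbol"),
               filters="hgnc_symbol", values="TP53", mart=mart)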
What is the difference between Cloud Service RStudio and BioHPC OnDemand RStudio?
Cloud Service RStudio (https://rstudio.biohpc.swmed.edu/) is a shared web server open to all BioHPC users. You may keep a persistent session, and you share the computational resources with all other users. Use this web server when you want to test small tasks. The current version of RStudio is 3.3.
OnDemand RStudio reserves a compute node for up to 20 hours for an end user. During the 20-hour reservation period, the user has a dedicated compute node, suitable for heavy workloads. You can choose among different versions when you request the node.
What if I messed up old RStudio sessions and cannot log in to them?
Try cleaning up the configuration folder under your home directory with the command:
rm -rf ~/.rstudio
What if I get a connection error when installing packages in RStudio?
Set the proxy in RStudio:
Sys.setenv(http_proxy='http://proxy.swmed.edu:3128')
Sys.setenv(https_proxy='http://proxy.swmed.edu:3128')
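To avoid typing these in every session, the same settings can also be placed in your ~/.Renviron file, which R reads at startup (a sketch):
# ~/.Renviron
http_proxy=http://proxy.swmed.edu:3128
https_proxy=http://proxy.swmed.edu:3128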
Missing libraries (e.g., gsl, hdf5, gdal, proj, geos) when installing certain R packages
Type module av
in a terminal to see if an appropriate module is available. Type module load <module_name>
and then try installing your packages in your R session again. If it still does not work, do not hesitate to contact biohpc-help@utsouthwestern.edu
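For example, if a package needs the GSL library (a sketch; the exact module name may differ on the cluster):
# Check whether a gsl module is available (module output goes to stderr)
module av 2>&1 | grep -i gsl
# Load it, then retry install.packages() inside your R session
module load gsl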
How can I install packages with conda?
https://portal.biohpc.swmed.edu/content/guides/conda-biohpc/
How can I choose different conda environment from Jupyter Notebook?
You may select the kernel you want either when creating a new notebook or from the notebook’s Kernel menu.
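If a conda environment does not appear in the kernel list, it can usually be registered with ipykernel (a sketch; my-env is a placeholder, and the environment must have the ipykernel package installed):
conda activate my-env
python -m ipykernel install --user --name my-env --display-name "Python (my-env)"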
How do I install a Python module locally without a conda environment?
Use pip install:
$ pip install --user modulename
This will install the package automatically into your home directory: ~/.local/lib/python**/site-packages
Or use --prefix to install to a specific directory:
$ pip install --prefix /directory/to/install modulename
This will install lib and bin directories under the specified path; if the package provides executables, they will be placed in the bin folder.
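Note that when installing with --prefix, Python will not find the packages automatically; you typically also need to extend PYTHONPATH (and PATH, if executables were installed). A sketch, with the Python version in the path as a placeholder:
export PYTHONPATH=/directory/to/install/lib/python3.x/site-packages:$PYTHONPATH
export PATH=/directory/to/install/bin:$PATH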
How do I install R packages locally:
R packages will be installed automatically into your local R library. You don't need to worry about permission issues.
Where to find all my local R packages:
Local R packages that users install stay under ~/R/x86_64-***-linux-gnu-library/{version}.
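You can confirm the exact location from inside an R session; the first entry listed is usually your personal library:
.libPaths()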
Getting a "permission denied" error while running a piece of software or a command.
The software or command is trying to access a file or a directory you don't have access to.
Possible causes:
Why do I have multiple versions of the same module loaded?
When a module is loaded, all of its dependent modules are also loaded. So if two pieces of software depend on the same package but at different versions, multiple versions will be loaded, which can cause incompatibility problems for the software that was loaded first. It's a good idea to load only the modules necessary for the software you are currently using.
Where should I create links to locally installed software executables, since I don't have permission in any of the system bin directories?
Simply create a bin directory under your home directory and link any executables into it. This directory is included in your PATH variable automatically.
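For example (the source path is a placeholder):
mkdir -p ~/bin
ln -s /path/to/your/software/mytool ~/bin/mytool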
"lib***.so" files not found?
A library dependency is not resolved; most likely a path is missing from LD_LIBRARY_PATH. This can be fixed by loading the appropriate module or by adding the path to the variable yourself.
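For example, either load the module that provides the library, or extend the variable directly (the path is a placeholder):
module load <module_name>
export LD_LIBRARY_PATH=/path/to/software/lib:$LD_LIBRARY_PATH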
"CXX****_1.x.x" not found?
The wrong version of the C/C++ compiler is being used; try loading a newer version of the gcc or intel module.
".h" files not found?
A header file of the software cannot be found. When using C compilers, this can be solved by adding the software's include directory to CPATH, or by installing the -devel package of the software through a package manager.
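For example (the path is a placeholder):
export CPATH=/path/to/software/include:$CPATH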
Why does ImageJ give memory errors on a webGUI / 32GB node?
How can I use ImageJ effectively on a 32GB node?
ImageJ has a single configuration file, which has to be set up for the amount of RAM on a node. By default, the ImageJ modules are configured to work well with large datasets on our 256GB+ nodes.
If you are working with ImageJ on a default webGUI session, or a 32GB node, you may see a memory error mentioning 'heap allocation' or 'heap size' when trying to start ImageJ. On the 32GB nodes ImageJ can be started with:
ImageJ-32GB
... instead of the default ImageJ command to avoid memory errors.
Note that if you need to work with a dataset larger than ~24GB, you should use a larger-RAM node - i.e. start a webGUI256 session instead of the standard webGUI session.
Mac Samba network mounts disconnect and freeze the computer; how do I fix this?
Background: macOS versions 10.13.10 through 10.14.2 have issues with the Samba server disconnecting due to timeouts. The server disconnects if no activity is performed on the mounted network drive for around 10 minutes: the system sets a 10-minute timer on the connection, and if no traffic passes over it, the mounted drive disconnects. A temporary workaround has been tested, but it needs to be applied every time the system restarts:
sudo sysctl net.smb.fs.kern_deadtimer=0
sudo sysctl net.smb.fs.kern_hard_deadtimer=0
sudo sysctl net.smb.fs.kern_soft_deadtimer=0
Upgrading Java to the latest version causes a javax.ssl.SSLHANDSHAKE exception:
Background: Upgrading Java to the latest version on a Mac breaks the TurboVNC client, due to the security cipher requirements of recent Java releases. TurboVNC version 2.2.2 fixes this issue, and the latest development release of the TurboVNC 2.2.x series is available.
Another workaround is to use the Mac built-in VNC client to connect to the VNC server. To do this:
When a user tries to install a Python package through pip, Python reports an ImportError for some module.
Background: This usually happens when a user tries to install a conda environment, or has accepted pip's offer to upgrade itself into their home directory while using a BioHPC python module. A new pip executable is installed onto their executable $PATH, but that pip is only usable with the version of Python it was installed from, so this error appears when a different Python version is loaded.
Solution:
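A common fix (a sketch, assuming the stray pip was installed into ~/.local/bin by a --user install) is to call pip through the interpreter you actually want, and remove the shadowing copy if necessary:
# Always matches whichever python module is currently loaded
python -m pip install --user <modulename>
# Remove the user-installed pip that shadows the module's pip
rm -f ~/.local/bin/pip ~/.local/bin/pip3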