Oct 30, 2020

MAPS image analysis

  • 1. Department of Cellular and Physiological Sciences, Life Sciences Institute, University of British Columbia, Vancouver, Canada, V6T 1Z3
Protocol Citation: jessechao 2020. MAPS image analysis. protocols.io https://dx.doi.org/10.17504/protocols.io.bn7dmhi6
Manuscript citation:
Chao JT, Roskelley CD, Loewen CJR, MAPS: machine-assisted phenotype scoring enables rapid functional assessment of genetic variants by high-content microscopy. BMC Bioinformatics doi: 10.1186/s12859-021-04117-4
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Protocol status: Working
We use this protocol and it's working
Created: October 28, 2020
Last Modified: October 30, 2020
Protocol Integer ID: 43973
Setting up required environment and packages
Navigate to https://github.com/jessecanada/MAPS and download the repository.
The following commands assume that you are using conda (or Anaconda) to manage your Python packages and that you are on a Unix-type system (Linux or macOS).
Use the following commands in terminal:

Command
Create a new virtual Python environment using conda. Replace "myenv" with your own preferred name. (Mac OS any)
conda create -n myenv python=3

Command
Activate the virtual environment. (Mac OS any)
conda activate myenv

Command
Install required packages. Run this from the downloaded repository directory, where requirements.txt is located. (Mac OS)
conda install --file requirements.txt

Command
Use this command at the end of your work session. (Mac OS)
conda deactivate

Now you are ready to rock and roll. For additional details, see https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
Step 1 - image QC
Open "MAPS_1_img-QC.ipynb" in Jupyter Notebook and follow the steps in the notebook to remove blurry images.
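The notebook's exact QC criterion isn't spelled out here; as an illustration, a common blur metric is the variance of the Laplacian: sharp images have strong edge responses and therefore high variance, while blurry images score low. A minimal pure-Python sketch (function name and test images are our own, not from the notebook):

```python
# Illustrative blur metric (variance of the Laplacian); the notebook's
# actual QC method may differ -- this only demonstrates the idea.

def laplacian_variance(img):
    """Return the variance of the 3x3 Laplacian of a 2D grayscale image.

    `img` is a list of lists of pixel intensities. Blurry images have
    few sharp edges, so their Laplacian response has low variance.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: 4*centre minus the neighbours
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp vertical edge scores higher than a flat (featureless) region:
sharp = [[0, 0, 255, 255]] * 4
flat = [[128] * 4] * 4
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

Images scoring below a chosen threshold would be flagged as blurry and discarded.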
Step 2 - cell detection
Follow this quick start guide to build your first object detection model on Azure. (link: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/get-started-build-detector)

Note that you can augment the training images you create on Azure. To do this, follow the optional training augmentation step below.

Take note of the endpoint, training key, prediction key, resource ID, and project ID.
(optional) Once you have labeled ~100 training images, you can perform training augmentation to boost the performance of your model. Note that it is always better to label more training images.

Open "MAPS_2.1_training_augmentation.ipynb". We recommend opening this notebook on Google Colab so that you don't need to install additional packages.
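As a sketch of what geometric augmentation involves (the notebook may use a richer pipeline), each labeled image can be flipped or rotated, with its bounding-box labels transformed to match, multiplying the effective training set. Assuming Azure-style normalized (left, top, width, height) boxes:

```python
# Minimal augmentation sketch: a horizontal flip applied to both the
# image and its normalized bounding box. Function names are illustrative.

def hflip_image(img):
    """Horizontally flip a 2D image (list of pixel rows)."""
    return [row[::-1] for row in img]

def hflip_bbox(bbox):
    """Flip a normalized (left, top, width, height) bounding box.

    After mirroring, the new left edge is measured from the old right edge.
    """
    left, top, w, h = bbox
    return (1.0 - left - w, top, w, h)

assert hflip_image([[1, 2], [3, 4]]) == [[2, 1], [4, 3]]
assert hflip_bbox((0.125, 0.2, 0.25, 0.4)) == (0.625, 0.2, 0.25, 0.4)
```

Vertical flips and 90-degree rotations follow the same pattern, each yielding another labeled copy of the image.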
Open "MAPS_2_cell-detection.ipynb". We recommend opening this notebook on Google Colab. Follow the steps in the notebook to crop out individual cells.
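Azure Custom Vision reports detections as normalized bounding boxes (left, top, width, height as fractions of the image size). A minimal sketch of turning such a box into a pixel crop (the notebook's actual code may differ in detail):

```python
# Illustrative crop from a normalized bounding box; `crop_cell` is our
# own name, not a function from the MAPS notebooks.

def crop_cell(img, bbox):
    """Crop a region from a 2D image given a normalized bounding box.

    `bbox` is (left, top, width, height), each in [0, 1] relative to
    the image dimensions, as returned by Azure Custom Vision.
    """
    h, w = len(img), len(img[0])
    left, top, bw, bh = bbox
    x0, y0 = int(left * w), int(top * h)
    x1, y1 = int((left + bw) * w), int((top + bh) * h)
    return [row[x0:x1] for row in img[y0:y1]]

# On a 10x10 image, a (0.2, 0.3, 0.4, 0.2) box crops rows 3-4, columns 2-5:
img = [[r * 10 + c for c in range(10)] for r in range(10)]
crop = crop_cell(img, (0.2, 0.3, 0.4, 0.2))
assert len(crop) == 2 and len(crop[0]) == 4
assert crop[0][0] == 32  # pixel at row 3, column 2
```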
Step 3 - phenotype discovery
There are two options here. To use the method in Figure 5A, open "MAPS_3_conv_stacking.ipynb" and follow the instructions in the notebook to generate cell galleries. Alternatively, to use the method in Figure 5C (i.e. deep autoencoder), open "MAPS_3_autoencoder.ipynb" and follow the steps in the notebook. Again, we recommend opening the notebooks in Colab.
After generating cell galleries for wild type and a few variants, carefully inspect them to decide how many classes of localizations you need to classify.
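The gallery idea can be sketched as tiling the cropped cells into one grid image for side-by-side inspection (the layout code below is illustrative, not the notebook's actual implementation):

```python
# Illustrative gallery builder: arrange equal-sized 2D tiles into a
# single grid image, `cols` tiles per row. Names are our own.

def make_gallery(tiles, cols):
    """Tile a list of equal-sized 2D crops into one image."""
    tile_h, tile_w = len(tiles[0]), len(tiles[0][0])
    rows = (len(tiles) + cols - 1) // cols
    blank = [[0] * tile_w for _ in range(tile_h)]
    gallery = []
    for r in range(rows):
        row_tiles = tiles[r * cols:(r + 1) * cols]
        row_tiles += [blank] * (cols - len(row_tiles))  # pad the last row
        for y in range(tile_h):
            # concatenate the y-th pixel row of every tile in this grid row
            gallery.append(sum((t[y] for t in row_tiles), []))
    return gallery

tiles = [[[i] * 2] * 2 for i in range(3)]  # three 2x2 tiles
g = make_gallery(tiles, cols=2)
assert len(g) == 4 and len(g[0]) == 4
assert g[0] == [0, 0, 1, 1] and g[2] == [2, 2, 0, 0]
```

Viewing many cells at once this way makes it easier to judge how many distinct localization classes are present.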
Step 4 - phenotype classification
Gather the cropped individual cell images. Open "MAPS_4_phenotype_classification.ipynb" and follow the instructions in the notebook to make predictions and classify phenotypes. Again, we recommend opening this notebook on Colab. In the end you will get a .csv file containing predictions for each cell.
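Once you have the .csv of per-cell predictions, a quick way to summarize it is to tally cells per predicted class. The column names below are assumptions for illustration; check the header the notebook actually writes:

```python
# Illustrative summary of a per-cell predictions CSV. The column name
# "predicted_class" is assumed, not guaranteed by the MAPS notebook.
import csv
import io
from collections import Counter

def class_counts(csv_text, label_col="predicted_class"):
    """Count how many cells were assigned to each phenotype class."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[label_col] for row in reader)

demo = "cell_id,predicted_class\n1,wild-type\n2,mislocalized\n3,wild-type\n"
assert class_counts(demo) == {"wild-type": 2, "mislocalized": 1}
```

These per-variant class proportions are what you would then compare against wild type to score each variant.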

Congratulations, you've made it to the end!