
Processing 3D FIB-SEM data via AIVE

  • Benjamin Padman (UWA)
Protocol Citation: Benjamin Padman 2025. Processing 3D FIB-SEM data via AIVE. protocols.io https://dx.doi.org/10.17504/protocols.io.14egn48x6v5d/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: Working
We use this protocol and it's working
Created: March 28, 2025
Last Modified: May 13, 2025
Protocol Integer ID: 125599
Funders Acknowledgements:
Aligning Science Across Parkinson’s (ASAP)
Grant ID: ASAP-000350
Disclaimer
DISCLAIMER – FOR INFORMATIONAL PURPOSES ONLY; USE AT YOUR OWN RISK

The protocol content here is for informational purposes only and does not constitute legal, medical, clinical, or safety advice, or otherwise; content added to protocols.io is not peer reviewed and may not have undergone a formal approval of any kind. Information presented in this protocol should not substitute for independent professional judgment, advice, diagnosis, or treatment. Any action you take or refrain from taking using or relying upon the information presented here is strictly at your own risk. You agree that neither the Company nor any of the authors, contributors, administrators, or anyone else associated with protocols.io, can be held responsible for your use of the information contained in or linked to this protocol or any of our Sites/Apps and Services.
Abstract
A complete procedure for processing 3D FIB-SEM data via AIVE (AI-directed Voxel Extraction); from raw images to final result.
Materials
Safety warnings
Minimum system requirements:
System: 64-bit processor and operating system
Processor: 3.2 GHz / 16 cores
Memory: 64 GB
Storage: 1 TB
Before start
Ensure that you have installed FIJI/ImageJ, WEKA (Waikato Environment for Knowledge Analysis), and Microscopy Image Browser (MIB) before getting started.

You will also need to have downloaded the necessary scripts, which are detailed in the Materials section. Each of the scripts will be referenced by their corresponding filename.

Please also consult the minimum system requirements before starting.
Data preprocessing
30m
Select a subset of 3D FIB-SEM slices for processing, and deposit them in a new directory; smaller stacks will process faster, but contain less information.
Using ImageJ/FIJI, run the built-in virtual stack registration plugin (select "Plugins → Registration → Register Virtual Stack Slices"). Apply to the folder containing your selected dataset, and set the registration model and transformations to "Translation only".
Once the registration is complete, save the resulting TIFF stack for subsequent processing. Ensure that the scale is correctly set for the dataset (inspect via "Image → Properties").
Note
If there was significant image drift during data acquisition, the perimeter of the dataset will now contain large empty regions. Crop the dataset down to the smallest viable stack to save time on subsequent computations.

Note
The correct pixel scale must be set for the images, or all subsequent stages will fail.
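
Note
If the pixel scale was lost during import, it can be restored via "Image → Properties", or with a one-line macro such as the sketch below (the 8 nm isotropic voxel size is an assumed, illustrative value; substitute your own acquisition parameters):

// Minimal sketch: set the voxel scale of the active stack (illustrative values)
setVoxelSize(8, 8, 8, "nm");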

Critical
The TIFF stack generated during this stage will be referred to as the "original TIFF Stack" from this point onward.
3D Contrast Limited Adaptive Histogram Equalization (CLAHE)
30m
Place the original TIFF Stack you intend to process in a new directory for CLAHE processing.
Note
The correct pixel scale must be set for all tiff stacks, or the subsequent stages will fail.

Drag and drop the "CLAHE-Batch-3DCLAHE-AnisotropicXYZ.ijm" script into FIJI, then run the script.
Select the directory containing the TIFF stack(s) you intend to process.
Note
Do not interact with your computer during the process. Accidentally selecting an unrelated image window during processing will redirect the CLAHE plugin to that window.

The target folder should now contain CLAHE-processed versions of the TIFF stack, which will now be referred to as the "CLAHE processed stack".
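Note
For reference, the contrast enhancement builds on FIJI's built-in CLAHE plugin, which can also be invoked manually on a single open image. The macro call below only illustrates that underlying plugin, with assumed default parameters rather than the settings used by the batch script:

// Illustrative single-image CLAHE call (parameters are assumptions, not the batch script's settings)
run("Enhance Local Contrast (CLAHE)", "blocksize=127 histogram=256 maximum=3 mask=*None*");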
Machine Learning with WEKA - 3D Feature Generation
1d
Open FIJI/ImageJ, drag and drop the "ML-Features-PART1-3DFeatSplitter-Sigma8.bsh" script into FIJI.

Run the script.
A new dialog box will now open, with 5 parameters that must be entered before proceeding:

Input Stack - Select your original TIFF Stack (which was generated during pre-processing)
Output directory - Select/create a new directory on an HDD/SSD with ample storage.
Temporary directory - Select/create a new directory, preferably on an SSD (with ample storage).
Additional target slices per loop - This number determines the number of slices that will be processed per calculation cycle.
Starting Slice - This number defines which slice the processing will start at (useful for resuming the process if it is ever interrupted).

Note
When setting the "Additional target slices per loop", the goal is to choose a number that maximizes RAM usage without exceeding your RAM capacity (which may crash your computer).
Start with a low number (2-3) while monitoring your memory usage.
If the first full computation loop succeeds, but you have RAM to spare, increase this number.
If the first full computation loop fails due to insufficient memory, reduce this number.
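As a rough aid while tuning, a macro snippet along these lines can report memory usage between loops; the 4 bytes per voxel and the feature count of 40 are assumptions for illustration only:

// Report the memory currently used by ImageJ
print("Memory in use: " + IJ.freeMemory());
// Very rough footprint estimate per additional slice:
// width * height * 4 bytes (32-bit) * number of 3D features (assumed 40 here)
w = getWidth();
h = getHeight();
print("Estimated bytes per extra slice: " + (w * h * 4 * 40));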
Once you have set all 5 parameters, press OK.
Leave the process to run.
Note
This process may take minutes, hours, or even days.
It all depends on the specifications of your computer and the size of your dataset.

Once the process is complete, delete all files in the temporary directory and proceed to the next stage. Store backups of your 3D features if required.
Machine Learning with WEKA - 2D Feature Generation
3h
Open FIJI/ImageJ, drag and drop the "ML-Features-PART2-2DFeatSplitter.bsh" script into FIJI.

Run the script.
Note
For expert users, the script also includes optional extra features which are commented out at the end of the script. Please inspect the script and adjust as per your requirements.

A new dialog box will now open, with 3 parameters that must be entered before proceeding:

Input Stack - Select your original TIFF Stack (which was generated during pre-processing)
Output directory - Select/create a new directory on an HDD/SSD with ample storage.
Temporary directory - Select/create a new directory, preferably on an SSD (with ample storage).

Once you have set all 3 parameters, press OK.
Leave the process to run.

Note
This process may take minutes, hours, or even days, but it will typically be faster than the 3D feature calculations.

Once the process is complete, delete all files in the "Temporary directory" and proceed to the next stage. Store backups of your 2D features if required.
Machine Learning with WEKA - Combining the Features
3h
Open FIJI/ImageJ, drag and drop the "ML-Features-PART3-Combine3Dand2DFeatures.bsh" script into FIJI.

Run the script.
A new dialog box will now open, with 3 parameters that must be entered before proceeding:

Folder 1 - Select the directory containing your 3D Features.
Folder 2 - Select the directory containing your 2D Features.
Output directory - Select/create a new directory, where your final feature stacks will be deposited.

Once you have set all 3 parameters, press OK.
Leave the process to run.
Once the process is complete, the Output directory will be populated with individual TIFF stacks, each containing the features corresponding to one slice from the original stack.
This directory will now be referred to as the "Feature Stack Directory"

You may now proceed to machine learning.

Note
You can now delete the original 2D and 3D features, but it is strongly advised that you confirm that the preceding stages were successful before doing so.

Preparation of Training Labels for Machine Learning
1h
Open Microscopy Image Browser (MIB), and load the "original TIFF Stack" for annotation.
Note
This section can be conducted in parallel with the feature generation stages described above, simply by using different computers.

Create a new labeling model and add 4 "materials" to that model. Each "material" represents a class of structure that the machine learning model will be trained to detect.

Note
Alternatively, you can use any number of "materials" you prefer, provided that one of those "materials" represents a type of structure that generates electron signal (e.g. membranes). This is required for AIVE.

Rename each of the "materials" as follows:
  • "1Void" - To represent empty extracellular regions of the sample
  • "2Sol" - To represent granular protein content of the cytoplasm and nucleus
  • "3Matter" - To represent electron-dense homogenous materials that do not belong to a membrane (i.e. mitochondrial matrix)
  • "4Memb" - To represent cellular membranes.

Note
Alternatively, you can name the classes however you desire, provided that one of those classes is used to identify a type of structure that generates electron signal (e.g. membranes). This is required for AIVE.

Randomly choose a slice in the stack; only proceed if there are representative examples of each material described above.

Make a note of the slice number for future reference.
On the designated slice, use the brush tool to annotate empty extracellular regions of the image, and add those annotations to the material named 1Void.
You may proceed to the next step once you have labelled at least 2000 pixels.
On the designated slice, use the brush tool to annotate intracellular cytosolic regions and proteinaceous content of the cell, and add those annotations to the material named 2Sol.
You may proceed to the next step once you have labelled at least 2000 pixels.
On the designated slice, use the brush tool to annotate electron-dense homogenous materials that can occur between membranes (Mitochondrial matrix, ER lumen, non-membranous lipid in lipid droplets, etc.), and add those annotations to the material named 3Matter.
You may proceed to the next step once you have labelled at least 2000 pixels.
On the designated slice, use the brush tool to annotate cellular membranes, and add those annotations to the material named 4Memb.
You may proceed to the next step once you have labelled at least 2000 pixels.
Repeat this process (go to step #23) until you have labelled at least 8 random slices in the stack.

Note
Randomization is important, but it is more important to ensure adequate coverage of the stack. If any of the randomly chosen slices is too close to another annotated slice, choose a new slice instead.

Save the completed training labels as TIFF slices, by selecting "Model → Save Model As..."; under the "Save As Type:" tab, choose "Tiff Format (.tif)".

Choose a new directory for the labels, then press OK.

A new dialog will appear asking how the TIFF files should be formatted.
Select "Sequence of 2D files".

The directory containing these files will now be referred to as the "Training Label Directory"

Note
Remember to save a backup of the training labels, so that you can revisit them later if required.

Machine Learning with WEKA - Extraction of Training Data
10m
Open FIJI/ImageJ, drag and drop the "ML-CorePartA-ExtractTrainingDataFromFeatures.bsh" script into FIJI.

Run the script.
A new dialog box will now open, with 5 parameters that must be entered before proceeding:

Feature Stack Directory - Select the "Feature Stack Directory" (as prepared above)
Binary Labels Directory - Select the "Training Label Directory" (as prepared above)
Output directory - Select/create a new directory, where you will store the training data.
File number to train - The slice number that training data will be extracted from.
Num. of samples per class - The number of random measurements that will be made per class.

Note
For "File number to train", you must enter a slice number corresponding to one of the slices that you have already labelled for training. If you have been following the instructions, then you will have kept a record of which slices are available for training.

Note
For "Num. of samples per class ", higher numbers are better, but the number cannot exceed the number of pixels annotated for any class on any slice. (i.e. even if you've annotated millions of pixels for all the other classes and slices, if any class only has 500 pixels annotated on any slice, then your maximum number of samples is 500)

Repeat the training data extraction for each slice that you have previously annotated.
You should now have 8 unique ARFF (Attribute-Relation File Format) files in the output directory.

This output directory will now be referred to as the "Training Data Directory"
Open WEKA (Waikato Environment for Knowledge Analysis), and start the command line dialog by clicking the "Simple CLI" button.
Using the following command, combine each pair of ARFF files in your "Training Data Directory", until all files are combined into a single ARFF file (edit the directory and file names to suit your needs):

java weka.core.Instances append "C:\Training Data Directory\File1.arff" "C:\Training Data Directory\File2.arff" > "C:\Training Data Directory\File1+File2.arff"
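
For example, eight training files can be pooled in three rounds of pairwise appends; the file names below are illustrative only:

java weka.core.Instances append "C:\Training Data Directory\Slice1.arff" "C:\Training Data Directory\Slice2.arff" > "C:\Training Data Directory\Pair12.arff"
java weka.core.Instances append "C:\Training Data Directory\Slice3.arff" "C:\Training Data Directory\Slice4.arff" > "C:\Training Data Directory\Pair34.arff"
java weka.core.Instances append "C:\Training Data Directory\Slice5.arff" "C:\Training Data Directory\Slice6.arff" > "C:\Training Data Directory\Pair56.arff"
java weka.core.Instances append "C:\Training Data Directory\Slice7.arff" "C:\Training Data Directory\Slice8.arff" > "C:\Training Data Directory\Pair78.arff"
java weka.core.Instances append "C:\Training Data Directory\Pair12.arff" "C:\Training Data Directory\Pair34.arff" > "C:\Training Data Directory\Quad1234.arff"
java weka.core.Instances append "C:\Training Data Directory\Pair56.arff" "C:\Training Data Directory\Pair78.arff" > "C:\Training Data Directory\Quad5678.arff"
java weka.core.Instances append "C:\Training Data Directory\Quad1234.arff" "C:\Training Data Directory\Quad5678.arff" > "C:\Training Data Directory\AllTrainingData.arff"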

Once all .ARFF files have been pooled into a single file, you are ready to proceed to machine learning.
Machine Learning with WEKA - Model Training
5m
Open WEKA (Waikato Environment for Knowledge Analysis), and press the "Explorer" button.

In the explorer window, open the pooled .ARFF file described in the previous section (press the "Open File..." button).
Once the data is loaded in the explorer window, navigate to the "Classify" tab.
Click the "Choose" button and select Random Forest (weka > classifiers > trees > RandomForest).

Note
Alternatively, you can choose your own preferred model architecture for machine learning.

Left-click the text bar next to the "Choose" button. You are free to adjust the parameters listed, but the following settings are recommended:
  • bagSizePercent - Should be 10, but can be reduced for particularly massive datasets.
  • numFeatures - Should be the square root of the number of features available in each feature stack (e.g. if each feature stack holds roughly 80 features, set numFeatures to 9).
  • numIterations - 200 is a good starting point, but fewer iterations will generate a faster model.
  • seed - We prefer 1337; whatever your choice of random seed, try to stay consistent.
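
Note
For reference, an equivalent Random Forest can also be trained from the Simple CLI instead of the Explorer. The command below is a sketch using the recommended settings above; the -K value of 9 assumes roughly 80 features per stack, and the file paths are illustrative:

java weka.classifiers.trees.RandomForest -t "C:\Training Data Directory\AllTrainingData.arff" -d "C:\Training Data Directory\TrainedModel.model" -P 10 -I 200 -K 9 -S 1337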
Once the training parameters are configured to your satisfaction, you can train the model.
Press the "Start" button in the explorer window to start training the model.

Note
You will know that the training is complete when the bird animation in the lower right corner stops dancing.

Once training is complete, right-click on the corresponding line in the "Results List" (on the left hand side of the window) and save the trained model.

The trained model can now be applied.
Machine Learning with WEKA - Applying the trained model
3h
Open FIJI/ImageJ, drag and drop the "ML-CorePartB-ApplyClassifierToFeatures.bsh" script into FIJI.

Run the script.
A new dialog box will now open, with 4 parameters that must be entered before proceeding:

Features Directory - Select the "Feature Stack Directory" (as prepared above)
WEKA Model - Select the model you trained in WEKA
Output Directory - Select/create a new directory, for storage of the results
Starting Slice - This number defines which slice the processing will start at (useful for resuming the process if it is ever interrupted).

Once you have set all parameters, press OK.
Leave the process to run.

Note
This may take several hours, depending on the complexity of the trained model and your data.

Once the process is complete, the output directory will be populated with one stack of machine learning predictions per slice.

To convert these results into one stack of slices per predicted class, drag and drop the "ML-postCorePartB-SlicesToStacks.ijm" script into FIJI, then run it. Select the Output Directory from the previous script, then select a new directory for the converted outputs.

Each of the resulting stacks will represent the raw probabilistic predictions for one class, as originally defined in Step 22.

The tiff stack corresponding to the membrane channel will be referred to as the "Raw Membrane Predictions" from this point onward.
Note
If required, the machine learning outputs from this section can be thresholded and imported back into MIB as new training data. By curating the outputs from a preliminary model, a second round of machine learning can be conducted with a greater number of samples.

Organelle annotation for AIVE
Open Microscopy Image Browser (MIB), and load the "original TIFF Stack" for annotation.

Note
Alternatively, you can use any classification method or program you want, provided that the final result is a "label image" TIFF stack in which each voxel value represents the identity of an organelle class.

Create a new labeling model, and add a new "material" for each category of organelle you intend to annotate.
Label each organelle of interest, ensuring that it is added to the correct "material" in each instance.
Note
As long as you don't stray into unrelated organelles, and you annotate the entire organelle, you don't need to stay within the perimeter of that organelle!


Note
Ensure that you frequently save backups of the classification labels.

When the classification labels are complete, save them as one TIFF stack by selecting "Model → Save Model As..."; under the "Save As Type:" tab, choose "Tiff Format (.tif)".
Choose a destination for the labels, then press OK.

A new dialog will appear asking how the TIFF files should be formatted.
Select "3D Tiff".

The resulting file will be referred to as the "Organelle Class Stack" from this point onward.


Note
If all the above sections are complete, then the input data required to complete AIVE are ready; you may now proceed with the final stages.


Final AIVE processing
Organize your data in final preparation for AIVE.

Create a new directory, where you will conduct the AIVE processing.
This will be the "Main Directory"

Create five sub-directories in the main directory, using the following names: "0 Originals", "1 Masks", "2 Processed Masks", "3 Membrane Merge", "4 FIB Merge" (a minimal macro for creating this layout is sketched after the note below).

Note
The directory structure should look like this:

.../MAIN DIRECTORY/
├── 0 Originals
├── 1 Masks
├── 2 Processed Masks
├── 3 Membrane Merge
└── 4 FIB Merge
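
A minimal macro sketch for creating this layout (the main directory path is an assumed example; edit it to suit):

// Create the AIVE working directories (illustrative main directory path)
mainDir = "C:/AIVE Main Directory/";
File.makeDirectory(mainDir);
subDirs = newArray("0 Originals", "1 Masks", "2 Processed Masks", "3 Membrane Merge", "4 FIB Merge");
for (i = 0; i < lengthOf(subDirs); i++)
    File.makeDirectory(mainDir + subDirs[i]);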

Copy the following precomputed TIFF stacks into the "0 Originals" sub-directory:
  • CLAHE processed stack (From "3D Contrast Limited Adaptive Histogram Equalization (CLAHE)")
  • Raw Membrane Predictions (From "Machine Learning with WEKA - Applying the trained model")
  • Organelle Class Stack (From "Organelle annotation for AIVE")

Note
Reminder! The correct pixel scale must be set for all images, or subsequent stages will fail.

Open FIJI/ImageJ.

Drag and drop the "AIVE-Macro1-ConvertLabelImageToBinaryStacks.ijm" script into the FIJI window.
  • Run the script.
  • Select the "Organelle Class Stack".
  • Select the "1 Masks" sub-directory.
  • Wait for process to finish.

Expected result
The "Organelle Class Stack" will be converted into one binary TIFF stack per organelle class.


Note
Rename the output files now to indicate their corresponding organelle classes. This will help you stay organized, and the filenames will be preserved in subsequent steps.

Drag and drop the "AIVE-Macro2-GaussianFilterTheClassBinaries.ijm" script into the FIJI window.
  • Run the script.
  • Select the "1 Masks" sub-directory.
  • select the "2 Processed Masks" sub-directory.
  • Wait for process to finish.

Expected result
The boundaries of each individual binary mask will now be blurred.
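
Note
The blurring is conceptually similar to FIJI's built-in 3D Gaussian filter. An illustrative manual equivalent for a single open mask is shown below; the sigma values are placeholder assumptions, not the script's settings:

// Illustrative 3D Gaussian blur of the active binary mask (placeholder sigmas, in pixels)
run("Gaussian Blur 3D...", "x=2 y=2 z=2");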

Drag and drop the "AIVE-Macro3-MergeMLoutputsWithClasses.ijm" script into the FIJI window.
  • Run the script.
  • Select the "Raw Membrane Predictions" .
  • Select the "2 Processed Masks" sub-directory.
  • Select the "3 Membrane Merge" sub-directory.
  • Wait for process to finish.

Expected result
The organelle masks will have been merged with the membrane predictions.


Note
This stage is really designed for handling multiple CLAHE processed stacks at the same time. You can still follow this procedure if you only have one file to process, but you can also just apply the 10 nm radius median filter manually, then save the result as a separate file (a minimal example of the manual approach is sketched below).
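If you take the manual route, FIJI's built-in 3D median filter can be applied to the open CLAHE stack as sketched below. Convert the 10 nm radius into pixels using your voxel size; the radius of 2 assumes 5 nm voxels, purely for illustration:

// Illustrative manual 3D median filter (radii in pixels; 2 px ≈ 10 nm at 5 nm per voxel)
run("Median 3D...", "x=2 y=2 z=2");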

Create a new sub-directory called "CLAHE Median" in the "0 Originals" directory, then transfer the CLAHE processed stacks into the new directory.

Drag and drop the "AIVE-Macro4-MedianFilterTheCLAHEstacks.ijm" script into the FIJI window.
  • Run the script.
  • Select the "CLAHE Median" sub-directory.
  • Wait for process to finish.

Expected result
A 3D median blur will now have been applied to the CLAHE results, to denoise them.


Optional
Drag and drop the "AIVE-Macro5-MergeTheMaskedMLoutputsWithCLAHE.ijm" script into the FIJI window.

Run the script.
Select the 3D Median filtered "CLAHE processed stack" generated during the prior stage.
Select the "3 Membrane Merge" sub-directory.
Select the "4 FIB Merge" sub-directory.
Wait for process to finish.


Expected result
These outputs are the final AIVE results.
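
Note
Conceptually, this final merge combines the masked membrane predictions with the denoised electron microscopy signal on a per-voxel basis. A heavily simplified sketch of such a merge for one pair of open stacks is shown below; the window titles are illustrative, and the macro itself should be inspected for the exact operation and weighting it applies:

// Conceptual sketch: voxel-wise multiplication of masked membrane predictions
// by the median-filtered CLAHE signal (window titles are assumed examples)
imageCalculator("Multiply create 32-bit stack", "MembraneMerge.tif", "CLAHE_Median.tif");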


Note
Once the final AIVE results are available they can be subjected to a variety of 3D analyses, modifications, and 3D visualizations. The following additional scripts have been provided on GitHub as demonstrative examples:
  • Post-AIVE-BatchMorpholibJMeasurement.ijm
  • Post-AIVE-FillMitochondrialStructures.ijm
  • Post-AIVE-PrepDataFor3DEnvironments.ijm
  • Post-AIVE-StrayVesicleSeparator.ijm