Jul 21, 2025

Public workspaceDifferentiating Idiopathic Normal Pressure Hydrocephalus (iNPH) from Alzheimer's Disease (AD) using a Deep Transfer Learning Model on MRI

Department of Neurosurgery, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China.
Protocol Citation 2025. Differentiating Idiopathic Normal Pressure Hydrocephalus (iNPH) from Alzheimer's Disease (AD) using a Deep Transfer Learning Model on MRI. protocols.io https://dx.doi.org/10.17504/protocols.io.261geko7wg47/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: In development
We are still developing and optimizing this protocol
Created: July 18, 2025
Last Modified: July 21, 2025
Protocol Integer ID: 222778
Keywords: AD, iNPH, MRI, deep learning, deep transfer learning model on mri, differentiating idiopathic normal pressure hydrocephalus, idiopathic normal pressure hydrocephalus, deep transfer learning, deep transfer learning model, machine learning model for the differential diagnosis, mri, alzheimer, magnetic resonance imaging, using magnetic resonance imaging, brain t2 flair mri scan, differential diagnosis, idiopathic normal pressure, imaging, ad patient
Funders Acknowledgements:
Guodong Huang
Grant ID: SZGSP002
Disclaimer
This protocol and the accompanying code are provided for research and educational purposes only.
They are not intended for clinical diagnosis, treatment planning, or any form of medical decision-making.
All models and results are dependent on the datasets and experimental settings described in our study, and may not generalize to other populations or imaging conditions.

Users are responsible for ensuring compliance with local data privacy regulations and ethical guidelines when applying this protocol to their own data.

The authors and contributors accept no liability for any direct or indirect consequences arising from the use or misuse of this protocol or its associated tools.
Abstract
This study developed and validated a machine learning model for the differential diagnosis of idiopathic normal pressure hydrocephalus (iNPH) and Alzheimer's disease (AD) using Magnetic Resonance Imaging (MRI) and Deep Transfer Learning (DTL). Brain T2 FLAIR MRI scans were retrospectively collected from iNPH and AD patients.
Image Attribution
Department of Neurosurgery, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China.
Guidelines
This protocol is intended solely for academic research and should not be used in clinical decision-making.
All code and components referenced are publicly available on GitHub and should be used with proper citation.
Users are expected to have a basic understanding of medical image processing, deep learning frameworks (e.g., PyTorch), and statistical evaluation methods (e.g., ICC, LASSO).
Materials
Hardware and Software Environment
Operating System: 64-bit Windows 11 22H2
CPU: 13th Gen Intel Core i7-13700KF @ 3.40 GHz
GPU: NVIDIA GeForce RTX 4070
Programming Environment: Python 3.6
Development Interface: Jupyter Lab / Jupyter Notebook


Troubleshooting
Safety warnings
Motion artifacts and poor image quality may degrade model performance. Quality control steps must be followed rigorously.

When preparing the 2.5D inputs, ensure consistent slice alignment and segmentation accuracy to avoid data leakage.

Be aware of class imbalance in the dataset; the provided `split_dataset4sol` component includes built-in stratified sampling to mitigate this issue.
Ethics statement
All MRI data must be fully de-identified prior to use, in accordance with applicable data protection regulations.
The protocol should only be applied to datasets that have obtained ethical approval for secondary analysis.
Do not use this workflow for diagnosis, patient selection, or real-time decision-making.
Before start
This study developed and validated a machine learning model for the differential diagnosis of idiopathic normal pressure hydrocephalus (iNPH) and Alzheimer's disease (AD) using Magnetic Resonance Imaging (MRI) and Deep Transfer Learning (DTL). Brain T2 FLAIR MRI scans were retrospectively collected from iNPH and AD patients.
Study Cohort and Data Acquisition
This part describes the selection of patients and the acquisition of MRI data.
Inclusion Criteria for iNPH Patients:
  1. Presence of one or more symptoms from the clinical triad (gait disturbance, cognitive impairment, urinary incontinence).
  2. Clinical symptoms not fully attributable to other coexisting diseases.
  3. No definitive evidence of preceding conditions causing secondary hydrocephalus.
  4. Cerebrospinal fluid (CSF) pressure ≤ 200 mmH2O with normal CSF composition.
  5. At least one of the following: (a) neuroimaging showing narrowing of sulci over the high-convexity/midline surface; (b) symptomatic improvement after a CSF tap test and/or drainage test.
Inclusion Criteria for AD Patients:
  1. Aged 65 years or older.
  2. Cognitive decline indicated by Mini-Mental State Examination (MMSE) at admission.
  3. Insidious onset with a disease course of several months to years.
  4. Persistent decline in cognitive functions beyond memory.
  5. Presence of other CSF neuropathological markers for AD.
Exclusion Criteria (for all patients):
  1. History of cerebrovascular disease.
  2. Parkinson’s disease or Parkinson’s syndrome.
  3. Systemic or intracranial malignant tumors.
  4. History of cranial trauma.
  5. Clinical diagnosis of severe psychiatric disorders or long-term psychiatric medication use.
  6. Incomplete clinical records.
  7. MRI data unavailable or containing motion artifacts affecting computer recognition.
To ensure identical distribution proportions of iNPH and AD cases in the training and testing sets, the split_dataset4sol component was employed for stratified random splitting.
Internally, the component automatically performs a stratified shuffle split based on the labels, thereby preventing class imbalance between the two subsets.
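The internals of split_dataset4sol are not reproduced here; the stratified shuffle split it describes can be sketched with scikit-learn's StratifiedShuffleSplit. The label values and split proportion below are illustrative, not taken from the study.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Hypothetical labels: 0 = iNPH, 1 = AD (counts are illustrative only).
labels = np.array([0] * 60 + [1] * 40)
patient_ids = np.arange(len(labels))

# Stratified shuffle split: the class ratio is preserved in both subsets.
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(patient_ids.reshape(-1, 1), labels))

train_ratio = labels[train_idx].mean()  # fraction of AD cases in training set
test_ratio = labels[test_idx].mean()    # fraction of AD cases in testing set
```

With 60/40 labels and a 70/30 split, both subsets retain a 40% AD proportion, which is exactly the behaviour attributed to the component.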
MRI Image Acquisition Processing and Segmentation
This part covers the pre-processing of raw MRI data and the manual segmentation of relevant structures. Segmentation was performed using ITK-SNAP (https://www.itksnap.org/).
Acquire whole-brain T2 FLAIR-weighted images for all patients from the Picture Archiving and Communication System (PACS).
Image Pre-processing
Normalize: Apply signal intensity normalization to all T2 FLAIR images.
Resample all images to an isotropic voxel size of 0.9 × 0.9 × 0.9 mm³.

Correct Bias Field: Use the N4 bias field correction algorithm to correct for signal inhomogeneity.

Enhance Resolution: Apply a super-resolution algorithm to enhance image quality.
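Two of the steps above, intensity normalization and isotropic resampling, can be sketched with NumPy/SciPy as below. This is a minimal illustration, not the study's pipeline: z-score normalization is one common choice of intensity normalization, and in practice N4 bias-field correction would typically be applied with a dedicated tool (e.g. SimpleITK's N4BiasFieldCorrectionImageFilter) rather than the placeholder comment shown here.

```python
import numpy as np
from scipy.ndimage import zoom

def zscore_normalize(volume):
    """Z-score intensity normalization over nonzero (foreground) voxels."""
    mask = volume > 0
    mu, sigma = volume[mask].mean(), volume[mask].std()
    out = volume.astype(np.float32).copy()
    out[mask] = (out[mask] - mu) / sigma
    return out

def resample_isotropic(volume, spacing, target=0.9):
    """Resample to a target isotropic spacing (mm) by trilinear interpolation.
    N4 bias-field correction would normally be applied before this step."""
    factors = [s / target for s in spacing]
    return zoom(volume, factors, order=1)

# Toy volume: 10x10x10 voxels at 1.8 mm isotropic spacing -> 0.9 mm target.
vol = np.random.default_rng(0).random((10, 10, 10)) + 0.1
iso = resample_isotropic(zscore_normalize(vol), (1.8, 1.8, 1.8))
```

Resampling a 1.8 mm volume to 0.9 mm doubles the voxel count along each axis.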
Manual Segmentation of Regions of Interest (ROIs)
A neurosurgeon and a neurologist will manually delineate the lateral ventricles and periventricular edema on a slice-by-slice basis for all images. For each patient, identify the single axial slice that displays the maximum Evans' Index (EI). Save the resulting segmentation masks in NIfTI (.nii) format.
Intra-rater Reliability Assessment
  1. Select Subset: Randomly select a part of patient scans from the training set.
  2. Re-segment: After a 2-month interval, the same neurologist will re-segment the ROIs for this subset.
  3. Calculate ICC: Compute the Intraclass Correlation Coefficients (ICCs) to evaluate the consistency between the initial and repeated segmentations for both ventricular and periventricular edema volumes.
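The ICC between the initial and repeated segmentation volumes can be computed as follows. The protocol does not state which ICC form was used; this sketch assumes ICC(3,1) (two-way mixed effects, consistency, single measurement), a common choice for intra-rater reliability.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `ratings` is an (n_subjects, k_sessions) array, e.g. ROI volumes from
    the initial and repeated segmentations."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subject MS
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse)

# Perfectly repeatable measurements give ICC = 1.
identical = np.tile(np.array([10.0, 20.0, 30.0, 40.0])[:, None], (1, 2))
```

A library implementation (e.g. pingouin's `intraclass_corr`) would normally be preferred over hand-rolled ANOVA sums for real analyses.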
DTL Feature Extraction and Selection
This part details the core computational process of extracting and selecting features using deep transfer learning. We used VGG16 pretrained on the ImageNet dataset.
Prepare "2.5D" Input

From the identified slice, extract the ROI containing the manually segmented lateral ventricles and periventricular edema.
Create a 3-channel input by stacking the ROI slice with its two immediately adjacent slices.
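The 2.5D construction above can be sketched as follows; the clamping behaviour at volume edges is an assumption, since the protocol does not specify how boundary slices are handled.

```python
import numpy as np

def make_25d_input(volume, z):
    """Stack slice z with its two neighbours into a 3-channel 2.5D image.
    `volume` is a (Z, H, W) array; z is the index of the slice showing the
    maximum Evans' Index. Edge slices are clamped to stay in bounds
    (an assumption, not specified by the protocol)."""
    zm = max(z - 1, 0)
    zp = min(z + 1, volume.shape[0] - 1)
    return np.stack([volume[zm], volume[z], volume[zp]], axis=0)  # (3, H, W)

vol = np.arange(5 * 4 * 4, dtype=np.float32).reshape(5, 4, 4)
x = make_25d_input(vol, z=2)
```

The resulting (3, H, W) array matches the 3-channel input expected by ImageNet-pretrained backbones such as VGG16.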
Monitor Training: Manually monitor the learning rate and the training/validation loss curves. Reduce the learning rate, or stop training early, if the validation loss plateaus or begins to rise, to prevent overfitting.
Principal component analysis (PCA) was applied to the 1,024 Deep Transfer Learning (DTL)–derived imaging features. The first 100 principal components (PCs), collectively explaining >90% of the total variance, were retained for downstream analyses. To preserve patient-level integrity, feature matrices from multiple lesions or time-points of the same patient were concatenated after PCA, ensuring that intra-patient correlations were captured within a unified low-dimensional representation.
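The PCA step can be sketched with scikit-learn. The feature matrix below is a random stand-in for the real (n_patients, 1024) DTL features; passing a float to `n_components` keeps the smallest number of components whose cumulative explained variance reaches that threshold (the study reports that its first 100 PCs pass 90%).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy stand-in for the (n_patients, 1024) DTL feature matrix.
features = rng.normal(size=(120, 1024))

# Retain enough components to explain >= 90% of total variance.
pca = PCA(n_components=0.90, svd_solver="full")
reduced = pca.fit_transform(features)
explained = pca.explained_variance_ratio_.sum()
```

On the toy data the number of retained components will differ from 100; only the selection rule is reproduced here.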
Reproducibility Filter (ICC): Assess the reproducibility of these 100 features using ICCs calculated from the reliability assessment cohort. Retain only features with ICC ≥ 0.8.
Multicollinearity Reduction (Pearson Correlation)
Calculate the Pearson correlation coefficient for all pairs of the remaining features.
For any pair with a high correlation (|r| > 0.9), discard the feature with the lower variance.
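The correlation filter described above can be sketched as follows; the greedy pairwise loop and the tie-breaking rule (drop the lower-variance member of each pair) follow the text, while the iteration order is an implementation choice.

```python
import numpy as np

def drop_correlated(X, threshold=0.9):
    """For each highly correlated pair (|r| > threshold), drop the feature
    with the lower variance. Returns the kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    var = X.var(axis=0)
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if keep[i] and keep[j] and corr[i, j] > threshold:
                keep[j if var[j] < var[i] else i] = False
    return np.flatnonzero(keep)

# Column 1 is almost exactly 2x column 0, so one of the pair is dropped;
# column 0 has the lower variance and is the one removed.
rng = np.random.default_rng(0)
a = rng.normal(size=100)
X = np.column_stack([a, 2 * a + 1e-3 * rng.normal(size=100),
                     rng.normal(size=100)])
kept = drop_correlated(X)
```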
Final Feature Selection (LASSO)
Employ the Least Absolute Shrinkage and Selection Operator (LASSO) regression algorithm.
Use 10-fold cross-validation to determine the optimal regularization parameter (λ) that minimizes cross-validated error.
Select the final features corresponding to non-zero coefficients at the optimal λ.
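The LASSO selection steps above can be sketched with scikit-learn's LassoCV, which performs the 10-fold cross-validated search for the regularization parameter internally. The data below are synthetic; treating the binary label as a continuous target for LASSO is a common radiomics convention, though the study does not state whether a logistic variant was used instead.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 120, 30
X = rng.normal(size=(n, p))
# Toy labels driven by two informative features (purely illustrative).
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

# 10-fold CV selects the lambda (alpha) minimising cross-validated error;
# features with non-zero coefficients at that alpha are retained.
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
```

The optimal penalty is available as `lasso.alpha_`, and `selected` holds the indices of the final feature set.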
Model Construction and Interpretation
This part describes the final steps of building, validating, and interpreting the classification model.
Train MLP Model:
Construct a Multilayer Perceptron (MLP) classifier.
Optimize hyperparameters by selecting the combination that yields the highest average F1-score across cross-validation folds in the training set.
Evaluate the trained MLP model's performance on the independent testing set. Calculate the following metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC).
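The training and evaluation steps can be sketched with scikit-learn. The synthetic data, network size, and hyperparameters below are illustrative only; the study's hyperparameter search (maximizing cross-validated F1) is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Synthetic stand-in for the selected DTL features (0 = iNPH, 1 = AD).
X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# Illustrative architecture; the study's tuned hyperparameters differ.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

pred = mlp.predict(X_te)
metrics = {
    "accuracy": accuracy_score(y_te, pred),
    "precision": precision_score(y_te, pred),
    "recall": recall_score(y_te, pred),
    "f1": f1_score(y_te, pred),
    "auc": roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1]),
}
```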
Interpretation of the Model
Generate Heatmaps: Use Gradient-weighted Class Activation Mapping (Grad-CAM) to generate visual heatmaps for model predictions.
Analyze Heatmaps: Overlay the heatmaps onto the input MRI slices to highlight the class-discriminative regions that the model focused on for its classification decision.
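The Grad-CAM combination step can be sketched in NumPy: each feature map is weighted by the spatial average of the class-score gradient over that map, the weighted maps are summed, and a ReLU keeps only positively contributing regions. In practice the activations and gradients come from backpropagation through the CNN (e.g. via PyTorch forward/backward hooks); here they are supplied directly to keep the sketch self-contained.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM combination step.
    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. them.
    Returns an (H, W) heatmap normalised to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))                  # alpha_k per map
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: the first map gets weight 1, the second weight 0,
# so the heatmap reproduces the first map.
acts = np.stack([np.eye(4), np.ones((4, 4))])
grads = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
heat = grad_cam(acts, grads)
```

The normalised heatmap is then upsampled to the input resolution and overlaid on the MRI slice for visual inspection.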