**Step 3: Screen and Select Studies**
**3.1 Reviewers**
Three independent reviewers screen all records.
**3.2 Screening Procedure**
For each record, each reviewer independently:
1. Reads the title and abstract
2. Classifies the paper as Include, Exclude, or Maybe
3. Records a brief rationale
4. For Exclude decisions, assigns an exclusion reason code (E1–E8, see Step 1.2)
**3.3 Decision Rules**
- Inclusion: A paper is included if at least 2 of 3 reviewers agree on inclusion
- Maybe: Papers classified as Maybe by majority are retained as included studies
- Exclusion: A paper is excluded if at least 2 of 3 reviewers agree on exclusion
- Discordant cases: Records with no majority are resolved through discussion among all three reviewers. Document the number of discordant cases and how each was resolved.
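The decision rules above can be sketched as a small vote-tallying function. This is a minimal illustration, not part of the protocol; the function name and label strings are our own choices:

```python
from collections import Counter

def screening_decision(votes):
    """Apply the 2-of-3 majority rules to one record's three labels,
    each 'Include', 'Exclude', or 'Maybe'."""
    label, n = Counter(votes).most_common(1)[0]
    if n < 2:
        # No majority: discordant, resolve by discussion among all three reviewers
        return "Discordant"
    if label == "Exclude":
        return "Exclude"
    # Include majority, or Maybe majority (retained as an included study per 3.3)
    return "Include"
```

For example, `screening_decision(["Include", "Maybe", "Include"])` returns `"Include"`, while three different labels return `"Discordant"`.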
**3.4 Inter-Rater Reliability**
Calculate and report the following reliability statistics:
- Fleiss' kappa for three raters
- Pairwise percent agreement for each reviewer pair (R1–R2, R1–R3, R2–R3)
- Three-way agreement (proportion of records where all three reviewers agreed)
- Prevalence-Adjusted Bias-Adjusted Kappa (PABAK): Calculate using PABAK = 2Po − 1, where Po is the observed pairwise agreement proportion. Compute for each pair and report the average. PABAK corrects for the deflation of kappa expected under the high inclusion rates typical of scoping reviews (Byrt et al., 1993; Sim & Wright, 2005).
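The pairwise agreement, PABAK, and three-way agreement statistics can be computed with a short self-contained script. This is one possible sketch (input format is an assumption); Fleiss' kappa is omitted here and would typically come from a statistics package such as statsmodels:

```python
from itertools import combinations

def pairwise_agreement(r1, r2):
    """Proportion of records on which two reviewers assigned the same label."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def pabak(po):
    """Prevalence-Adjusted Bias-Adjusted Kappa: PABAK = 2*Po - 1."""
    return 2 * po - 1

def reliability_summary(ratings):
    """ratings: dict mapping reviewer id -> list of labels (same record order).
    Returns per-pair {po, pabak}, the mean PABAK, and three-way agreement."""
    pairs = {}
    for (a, ra), (b, rb) in combinations(ratings.items(), 2):
        po = pairwise_agreement(ra, rb)
        pairs[(a, b)] = {"po": po, "pabak": pabak(po)}
    n_records = len(next(iter(ratings.values())))
    # Proportion of records where all three reviewers gave the same label
    all_agree = sum(
        len({r[i] for r in ratings.values()}) == 1 for i in range(n_records)
    ) / n_records
    mean_pabak = sum(p["pabak"] for p in pairs.values()) / len(pairs)
    return pairs, mean_pabak, all_agree
```

Report each pairwise Po and PABAK alongside the averages, as specified above.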
**3.5 Sensitivity Analysis — Inclusion Threshold**
- Identify all studies included by majority (2/3) but not unanimously
- Also identify studies classified as Maybe
- Report how many studies could potentially change status under unanimous agreement
- Confirm via discussion-based review whether these studies meet eligibility criteria
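The flagging in the first two bullets can be automated over the same reviewer-ratings structure. A sketch under the assumption that ratings are stored as per-reviewer label lists in record order:

```python
def flag_threshold_sensitive(ratings):
    """Return indices of records that could change status under a unanimity
    rule: included by 2/3 majority but not unanimously, or retained via a
    Maybe majority. ratings: dict of reviewer id -> list of labels."""
    flagged = []
    n_records = len(next(iter(ratings.values())))
    for i in range(n_records):
        votes = [r[i] for r in ratings.values()]
        majority_not_unanimous = votes.count("Include") == 2
        maybe_majority = votes.count("Maybe") >= 2
        if majority_not_unanimous or maybe_majority:
            flagged.append(i)
    return flagged
```

The length of the returned list is the count to report in the sensitivity analysis; each flagged record then goes to discussion-based review.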
**Step 4: Develop Analytical Framework (Domain Classification)**
**4.1 Domain Definitions**
Classify each included study into exactly one of three mutually exclusive domains based on its primary technological contribution:
| Domain | Definition | Defining Characteristic |
|----------|-----------------------------------------------------------------------------|--------------------------------------------------------------|
| Hardware | Physical modification of manikins: sensors, actuators, musculoskeletal systems, structural or material modifications | Core innovation resides in the physical apparatus |
| Software | Computational contribution: control algorithms, physiological simulation engines, AI/ML, robotics middleware, teleoperation architectures | No significant physical modification to the manikin |
| Hybrid | Deliberate combination of physical modifications with digital/virtual overlays, mixed reality, or tightly integrated systems | Neither hardware nor software alone constitutes the primary contribution |
**4.2 Decision Rules for Ambiguous Cases**
- If a study explicitly emphasises both hardware and software as equally central → classify as Hybrid
- If dominance remains unclear after careful reading → defer to the stated application or clinical impact
- If still ambiguous → flag as low-confidence and resolve via full-text review and discussion
**4.3 Classification Procedure**
1. Two reviewers independently classify all included studies
2. For each study, record: primary contributing domain, classification confidence rating (high, moderate, or low), and brief justification
3. Resolve disagreements through discussion-based consensus
4. Studies classified as low-confidence require full-text review
5. Track and report the number of discordant and low-confidence cases
**4.4 Sensitivity Analysis — Domain Classification**
1. Restrict to high-confidence classifications only
2. Compare domain distribution (% Hardware, % Software, % Hybrid) with full dataset
3. Conduct independent verification of all classifications
4. Report number and direction of any reclassifications
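Steps 1 and 2 of this sensitivity analysis reduce to comparing two percentage distributions. A minimal sketch, assuming each study is stored as a (domain, confidence) pair:

```python
from collections import Counter

def domain_distribution(studies):
    """studies: list of (domain, confidence) tuples.
    Returns percentage of studies per domain."""
    counts = Counter(domain for domain, _ in studies)
    total = sum(counts.values())
    return {domain: 100 * n / total for domain, n in counts.items()}

def sensitivity_by_confidence(studies):
    """Compare the full-dataset distribution with the distribution
    restricted to high-confidence classifications only."""
    full = domain_distribution(studies)
    high = domain_distribution([s for s in studies if s[1] == "high"])
    return full, high
```

A material shift between the two distributions would indicate that low-confidence classifications drive the headline domain split.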
**Step 5: Chart the Data**
**5.1 Data Charting Form**
Develop a standardised data charting form with the following variables:
| Variable | Description |
|---------------------------|-----------------------------------------------------------------------------|
| Author(s) | All authors |
| Publication year | Year of publication |
| Country | Country of first author affiliation |
| Journal/Conference | Publication venue |
| Study design | Prototype/Development, Experimental/Evaluation, Simulation study, RCT, Review, Technical/Methodological, or Not classified |
| Domain | Hardware, Software, or Hybrid (from Step 4) |
| Clinical application domain | Primary clinical area (assigned inductively) |
| Manikin/simulator type | Type of manikin or simulator described |
| Key technologies | Technologies used (sensors, actuators, AI/ML, middleware, etc.) |
| Key findings | Main findings or contributions (1–2 sentences) |
**5.2 Pilot Testing**
Pilot-test the data charting form on 10 studies. Assess whether all variables are extractable, the form is complete, and coding instructions are clear. Revise as needed before applying to the full dataset.
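The charting form can be mirrored as a typed record so every extraction carries the same fields. The field names below are our shorthand for the variables in the table above, not part of the protocol:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChartingRecord:
    authors: str              # All authors
    year: int                 # Publication year
    country: str              # Country of first author affiliation
    venue: str                # Journal/Conference
    study_design: str         # e.g. "Prototype/Development"
    domain: str               # Hardware, Software, or Hybrid (from Step 4)
    clinical_domain: str      # Primary clinical area (assigned inductively)
    simulator_type: str       # Manikin/simulator type
    key_technologies: List[str] = field(default_factory=list)
    key_findings: str = ""    # 1-2 sentences
```

Using a fixed schema like this makes the pilot test concrete: a variable that cannot be filled for a pilot study signals a gap in the form or the coding instructions.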
**5.3 Clinical Application Domain Coding**
Clinical application domains are assigned inductively based on the primary clinical area addressed in each study. Studies addressing multiple clinical areas are coded using the primary application as stated by the authors.
**5.4 Abstract Recovery**
If abstracts are not available for all included studies:
1. Stage 1: Retrieve from primary search platform
2. Stage 2: Query PubMed E-utilities API
3. Stage 3: Manual full-text retrieval through institutional library access, direct author contact, and open-access repositories
Target: 100% abstract coverage.
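Stage 2 can use NCBI's EFetch endpoint from the E-utilities suite. A minimal URL builder as one possible sketch (the helper name and example PMID are illustrative; production use should also supply an API key and respect NCBI rate limits):

```python
from urllib.parse import urlencode

EFETCH_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def pubmed_abstract_url(pmid):
    """Build an EFetch URL requesting one PubMed record's abstract as text."""
    params = {
        "db": "pubmed",
        "id": pmid,
        "rettype": "abstract",
        "retmode": "text",
    }
    return f"{EFETCH_BASE}?{urlencode(params)}"
```

Records still lacking an abstract after this stage fall through to Stage 3 (manual retrieval).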
**Step 6: Synthesise Results**
**6.1 Descriptive Numerical Summary**
Tabulate included studies by:
- Domain (Hardware, Software, Hybrid)
- Publication year and temporal period
- Geographic distribution (country of first author)
- Clinical application domain
**6.2 Cross-Tabulations**
- Domain × publication period (Table 2)
- Domain × clinical application area (Figure 3)
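Cross-tabulations such as Domain × publication period can be built without external libraries (pandas' `crosstab` would be the usual alternative). A minimal sketch, assuming each study contributes one (row label, column label) pair:

```python
from collections import Counter

def crosstab(pairs):
    """Count (row_label, col_label) pairs into a nested dict:
    table[row][col] = number of studies."""
    table = {}
    for (row, col), n in Counter(pairs).items():
        table.setdefault(row, {})[col] = n
    return table
```

The same helper serves both tabulations by swapping the column variable (publication period for Table 2, clinical application area for Figure 3).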
**6.3 Convergence Indicators (RQ3)**
To assess the state of convergence between simulation and robotics communities, examine:
1. Disciplinary asymmetry: Hardware-to-software ratio across time periods
2. Cross-disciplinary tool adoption: Count studies explicitly describing AI/ML and robotics middleware (ROS)
3. Venue fragmentation: Number of unique publication venues; presence/absence of a dominant venue
4. Cross-domain clinical coverage: Proportion of clinical areas with studies from 2+ domains; identify areas addressed by all 3 domains
5. Hybrid domain trajectory: Trend in Hybrid study output over time; citation metrics by domain
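Indicator 1 (disciplinary asymmetry) reduces to a per-period count ratio. A sketch under the assumption that each study is recorded as a (domain, period) pair:

```python
from collections import Counter

def hw_sw_ratio_by_period(studies):
    """Hardware-to-software study count ratio per publication period.
    Returns None for periods with no Software studies (ratio undefined)."""
    counts = Counter(studies)
    periods = sorted({period for _, period in studies})
    return {
        p: (counts[("Hardware", p)] / counts[("Software", p)]
            if counts[("Software", p)] else None)
        for p in periods
    }
```

A ratio drifting toward 1 across periods would suggest growing balance between the simulation (hardware-leaning) and robotics (software-leaning) communities.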
**6.4 Narrative Thematic Analysis**
Organise narrative synthesis around the three research questions:
- RQ1: Nature and extent of research (domain distribution, study designs, technology landscape, clinical application areas)
- RQ2: Temporal trends and geographic patterns (publication growth, geographic concentration, centres of activity, venue distribution)
- RQ3: Convergence indicators (disciplinary asymmetry, cross-disciplinary tool adoption, venue fragmentation, cross-domain clinical coverage, Hybrid domain as convergence signal)
**6.5 Sensitivity Analyses**
Report results of both sensitivity analyses:
1. Inclusion threshold: Number of studies affected by varying from 2/3 majority to unanimous agreement; whether any findings change materially
2. Domain classification confidence: Domain distribution when restricted to high-confidence classifications only; number and direction of reclassifications
**Step 7: Report According to PRISMA-ScR**
**7.1 Required Elements**
Ensure the manuscript includes all 22 PRISMA-ScR checklist items (Tricco et al., 2018). Key elements:
- Structured abstract (Background, Methods, Results, Conclusions)
- PRISMA-ScR flow diagram (Figure 1)
- Complete search strategy (Additional file 1)
- Data charting form with all extracted data (Additional file 2)
- Completed PRISMA-ScR checklist (Additional file 3)
**7.2 Mandatory Declarations**
Include the following statements:
- Ethics approval and consent to participate (not applicable for scoping reviews)
- Availability of data and materials
- Protocol registration status
**7.3 Protocol Registration**
Register this protocol on protocols.io and cite the registration DOI in the manuscript Methods section, noting that registration was retrospective.