Oct 06, 2025

Assessment and Competency-Based Education in Distributed Healthcare Education: A Scoping Review Protocol

  • Aysha Alsharhan1,
  • Jukha Shater Al Marzooqi2,
  • Khadija Mohd AlSulaimi3,
  • Mersiha Kovacevic4,
  • Raed Rafeh5,
  • Sara Kazim5,
  • Wail Bamadhaf5,
  • Zeyad Alrais5,
  • Nabil Zary6
  • 1Hamdan Bin Rashid Cancer Hospital, Dubai Health;
  • 2Al Jalila Children’s Specialty Hospital, Dubai Health;
  • 3Latifa Hospital, Dubai Health;
  • 4Institute of Learning, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Health;
  • 5DAI, Dubai Health;
  • 6Institute of Learning, Mohammed Bin Rashid University of Medicine and Health Sciences
Protocol Citation: Aysha Alsharhan, Jukha Shater Al Marzooqi, Khadija Mohd AlSulaimi, Mersiha Kovacevic, Raed Rafeh, Sara Kazim, Wail Bamadhaf, Zeyad Alrais, Nabil Zary 2025. Assessment and Competency-Based Education in Distributed Healthcare Education: A Scoping Review Protocol. protocols.io https://dx.doi.org/10.17504/protocols.io.q26g7nwwqlwz/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Protocol status: Working
We use this protocol and it's working
Created: October 04, 2025
Last Modified: October 06, 2025
Protocol Integer ID: 229007
Keywords: Distributed medical education, assessment, competency-based education, Miller's Pyramid, workplace-based assessment, OSCE, programmatic assessment, assessment equity, CanMEDS, ACGME competencies, clinical skills assessment, entrustable professional activities, assessment standardization
Abstract
Introduction
Assessment and competency-based education (CBME) are vital for ensuring healthcare learners acquire and demonstrate the skills necessary for safe and effective practice. In distributed healthcare education systems, where learners are trained across multiple, geographically dispersed sites, assessment faces unique challenges: achieving standardization and quality across locations; maintaining fairness so all learners have equal opportunities to demonstrate their competence; coordinating high-stakes evaluations remotely; implementing workplace-based assessments consistently with proper faculty training; and applying competency frameworks uniformly despite differing site characteristics. A 2023 international survey revealed that 58% of distributed medical programs identified assessment standardization as a major challenge, citing issues such as inter-rater reliability, access equity, and consistency among competency committees across sites. Reliable and valid assessments are crucial for patient safety and public trust, underscoring the importance of assessment equity in distributed settings. Despite the widespread use of competency frameworks and programmatic assessment, there remains limited systematic evidence on assessment practices, challenges, and solutions specific to distributed education contexts.

To systematically explore the literature on assessment and competency-based education within distributed healthcare education systems, this review examines: (1) assessment methods employed across distributed sites, aligned with Miller's assessment framework; (2) the competency domains evaluated; (3) the scope and depth of assessment coverage; (4) approaches to standardizing assessments and ensuring quality; (5) equity in assessment practices across sites; (6) implementation of workplace-based assessments; (7) challenges faced and potential solutions; and (8) the outcomes associated with assessment strategies.

This scoping review adheres to JBI methodology and PRISMA-ScR guidelines. We will search MEDLINE, Embase, ERIC, Web of Science, and gray literature from 2000 to 2025. Two independent reviewers will screen and extract data using two frameworks: Miller's Pyramid (Knows, Knows How, Shows How, Does) for assessment methods and a Two-Tier Competency Framework for mapping competency domains (five universal categories plus specific framework details). Data collection will cover assessment standardization, equity considerations, workplace-based assessment implementation, and quality assurance. We will develop a 5×4 assessment matrix (five competency categories × four Miller levels) to analyze the scope of assessment. Results will be presented through a narrative synthesis, accompanied by tables, figures, matrix analysis, and cross-framework comparisons.


Objective

To systematically map the literature on assessment and competency-based education in distributed healthcare education systems, examining: (1) assessment methods used across distributed sites mapped to Miller's assessment framework; (2) competency domains assessed; (3) assessment coverage breadth and depth; (4) assessment standardization and quality assurance approaches; (5) assessment equity across sites; (6) workplace-based assessment implementation; (7) challenges and solutions; and (8) outcomes related to assessment practices.


Methods

This scoping review will follow JBI methodology and PRISMA-ScR guidelines. We will search MEDLINE, Embase, ERIC, Web of Science, and gray literature from 2000 to 2025. Two independent reviewers will screen and extract data using two complementary frameworks: Miller's Pyramid for characterizing assessment methods (Knows, Knows How, Shows How, Does) and a Two-Tier Competency Framework for mapping competency domains assessed (5 universal categories plus specific framework detail). Data extraction will include assessment standardization strategies, equity considerations, workplace-based assessment implementation, and quality assurance mechanisms. We will create a 5×4 assessment matrix (5 competency categories × 4 Miller levels) to characterize assessment breadth and depth. Results will be synthesized narratively with tables, figures, quantitative matrix analysis, and cross-framework synthesis.
Guidelines
This scoping review will follow the Joanna Briggs Institute (JBI) methodology for scoping reviews and adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) reporting guidelines. The protocol will be registered on protocols.io before study commencement.
Materials
- Computers with internet access and institutional database subscriptions
- Covidence systematic review management software license
- Reference management software (Mendeley, EndNote, or Zotero)
- Microsoft Excel for data extraction forms and matrix analysis
- Statistical software for inter-rater reliability analysis (SPSS/R)
- Access to databases: MEDLINE, Embase, ERIC, Web of Science, Cochrane CENTRAL, ProQuest
Safety warnings
Methodological Limitations
  • A scoping review does not assess study quality or systematically synthesize effectiveness evidence.
  • Framework mapping and matrix population require judgment despite calibration procedures; inter-rater reliability may be imperfect.
  • English-language restrictions may miss international assessment approaches, particularly from non-English-speaking countries with different competency frameworks.
  • Studies published before 2000 are excluded, though important assessment literature predates this cutoff.
  • The gray literature search, though comprehensive, may not capture all unpublished program descriptions.


Conceptual Limitations

  • Miller's Pyramid, while foundational and appropriate for this review, does not explicitly address programmatic assessment principles, entrustment concepts, or assessment for learning vs. of learning distinctions
  • Two-tier competency framework, while designed to be framework-agnostic, still requires mapping diverse frameworks to 5 categories, which may not perfectly capture all distinctions
  • The 5×4 matrix assumes relative independence of competencies and Miller levels, though in reality they interact (e.g., "Does"-level assessments often span multiple competencies simultaneously)
  • The "breadth × depth" characterization is functional but simplified; assessment quality involves additional dimensions beyond what we capture
  • The assessment equity definition may vary across contexts and cultures


Practical Limitations

  • Heterogeneity in how assessments are described may make comparison difficult despite standardized extraction.
  • Many studies may lack sufficient assessment detail for complete framework mapping or matrix population (many cells will be marked "Not Described")
  • Sparse reporting on assessment logistics, costs, and challenges is likely
  • Publication bias may favor successful or innovative assessment implementations over routine practices or failed attempts
  • Psychometric data are often not reported in educational literature, limiting conclusions about assessment quality
  • Studies may describe assessment design/intentions rather than actual implementation


Literature Base Limitations

  • Assessment literature in distributed education may be limited, particularly for specific topics (EPAs, competency committees)
  • Distributed-specific assessment challenges may be under-reported in the general assessment literature.
  • Assessment innovations may not be published (disseminated locally instead)
  • Regional and cultural assessment practices may be under-represented
Ethics statement
This scoping review analyzes published literature and does not involve human subjects; therefore, research ethics board approval is not required per institutional policy.

Ethical practices:
  • Transparent methods and data sharing enabling verification
  • Assessment equity is approached with sensitivity to avoid stigmatizing under-resourced sites; equity findings are framed in terms of structural solutions and opportunities rather than site deficiencies.
  • Acknowledge tensions between assessment standardization (for fairness) and local context responsiveness (for authenticity)
  • Engage learner perspectives on assessment equity through the Advisory Panel
  • Recognize that assessment has real consequences for learners' careers and patient safety; take seriously the imperative to identify equity issues
Objectives
Systematically map the literature on assessment and competency-based education in distributed healthcare education systems.
Characterize assessment methods used across distributed sites mapped to Miller's Pyramid framework (Knows, Knows How, Shows How, Does).
Identify competency domains assessed using a two-tier approach: (a) five universal categories applicable across all frameworks; (b) specific framework details (CanMEDS, ACGME, other).
Analyze assessment coverage breadth (which competencies are assessed) and depth (at what Miller's Pyramid levels) using the 5×4 assessment matrix.
Examine assessment standardization and quality assurance approaches across distributed sites.
Investigate assessment equity: whether learners at all sites have equivalent assessment experiences and opportunities.
Describe workplace-based assessment implementation, high-stakes assessment logistics, competency committee functioning, and EPA implementation in distributed contexts.
Synthesize challenges, solutions, and outcomes related to assessment in distributed systems.
Methodology Overview
Follow Joanna Briggs Institute (JBI) methodology for scoping reviews and report according to PRISMA-ScR guidelines.
Register protocol on protocols.io prior to search execution.
Conduct systematic search across multiple databases (MEDLINE, Embase, ERIC, Web of Science) and gray literature sources (2000-2025).
Use duplicate independent screening and data extraction processes.
Apply dual complementary frameworks: Miller's Pyramid + Two-Tier Competency Framework.
Create 5×4 assessment matrix (5 competency categories × 4 Miller levels) to characterize assessment breadth and depth.
Present findings with quantitative matrix analysis, descriptive statistics, narrative synthesis, and visual presentations.
Conceptual Frameworks
FRAMEWORK 1: Miller's Pyramid of Clinical Competence
Level 1: KNOWS (Knowledge)

  • Definition: Factual knowledge, basic and clinical sciences
  • Assessment methods: MCQs, short answer questions, written exams, online knowledge tests
  • Distributed context: Most amenable to standardization; can be administered remotely; centralized exam administration possible
Level 2: KNOWS HOW (Competence/Applied Knowledge)

  • Definition: Application of knowledge, clinical reasoning, problem-solving
  • Assessment methods: Extended matching, clinical reasoning exercises, case-based assessments, script concordance tests
  • Distributed context: Can be standardized through structured cases and rubrics; often amenable to online administration
Level 3: SHOWS HOW (Performance)

  • Definition: Demonstration of skills in controlled/simulated settings
  • Assessment methods: OSCEs, simulation-based assessments, clinical skills stations, standardized patients
  • Distributed context: High logistical complexity; requires physical infrastructure at multiple sites; standardization challenges; cost considerations
Level 4: DOES (Action/Authentic Performance)

  • Definition: Performance in actual clinical practice
  • Assessment methods: Workplace-based assessments (Mini-CEX, DOPS), multi-source feedback, portfolio assessment, EPAs, competency committee judgments
  • Distributed context: Most relevant for distributed education; variability in assessor expertise across sites; requires substantial faculty development; challenges aggregating data from multiple sites
Miller's Limitations Acknowledgment

While foundational, Miller's Pyramid has limitations: its hierarchical structure may imply that "Does" is superior to "Knows," though knowledge remains essential; it does not explicitly address programmatic assessment principles; and it pre-dates modern frameworks such as entrustment. However, its strength is a clear, widely understood structure for categorizing assessment methods by authenticity level, which is ideal for mapping practices across distributed systems.
FRAMEWORK 2: Two-Tier Competency Framework
TIER 1: Five Universal Core Competency Categories (framework-agnostic, present in ALL major frameworks)

4.1.1 Clinical/Technical Competence: Medical knowledge, clinical skills, clinical reasoning, diagnostic/therapeutic procedures, patient management

4.1.2 Communication: Patient communication, documentation, interprofessional communication, informed consent, breaking bad news

4.1.3 Professionalism & Ethics: Professional behavior, ethical practice, accountability, self-awareness, reflection, commitment to patients/profession

4.1.4 Collaboration & Teamwork: Working in teams, interprofessional collaboration, consultation, referral, and shared decision-making

4.1.5 Systems & Context: Health systems understanding, leadership, management, health advocacy, quality improvement, practice improvement, resource stewardship, patient safety

TIER 2: Specific Framework Extraction

Also extract:
  • Which specific framework the study uses (CanMEDS, ACGME, Good Medical Practice, AMC, institutional, other, none/implicit)
  • Which specific roles/competencies within that framework are assessed
  • Framework-specific assessment approaches
Rationale for Two-Tier Approach

The two-tier approach respects framework diversity while enabling both universal synthesis (Tier 1) and framework-specific insights (Tier 2).
Assessment Matrix: Competency × Miller's Pyramid
Create 5×4 matrix (5 competency categories × 4 Miller levels = 20 cells)
The matrix enables characterization of assessment:
- breadth (horizontal: which competencies are assessed) and
- depth (vertical: at what Miller levels).
Matrix Population Rules

  • Unit of analysis: Per study
  • "Assessed" definition: Competency assessed at that Miller level as a systematic program requirement
  • Distinguish "Not Described" (study doesn't mention) from "Not Assessed" (study explicitly states not assessed)
  • For assessed cells, note the assessment method(s) used
  • Site-specific notation: M (main campus only), M+S (main+some satellites), All (all sites)
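To make these rules concrete, the following is a minimal sketch (in Python, with hypothetical field and label names) of how a per-study matrix could be recorded; it illustrates the data structure only and is not a prescribed implementation.

```python
# Minimal sketch of the per-study 5x4 assessment matrix.
# Field names ("status", "methods", "sites") are hypothetical.
COMPETENCIES = ["Clinical/Technical", "Communication",
                "Professionalism & Ethics", "Collaboration & Teamwork",
                "Systems & Context"]
MILLER_LEVELS = ["Knows", "Knows How", "Shows How", "Does"]

def empty_matrix():
    """Default every cell to "Not Described", per the population rules."""
    return {(c, lvl): {"status": "Not Described", "methods": [], "sites": None}
            for c in COMPETENCIES for lvl in MILLER_LEVELS}

study_matrix = empty_matrix()
# Example: a study reporting an OSCE that assesses communication at the
# "Shows How" level, offered at the main campus plus some satellites (M+S).
study_matrix[("Communication", "Shows How")] = {
    "status": "Assessed", "methods": ["OSCE"], "sites": "M+S"}
```

Recording the site notation alongside each cell allows the same structure to feed both the breadth/depth analysis and the site-specific equity analysis described later.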
Framework Integration

Miller's Pyramid answers "HOW deeply/authentically?" (method).
The competency framework answers "WHAT domains?" (content).

Together, they reveal the breadth × depth of assessment.
Search Strategy
Three-concept search strategy requiring distributed context: (Healthcare Education) AND (Distributed/Multi-Site - REQUIRED) AND (Assessment/Competency)
Concept 1: Healthcare Education

Medical education OR nursing education OR health professions education OR clinical education OR medical students OR nursing students OR pharmacy students OR residents OR fellows OR healthcare trainees OR academic health systems
Concept 2: Distributed/Multi-Site Education (REQUIRED in all searches)

Distributed medical education OR distributed education OR regional medical campus OR rural education OR rural medical education OR multi-site OR multisite OR satellite campus OR branch campus OR community-based education OR remote sites OR geographically distributed OR decentralized education
Concept 3: Assessment and Competency

Assessment OR evaluation OR examination OR testing OR competency OR competence OR competency-based education OR CBME OR OSCE OR objective structured clinical examination OR workplace-based assessment OR WBA OR mini-CEX OR DOPS OR programmatic assessment OR Miller's Pyramid OR CanMEDS OR ACGME competencies OR competency framework OR EPA OR entrustable professional activities OR portfolio assessment OR multi-source feedback OR simulation assessment OR standardized patient OR competency committee OR assessment standardization OR assessment equity OR assessment parity OR inter-rater reliability
Sample search strategy (MEDLINE via Ovid):

1. exp Education, Medical/ OR exp Education, Nursing/ OR "health professions education".mp.
2. "distributed medical education".mp. OR "regional campus".mp. OR "rural medical education".mp. OR "multisite education".mp. OR "satellite campus".mp.
3. exp "Educational Measurement"/ OR "competency-based education".mp. OR OSCE.mp. OR "workplace-based assessment".mp. OR "clinical skills assessment".mp. OR EPA.mp. OR "entrustable professional activities".mp. OR CanMEDS.mp. OR "programmatic assessment".mp. OR "assessment standardization".mp. OR "assessment equity".mp.
4. 1 AND 2 AND 3
5. Limit to English language, year 2000-current
Search execution across databases:

  • MEDLINE (Ovid)
  • Embase (Ovid)
  • ERIC (EBSCOhost)
  • Web of Science Core Collection
  • Cochrane Central Register of Controlled Trials
  • ProQuest Dissertations and Theses Global
Gray literature sources:

  • MedEdPORTAL
  • AMEE conference proceedings and AMEE Guides
  • AAMC publications
  • Assessment resources from competency framework organizations (Royal College of Physicians and Surgeons of Canada, ACGME)
  • Google Scholar (first 200 results)
The search strategy will be peer reviewed by a health sciences librarian using the PRESS checklist.
Study Selection
Title and Abstract Screening
Two independent reviewers screen all titles/abstracts using Covidence.
Inclusion criteria:

  • Population: Healthcare professions learners in distributed education programs (≥2 geographically separate sites)
  • Concept: Assessment methods, competency-based education, programmatic assessment, EPAs, competency committees, assessment equity
  • Context: Distributed healthcare education, any location, any educational level
  • Source types: All study designs, gray literature
Exclusion criteria:

  • Single-site programs
  • Assessments without educational context
  • Fully online programs without workplace-based or face-to-face assessment
  • Published before 2000
  • Non-English language
Pilot screening on 50 citations to calibrate reviewers and refine criteria.
Calculate inter-rater reliability (Cohen's kappa) after first 100 citations; target κ ≥ 0.70.
Resolve disagreements through discussion or third-reviewer arbitration.
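For illustration, a minimal sketch of this reliability check, assuming scikit-learn is available and using hypothetical reviewer decisions (1 = include, 0 = exclude):

```python
# Check screening agreement against the kappa >= 0.70 target.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # hypothetical decisions
reviewer_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")
if kappa < 0.70:
    print("Below target: recalibrate reviewers before continuing screening.")
```

In this toy example the reviewers agree on 8 of 10 citations but kappa is only 0.60, showing why chance-corrected agreement, not raw agreement, is the appropriate calibration metric.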
Full-Text Review
Two independent reviewers assess all full texts against inclusion criteria.
Document reasons for exclusion systematically.
Resolve disagreements through discussion or arbitration.
Conduct citation chaining: hand-search reference lists and forward citation search.
Data Extraction
General Study Characteristics

  • Author, year, country, publication type
  • Study design and methodology
  • Healthcare profession(s), educational level
  • Number of distributed sites, site characteristics
  • Sample size (learners, assessors), duration
  • Funding source, conflicts of interest
Framework 1: Miller's Pyramid Assessment Method Extraction

For each assessment method described:
  • Name/type of assessment
  • Miller's Pyramid Level: Knows / Knows How / Shows How / Does
  • Purpose: Formative, summative, or both
  • Frequency: One-time, periodic, continuous
  • Standardization degree across sites
  • Sites using assessment: All / subset / main only / satellites only
  • Administration approach (centralized, decentralized, hybrid)
  • Logistical details, resources required
  • Challenges specific to the distributed context
  • Solutions/innovations
  • Quality assurance: Psychometric data, inter-rater reliability, standardization mechanisms
Framework 2: Two-Tier Competency Assessment Extraction
TIER 1: Universal Core Competency Categories

For each of the 5 categories, document:
  • Whether assessed (yes/no)
  • If assessed: assessment methods used, Miller levels, whether at all sites or site-variable, distributed-specific considerations
TIER 2: Specific Framework Details

  • Framework used: CanMEDS / ACGME / GMP / AMC / Institutional / Other / None/Implicit
  • Explicitness: Curriculum organized by framework / Implicit alignment / Not mentioned
  • Which specific roles/competencies within the framework are assessed
Assessment Matrix Population
Populate the 5×4 matrix for each study, showing which competencies are assessed at which Miller levels.
Apply matrix population rules:

  • Mark cells as: Assessed / Not Described / Not Assessed
  • For assessed cells, note the assessment method(s)
  • Use site-specific notation (M / M+S / All) for equity analysis
Assessment Equity Extraction

  • Is assessment equity explicitly discussed?
  • Assessment disparities between sites identified?
  • Do all sites use the same methods and assess all competencies?
  • Equity solutions: Traveling examiners, centralized exams, recorded assessments, remote proctoring, regional hubs, resource allocation
  • Equity outcomes: Evidence of disparity reduction
Workplace-Based Assessment Specific Extraction

  • WBA types (Mini-CEX, DOPS, EPA assessments, etc.)
  • Frequency and required numbers
  • Assessor types and training
  • Faculty development approaches
  • Data aggregation methods
  • Competency committee use of WBA data
High-Stakes Assessment Logistics

  • Coordination approach across sites
  • Centralized vs. decentralized administration
  • Standardization mechanisms
  • Resource requirements and costs
Competency Committee / Assessment Governance

  • Committee structure and composition
  • Meeting frequency/format
  • Decision-making processes integrating multi-site data
EPAs (if applicable)

  • Which EPAs used
  • Entrustment decision processes across sites
  • Supervisor training
  • Challenges and solutions
Challenges, Solutions, and Outcomes

  • Assessment challenges (standardization, equity, logistics, faculty development, resources, quality assurance)
  • Solutions implemented
  • Assessment outcomes: Validity, reliability, feasibility, cost, learner/faculty satisfaction, educational impact, equity outcomes
Framework Extraction Calibration
Four-Phase Calibration Process
Phase 1: Training

  • All extractors complete Miller's Pyramid and two-tier competency framework training
  • Practice mapping assessments to frameworks
  • Practice matrix population
Phase 2: Pilot Extraction

  • Extractors independently extract 10 diverse studies
  • Team meeting to discuss every framework mapping and matrix decision
  • Develop decision rules for ambiguous cases
  • Document all rules
Phase 3: Calibration Sample

  • After ~25% of extractions, extractors independently extract an additional 10 studies.
  • Calculate inter-rater reliability (Cohen's kappa) for: Miller assignments, Tier 1 competency identification, matrix population (target κ ≥ 0.70 for each)
  • If reliability falls below target, conduct additional calibration before proceeding
Phase 4: Ongoing Calibration

  • Monthly calibration meetings
  • Document decision rules in the shared log
  • Random audit of 10% extractions
  • Address drift immediately
Data Synthesis
Descriptive Analysis

Create tables presenting:
  • Study characteristics
  • Assessment methods by Miller's Pyramid level
  • Competency framework usage (Tier 2: Which frameworks are used)
  • Competency category coverage (Tier 1: frequencies)
  • Assessment matrix descriptive statistics (cell frequencies, empty cells, site-specific patterns)
  • Assessment equity analysis
  • WBA characteristics
  • High-stakes assessment logistics
  • Challenges and solutions
  • Assessment outcomes
Quantitative Matrix Analysis
Coverage Scores per Competency Category (Breadth Analysis)

For each of the 5 categories: the % of Miller levels at which the competency is assessed (0-100%)
Depth Scores per Miller Level (Assessment Method Distribution)

For each Miller level: % of competency categories assessed at that level
Comprehensive Assessment Identification

Define "comprehensive assessment":
A system assesses ≥4 of 5 competency categories at ≥3 of 4 Miller levels.
Calculate: % of distributed systems meeting the comprehensive assessment threshold.
Assessment Gap Identification

Identify most commonly empty matrix cells (competency-level combinations rarely assessed)
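A minimal Python sketch of these matrix metrics, reusing the per-study matrix structure sketched earlier (helper names are hypothetical, and labels are repeated so the snippet runs standalone):

```python
from collections import Counter

COMPETENCIES = ["Clinical/Technical", "Communication",
                "Professionalism & Ethics", "Collaboration & Teamwork",
                "Systems & Context"]
MILLER_LEVELS = ["Knows", "Knows How", "Shows How", "Does"]

def coverage_score(matrix, competency):
    """Breadth: % of Miller levels at which the competency is assessed."""
    hits = sum(matrix[(competency, lvl)]["status"] == "Assessed"
               for lvl in MILLER_LEVELS)
    return 100 * hits / len(MILLER_LEVELS)

def depth_score(matrix, level):
    """Depth: % of competency categories assessed at the given Miller level."""
    hits = sum(matrix[(c, level)]["status"] == "Assessed"
               for c in COMPETENCIES)
    return 100 * hits / len(COMPETENCIES)

def is_comprehensive(matrix):
    """>=4 of 5 categories assessed at >=3 of 4 levels (75% coverage each)."""
    return sum(coverage_score(matrix, c) >= 75 for c in COMPETENCIES) >= 4

def empty_cell_ranking(matrices):
    """Across studies, rank cells by how often they are not marked Assessed."""
    gaps = Counter()
    for matrix in matrices:
        for cell, record in matrix.items():
            if record["status"] != "Assessed":
                gaps[cell] += 1
    return gaps.most_common()
```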
Narrative Synthesis - Tiered Approach
Assessment Methods Distribution: Methods at each Miller level

Most common/under-utilized methods; methods particularly suited or challenging for distributed contexts
Competency Assessment Coverage

  • Tier 1 analysis (5 universal categories coverage)
  • Tier 2 analysis (which frameworks used, framework-specific patterns)
  • Comprehensively vs. under-assessed domains
Assessment Breadth × Depth

Analysis of 5×4 matrix patterns; competencies assessed only at lower levels vs. spanning all; comprehensive vs. limited assessment; distributed context impact; quantitative metrics
Assessment Standardization and QA

Standardization approaches; quality assurance mechanisms; inter-rater reliability strategies; factors enabling/hindering standardization
Assessment Equity Analysis - Standalone

  • Equity awareness: Proportion of studies explicitly addressing equity
  • Disparity documentation: Differences between main/satellite sites (matrix-based and method-based equity analysis)
  • Equity challenges: Access, opportunity, resource disparities
  • Equity solutions: Traveling examiners, technology solutions, resource allocation, faculty development
  • Equity outcomes: Evidence of effectiveness
  • Equity recommendations: Best practices
Challenges and Solutions Thematic Synthesis

Challenges categorized by matrix position/theme; common patterns; solution effectiveness; success factors and barriers; innovations.
Outcomes Synthesis

Validity/reliability evidence; feasibility/cost; educational impact; learner/faculty satisfaction; evidence quality and gaps.
Workplace-Based Assessment Implementation (IF sufficient literature)

WBA models; faculty development approaches; data aggregation methods; distributed-specific challenges/solutions.
High-Stakes Assessment Logistics (IF sufficient literature)

Coordination approaches; centralized vs. decentralized models; standardization mechanisms; resource/cost implications; innovations.
Competency Committee Functioning (IF sufficient literature)

Committee structures; decision-making processes; governance models.
EPAs in Distributed Education (IF sufficient literature)

EPA implementation approaches, challenges, entrustment decision processes, and solutions.
Cross-Framework Analysis
Competency Coverage by Assessment Depth

For each of 5 categories: proportion of systems assessing at each Miller level; comprehensively vs. superficially assessed competencies.
Assessment Method Distribution by Competency

Which methods used for which competencies; reliance on single method/level.
Distributed Challenges by Matrix Position

Which matrix cells present the most significant challenges? (an exploratory, hypothesis-generating analysis)
Assessment Equity by Matrix

Where are equity gaps most prominent? Which matrix positions are more likely to have equity issues?
Quality Patterns

Which matrix positions are more likely to have validity/reliability data? Evidence gaps?
Site-Specific Analysis

  • Assessment method availability: main campus vs. satellites
  • Competency assessment coverage by site type
  • Matrix-based site analysis (separate matrices where data is available)
  • Quality assurance by site
  • Equity initiative effectiveness
Visual Presentations

  • PRISMA flow diagram
  • Miller's Pyramid distribution (bar chart with site availability)
  • Competency category assessment frequency (Tier 1)
  • Competency framework usage (Tier 2)
  • Assessment matrix heat map (5×4 with equity notation; see the plotting sketch after this list)
  • Assessment coverage and depth scores (dual axis chart)
  • Assessment equity comparison (main vs. distributed)
  • WBA implementation map
  • High-stakes assessment coordination models
  • Challenges-Solutions-Outcomes framework
  • Geographic distribution map
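As an illustration of the matrix heat map listed above, a minimal plotting sketch assuming matplotlib and numpy, with placeholder counts in place of extracted data:

```python
import matplotlib.pyplot as plt
import numpy as np

# counts[i][j]: number of included studies assessing competency i at Miller
# level j (placeholder random data for illustration only).
counts = np.random.randint(0, 30, size=(5, 4))

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="YlOrRd")
ax.set_xticks(range(4), labels=["Knows", "Knows How", "Shows How", "Does"])
ax.set_yticks(range(5), labels=[
    "Clinical/Technical", "Communication", "Professionalism & Ethics",
    "Collaboration & Teamwork", "Systems & Context"])
for i in range(5):
    for j in range(4):
        ax.text(j, i, counts[i, j], ha="center", va="center")
fig.colorbar(im, label="Number of studies")
ax.set_title("Assessment matrix: competency × Miller level")
plt.tight_layout()
plt.show()
```

Equity notation (M / M+S / All) could be overlaid as cell annotations or rendered as separate main-campus and satellite heat maps for side-by-side comparison.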
Gap Analysis

Identify: Miller level gaps, competency category gaps, matrix gaps (empty cells), site-based gaps, geographic/profession/framework gaps, equity gaps, research design gaps, outcomes gaps
Timeline
12-Month Study Timeline

Month 1: Protocol refinement, search strategy finalization, PRESS review, and protocol registration
Month 2: Search execution, citation management, screening form preparation
Month 3: Title/abstract screening with inter-rater reliability assessment and calibration
Month 4: Full-text review
Month 5: Data extraction framework pilot on 10 studies; dual framework mapping calibration; matrix population training
Months 6-8: Full data extraction with monthly calibration meetings; inter-rater reliability monitoring; ongoing matrix population
Month 9: Data analysis, quantitative matrix analysis, cross-framework synthesis
Months 10-11: Manuscript drafting with tiered synthesis
Month 12: Manuscript revision, stakeholder review, submission
Quality Assurance
Methodological Rigor

  • JBI methodology adherence
  • PRISMA-ScR reporting
  • Protocol registration
  • Comprehensive search with PRESS review
  • Duplicate independent processes
  • Inter-rater reliability monitoring (κ ≥ 0.70 for frameworks and matrix)
  • Transparent reporting with clear operationalization rules
Conceptual Quality

  • Dual complementary frameworks (Miller's + Two-Tier Competency)
  • Systematic mapping to both frameworks
  • 5×4 matrix enabling breadth × depth characterization
  • Quantitative matrix metrics (coverage, depth, comprehensive assessment scores)
  • Cross-framework analysis
  • Two-tier competency respects framework diversity
  • Explicit equity focus
  • Matrix and site-specific analysis enable equity identification
Practical Considerations

  • Multiple stakeholder perspectives
  • Real-world logistical challenges
  • Cost and feasibility considerations
  • Actionable synthesis for leaders
  • Assessment equity as a central consideration
  • Assessment matrix visual tool as a practical output
  • Tiered synthesis acknowledges scope breadth
Expected Outputs
Dissemination Plan
Primary Output

Full scoping review manuscript submitted to a journal such as Medical Education, Academic Medicine, Medical Teacher, Teaching and Learning in Medicine, or Advances in Health Sciences Education
Secondary Outputs

  • Conference presentations (AMEE, AAMC, IAMSE, ANZAHPE, CCME)
  • Assessment Matrix Visual Tool: 5×4 matrix template for distributed programs to self-assess assessment coverage (breadth × depth) with guidance on addressing gaps
  • Policy brief on assessment equity for accreditation bodies
  • Webinar for distributed program leaders on assessment best practices and equity
  • Open-access supplementary materials (search strategies, extraction forms, matrix templates, decision rules)
Target Audiences

  • Distributed program leaders
  • Assessment directors and competency committee members
  • Curriculum developers
  • Accreditation organizations
  • Faculty development leaders
  • Educational technology leaders
  • Assessment researchers
Protocol references
Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32.

Joanna Briggs Institute. (2015). The Joanna Briggs Institute Reviewers' Manual 2015: Methodology for JBI Scoping Reviews. Adelaide: Joanna Briggs Institute.

Levac, D., Colquhoun, H., & O'Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69.

Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63-S67.

Tricco, A. C., Lillie, E., Zarin, W., et al. (2018). PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467-473.
Acknowledgements
We wish to thank the research assistants at CORE/Institute of Learning for their support.