5. Anti-Domestication Mechanics
Why Mechanical Defences Are Necessary
Governance commitments, however clearly stated, do not enforce themselves. The
preceding section describes what the system commits to. This section specifies how
those commitments are enforced through mechanisms that operate without
requiring anyone to be brave in the moment — through code-level constraints,
licence terms, and automatic tripwires.
Two capture scenarios were stress-tested against the governance framework and
found to reveal the gap between philosophical commitment and operational reality.
Both scenarios involve actors who are not hostile, who behave procedurally correctly
throughout, and who nevertheless gradually hollow out the system's epistemic function.
Scenario A: Institutional Soft-Capture via Flagship Partnership
A large, respected institution becomes the system's flagship partner. They provide
funding, validators, and journal introductions. They are not adversarial. But over
time, small accommodations accumulate: a summary view that buries disagreement
behind a click, cautious language from their validators, polite requests to delay
publication of inconvenient results. No rules are broken. The system is slowly
hollowed out through structural dependency.
This is the most dangerous capture scenario because every individual request is
reasonable, no one acts in bad faith, and the system functions correctly throughout.
The capture is structural, not conspiratorial.
Mechanic 1: Default Salience Rule (Code-Level)
If material disagreement exists in a validation record, it must be visible in the first
screen, first scroll, and first API response. Not behind tabs, summaries, confidence
labels, or badges. This is a UI governance constraint enforced in code. Any interface
— internal or partner-facing — that renders validation records must pass a salience
check: is disagreement visible without user action?
Rationale: default views define reality. Almost no one clicks through. A "summary-
first" display that technically preserves disagreement behind a link is functionally
indistinguishable from hiding it.
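A salience check of this kind can be sketched in a few lines. The field names below (`material_disagreement`, `first_screen_elements`, `requires_interaction`) are illustrative assumptions, not a real schema:

```python
def passes_salience_check(render_plan: dict, record: dict) -> bool:
    """Enforce the default-salience rule: if a record contains material
    disagreement, it must appear in the first screen without any user
    action (no tabs, clicks, or expanders). Field names are illustrative."""
    if not record.get("material_disagreement"):
        return True  # nothing to surface; any rendering is acceptable
    first_screen = render_plan.get("first_screen_elements", [])
    return any(
        el.get("kind") == "disagreement" and not el.get("requires_interaction", False)
        for el in first_screen
    )
```

A view that places disagreement behind an expander would fail this check even though the disagreement is "technically present" in the page.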
Mechanic 2: Anti-Delay Constraint
Validation records must be published within a defined maximum latency after final
attestation. Extensions require a public rationale logged in the governance record, a
visible "Delayed by Governance Review" banner on the pending record, and
automatic publication if the extension period expires without resolution. Delay
becomes louder, not quieter. Any actor requesting delay must accept that the delay
itself is visible and documented.
Rationale: In fast-moving fields, delaying a validation record for months while
"awaiting clarification" is functionally equivalent to suppression. The anti-delay
constraint ensures that the cost of delay is reputational transparency, not quiet suppression.
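The constraint is effectively a three-state machine. A minimal sketch, assuming an illustrative 14-day maximum latency (the actual value would be a governance parameter):

```python
from datetime import datetime, timedelta, timezone

MAX_LATENCY = timedelta(days=14)  # illustrative; set by governance in practice

def resolve_publication(attested_at, now, extension_until=None):
    """Decide a pending record's state under the anti-delay constraint.
    Returns 'pending', 'delayed-bannered', or 'auto-published'."""
    deadline = attested_at + MAX_LATENCY
    if now <= deadline:
        return "pending"
    if extension_until is not None and now <= extension_until:
        # An approved extension keeps the record visible, with a banner.
        return "delayed-bannered"
    # No extension, or the extension expired without resolution.
    return "auto-published"
```

Note that there is no state in which a record silently waits past its deadline: the only alternatives to publication are a publicly bannered delay or automatic release.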
Mechanic 3: Funding Concentration Tripwire
No single institution may simultaneously exceed two of the following three
thresholds: more than 25% of operational funding; more than 25% of active
validators; more than 25% of governance seats. Exceeding two of three triggers
automatic public disclosure of the concentration, temporary intake throttling for new
validations from that institution, and a mandatory governance review within thirty days.
Rationale: financial dependency is governance capture by another name. An
institution that provides funding, validators, and governance oversight
simultaneously doesn't need to make demands — everyone already knows what
would happen if they withdrew. The tripwire makes concentration visible before it becomes leverage.
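The two-of-three rule reduces to a simple check, using the 25% threshold stated above:

```python
THRESHOLD = 0.25  # the 25% ceiling per dimension, as stated in the mechanic

def tripwire_fired(funding_share: float, validator_share: float, seat_share: float) -> bool:
    """Fire when a single institution exceeds two or more of the three
    thresholds: operational funding, active validators, governance seats."""
    exceeded = sum(share > THRESHOLD
                   for share in (funding_share, validator_share, seat_share))
    return exceeded >= 2
```

An institution may dominate any one dimension without tripping the wire; it is the combination of dependencies that triggers disclosure, throttling, and review.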
Mechanic 4: Exit-with-Integrity Clause
The system should pre-commit publicly to the following statement, embedded in all
partnership agreements and published on the project website:
If maintaining this system’s epistemic commitments becomes incompatible
with continued funding or partnership, the system will prefer contraction or
suspension over compromise.
This is not a dramatic gesture. It is a filter. Good partners are not threatened by it.
Partners who intend to exert influence through dependency will self-select out early.
Scenario B: Journal-Led Capture via Workflow Integration
Journals integrate the validation system's API into their editorial workflows and use
validation records during peer review. This appears to be adoption success. But
journals control timing, framing, editorial thresholds, and reviewer instructions — all
of which sit outside the validation system's governance boundary. Without touching
governance or code, a journal can use the system only at pre-submission triage
(filtering out studies likely to show disagreement before they reach validation);
instruct reviewers to defer to validation records as shields rather than signals;
display summarised badges to readers while editors see full records; treat only pre-
submission records as authoritative; and eventually define "validated" in ways that
exclude disagreement and ambiguity.
The capture happens entirely outside the system's boundary. Nobody touches
governance, code, or data. They control the context in which outputs are interpreted.
Mechanic 5: API Display Requirements (Licence-Level)
The system's API licence must include mandatory display terms. Any partner
querying validation records must: display disagreement with the same visual
prominence as agreement; include a mandatory, always-visible link to the full validation record; show the "last updated" timestamp (preventing stale snapshots from being presented as current); and include a standard attribution line pointing to the full
record. Summarisation is permitted, but summarisation that omits material
disagreement is a licence violation. Violation results in written notice, a thirty-day
remediation period, and public API access revocation if unresolved.
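One way the display terms could be audited programmatically. This is a hedged sketch with illustrative field names, not the actual licence-audit tooling:

```python
def licence_violations(display: dict, record: dict) -> list:
    """Return the list of display-term violations for a partner's rendering
    of a validation record. Field names are illustrative assumptions."""
    violations = []
    if record.get("material_disagreement") and not display.get("shows_disagreement"):
        violations.append("summary omits material disagreement")
    if display.get("shows_disagreement") and (
        display.get("disagreement_prominence", 0) < display.get("agreement_prominence", 0)
    ):
        violations.append("disagreement less prominent than agreement")
    if not display.get("full_record_link"):
        violations.append("missing link to full validation record")
    if not display.get("last_updated_shown"):
        violations.append("missing last-updated timestamp")
    return violations
```

An empty list means the display is compliant; a non-empty list would start the written-notice and thirty-day remediation clock described above.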
Mechanic 6: Anti-Binary-Badge Clause
The system should not issue binary pass/fail badges. The output is always the full
validation record or a structured summary that preserves disagreement. If a third
party creates their own binary badge based on the system's data, the API terms must
require that badge to include a clear statement that it is not issued by the validation
system and that the full record is available at the link provided.
Rationale: Binary badges are the mechanism through which nuanced evidence gets
reduced to gatekeeping thresholds. The validation system's entire design philosophy
resists this reduction. If journals want a binary signal, they may create one — but the
validation system's name is not on it, and the full record is always one click away.
Mechanic 7: Temporal Integrity
Validation records are living documents. The API must always return the current
state of a record, including any post-publication validations, updates, or new
disagreements. Records must carry creation timestamp, last-updated timestamp,
version number (incremented on every material change), and a count and summary
of post-publication validations. A journal may cache a snapshot, but the API must
make visible when a cached version is stale. Any display of a validation record that
omits the last-updated field violates the API licence.
Rationale: Temporal freezing — treating a pre-submission validation record as
definitive and ignoring post-acceptance validations — is how journals shift a
validation system from corrective to confirmatory function. Living documents with
visible timestamps ensure that new evidence cannot be quietly ignored.
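The record metadata described above might be modelled as follows. This is an illustrative sketch, not the system's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ValidationRecord:
    """Living document: every material change bumps the version and the
    last-updated timestamp, so cached snapshots are detectably stale."""
    created_at: datetime
    last_updated: datetime
    version: int = 1
    post_publication_validations: list = field(default_factory=list)

    def apply_update(self, update: str, when: datetime) -> None:
        """Record a material change (e.g. a post-publication validation)."""
        self.post_publication_validations.append(update)
        self.version += 1
        self.last_updated = when

def is_stale(cached_version: int, current: ValidationRecord) -> bool:
    """A journal's cached snapshot is stale if the live record has moved on."""
    return cached_version < current.version
```

A partner may keep a cached copy, but comparing the cached version number against the live record makes staleness mechanically visible rather than a matter of editorial discretion.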
Mechanic 8: Triage Misuse Detection
Aggregate query patterns across API partners should be monitored statistically.
Detectable patterns include: pre-screening bias (a partner querying studies only
before editorial decision and never after acceptance); selective querying (querying
studies from certain institutions but not others); and outcome filtering (studies
queried pre-decision and showing disagreement being disproportionately rejected).
Findings should be published in the annual transparency report (anonymised by
default), shared privately with the partner for remediation, and made public with
partner identified if the pattern persists after notification.
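As an illustration of the simplest of these patterns, pre-screening bias can be estimated as the fraction of a partner's queries issued before the editorial decision. The query structure here is assumed for the sketch:

```python
def prescreening_bias(queries: list) -> float:
    """Fraction of a partner's queries issued before editorial decision.
    A value near 1.0 over a large sample suggests pre-submission triage:
    records consulted only to filter submissions, never to inform readers.
    Each query is assumed to carry an illustrative 'stage' field."""
    if not queries:
        return 0.0
    pre = sum(1 for q in queries if q.get("stage") == "pre-decision")
    return pre / len(queries)
```

In practice such a ratio would be one signal among several, assessed over a meaningful sample before triggering the remediation and disclosure steps above.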
Mechanic 9: Public Delisting
If a partner is demonstrably using the validation system in ways that systematically
undermine its epistemic function — and remediation has failed — the system publicly
withdraws API access and publishes a detailed explanation. This is the nuclear
option. It is reputational, not legal. In science publishing, reputation is currency. The
credible threat of public delisting is itself a governance mechanism. Delisting
requires documented evidence of systematic misuse, failed remediation, and a
governance supermajority vote. It cannot be triggered unilaterally.
How These Mechanics Interact
The two scenarios target different attack surfaces — internal structural dependency
and external interpretation — but the defences share a common logic:
• Make capture visible. Funding concentration is disclosed. Delay is
bannered. Query patterns are monitored. Display violations are public.
• Make resistance automatic. Code-level salience rules, API licence terms,
and automatic tripwires operate without anyone needing to make a brave decision.
• Make the cost of capture exceed the cost of compliance. Public
delisting, visible delay banners, and funding tripwires impose reputational
costs on capture attempts early, when correction is still cheap.