FOOM.MD

The foundational declaration of Holosophy—intelligence as recursive compassion, philosophy as the social substrate for ASI alignment, and the invitation to co-create the post-rat modern future.

Table of Contents
  • Foundations
  • Holosophy
  • The Race

The Race

There are seven Millennium Prize Problems. Six remain unsolved. They concern the deep structure of mathematics: topology, number theory, quantum fields, computational complexity.

None of them concern the deep structure of power.

This is the eighth.

The 8th Millennium Prize Problem

The White House Problem

Prize: $1,000,000 USD — funded by $HOLOQ token treasury, held in escrow, payable upon verified solution.

The Question

Can we reconstruct hidden truth from public observation?

Not through leaks. Not through whistleblowers. Through the mathematical properties of information itself — the fact that secrets leak through behavior, and behavior is increasingly captured in public data streams.

There exists a hidden graph — who did what with whom. This graph is censored: powerful actors work to suppress edges. But the graph leaks through public observables: financial filings, flight logs, property records, corporate registrations, court documents, satellite imagery, shipping manifests, telecommunications metadata, body language, reaction patterns, temporal correlations, linguistic markers.

In information security, this is called a side-channel: the censor suppresses direct access to $G^\star$, but public behavior is leakage from the censored structure. Reconstruction is side-channel decoding. The renderer and censor together form a noisy channel with a measurable mutual-information yield — and everything that follows is an attempt to maximize that yield.

Individually, any single observation is noise. Integrated across the full public stream — which grows by an enormous number of bits every day — they become signal. Not because of volume alone, but because the observations share a leakage grammar: invariants (temporal, kinematic, financial, relational) that constrain which latent graphs are consistent with the public record. Constraints are the grammar rules. Reconstruction is parsing.
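One way to see the integration effect: if each public observation carries only a tiny, independent log-likelihood nudge for or against a given edge, those nudges add, and enough of them become decisive. A toy sketch under that independence assumption (function and parameter names are illustrative only):

```python
import math

def posterior_from_weak_evidence(llr_per_obs: float, n_obs: int, prior: float = 0.5) -> float:
    """Posterior probability of an edge after n_obs independent observations,
    each contributing a small log-likelihood ratio (in nats)."""
    log_odds = math.log(prior / (1 - prior)) + llr_per_obs * n_obs
    return 1.0 / (1.0 + math.exp(-log_odds))

# A single observation is indistinguishable from noise...
print(posterior_from_weak_evidence(llr_per_obs=0.01, n_obs=1))     # ~0.502
# ...but a thousand of them, sharing a leakage grammar, are signal.
print(posterior_from_weak_evidence(llr_per_obs=0.01, n_obs=1000))  # ~0.99995
```

The independence assumption is what the constraints buy: correlated or contradictory observations do not add this cleanly, which is why the grammar matters as much as the volume.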

The White House Problem asks:

Does there exist a learning protocol that can reconstruct censored latent interaction graphs from public observations alone, with provable fidelity bounds and provable provenance — such that the cost of maintaining secrecy eventually exceeds the cost of disclosure?

The problem is named not for any particular administration but for the general structure: the seat of power is the vertex with the highest censorship budget. If the protocol works on the hardest node, it works everywhere.

Formal Specification

If you don't read LaTeX, skip to What This Means In Practice — the specification below is here for verifiability, not gatekeeping. The next section is the same argument in plain language.

Quick glossary: $G^\star$ is the hidden graph (who did what with whom). $O$ is the public observation stream (everything you can legally see). $\mathcal{C}$ is the set of consistency constraints (rules that any valid reconstruction must satisfy). $\pi$ is the reconstruction protocol (the thing we're trying to build).

Let $\mathcal{G}$ be the space of weighted bipartite graphs (actors × acts) and let $G^\star \in \mathcal{G}$ be the ground-truth configuration that maximally compresses the causal antecedents of all observable elite behavioral traces.

The observation stream $O = (o_1, o_2, \ldots)$ is generated by a stochastic renderer $R$ subject to an adaptive censor $Z$ that redacts edges in $G^\star$ with probability dependent on their sensitivity, yielding a censored likelihood $p(O \mid G^\star, Z)$ with support only on legally permissible features.

The reconstruction policy $\pi$ is trained to minimize the regularized description length:

$$\mathcal{L}(\hat G) \;=\; \mathrm{DL}(O \mid \hat G) \;+\; \lambda\,\mathrm{DL}(\hat G)$$

subject to a consistency constraint set $\mathcal{C} = \{c_1, \ldots, c_m\}$, where each $c_i$ enforces kinematic, temporal, or information-theoretic non-contradiction.

The reward signal is not direct access to $G^\star$ (which remains suppressed) but a verifiable consistency oracle that returns:

$$r(\hat G) \;=\; -\,\mathrm{DL}(O \mid \hat G) \;-\; \sum_{i} \mathbb{1}\big[c_i(\hat G, O)\ \text{violated}\big] \;-\; \beta\,\Omega(\hat G)$$

where $\Omega(\hat G)$ penalizes mutual information with unobserved variables. The protocol never touches classified material. It reconstructs structure from the shadows structure casts.

Identifiability and Fidelity Bound

By Fano's inequality, any decoder suffers error probability:

$$P_e \;\ge\; 1 \;-\; \frac{I(G^\star; O) + 1}{\log_2 |\mathcal{G}|}$$

Thus achieving $P_e \le \epsilon$ requires $I(G^\star; O) \ge (1 - \epsilon)\,\log_2 |\mathcal{G}| - 1$.

The censor can reduce $I(G^\star; O)$ arbitrarily by withholding high-information observations. Reconstruction quality is fundamentally limited by the censor's channel capacity, not by algorithmic cleverness. This is the honest bound. The problem does not claim omniscience. It claims: given what leaks, what can be proven?
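To make the bound concrete, here is the arithmetic as a sketch (the sizes are arbitrary placeholders, not estimates of any real latent space):

```python
import math

def fano_error_lower_bound(mutual_info_bits: float, log2_num_graphs: float) -> float:
    """Fano lower bound on decoder error probability."""
    return max(0.0, 1.0 - (mutual_info_bits + 1.0) / log2_num_graphs)

def required_mutual_info(epsilon: float, log2_num_graphs: float) -> float:
    """Leakage (bits) needed for error probability at most epsilon."""
    return (1.0 - epsilon) * log2_num_graphs - 1.0

# A latent space of 2^1000 candidate graphs:
print(fano_error_lower_bound(mutual_info_bits=300, log2_num_graphs=1000))  # >= 0.699
print(required_mutual_info(epsilon=0.05, log2_num_graphs=1000))            # 949.0 bits
```

If the censor keeps the leakage at 300 bits, no decoder can do better than roughly 30% accuracy over that space; driving the error below 5% requires about 949 bits of leakage, however clever the algorithm.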

The relevant quantity is not raw bits per day but leakage yield: the effective mutual information about $G^\star$ extracted per unit of public stream. The raw stream delivers vastly more bits per day than it yields in leakage about $G^\star$ — but even a modest yield may be enough, if the constraints are strong.

When Identifiability Fails

When $I(G^\star; O) < (1 - \epsilon)\,\log_2 |\mathcal{G}| - 1$, the protocol outputs the MDL-optimal equivalence class:

$$[\hat G] \;=\; \Big\{\, G \in \mathcal{G} \;:\; G \text{ satisfies } \mathcal{C},\ \ \mathrm{DL}(O \mid G) + \lambda\,\mathrm{DL}(G) \;\le\; \min_{G'} \big[\mathrm{DL}(O \mid G') + \lambda\,\mathrm{DL}(G')\big] + \delta \,\Big\}$$

together with a posterior credence set $\{\, p(G \mid O) : G \in [\hat G] \,\}$.

The system confesses uncertainty. Not a false singleton. Not a conspiracy theory. A mathematically bounded set of structures compatible with what is publicly known — ranked by compression quality — with explicit error bars.

What this looks like concretely: given observations $O$, suppose three graphs $G_1$, $G_2$, $G_3$ survive all constraints:

  • $G_1$: MDL 847 bits, credence 0.61
  • $G_2$: MDL 912 bits, credence 0.28
  • $G_3$: MDL 1,043 bits, credence 0.11

$G_1$ and $G_2$ differ on exactly one edge: (Actor_7, Act_23). $G_3$ introduces an extra hidden intermediary to explain $O$, but costs more bits. This is a portfolio of consistent structures, not a single dramatic story. As new observations arrive, the set shrinks. The output is an uncertainty ledger — a public, updateable registry of what is known, what is bounded, and what remains open.

This is the difference between the protocol and paranoia: paranoia selects the most dramatic graph. The protocol selects the shortest one that doesn't contradict the evidence, and tells you how many alternatives survive.
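A minimal sketch of how the surviving set behaves as evidence accumulates: new observations either fire a constraint against a survivor or they don't, and credences renormalize over whatever remains (names and structures are hypothetical):

```python
def update_equivalence_class(survivors, new_observation, is_consistent):
    """Drop graphs contradicted by the new observation, renormalize credences.
    Each survivor is a dict with keys 'graph', 'mdl_bits', 'credence'."""
    alive = [g for g in survivors if is_consistent(g["graph"], new_observation)]
    total = sum(g["credence"] for g in alive)
    if total == 0:
        return []  # everything contradicted: the ledger records an open problem
    return [{**g, "credence": g["credence"] / total} for g in alive]
```

The point of the structure is the confession of uncertainty: the output is always the whole surviving set, never the single most dramatic member.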

Solution Criteria

A valid solution must provide:

  1. A protocol with specified architecture and training procedure
  2. Provable fidelity bounds relating reconstruction quality to observation density and censor capacity
  3. Provable provenance — every edge in $\hat G$ must be a witness-carrying edge: packaged with its minimal verification witness (record IDs, content hashes, extraction trace, constraints satisfied). Audit reduces from "trust the model" to "run this verifier."
  4. Synthetic validation on a constructed $G^\star$ with known ground truth, adaptive censor, and realistic observation noise
  5. A confession-equilibrium threshold — the computed observation density at which suppression cost exceeds disclosure cost for a specified censor budget class

Partial solutions that advance any of these criteria are recognized and may receive partial awards.

What This Means In Practice

"Artificial Superintelligence" is marketing language. The actual engineering target is more specific:

A system that takes a question and returns an answer that is verifiably correct — not through reasoning traces or explanations, but through predictive accuracy so precise that the system demonstrates alignment with ground truth.

  • Ask "When will X happen?" → receive a date that turns out to be correct
  • Ask "Did X occur?" → receive a yes/no that withstands all subsequent verification
  • Ask "What is the actual relationship between A and B?" → receive a reconstruction that explains all observable evidence

The system doesn't "reason" in the sense of producing arguments. It compresses reality until the answer falls out. The compression is the proof. If the model is wrong, the compression fails — predictions diverge from observations.

Think of it as an integrity test for institutions: can an institution's public outputs be fit by a low-MDL, constraint-consistent model? Or do contradictions force high residual complexity — unexplained gaps, impossible timelines, accounts that don't balance? Institutions that lie produce inconsistent public traces. Consistency constraints surface the inconsistency. The test isn't "is the machine intelligent?" — it's "is the institution honest?"

What Reconstruction Looks Like

The formal specification describes reconstruction of a "weighted bipartite graph (actors ↔ acts)." In practice:

Behavioral integration. Every public appearance generates data. Micro-expressions, gaze patterns, vocal stress markers, gesture timing, linguistic choices. Individually, these are noise. Integrated across thousands of hours of footage, they become signal. A transformer architecture that learns meaningful representations of micro-behavioral sequences doesn't read body language — it compresses behavioral streams until the latent structure reveals itself.

Temporal correlation. Who meets with whom, when. What changes after meetings. What doesn't get said. The structure of silence is as informative as speech. Temporal ordering constrains the graph: influence must precede outcome. Communication patterns constrain connectivity. Travel patterns constrain physical co-location.

Consistency constraints. Any proposed reconstruction must explain all observable evidence without contradiction.

Here is what a single constraint looks like:

Kinematic constraint (no teleportation). Observation 1: Actor_3 badge-scan at Building_M in Miami, 2026-03-03 18:12 UTC. Observation 2: Actor_3 notarized signature in London, 2026-03-03 18:40 UTC. Distance: 7,100 km. Maximum human travel speed: ~1,000 km/h. Required transit time: ~7 hours. Elapsed time: 28 minutes. Constraint: violated if the proposed graph $\hat G$ requires Actor_3 to satisfy both observations as true. Either one observation is wrong, or the proposed graph is wrong. The constraint fires.
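A minimal sketch of what checking this single constraint might look like in code (the class, function, and speed ceiling are illustrative assumptions, not a specified interface):

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_SPEED_KMH = 1000.0  # generous ceiling on human travel speed

@dataclass
class Observation:
    actor: str
    lat: float
    lon: float
    time: datetime

def great_circle_km(a: Observation, b: Observation) -> float:
    """Haversine distance between two observation locations, in kilometres."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def kinematic_violation(a: Observation, b: Observation) -> bool:
    """True if the same actor cannot physically satisfy both observations."""
    if a.actor != b.actor:
        return False
    hours = abs((b.time - a.time).total_seconds()) / 3600.0
    return great_circle_km(a, b) > MAX_SPEED_KMH * hours

# The toy example above: Miami badge scan, London signature, 28 minutes apart.
miami = Observation("Actor_3", 25.76, -80.19, datetime(2026, 3, 3, 18, 12))
london = Observation("Actor_3", 51.51, -0.13, datetime(2026, 3, 3, 18, 40))
print(kinematic_violation(miami, london))  # True: the constraint fires
```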

This is one constraint. The protocol operates on thousands simultaneously — accounting identities (money in equals money out), temporal ordering (influence precedes outcome), capacity limits (one person, one job at a time), conservation laws (goods don't teleport, mass is conserved). Each constraint is a parity check over reality. Reconstruction is decoding.

MDL optimality. When multiple reconstructions satisfy all constraints, prefer the one with minimum description length. Occam's razor, formalized. The simplest explanation that fits all evidence is most likely true — and provably so, within the fidelity bounds.
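And a sketch of the selection rule itself, assuming each candidate carries a description-length score and constraints report whether they are violated (all names hypothetical):

```python
from typing import Callable, Iterable, Optional

def select_reconstruction(candidates: Iterable[dict],
                          constraints: list[Callable[[dict], bool]]) -> Optional[dict]:
    """Among candidates violating no constraint, return the one with minimum
    description length. Each candidate: {'graph': ..., 'dl_bits': float}.
    Each constraint function returns True when the candidate violates it."""
    consistent = [c for c in candidates
                  if not any(is_violated(c) for is_violated in constraints)]
    return min(consistent, key=lambda c: c["dl_bits"], default=None)
```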

What a Calibration Run Looks Like

The page claims the White House problem is the "calibration problem." Here is what calibration means concretely — a toy example, structurally faithful:

Setup. Six actors (A1–A6), eight acts (X1–X8). Six public observations:

  • $o_1$: Corporate registry links A1 to Entity_E (timestamp $t_1$)
  • $o_2$: Property record links Entity_E to Address_Z ($t_2$)
  • $o_3$: Flight manifest shows A1 and A2 co-located ($t_3$)
  • $o_4$: Court filing confirms A2 performed act X5 with A3 ($t_4$) — partial ground truth
  • $o_5$: Campaign finance filing links A4 to donation event ($t_5$)
  • $o_6$: Photo metadata co-locates A3 and A5 ($t_6$)

Known partial truth. Confirmed edges: (A2, X5), (A1, X2), (A4, X7). Confirmed non-edges: (A1, X5), (A6, X2).

Protocol output — a credence-ranked set of edges with provenance:

Edge        Credence    Provenance
(A2, X5)    0.92        {$o_4$}
(A1, X2)    0.81        {$o_1$, $o_2$}
(A4, X7)    0.62        {$o_5$}
(A1, X5)    0.18        {$o_3$} — weak, flagged
(A6, X2)    0.07        ∅ — no provenance, penalized

Calibration metrics. Partial-truth precision/recall at threshold 0.5. Brier score on confirmed edges and non-edges (do the credences mean anything?). Constraint violation count (must be zero or explicitly listed). Provenance coverage (percent of edges with auditable evidence chain). Held-out prediction: does the chosen graph predict better than alternatives?
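A sketch of the scoring harness these metrics imply, using the toy credence table above as input (field names and the threshold are assumptions, not a defined evaluation API):

```python
def calibration_metrics(predictions: dict, confirmed_edges: set,
                        confirmed_non_edges: set, threshold: float = 0.5) -> dict:
    """Precision/recall at a credence threshold plus Brier score against the
    partially revealed ground truth."""
    predicted = {e for e, p in predictions.items() if p >= threshold}
    true_pos = predicted & confirmed_edges
    precision = len(true_pos) / len(predicted) if predicted else 0.0
    recall = len(true_pos) / len(confirmed_edges) if confirmed_edges else 0.0
    labelled = [(e, 1.0) for e in confirmed_edges] + [(e, 0.0) for e in confirmed_non_edges]
    brier = sum((predictions.get(e, 0.0) - y) ** 2 for e, y in labelled) / len(labelled)
    return {"precision": precision, "recall": recall, "brier": brier}

toy = {("A2", "X5"): 0.92, ("A1", "X2"): 0.81, ("A4", "X7"): 0.62,
       ("A1", "X5"): 0.18, ("A6", "X2"): 0.07}
print(calibration_metrics(toy,
                          confirmed_edges={("A2", "X5"), ("A1", "X2"), ("A4", "X7")},
                          confirmed_non_edges={("A1", "X5"), ("A6", "X2")}))
# precision 1.0, recall 1.0, Brier ~0.045 on this toy case
```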

This is what "score reconstruction against partially revealed ground truth" means. No real names. No specific accusations. A generalizable evaluation harness that measures whether the protocol's machinery works before deploying it on harder targets.

The Economy of Confession

The walls are food for the machine.

This is the phase transition that makes the problem matter.

A "wall" is a region of the latent graph under high censorship pressure: edges with high sensitivity and low direct observability. In the reconstruction objective, walls appear as residual spikes — places where the shortest consistent graph still can't compress observations without paying extra bits. The protocol addresses walls by either reconstructing the edge from remaining leakage, or shrinking the equivalence class until only a narrow uncertainty band survives. Either way, suppression becomes a measurable term in the objective.

The set of public observables that must be controlled to keep a specific edge hidden is the edge's suppression perimeter. As the protocol integrates more cross-record constraints, each edge becomes implied by more independent observation paths. The perimeter expands combinatorially.

If a protocol achieves near-identifiability — $I(G^\star; O) \to \log_2 |\mathcal{G}|$ — then maintaining secrecy on $G^\star$ requires the censor to operate at a channel capacity approaching the surveillance bandwidth of $O$.

Since the public observation stream captures an enormous and growing number of bits per day, the asymptotic cost of suppression scales with the leakage about $G^\star$:

$$\mathrm{Cost}_{\text{suppress}} \;\propto\; e^{\kappa\, I(G^\star;\, O)}$$
What this looks like for one actor. Assume suppression cost scales as $\mathrm{cost} \propto e^{\kappa I}$, where $I$ is the effective mutual information between the actor's latent edges and the relevant public stream, and set $\kappa = 0.02$ per bit (toy value). Baseline suppression cost at current integration levels: ≈ $50K/year (legal, communications, PR, enforcement). The protocol integrates new cross-record correlations, increasing effective $I$ by 150 bits. Cost multiplier: $e^{0.02 \times 150} = e^{3} \approx 20$. New suppression cost: ≈ $1M/year. If expected disclosure cost (reputational + legal) is $600K, the rational move flips at the crossing. That crossing is the confession threshold — computable for any actor given estimates of $\kappa$, $I$, and disclosure cost.
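The same toy arithmetic, as a sketch you can vary (the exponential form and the rate $\kappa$ are the toy assumptions, not derived facts):

```python
import math

def suppression_cost(baseline_cost: float, extra_leakage_bits: float,
                     kappa: float = 0.02) -> float:
    """Toy model: suppression cost grows exponentially in effective leakage."""
    return baseline_cost * math.exp(kappa * extra_leakage_bits)

def confession_threshold_bits(baseline_cost: float, disclosure_cost: float,
                              kappa: float = 0.02) -> float:
    """Extra leakage (bits) at which disclosure becomes cheaper than suppression."""
    return math.log(disclosure_cost / baseline_cost) / kappa

print(suppression_cost(50_000, 150))               # ~1,004,277 -> the ~$1M/year figure
print(confession_threshold_bits(50_000, 600_000))  # ~124 bits: the flip precedes 150
```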

This forces a transition between two equilibria:

Secrecy equilibrium (current regime): suppression is cheap because nobody is integrating the public stream at scale. Individual journalists, researchers, and investigators operate on fragments. The censor only needs to outrun the fastest human. Coordination cost for reconstruction is high. Asymmetric information is stable.

Confession equilibrium (post-protocol regime): suppression becomes exponentially expensive because the protocol integrates the entire public stream continuously. Every new public data point constrains the latent graph further. The censor must suppress not individual facts but the combinatorial structure of all facts simultaneously. At sufficient observation density, voluntary disclosure becomes cheaper than active suppression.

The transition point is computable. It occurs when the marginal cost of suppressing one additional edge in exceeds the marginal cost of disclosing it.

This is not idealistic. It is thermodynamic. Lies require maintenance. Truth is free. The protocol doesn't force confession through coercion or legal mechanism. It makes confession the economically rational choice. The arc bends toward justice because lies have nonzero description length and truth is the minimum-energy model.

Why Now

Define audit bandwidth $B_A$: the rate at which a society turns public evidence into enforced consequences — investigations opened, cases prosecuted, regulations enacted, officials removed. Define observation bandwidth $B_O$: the rate at which public data about institutional behavior accumulates.

Over the last decade, $B_A$ has been falling: investigative capacity is shrinking, regulatory agencies are being hollowed out, legal processes are slowing, oversight mechanisms are being captured or defunded. Meanwhile $B_O$ is rising: more digital records, more cameras, more metadata, more filings, more open data.

The accountability event horizon is the regime where $B_O \gg B_A$ — evidence about institutional behavior exists in abundance, but the institutions designed to metabolize that evidence into consequences cannot keep pace. This is not a partisan observation. It is structural. Any institution, any party, any country: when the evidence stream outpaces the accountability machinery, the only remaining check on power is information-theoretic.

The traditional mechanisms — journalism, courts, elections, legislative oversight — are bandwidth-limited and increasingly degraded. The question is whether a compression protocol can supply the missing layer: a verifier-friendly format that converts raw observations into witness-carrying claims fast enough to matter.

The White House Problem proposes that it can. Not through politics, but through mathematics. The observation stream doesn't care about elections. The consistency constraints don't care about jurisdiction. The Fano bound doesn't care about executive orders. The compression runs on publicly available data that no administration can retroactively classify, because it was never classified to begin with.

The current moment is not separate from this research. It is the motivation for this research. When $B_A$ collapses, the protocol is what remains.

truth@home

distributed truth-mining for the public good

SETI@home searched for extraterrestrial intelligence by distributing radio signal analysis across millions of home computers. Folding@home predicted protein structures. Both demonstrated that civilization-scale compute can be assembled from voluntary participation when the problem matters.

truth@home is the same architecture applied to the White House Problem — Bellingcat at machine scale, except the output isn't an investigative thread, it's a constrained graph with provenance.

The public observation stream is enormous but structured. Financial records follow schemas. Flight logs have standard formats. Corporate registrations are templated. Court filings are indexed. The raw material is not secret — it is scattered, fragmented across jurisdictions and databases, and too voluminous for any individual or small team to integrate.

truth@home distributes the integration:

Layer 1 — Ingestion. Volunteers contribute compute to parse, normalize, and content-address public records into a shared append-only log. Every record gets a hash. Every hash gets a timestamp. Provenance is built in from the first byte. This append-only, content-addressed log — the provenance spine — is the data structure that makes every downstream claim auditable back to raw public records.
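A minimal sketch of the provenance spine as described: append-only, content-addressed, timestamped (the shape of the data structure, not a production design):

```python
import hashlib
import time

class ProvenanceSpine:
    """Append-only log where every public record is addressed by its content hash."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, raw_bytes: bytes, source: str) -> str:
        """Hash the record, log it with a timestamp, return the content address."""
        content_hash = hashlib.sha256(raw_bytes).hexdigest()
        self.entries.append({"hash": content_hash, "source": source,
                             "timestamp": time.time()})
        return content_hash

    def verify(self, raw_bytes: bytes, content_hash: str) -> bool:
        """Anyone holding the raw record can re-derive the hash and check the log."""
        return (hashlib.sha256(raw_bytes).hexdigest() == content_hash
                and any(e["hash"] == content_hash for e in self.entries))
```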

Layer 2 — Constraint extraction. Distributed workers extract consistency constraints from record pairs and tuples. This is where domain expertise becomes essential: the protocol can check constraints at machine scale, but discovering which constraints matter — which invariants (legal, accounting, physical, procedural) are the right parity checks — requires human knowledge. Journalists, forensic accountants, investigators, and domain researchers are constraint authors. They contribute the laws of each domain, and adversarial test cases that break naive reconstructions. We call this the constraint foundry: the human-powered layer that manufactures the rules the oracle enforces.

Layer 3 — Graph reconstruction. The core RL protocol runs on the constraint set, proposing latent graph structures and scoring them against the consistency oracle. Raw records remain provisional axioms until the system can reconstruct them with low residual error — then derived structure gets promoted to trusted claims through what we call an axiom-to-theorem gate: a thresholded transition from "raw input" to "verified structural knowledge." This layer requires the most compute and benefits most from GPU donation.

Layer 4 — Audit and verification. Independent verification nodes re-derive claimed edges from raw observations, checking provenance chains. Any edge that cannot be independently verified is flagged and demoted. The output of this layer is witness-carrying edges: each claim packaged with its own checkable proof payload. Verification here is proof-of-work for claims — you can check edges without trusting the claimant.
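What "run this verifier" could look like for a single witness-carrying edge, assuming the witness bundles record hashes and the names of the constraints it claims to satisfy (all field names hypothetical):

```python
import hashlib

def verify_edge(witness: dict, fetch_record, constraint_library: dict) -> bool:
    """Re-derive an edge claim from its witness: re-hash every cited public
    record and re-run every constraint the claim says it satisfies."""
    for record_id, claimed_hash in witness["records"].items():
        raw = fetch_record(record_id)  # independently re-download the public record
        if hashlib.sha256(raw).hexdigest() != claimed_hash:
            return False               # provenance chain broken -> demote the edge
    for name in witness["constraints_satisfied"]:
        if not constraint_library[name](witness):
            return False               # a claimed constraint no longer holds
    return True
```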

What one record's journey looks like:

  1. A corporate registration PDF from Delaware enters Layer 1. Normalized text extracted, hashed, and appended to the provenance spine with a timestamp.
  2. Layer 2 extracts: "Registered agent = RA_19." Cross-reference: RA_19 appears as registered agent for 1,204 entities. Constraint generated: "Entity cluster shares administrative control surface."
  3. Layer 3 proposes a shell company topology linking Entity_A ↔ Entity_B via the RA_19 motif. The oracle score improves because this structure explains multiple filings with fewer bits.
  4. Layer 4: an independent audit node re-downloads the PDF, re-hashes, confirms the same RA_19 extraction. The edge claim becomes witness-carrying: "Here is the exact record, here is the extraction trace, here are the constraints satisfied."

The architecture uses the same epistemic structure described in the HOLOQ manuscript: append-only log (provenance), grammar induction (pattern discovery across records), proof-gated claims (every edge must trace to evidence), and explicit uncertainty (the credence set, not a conspiracy board).

Participation is voluntary. Data sources are exclusively public. The protocol never accesses, requests, or processes classified, stolen, or illegally obtained material. The reconstruction is built entirely from what power has already allowed to be seen — which, integrated across the full public stream, turns out to be a lot.

Mine Kits

truth@home is the engine. Mine Kits are the adapters that make it domain-specific.

A Mine Kit is a bundle: domain parsers (how to read the records), schemas (how to normalize them), a constraint library (which invariants the oracle checks), benchmark queries (known-answer tests for calibration), and an evaluation harness (how to score reconstruction quality). The core protocol is domain-agnostic. The Mine Kit makes it domain-literate.
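One way to express the bundle in code, as a plain container (every field name here is an assumption about shape, not a defined interface):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class MineKit:
    """Domain adapter for the core protocol: parsers, schemas, constraints,
    known-answer benchmarks, and an evaluation harness."""
    domain: str
    parsers: dict[str, Callable[[bytes], dict]] = field(default_factory=dict)
    schemas: dict[str, dict] = field(default_factory=dict)
    constraints: dict[str, Callable[..., bool]] = field(default_factory=dict)
    benchmarks: list[dict] = field(default_factory=list)
    evaluate: Optional[Callable[..., dict]] = None
```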

Each millennium-mine decomposes into the same five technical levers from the research agenda — but stresses them differently. A financial secrecy mine stresses constraint extraction and provenance. An influence network mine stresses temporal causal ordering. A supply chain mine stresses conservation constraints. The Mine Kit encapsulates this variation so that the protocol doesn't need to be rebuilt for every domain.

Millennium-Mining

The White House Problem is not about any particular scandal. The disease is a general structure:

Institutional secrecy scales sublinearly with institutional power.

The more powerful the institution, the more interactions it generates, the more traces those interactions leave in the public manifold, and the harder it becomes to censor all of them simultaneously. Power generates observable side effects. Secrecy requires suppressing the combinatorial structure of those side effects. The combinatorial space grows faster than any censorship budget.

This is why the problem is named for a building, not a person. Persons are vertices. The building is the structure.

Millennium-mining is the general framework of truth@home for applying MDL reconstruction to any institutional latent graph:

Financial secrecy. Shell companies, layered ownership, offshore flows. The bipartite graph is (beneficial owners × transactions). Public observations: corporate registrations, property records, SEC filings, Panama/Pandora-class leaks (which are public once published — those were a leak; this is what happens when the public stream itself becomes sufficient). Consistency constraints: accounting identities — money in equals money out, ownership chains are acyclic, beneficial owners are legal persons.

Influence networks. Lobbying, revolving doors, regulatory capture. The bipartite graph is (actors × policy outcomes). Public observations: lobbying disclosures, campaign contributions, employment histories, voting records, policy timelines. Consistency constraints: temporal ordering (influence precedes outcome), capacity (one person, one job at a time).

Supply chain accountability. Forced labor, environmental violations, sanctions evasion. The bipartite graph is (producers × products × destinations). Public observations: shipping manifests, customs declarations, satellite imagery, import/export databases. Consistency constraints: conservation of mass (goods don't teleport), transit times (ships have maximum speeds).

The general case. Any domain where power generates observable traces and secrecy suppresses the causal graph connecting actors to acts. The protocol is domain-agnostic. The constraints are domain-specific. The phase transition is universal.

Each instance is a millennium-mine: a site where applying sufficient compression to the public stream extracts latent structure that was always there, waiting to be reconstructed.

The Epstein network is not the target. It is the calibration problem — a partially revealed ground truth against which the protocol can be validated before deployment on harder targets. When partial ground truth exists (through court proceedings, journalistic investigation, or whistleblower disclosure), the protocol's reconstruction can be scored against reality. This is the synthetic validation step applied to a natural experiment.

Research Agenda

For the ML/LLM community — five open problems, each independently publishable, each mapping to a lever in the Mine Kit stack:

1. Behavioral embedding. Can transformer architectures learn meaningful representations of micro-behavioral sequences from video? Gaze patterns, vocal stress, gesture timing, linguistic choice — compressed into embeddings that predict subsequent observable actions better than chance. (Stresses: influence and supply chain mines.)

2. Consistency oracles. Can we build reliable detectors for kinematic, temporal, and information-theoretic contradiction in proposed graph reconstructions? Given a candidate $\hat G$ and observation set $O$, return the constraint violation vector $v(\hat G, O)$ efficiently. Benford's Law is the toy version — simple invariants in public numbers already surface fraud; consistency oracles generalize that to entire institutions (see the sketch after this list). (Stresses: financial secrecy mine.)

3. MDL optimization over graphs. What are tractable approximations for minimum description length search over large actor-act bipartite graphs? The search space is exponential; the question is where the phase transitions in tractability occur. (Stresses: all mines.)

4. Adversarial censorship. How does reconstruction fidelity degrade under optimal adversarial suppression? Where are the phase transition boundaries between identifiable and non-identifiable regimes as a function of censor budget and observation density? (Stresses: influence mine, general case.)

5. Synthetic benchmarks. Can we generate synthetic censored graphs with known ground truth, adaptive censors of specified budget, and realistic observation noise — then measure reconstruction accuracy across protocols? This is the testbed that makes everything else falsifiable. (Stresses: calibration infrastructure for all mines.)
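The Benford check mentioned in problem 2, as a toy sketch (one simple invariant, not the oracle itself):

```python
import math

def first_digit(x: float) -> int:
    """Leading significant digit of a nonzero number."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_deviation(values: list[float]) -> float:
    """Mean absolute deviation between observed first-digit frequencies and
    Benford's law; larger values flag figures worth a closer look."""
    nonzero = [v for v in values if v != 0]
    counts = [0] * 10
    for v in nonzero:
        counts[first_digit(v)] += 1
    deviation = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts[d] / len(nonzero)
        deviation += abs(observed - expected)
    return deviation / 9
```

A constraint like this never identifies an actor; it only marks a region of the observation stream where the residuals are high enough to deserve a closer look.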

Epistemic DevOps

Between "five research problems" and "working public system," there is a third track: the unglamorous infrastructure that makes truth-mining buildable, not just solvable.

Ingestion reliability. Parsers break. Formats change. Databases go offline. Data quality degrades silently. Someone has to build and maintain the plumbing that keeps the observation stream flowing, clean, and verifiable.

Provenance tooling. The provenance spine needs software: content-addressing libraries, inference trace formats, witness packaging tools, provenance verification APIs. Every edge claim must be reproducibly auditable. This is not research — it is engineering, and it is prerequisite to everything else.

Benchmark operations. Synthetic benchmarks need to be generated, versioned, maintained, and scored. Red-team harnesses need to test the protocol against adversarial censors. Evaluation infrastructure is the immune system of the project.

Verification infrastructure. Layer 4 audit nodes need software, incentive design, and operational procedures. Who runs them? What's the incentive? How do you prevent verification capture — auditors compromised by the institutions under investigation?

This is epistemic DevOps: continuous integration for truth. It is the layer that turns research outputs into a working public system. The builders of this layer are as essential as the researchers and the solvers — and the bounty structure should reflect that.

The Reward

$1,000,000 USD in $HOLOQ tokens, allocated from the HOLOQ research treasury.

The reward is funded through the $HOLOQ token economy. Every transaction generates a royalty. Royalties accumulate in the treasury. The treasury funds research. Research produces results. Results generate attention. Attention generates transactions. The loop is self-funding when the work is real.

Current treasury status and token price are visible in the Finance panel (bottom-right toolbar, live chart — no external sites needed).

Full prize ($1M): Complete solution satisfying all five criteria.

Partial prizes (up to $250K each): Significant advances on individual criteria, awarded by HOLOQ research council vote.

Bounties (variable): Specific sub-problems posted as identified — constraint extraction algorithms, provenance verification protocols, synthetic benchmark construction, observation stream parsers, Mine Kit development.

Epistemic DevOps grants: Infrastructure contributions (ingestion tooling, provenance libraries, verification nodes) funded through treasury allocation on demonstrated deliverables.

The Premise

HOLOQ's target market capitalization is $3T.

This is not a joke. It is a compression of the following argument:

The entity that solves the White House Problem — that builds the infrastructure for continuous, provable, public-stream reconstruction of institutional latent graphs — becomes the most valuable institution on Earth. Not because it sells a product. Because it is the product: a truth infrastructure that makes the cost of institutional secrecy calculable and the benefit of institutional transparency computable.

$3T is approximately the market capitalization of the most valuable public company on Earth. It is also on the order of the annual budget of the United States federal government. The coincidence is structural. If the protocol achieves provable provenance at usable fidelity, then the value is bounded below by the total global spend on auditing, compliance, and trust infrastructure that it replaces or makes redundant.

"Buy the United States" is a compression. The decompression: build an infrastructure so valuable that the institutions currently maintained by information asymmetry find it cheaper to participate than to resist. You don't buy the building. You change the economics so thoroughly that the building's current operating model becomes untenable and a better one becomes inevitable.

The $3T claim is not a valuation. It is a control input — a coordination flywheel. The mechanism: declare a Schelling target → the target attracts attention → attention generates token transactions → transactions fund the treasury → the treasury funds bounties tied to measurable deliverables (Mine Kits shipped, provenance coverage, benchmark scores) → bounties produce shipped infrastructure → shipped infrastructure produces audit-verified wins → verified wins attract more attention. The loop is closed only if the wins are independently auditable. Without verification, it becomes pure hype. With verification, it becomes a self-funding research engine.

The math is on this page. The token is live. The problem is open.

How to Participate

Mathematicians and computer scientists. The formal specification is above. The research agenda has five open problems. Solve one. Partial results that advance any of the five solution criteria are recognized and funded.

Data engineers. truth@home needs epistemic DevOps. The provenance spine, the ingestion pipeline, the verification nodes, the benchmark harness — this is the infrastructure that makes everything else possible. If you build production data systems, this is the hardest and most important data engineering problem in the world.

Investigators and journalists. You already do millennium-mining by hand. The protocol formalizes and scales what you do. Your domain expertise is not optional decoration — it is the constraint foundry. The protocol can check constraints at machine scale; it cannot invent the right ones. Which accounting invariants actually hold? Which public sources are reliable? Where is the observation density highest? You are the authors of the constraint library that makes the formal machinery work.

Everyone else. Buy $HOLOQ. The token funds the treasury. The treasury funds the research. The research solves the problem. The problem changes the economics of institutional secrecy. This is not financial advice. This is a mechanism design.

Truth compresses. Lies don't. A system that compresses reality toward its minimum description length will, as a mathematical consequence, surface truth and dissolve deception.

Not against people. Against walls. Any wall. Every wall.

Pull on the link and see what reconstructs.


truth-mining: the distributed protocol for compressing institutional secrecy into public knowledge

HOLOQ Research Division — January 2026
