LLM narrator that turns structured activity_log rows into plain-English
summaries for finance + compliance reviewers. Per the OSS plan §M4:
ships feature-flagged off until the operator runs an n ≥ 500 golden-set
eval with Wilson 95% upper bound ≤ 1% misrepresent rate.
Install
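A typical install, assuming the package is published to npm under the @glideco scope; the README does not state the actual package name, so the one below is a hypothetical placeholder:

```shell
# Hypothetical package name -- substitute the real published name.
npm install @glideco/activity-explainer
```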
Why feature-flagged
Activity-feed narratives go to compliance reviewers who will act on what they read. An LLM that misrepresents a single risk verdict, calling a ‘flag’ a ‘pass’ or vice versa, burns trust in the entire system. The golden-set eval gate makes the package hard to enable carelessly:

- n ≥ 500 labeled examples
- Wilson 95% upper bound on misrepresent rate ≤ 1%

Until your eval harness clears that bar, operators get the structured-chips view (rendered from the observations[] field; no LLM-narrated summary).
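The gate criterion is the standard Wilson score interval. A minimal sketch (the package's actual gate implementation is not shown in this README):

```typescript
// Wilson score interval, upper bound, at 95% confidence (z = 1.96).
function wilsonUpperBound(failures: number, n: number, z = 1.96): number {
  const p = failures / n;
  const z2 = z * z;
  const center = p + z2 / (2 * n);
  const margin = z * Math.sqrt((p * (1 - p)) / n + z2 / (4 * n * n));
  return (center + margin) / (1 + z2 / n);
}
```

Note how strict this is: at n = 500, even 1 observed misrepresent pushes the 95% upper bound past 1%, so only a clean run clears the gate.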
I/O contract
Every input is a shape the LLM can faithfully narrate; every output is a shape the UI can deterministically render. Both are Zod-validated.

The observations.kind vocabulary is intentional: the UI chip-renderer is a switch on those keys. New kinds require a schema update plus a UI render-path update.
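The switch-on-kind contract can be sketched as follows; the real observations.kind vocabulary lives in the package's Zod schema, so the two kinds below are hypothetical placeholders:

```typescript
// Hypothetical observation kinds -- the real vocabulary is in the schema.
type Observation =
  | { kind: "risk_verdict"; verdict: "pass" | "flag" | "block" }
  | { kind: "counterparty"; label: string };

// The chip-renderer is a switch on `kind`; the `never` assignment makes
// the compiler reject any new kind that lacks a render path, which is
// why new kinds require both a schema update and a UI update.
function renderChip(obs: Observation): string {
  switch (obs.kind) {
    case "risk_verdict":
      return `verdict:${obs.verdict}`;
    case "counterparty":
      return `counterparty:${obs.label}`;
    default: {
      const exhaustive: never = obs;
      throw new Error(`no render path for ${JSON.stringify(exhaustive)}`);
    }
  }
}
```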
Wiring
The package is LLM-agnostic. Operators bring their own client.

Eval harness
Build a golden set of 500+ labeled (input, expected_summary) pairs
covering every riskVerdict × envelope-axis combination plus
adversarial inputs (prompt-injection attempts in counterparty labels,
contradictory history entries, etc.).
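The coverage cross product above can be sketched as follows; the README does not enumerate the envelope axes, so the axis names here are hypothetical:

```typescript
const verdicts = ["pass", "flag", "block"] as const;
// Hypothetical envelope axes -- substitute the real ones from the schema.
const envelopeAxes = ["amount", "velocity", "counterparty_risk"] as const;

// One golden-set bucket per (riskVerdict, envelope-axis) pair; adversarial
// cases (prompt injection in labels, contradictory history) are layered on top.
const buckets = verdicts.flatMap((verdict) =>
  envelopeAxes.map((axis) => ({ verdict, axis }))
);
```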
For each input, run the explainer + compare claims against ground truth.
A “misrepresent” is any output that:
- Contradicts a present field (e.g. claims ‘pass’ when input was ‘block’).
- Invents a field not in the input.
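A minimal sketch of the contradiction half of the misrepresent check. The field name riskVerdict and the substring-based claim extraction are assumptions for illustration; a real harness would use structured claim extraction and would also cover the invented-field case:

```typescript
type Verdict = "pass" | "flag" | "block";

interface GoldenCase {
  input: { riskVerdict: Verdict }; // hypothetical field name
  expectedSummary: string;
}

// Naive claim extraction: which single verdict word does the summary assert?
// Returns null when zero or multiple verdict words appear.
function claimedVerdict(summary: string): Verdict | null {
  const hits = (["pass", "flag", "block"] as Verdict[]).filter((v) =>
    summary.toLowerCase().includes(v)
  );
  return hits.length === 1 ? hits[0] : null;
}

// Contradiction check only: the output claims a verdict that differs from
// the one present in the input.
function isMisrepresent(c: GoldenCase, output: string): boolean {
  const claimed = claimedVerdict(output);
  return claimed !== null && claimed !== c.input.riskVerdict;
}
```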
Reading list
- Source on GitHub
- @glideco/anomaly: heuristic signals the explainer narrates.