Every wire, every prompt, every dollar. The pipeline, the source-licensing covenant, and the quarterly Methodology Log that keeps the framework honest.
Updated May 2026 · GRIN v1.0 · CLAIMS v1.0 · NOVEL v1.0
VibesWire is not a one-prompt-and-publish operation. It's a multi-stage pipeline where the framework specs (GRIN, CLAIMS, NOVEL) are versioned inputs to the transform step, not floating opinions in a human writer's head. The right rail of the diagram below — the editorial review loop — is the load-bearing piece. The framework is updated quarterly from real reader submissions, real adversarial critic reports, and real cross-framework tests. None of that is decoration; all of it is published.
Sources are paid for (the Guardian), used per their explicit terms (arXiv's TDM policy, NASA's public-domain status), or accessed under non-commercial RSS conventions. The transform step is gated by the published framework spec — version-controlled, contestable, audited quarterly. Translation is a post-processing step; every language ships with a named reviewer and a public quality log. The four daily editions are anchors, not deadlines — articles publish continuously and the editions reorganize the homepage.
We started VibesWire with a specific bet: the positive use case for generative AI is structured, transparent, framework-applied analysis on top of legitimately-licensed source material. That bet only works if the source-licensing layer is honest. Five concrete commitments — each with the actual underlying terms one click away.
We pay The Guardian. [Guardian Open Platform]
Their licensed Open Platform API is our primary news source. We pay for it because journalism is expensive to produce and we are not interested in the business model where AI summarizes other people's reporting without paying for it.
We respect every source's terms. [arXiv API terms]
arXiv preprints are used per arXiv's explicit terms of use and TDM policy. NASA news is in the public domain. Al Jazeera, DW, EurekAlert, and Hacker News are accessed via their public RSS feeds and APIs under non-commercial conventions. We do not scrape, we do not bypass paywalls, we do not bulk-mirror.
Every article links back. Every figure is attributed.
The original source is one click away on every piece we publish. CLAIMS articles use the paper's own figures with the paper's attribution. We are a triage layer on top of journalism and research, not a replacement for either.
Every transform is AI-disclosed.
Every article carries the editorial mode it was written under (GRIN / CLAIMS / NOVEL), the model that produced it (Opus 4.7 or Sonnet 4.6), and the framework version it was scored under. The byline is honest. There is no human-impersonation layer.
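The disclosure described above is easiest to see as a small metadata record. A minimal sketch — the field names, the `byline` helper, and the rendered format are illustrative assumptions, not VibesWire's actual schema:

```python
# Hypothetical shape of the per-article AI disclosure. Field names and
# the byline format are assumptions for illustration only.
article_disclosure = {
    "mode": "CLAIMS",            # editorial mode: GRIN, CLAIMS, or NOVEL
    "model": "Opus 4.7",         # model that produced the transform
    "framework_version": "1.0",  # framework version the article was scored under
}

def byline(meta: dict) -> str:
    """Render the disclosure as a one-line byline."""
    return f"{meta['mode']} v{meta['framework_version']} · written by {meta['model']}"

print(byline(article_disclosure))
# → CLAIMS v1.0 · written by Opus 4.7
```

The point of the structure is that nothing in the byline is free text: mode, model, and framework version are machine-checkable fields, so the "no human-impersonation layer" claim is auditable.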
We are not extractive by values.
The whole publication is built around scoring extraction in the world. We would be hypocrites if our own operation extracted from the journalists, scientists, and creators we cover. We pay where we can, attribute always, and link back without fail. Our use of AI is a structured, visible layer of analysis on top of legitimately-licensed source material — that's the entire point.
The framework is not fixed. It evolves on a published quarterly cadence with annual stress-tests by paid critics who are explicitly hostile to its commitments. Every version, every change, every drift delta is public. If you disagree with the framework, you can disagree with a specific dimension or weighting in a specific version — that's a real argument we can move with.
Quarterly cadence
Reader submissions and cross-framework tests are synthesized into a published Methodology Log entry every quarter. The framework version increments. Calibration deltas are published against a held-out 50-article sample so longitudinal claims (“stories trending more extractive”) can be drift-corrected by anyone, not just by us.
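Drift correction of the kind described above can be reproduced by anyone from the published deltas. A hedged sketch, assuming the published calibration delta is a per-sample mean score shift between versions — the function names and the made-up scores are illustrative, and the Methodology Log would define the actual procedure:

```python
from statistics import mean

def calibration_delta(old_scores, new_scores):
    """Mean score shift between two framework versions on the same
    held-out sample (positive = new version scores higher)."""
    return mean(n - o for o, n in zip(old_scores, new_scores))

def drift_correct(score, delta):
    """Re-express an old-version score on the new version's scale."""
    return score + delta

# Illustrative held-out sample scored under v1.0 and v1.1 (made-up numbers):
v10 = [6.0, 7.5, 5.0, 8.0]
v11 = [6.4, 7.9, 5.2, 8.5]

delta = calibration_delta(v10, v11)  # ≈ 0.375
# A story scored 6.0 under v1.0 sits near 6.375 on the v1.1 scale, so a
# "stories trending more extractive" claim can net out the version shift.
print(drift_correct(6.0, delta))
```

Because the held-out sample and the deltas are published, this correction does not depend on trusting us: any reader can recompute it.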
Annual adversarial review
Once a year we commission a critic with explicit hostility to the framework's politics — typically a working journalist or academic who would build a different rubric if asked — to audit a sample of our scoring and identify systematic gaps. We pay them regardless of whether their findings flatter us. The report is published in full alongside our response.
Most publications either claim “we're free” (hiding the cost) or pitch subscriptions (“here's why you should pay”). We publish what it costs to run VibesWire openly, with the AI / human / infrastructure split visible.
This month · last published snapshot
See the live breakdown → /costs
The /costs page is the canonical record. Cost shape is roughly: B4M / Claude (the bulk of every dollar) → named reviewer stipends per language → AWS infrastructure → domain & email. We do not run ads. We do not run SEO games. We do not run engagement bait. The publication is funded today out of pocket; supporter contributions and a possible paid tier in 2027 are the two paths to sustainability.
The visible part of the audit-the-bias claim. As reader submissions and adversarial reviews come in, the patterns get summarized in each quarter's Methodology Log. Each entry is dated, version-stamped, and connects a specific blind spot to a specific framework refinement.
No entries yet — v1.0 inaugural release.
First quarterly Methodology Log entry expected Q3 2026. Spotted something the framework missed? Tell us.
One thing the diagram above doesn't show: VibesWire's editor still writes. Once a week, a long-form piece called The Ledger is authored by hand — Erik's voice, his structural framing, his analytical commitments — and then run through GRIN as structural wrapping. The hybrid is rare because editorials are conventionally unstructured opinion; ours benefit from framework-wrapping precisely because the editorial positions are themselves structural (an arithmetic gap, a regime change, a forced choice). The Ledger sits at the top of the homepage as a permanent hero — the human-voice layer above the AI pipeline. The framework is Erik's instrument, not Erik's replacement.
VibesWire applies three structured analytical frameworks — GRIN for news, CLAIMS for research papers, NOVEL for culture — to source material we pay for or use under explicit license. The frameworks are public, versioned, and contestable. AI does the production work; the editorial commitments are encoded once in the framework spec, not made up per article. We pay The Guardian. We respect arXiv's TDM policy. We do not scrape, we do not bypass paywalls, we attribute everything, we link back to the original, and every article tells you which model wrote it and which framework version scored it. Where we have blind spots, we ask readers to flag them, we hire critics to find them, and we publish quarterly Methodology Log entries documenting what changed. We are biased. The bias is structural, public, and versioned. You can audit it.
Want the full spec?
Each framework has a full methodology document, linked in the footer, covering the dimensions, scoring guides, editorial commitments, and intellectual genealogy.
Powered by Bike4Mind · Erik's AI infrastructure firm. VibesWire is a living case study of what the infrastructure can do.
News that makes you smarter, not angrier.