Verifiable Inventory of Bot-Engineered Signals — an open standard for tracking, verifying, and attesting AI involvement in software and agent workflows.
AI is generating production code at an accelerating pace. But the industry has no standard way to record what was generated, by which model, with what instructions, or how the model reasoned. Every AI-produced artifact is a black box the moment it lands in your codebase.
VIBES changes that by defining a structured, tool-agnostic data format for recording AI involvement. The data it captures has immediate value — and opens the door to use cases the industry is only beginning to explore.
- Security triage — Know which AI model wrote which code. When a model is found to produce vulnerable patterns, identify every line it touched across your entire codebase.
- Compliance & provenance — Track AI involvement for regulatory requirements: EU AI Act transparency obligations, SOC 2 change management, FDA software validation, and emerging AI disclosure mandates.
- Retroactive risk assessment — When a model is compromised, poisoned, or found to carry systematic biases, VIBES data lets you retroactively identify all affected code — not just future output.
- Transparency badges — Prove your project's AI audit posture with verifiable VIBES badges. Show developers, users, and auditors exactly how AI was involved.
- Supply chain mapping — Map the full AI supply chain across your codebase: which models, which tools, which versions, which prompts — queryable and git-tracked.
- Agent decision trees & action audit — VIBES captures context graphs, delegation hierarchies, and structured decision records — the full trace of what an AI agent decided, why, and what it spawned. This data enables security and detection systems to verify that agents performed as expected, flag anomalous decision paths, and reconstruct the chain of reasoning behind any autonomous action.

VIBES intentionally captures data without prescribing what to do with it. The base standard is a telemetry layer — a structured signal that downstream systems can consume for purposes we're only beginning to define:
- AI safety & security insurance — Structured audit data enables insurers to assess and price risk for AI-generated code. VIBES annotations provide the telemetry signal that underwriters need to differentiate between audited and unaudited AI usage. As AI-generated code becomes the norm, insurance products will need this data.
- Agent performance optimization — The decision records, delegation traces, and action logs captured by VIBES are exactly the data that EVOLVE consumes to drive agent learning, and that PRISM uses to calculate risk scores. By analyzing patterns in agent decisions — which approaches succeeded, which were revised, which triggered high risk scores — organizations can build feedback loops that improve agent performance over time. The audit trail becomes training signal.
- Industry benchmarking — Aggregate anonymized VIBES data across organizations to track AI code quality trends, model safety profiles, and tool effectiveness. Which models produce the most secure code? Which prompt patterns lead to better outcomes?
- Automated regulatory compliance — As AI transparency regulations mature globally, standardized audit data enables automated compliance checks rather than manual audits. VIBES provides the machine-readable substrate.
- Model evaluation at scale — Use real-world VIBES data to evaluate model performance, safety characteristics, and coding patterns across production environments — not just benchmarks.
- Detection & behavioral analysis — VIBES decision trees and action traces provide the raw signal for anomaly detection, behavioral baselining, and agent monitoring systems. When an agent deviates from expected patterns — unusual delegation chains, unexpected tool invocations, or reasoning traces that don't match the prompt — security teams have the data to detect and investigate.
- Future extensions — VERIFY, EVOLVE, and PRISM are the first extensions built on VIBES data — but the foundation is designed for more.
Any domain that needs structured AI audit telemetry can build on the same tool-agnostic, content-addressable, open data layer. As AI agents become more autonomous, the value of structured audit data compounds — and the use cases we haven't imagined yet will have the data waiting for them.

The value of VIBES is not just what you can do with the data today — it's that the data exists at all. Every decision tree captured, every agent action logged, every delegation trace recorded now is an asset for every future use case. The technologies built on top of this data will evolve — the foundation stays.
VIBES is an RFC-style specification that defines a three-tier framework for recording AI involvement in software and AI-driven workflows. It specifies what metadata to capture, how to store it using content-addressable hashing, and where audit data lives relative to the project. The standard supports context graphs for tracking causal relationships between events, multi-agent delegation for orchestrating sub-sessions across AI tools, and structured decision records for capturing architectural choices. It is tool-agnostic and designed for adoption by any AI coding tool — Claude Code, Cursor, Windsurf, Copilot, Codex, CLINE, or others.
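As a sketch, the context-graph and delegation records described above might look like the following journal entries. The record shapes and every field name here (`from_event`, `parent_session`, and so on) are illustrative assumptions, not the normative schema:

```python
import json

# Hypothetical shapes for two record types: an edge record (a causal link
# between two events in the context graph) and a delegation record (a parent
# session spawning a sub-session in another tool). Field names are
# illustrative, not normative.
edge = {
    "type": "edge",
    "from_event": "evt_prompt_01",   # e.g. the prompt that triggered the work
    "to_event": "evt_gen_02",        # e.g. the resulting code generation
    "relation": "caused",
}
delegation = {
    "type": "delegation",
    "parent_session": "sess_root",
    "child_session": "sess_sub_1",
    "tool": "claude-code",
    "task": "write unit tests for the parser module",
}

# As append-only JSON lines, the whole graph can be rebuilt in one pass.
journal = "\n".join(json.dumps(r, sort_keys=True) for r in (edge, delegation))
print(journal)
```

Because every record carries a `type` field, a consumer can rebuild the delegation hierarchy and causal graph with a single linear scan of the journal.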
All VIBES audit data lives in a .ai-audit/ directory at the project root, tracked alongside your code in version control.
The VIBES standard follows a formal versioning and ratification process. Draft versions are working documents subject to change. A version becomes stable when formally ratified after public review.
| Version | Date | Status | Ratified By | Notes |
|---|---|---|---|---|
| 0.1-draft | 2026-02-03 | Superseded | — | Initial internal draft. Core data model, three assurance levels, file layout. |
| 0.2-draft | 2026-02-10 | Superseded | — | Added context graphs, edge records, multi-agent delegation, decision records. |
| 1.0-draft | 2026-02-20 | Superseded | — | Feature-complete draft. Tool provider cosigning, attestation pipeline, domain extensibility. Pending community review. |
| 1.1-draft | 2026-02-26 | Draft | — | Line number rebase resilience with content anchoring, concurrency bottleneck acknowledgment, PRISM (Provenance & Risk Intelligence Scoring Model) optional extension. |
| VERIFY 0.1-draft | 2026-02-26 | Draft | — | Initial VERIFY extension draft. Attestation pipeline, DSSE envelopes, trust tiers, tool provider cosigning. See VERIFY. |
| EVOLVE 0.1-draft | 2026-02-26 | Draft | — | Initial EVOLVE extension draft. Agent learning, governance frameworks, and reinforcement pipelines. See EVOLVE. |
| PRISM 0.1-draft | 2026-02-26 | Draft | — | Initial PRISM extension draft. Provenance & Risk Intelligence Scoring Model for AI-generated code risk assessment. See PRISM. |
| 1.0 | TBD | Pending | TBD | Target: first stable release after public review period. |
A version is ratified when the authoring committee formally approves it for stable use. The "Ratified By" column records the individuals or organizations that signed off.
VIBES defines three assurance levels that progressively capture more about AI involvement. Start where it makes sense for your project and increase detail as your needs grow.
| Capability | Low | Medium | High |
|---|---|---|---|
| Identify tool and model | ✓ | ✓ | ✓ |
| Track AI-generated lines/functions | ✓ | ✓ | ✓ |
| Track commands/tool invocations | ✓ | ✓ | ✓ |
| Correlate with git commits | ✓ | ✓ | ✓ |
| Know what the AI was asked | — | ✓ | ✓ |
| Know context files provided | — | ✓ | ✓ |
| Know how the AI reasoned | — | — | ✓ |
| Reproduce the generation | Unlikely | Possible | Likely |
| Track causal relationships (context graphs) | ✓ | ✓ | ✓ |
| Multi-agent delegation hierarchy | ✓ | ✓ | ✓ |
| Structured decision records | — | ✓ | ✓ |
| Line annotation rebase resilience | Best-effort | Best-effort | Best-effort + anchoring |
| Storage per annotation | ~200 B | ~2–10 KB | ~10–500 KB |
| Risk scoring & provenance intelligence | PRISM extension | PRISM extension | PRISM extension |
| Agent learning & governance | EVOLVE extension | EVOLVE extension | EVOLVE extension |
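The best-effort rebase resilience noted in the table can be sketched with content anchoring: instead of trusting line numbers, an annotation stores a hash of the annotated line plus its neighbors and is relocated by searching for that content after history is rewritten. The window-hashing scheme below is an assumption for illustration, not the normative anchoring algorithm:

```python
import hashlib

def make_anchor(lines, idx, radius=1):
    """Hash the annotated line plus a small window of neighbours, so the
    annotation can be relocated even if line numbers shift."""
    window = lines[max(0, idx - radius): idx + radius + 1]
    return hashlib.sha256("\n".join(window).encode()).hexdigest()

def relocate(lines, anchor, radius=1):
    """Best-effort search for the anchored content's new line index."""
    for i in range(len(lines)):
        if make_anchor(lines, i, radius) == anchor:
            return i
    return None  # content changed too much; the annotation goes stale

old = ["import os", "def load():", "    return os.environ", "# end"]
anchor = make_anchor(old, 2)          # annotate the return line
new = ["#!/usr/bin/env python", "import os", "def load():",
       "    return os.environ", "# end"]
print(relocate(new, anchor))          # → 3: the line shifted down by one
```

When the search fails, the annotation is reported as stale rather than silently reattached to the wrong line — which is why the table calls all tiers "best-effort."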
VIBES stores audit data in three files inside the .ai-audit/ directory. All three are designed to be human-readable, git-friendly, and queryable.
Defines the project's assurance level, tracked file extensions, and exclusion patterns. This is the first file a tool creates when initializing VIBES compliance.
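As an illustration, such a configuration might look like the following. The key names and values here are assumptions for the sketch, not the normative schema:

```python
import json

# Hypothetical contents of the VIBES configuration file. Keys mirror the
# description above (assurance level, tracked extensions, exclusions) but
# their exact names are assumed, not taken from the specification.
config = {
    "vibes_version": "1.1-draft",
    "assurance_level": "medium",          # low | medium | high
    "tracked_extensions": [".py", ".ts", ".go"],
    "exclude": ["vendor/**", "*.lock"],   # glob patterns to skip
}
print(json.dumps(config, indent=2))
```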
Maps SHA-256 content hashes to their full context objects (environment, prompt, command, reasoning, decision). Each unique context is stored once and referenced by hash from annotations.
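A minimal sketch of this content-addressing scheme. The serialization details (sorted keys, compact separators) are assumptions — the specification defines the canonical form — and the `created_at` exclusion follows the hashing rule described below:

```python
import hashlib
import json

def context_hash(context: dict) -> str:
    # Drop created_at so identical contexts hash identically regardless of
    # when they were recorded, then serialize deterministically. The exact
    # canonicalization is defined by the spec; this is an illustration.
    stable = {k: v for k, v in context.items() if k != "created_at"}
    return hashlib.sha256(
        json.dumps(stable, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

# An illustrative context object; the nested structure is assumed.
context = {
    "environment": {"tool": "claude-code", "model": "claude-sonnet-4"},
    "prompt": "Refactor the retry logic in http_client.py",
    "command": "edit_file",
    "created_at": "2026-02-20T10:00:00Z",
}
digest = context_hash(context)
manifest = {digest: context}          # stored once, referenced by hash

later = dict(context, created_at="2026-02-26T14:30:00Z")
assert context_hash(later) == digest  # same context, same hash — no duplicate
```

Annotations then carry only the digest, so a context reused across a hundred generated lines costs one manifest entry, not a hundred copies.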
Append-only JSONL file containing line annotations, function annotations, session lifecycle events, edge records (causal relationships), and delegation records (multi-agent orchestration). Each record references context hashes from the manifest. Git diffs show exactly what was added.
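A single journal line for a line annotation might look like this — the field names and the truncated `context_hash` value are hypothetical, shown only to illustrate how a record points into the manifest:

```python
import json

# Hypothetical line-annotation record; all field names are illustrative.
record = {
    "type": "line_annotation",
    "file": "src/http_client.py",
    "lines": [42, 57],             # inclusive range the AI touched
    "context_hash": "3f1a9c0d",    # hypothetical manifest key, truncated
    "commit": "a1b2c3d",           # correlating git commit
}
# Append-only JSONL: each event is exactly one new line, so a git diff of
# the journal shows only additions.
line = json.dumps(record, sort_keys=True)
print(line)
```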
VIBES uses SHA-256 content addressing to deduplicate contexts: identical contexts always produce the same hash, regardless of when they were created. The hashing algorithm excludes the created_at field from the context object, so timestamps never change a context's hash.

These are the authoritative source documents for the VIBES standard. The HTML pages on this site are derived from these specifications.
Formal RFC-style specification. The complete, normative reference for the VIBES data format, hashing algorithm, and storage conventions.
Human-readable markdown version with extended examples, code samples, and implementation guidance.
Drop-in prompt file for AI agents. Add to CLAUDE.md, .cursorrules, .windsurfrules, or any AI agent configuration for automatic VIBES compliance.
The attestation and security verification extension — DSSE envelopes, Ed25519 signatures, trust tiers, and the public attestation registry.
The agent learning and governance extension — delegation hierarchies, decision records, reinforcement pipelines, and governance frameworks for autonomous agents.
The provenance and risk intelligence extension — PRISM scoring model for AI-generated code risk assessment, provenance tracking, and quality signals.
VIBES is complementary to existing software supply chain standards, not competing with them. It fills a specific gap: tracking AI provenance at the line and function level.
The key differentiator: VIBES tracks AI provenance at the line and function level. No other standard provides this granularity. The same data model extends to agent workflow auditing beyond source code.
VIBES is the foundation of a four-part ecosystem. Three companion standards extend it with security, risk analysis, and agent intelligence.
VERIFY — Cryptographic attestation. Ed25519 signatures, DSSE envelopes, tool provider cosigning, and a public attestation registry. Proves your audit data is authentic, untampered, and temporally anchored.
PRISM — Risk scoring. Computes severity bands from VIBES audit data for CI/CD gating, security triage, and compliance dashboards.
EVOLVE — Agent learning & governance. Decision records, feedback loops, and governance frameworks for self-improving agents.
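As a sketch of how a CI/CD gate might consume PRISM-style output — the numeric score range, thresholds, and band names below are assumptions for illustration, not part of the PRISM specification:

```python
# Hypothetical mapping from a normalized risk score to a severity band.
# Thresholds and band names are assumed, not defined by PRISM.
def severity_band(score: float) -> str:
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < 0.25:
        return "low"
    if score < 0.5:
        return "medium"
    if score < 0.75:
        return "high"
    return "critical"

# A CI gate might block merges whose band exceeds a configured ceiling:
assert severity_band(0.12) == "low"
assert severity_band(0.81) == "critical"
```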