Integrate the VIBES standard into your AI tool or agent. Add transparency to every AI operation in minutes, not months.
This guide covers implementing the core VIBES data format. For attestation integration see VERIFY. For risk scoring integration see PRISM.
AI tools and agents are transforming how work gets done — from writing code to managing infrastructure to automating complex workflows. As adoption grows, so does the demand for transparency: what did the AI do? What prompted it? How did it reason about the task?
Transparency is becoming table stakes. Tools and agents that can prove their work will win enterprise adoption over those that can't.
Organizations in regulated industries need audit trails for AI-driven operations. VIBES gives them one in a standardized format.
When an issue is found in AI-generated output, VIBES lets users trace exactly which model, prompt, and reasoning produced it.
A basic Low assurance implementation takes roughly 200 lines of code — create three JSON files and append to a log on each AI operation. Medium adds prompt capture (~50 more lines). High adds reasoning capture with optional compression (~100 more lines).
Optional content anchoring fields (anchor_context, anchor_hash, file_content_hash) add ~100 bytes per annotation. PRISM fields (risk_score + risk_factors) add ~200–500 bytes per annotation — always inline, no compression needed.
A minimum viable VIBES implementation (Low assurance) requires three steps: create a config file, record environment context, and append annotations after each AI operation.
.ai-audit/config.json — Initialize the audit directory with a configuration file declaring the assurance level. Create this file on first use if it doesn't exist.
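A minimal first-use initialization sketch in Python; the config field names (vibes_version, assurance_level) are illustrative assumptions, not spec text:

```python
import json
import os
import tempfile

def init_audit_dir(root=".ai-audit", assurance_level="low"):
    """Create the audit directory and config file on first use.

    Field names here are assumed for illustration; consult the VIBES
    spec for the authoritative config schema.
    """
    os.makedirs(root, exist_ok=True)
    config_path = os.path.join(root, "config.json")
    if not os.path.exists(config_path):  # only create on first use
        with open(config_path, "w") as f:
            json.dump({"vibes_version": "1.0",
                       "assurance_level": assurance_level}, f, indent=2)
    return config_path

# Idempotent: safe to call at every session start.
path = init_audit_dir(os.path.join(tempfile.mkdtemp(), ".ai-audit"))
```

Calling it again with the same root returns the existing file untouched, so tools can run it unconditionally at startup.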
manifest.json — At session start, compute a SHA-256 hash of your tool and model identity, then write the environment context entry to the manifest. Reuse the same hash for all annotations in that session.
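A sketch of the session-start step. The entry fields and the canonical serialization (sorted keys, compact separators) are assumptions about the spec's exact rules:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative environment entry; the exact field set comes from the spec.
env = {
    "type": "environment",
    "tool_name": "my-agent",
    "tool_version": "0.4.2",
    "model_name": "example-model",
    "model_version": "2025-01",
    "parameters": {"temperature": 0.2},
    "created_at": datetime.now(timezone.utc).isoformat(),
}

# Hash the content excluding created_at; sorted keys and compact
# separators give a deterministic serialization.
content = {k: v for k, v in env.items() if k != "created_at"}
canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
env_hash = hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Store the entry in manifest.json keyed by its hash, then reuse
# env_hash on every annotation for the rest of the session.
manifest = {env_hash: env}
```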
annotations.jsonl — After each AI operation, append a line annotation record with the file path, line range, action type, environment hash, and timestamp. One line per annotation, append-only.
Each annotation is a single JSON line. Use atomic append operations (O_APPEND on POSIX) to support concurrent writers. Never modify or delete existing lines.
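The append step might be sketched like this; the record field names are illustrative, not copied from the schema:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def append_annotation(record, path):
    """Append one annotation as a single JSON line.

    O_APPEND makes the kernel seek and write in one step, so
    concurrent writers cannot interleave within a line. Existing
    lines are never modified or deleted.
    """
    line = json.dumps(record, separators=(",", ":")) + "\n"
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line.encode("utf-8"))
    finally:
        os.close(fd)

# Illustrative record; field names are assumptions, not spec text.
log = os.path.join(tempfile.mkdtemp(), "annotations.jsonl")
append_annotation({
    "record_type": "line_annotation",
    "file": "src/app.py",
    "line_start": 10,
    "line_end": 24,
    "action": "generate",
    "environment_hash": "0" * 64,
    "created_at": datetime.now(timezone.utc).isoformat(),
}, log)
```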
When creating line or function annotations, read the annotated range and compute content anchors for rebase resilience:
Content anchors enable re-matching annotations after line shifts from rebase, squash, or amend operations. The file_content_hash provides a fast-path: if the file is unchanged, line numbers are still valid and no anchor search is needed.
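A sketch of anchor computation. The anchoring field names come from the text above, but the internal shape of anchor_context is an assumption:

```python
import hashlib

def compute_anchors(file_text, start_line, end_line, context=2):
    """Content anchors for a line annotation (1-indexed, inclusive)."""
    lines = file_text.splitlines()
    annotated = "\n".join(lines[start_line - 1:end_line])
    return {
        # Hash of the annotated lines themselves, used for re-matching.
        "anchor_hash": hashlib.sha256(annotated.encode()).hexdigest(),
        # Surrounding lines to disambiguate duplicate matches
        # (structure assumed for illustration).
        "anchor_context": {
            "before": lines[max(0, start_line - 1 - context):start_line - 1],
            "after": lines[end_line:end_line + context],
        },
        # Fast path: if this matches the current file, line numbers
        # are still valid and no anchor search is needed.
        "file_content_hash": hashlib.sha256(file_text.encode()).hexdigest(),
    }

text = "a\nb\nc\nd\ne\n"
anchors = compute_anchors(text, 2, 3)  # anchors lines "b" and "c"
```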
If risk_scoring.enabled in config.json, gather available signals and compute a PRISM score:
Store individual signal assessments in risk_factors for score transparency. Available signals depend on assurance level: Low has basic signals, Medium adds prompt_token_count, High adds model_capability_tier. For the full PRISM framework, see the PRISM specification. PRISM is a standalone extension on top of VIBES.
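A toy scorer to illustrate the shape of risk_score and risk_factors. The signal names and weights below are invented for the sketch; the real scoring model belongs to the PRISM specification:

```python
def prism_score(signals):
    """Toy weighted average producing a normalized 0-1 risk score."""
    weights = {
        "action_type_risk": 0.5,         # basic signal (Low)
        "prompt_token_count_risk": 0.3,  # added at Medium
        "model_capability_risk": 0.2,    # added at High
    }
    factors, total, weight_sum = [], 0.0, 0.0
    for name, w in weights.items():
        if name in signals:  # score only signals this level provides
            factors.append({"signal": name,
                            "value": signals[name],
                            "weight": w})
            total += w * signals[name]
            weight_sum += w
    score = total / weight_sum if weight_sum else None
    return {"risk_score": score, "risk_factors": factors}

# Medium-level example: two of the three signals are available.
result = prism_score({"action_type_risk": 0.8,
                      "prompt_token_count_risk": 0.2})
```

Storing each factor alongside the final score keeps the computation transparent, as the text above requires.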
Medium assurance adds prompt context. Before sending a prompt to the model, hash and record it:
Prompt types: user_instruction, edit_command, chat_message, inline_completion, review_request, refactor_request, other.
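A sketch of prompt capture with assumed field names; only the prompt type list is taken from the text above:

```python
import hashlib

PROMPT_TYPES = {"user_instruction", "edit_command", "chat_message",
                "inline_completion", "review_request", "refactor_request",
                "other"}

def prompt_entry(prompt_text, prompt_type="user_instruction",
                 context_files=None):
    """Build a prompt context entry (field names are illustrative)."""
    if prompt_type not in PROMPT_TYPES:
        prompt_type = "other"  # fall back rather than reject
    return {
        "type": "prompt",
        "prompt_type": prompt_type,
        "prompt_text": prompt_text,
        "prompt_hash": hashlib.sha256(
            prompt_text.encode("utf-8")).hexdigest(),
        "context_files": context_files or [],
    }

entry = prompt_entry("Add retry logic to the HTTP client",
                     "edit_command", ["src/http.py"])
```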
High assurance adds chain-of-thought capture. After receiving the model's response, record its reasoning:
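A sketch of reasoning capture with inline gzip compression. The size threshold and the gzip+base64 encoding are implementation choices for this sketch, not spec requirements; VIBES also allows pushing very large traces to external blob storage:

```python
import base64
import gzip
import hashlib

def reasoning_entry(cot_text, model, compress_over=4096):
    """Record a chain-of-thought trace, compressing large ones inline."""
    raw = cot_text.encode("utf-8")
    entry = {
        "type": "reasoning",
        "reasoning_model": model,
        "token_count": len(cot_text.split()),  # crude tokenizer stand-in
        "content_hash": hashlib.sha256(raw).hexdigest(),
    }
    if len(raw) > compress_over:
        entry["encoding"] = "gzip+base64"
        entry["content"] = base64.b64encode(gzip.compress(raw)).decode("ascii")
    else:
        entry["content"] = cot_text  # small traces stay inline as plain text
    return entry

big = reasoning_entry("step " * 2000, "example-reasoning-model")
```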
Multi-agent workflows require delegation records, edge emission, and decision tracking. These are mandatory at all assurance levels.
VIBES audit data lives in three files inside the .ai-audit/ directory. All files are JSON and designed for git tracking.
The manifest stores four types of context entries, each identified by its SHA-256 hash. Higher assurance levels capture more context types.
Tool name, version, model name, version, parameters. Required at all levels.
Shell commands, file operations, API calls. Types: shell, file_write, api_call, etc.
Full prompt text, type classification, context files. Links code to the instruction that produced it.
Chain-of-thought trace, token count, reasoning model. Supports compression and external blob storage.
Context hashes are the primary keys linking manifest entries to annotations. The hash algorithm ensures deterministic, content-addressed identifiers.
The first 16 hex characters may be used for display. The full 64-character hash is always authoritative.
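The hashing rule can be sketched as follows; the exact canonical serialization (sorted keys, compact separators) is an assumption about the spec:

```python
import hashlib
import json

def entry_hash(entry):
    """Content-addressed key for a context entry.

    created_at is excluded, so identical contexts recorded at
    different times dedupe to the same key.
    """
    content = {k: v for k, v in entry.items() if k != "created_at"}
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h = entry_hash({"type": "environment", "tool_name": "demo",
                "created_at": "2025-06-01T00:00:00Z"})
short = h[:16]  # display form; the full 64-char hash stays authoritative
```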
The annotation log (annotations.jsonl) supports three record types:
New action types:

- rebase_remap — Line/function annotation remapped after a history rewrite. Must include a supersedes edge to the original.
- rebase_orphan — Annotation could not be remapped; the anchor was not found in the rewritten file.

Entries are keyed by their SHA-256 hash. The hash is computed from the entry content (excluding created_at). Identical contexts produce identical hashes and are deduplicated automatically.
The annotation log uses JSONL (JSON Lines) format for append-only efficiency:
- One record per line, terminated by \n
- null values may be omitted or explicitly included

Edge records capture directed causal and dependency relationships between audit events. Edge tracking is mandatory at all assurance levels: tools MUST emit edge records for the causal relationships they observe.
Edge types: caused_by (code generated from prompt), depends_on (explicit dependency), informed_by (context read), delegated_to (task delegation), supersedes (replacement), reviewed_by (review relationship).
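Emitting an edge record might look like this; the record field names are assumptions, while the edge type list comes from the text above:

```python
import json

EDGE_TYPES = {"caused_by", "depends_on", "informed_by",
              "delegated_to", "supersedes", "reviewed_by"}

def edge_line(edge_type, src, dst):
    """Serialize one edge record as a JSONL line (field names assumed)."""
    if edge_type not in EDGE_TYPES:
        raise ValueError(f"unknown edge type: {edge_type}")
    return json.dumps({"record_type": "edge", "edge_type": edge_type,
                       "from": src, "to": dst},
                      separators=(",", ":")) + "\n"

# An annotation was caused_by the prompt entry that produced it.
line = edge_line("caused_by", "a" * 64, "b" * 64)
```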
Delegation records capture multi-agent orchestration metadata when a parent session spawns child sessions for task delegation.
Delegation types: task (implementation work), review (code review), test (test writing/running), refactor (refactoring), other (anything else).
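A delegation record sketch; field names are illustrative, and only the delegation type list is taken from the text above:

```python
import uuid

DELEGATION_TYPES = {"task", "review", "test", "refactor", "other"}

def delegation_record(parent_session_id, delegation_type, task_summary):
    """Delegation record emitted when a parent spawns a child session."""
    if delegation_type not in DELEGATION_TYPES:
        delegation_type = "other"  # catch-all per the type list
    return {
        "record_type": "delegation",
        "parent_session_id": parent_session_id,
        "child_session_id": str(uuid.uuid4()),
        "delegation_type": delegation_type,
        "task_summary": task_summary,
    }

rec = delegation_record("sess-1", "review", "review the generated diff")
```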
Decision context entries are structured records in the manifest created when the AI evaluates multiple approaches. Decision tracking is mandatory at all assurance levels when multiple options are considered.
The decision entry is stored in the manifest keyed by its content hash (excluding created_at). Subsequent annotations can reference it via the decision_hash field to link code changes back to the decision that motivated them.
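A sketch of building a decision entry and its key; field names are illustrative, and the created_at exclusion follows the rule stated above:

```python
import hashlib
import json

def decision_entry(question, options, chosen_index, created_at):
    """Decision context entry plus its content-addressed manifest key."""
    entry = {
        "type": "decision",
        "question": question,
        "options": options,
        "chosen_index": chosen_index,
        "created_at": created_at,  # excluded from the key below
    }
    content = {k: v for k, v in entry.items() if k != "created_at"}
    key = hashlib.sha256(
        json.dumps(content, sort_keys=True,
                   separators=(",", ":")).encode()).hexdigest()
    return key, entry

decision_hash, entry = decision_entry(
    "Pagination strategy?", ["offset-based", "cursor-based"], 1,
    "2025-06-01T12:00:00Z")
# A later annotation links back via its decision_hash field.
```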
Line and function annotations support optional content anchoring fields for rebase resilience:
Annotations may include a normalized risk score and structured risk factors. For full risk scoring implementation details, see the PRISM specification.
The fastest way to make an AI agent VIBES-compliant is to drop the agent instruction file into its system prompt. The file is self-contained — no external dependencies, no downloads.
Download vibes-agent.md and add it to your agent's instruction set. The file contains complete specifications for all assurance levels, including hash computation, manifest format, annotation schema, and agent behavior hooks.
Compatible agent instruction systems:
CLAUDE.md — Claude Code
.cursorrules — Cursor
.windsurfrules — Windsurf
copilot-instructions.md — GitHub Copilot

VIBES defines eight hook points where an agent must perform audit operations.
The vibes-agent.md file includes the complete file schemas (the .ai-audit/ directory structure).

Optional in VIBES 1.0, tool provider cosigning lets your tool add a second signature to attestations, proving the audit data was generated by your tool in real time. This protects users against data fabrication, post-hoc editing, and tool impersonation.
For full attestation implementation details, see the VERIFY specification.
When your tool cosigns, attestations are classified as tool-corroborated instead of just self-attested. This is the strongest trust signal in the VIBES system — it means an independent party (your tool) confirms the data is genuine. Registries, badges, and verification reports all display this trust tier.
Create an Ed25519 key pair for your tool. The private key stays in your infrastructure (server-side or secure enclave) and is never distributed to users.
Serve your public key at https://{your-domain}/vibes/vibes-signing-keys.json. Include your keyid, algorithm, PEM-encoded public key, validity window, and status. Verifiers will use this endpoint to look up your key automatically.
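The field names in this sketch of the published key file are inferred from the list above (keyid, algorithm, PEM key, validity window, status), not copied from the spec:

```json
{
  "keys": [
    {
      "keyid": "mytool-2025",
      "algorithm": "Ed25519",
      "public_key_pem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n",
      "valid_from": "2025-01-01T00:00:00Z",
      "valid_to": "2026-01-01T00:00:00Z",
      "status": "active"
    }
  ]
}
```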
Build an endpoint that accepts a 32-byte PAE hash and returns an Ed25519 signature. This is the privacy-preserving approach — your endpoint never sees the actual audit data, only its cryptographic hash. Users pass this URL via --cosign-url.
The cosignature must be generated when your tool produces the audit data, not after the fact. This timing constraint is the core anti-fabrication property — it prevents users from obtaining your signature for data you didn't generate.
The endpoint receives only the 32-byte hash, not the full audit data. It should authenticate the request (e.g., via API key or session token) to prevent abuse. Rate limiting is recommended.
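A server-side handler sketch for the cosign endpoint. The `sign` parameter stands for your real Ed25519 primitive (an HSM, KMS, or a library such as `cryptography`); the HMAC stub below is a dependency-free placeholder, and all names are illustrative:

```python
import hashlib
import hmac

def handle_cosign(pae_hash, api_key, expected_key, sign):
    """Validate and sign a 32-byte PAE hash; never sees audit data."""
    if not hmac.compare_digest(api_key, expected_key):
        raise PermissionError("bad API key")   # authenticate every request
    if len(pae_hash) != 32:                    # accept only the PAE hash
        raise ValueError("expected a 32-byte hash")
    return sign(pae_hash)

def stub_sign(h):
    """Placeholder signer for the sketch; NOT Ed25519."""
    return hmac.new(b"demo-secret", h, hashlib.sha256).digest()

sig = handle_cosign(hashlib.sha256(b"attestation payload").digest(),
                    "key-1", "key-1", stub_sign)
```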
Requirements: HTTPS only, stable URL for the lifetime of signed attestations, rotate keys at least annually, keep retired keys listed with status: "retired" for verifying historical attestations.
If you distribute a signing key with your tool binary (simpler but harder to rotate), users can pass the key file directly on the command line.
This approach is suitable for development/testing or tools that run fully offline. For production, the remote signing service is recommended because it keeps the private key server-side and supports key rotation without tool updates.
Use the vibecheck CLI to validate your VIBES implementation. It checks file structure, hash integrity, schema compliance, and assurance level requirements.
Common pitfalls:

- Including created_at in hash computation — this field must be excluded before hashing
- Non-canonical JSON serialization — hash over a deterministic serialization with compact separators ("," and ":")
- Omitting environment_hash on annotations — this field is required at all assurance levels (session_id on annotations is optional)
- Writing manifest.json from many agents at once — shared writes create a bottleneck with 10+ concurrent agents. This is a known v1.0 limitation; for high-concurrency workflows, consider per-agent write-ahead logs merged at commit time.

VIBES is an open standard. The spec, reference implementation, and validation tools are all freely available.
Whether you're building an AI coding assistant, an autonomous agent framework, a CI/CD plugin, or an audit platform — VIBES gives your users verifiable transparency with a standard data format. Add tool provider cosigning to prove your tool generated the data.
Understand the standard before implementing it. Start with the overview, then dive into the assurance level that matches your target audience.