Tool Builders & AI Agents

Implement VIBES

Integrate the VIBES standard into your AI tool or agent. Add transparency to every AI operation in minutes, not months.

This guide covers implementing the core VIBES data format. For attestation integration see VERIFY. For risk scoring integration see PRISM.

Why Implement VIBES?

AI tools and agents are transforming how work gets done — from writing code to managing infrastructure to automating complex workflows. As adoption grows, so does the demand for transparency: what did the AI do? What prompted it? How did it reason about the task?

Differentiate Your Tool

Transparency is becoming table stakes. Tools and agents that can prove their work will win enterprise adoption over those that can't.

Meet Compliance Demand

Organizations in regulated industries need audit trails for AI-driven operations. VIBES gives them one in a standardized format.

Be the Tool That Proves Its Work

When an issue is found in AI-generated output, VIBES lets users trace exactly which model, prompt, and reasoning produced it.

Implementation Effort

A basic Low assurance implementation takes roughly 200 lines of code — create three JSON files and append to a log on each AI operation. Medium adds prompt capture (~50 more lines). High adds reasoning capture with optional compression (~100 more lines).

Optional content anchoring fields (anchor_context, anchor_hash, file_content_hash) add ~100 bytes per annotation. PRISM fields (risk_score + risk_factors) add ~200–500 bytes per annotation — always inline, no compression needed.

Implementation Quickstart

A minimum viable VIBES implementation (Low assurance) requires three steps: create a config file, record environment context, and append annotations after each AI operation.

Create .ai-audit/config.json

Initialize the audit directory with a configuration file declaring the assurance level. Create this file on first use if it doesn't exist.

Record environment context to manifest.json

At session start, compute a SHA-256 hash of your tool and model identity, then write the environment context entry to the manifest. Reuse the same hash for all annotations in that session.

Append annotations to annotations.jsonl

After each AI operation, append a line annotation record with the file path, line range, action type, environment hash, and timestamp. One line per annotation, append-only.

Step 1: config.json (pseudocode)
// Create .ai-audit/config.json
if not exists(".ai-audit/"):
    mkdir(".ai-audit/")
if not exists(".ai-audit/config.json"):
    config = {
        "standard": "VIBES",
        "standard_version": "1.0",
        "assurance_level": "low",
        "project_name": detect_project_name()
    }
    write_json(".ai-audit/config.json", config)
Step 2: Environment context & manifest (pseudocode)
// On session start, record environment context
env_context = {
    "type": "environment",
    "tool_name": "YourTool",
    "tool_version": "1.0.0",
    "model_name": "claude-opus-4-5",
    "model_version": "20251101",
    "created_at": now_iso8601()
}

// Compute hash (exclude created_at, sort keys, no whitespace)
hashable = remove_key(env_context, "created_at")
canonical = json_serialize(hashable, sort_keys=true, compact=true)
env_hash = sha256_hex(canonical)

// Write to manifest if new
manifest = read_json(".ai-audit/manifest.json")
if env_hash not in manifest.entries:
    manifest.entries[env_hash] = env_context
    write_json_atomic(".ai-audit/manifest.json", manifest)
Step 3: Line annotations (pseudocode)
// After each code generation
annotation = {
    "type": "line",
    "file_path": "src/auth.py",
    "line_start": 1,
    "line_end": 45,
    "environment_hash": env_hash,
    "action": "create",
    "timestamp": now_iso8601(),
    "session_id": session_id,
    "assurance_level": "low"
}

// Append as single line to JSONL (atomic write)
append_line(".ai-audit/annotations.jsonl", json_serialize(annotation))

Each annotation is a single JSON line. Use atomic append operations (O_APPEND on POSIX) to support concurrent writers. Never modify or delete existing lines.
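The append rules above can be sketched in Python (a minimal illustration, not a reference implementation; the helper name is ours). `os.O_APPEND` positions every write at the current end of file, so concurrent tool instances never interleave within a line:

```python
import json
import os

def append_annotation(log_path: str, record: dict) -> None:
    """Append one annotation as a single JSONL line using an atomic O_APPEND write."""
    line = json.dumps(record, separators=(",", ":")) + "\n"
    # O_APPEND makes each write() land at the current end of file,
    # so concurrent writers never interleave within a single line.
    fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line.encode("utf-8"))
    finally:
        os.close(fd)
```

Serializing with compact separators keeps each record on one line even if the source dict came from pretty-printed JSON.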

Step 4: Compute content anchors (RECOMMENDED)

When creating line or function annotations, read the annotated range and compute content anchors for rebase resilience:

// After identifying the annotated range

// 1. Compute file_content_hash = SHA-256 of the entire file
file_bytes = read_file(annotation.file_path)
annotation["file_content_hash"] = sha256_hex(file_bytes)

// 2. Extract anchor_context = first 3 lines of the range, max 256 bytes
range_lines = read_lines(annotation.file_path, annotation.line_start, annotation.line_end)
annotation["anchor_context"] = truncate(join(range_lines[:3], "\n"), 256)

// 3. Compute anchor_hash = SHA-256 of the full annotated content
annotated_content = join(range_lines, "\n")
annotation["anchor_hash"] = sha256_hex(utf8_encode(annotated_content))

Content anchors enable re-matching annotations after line shifts from rebase, squash, or amend operations. The file_content_hash provides a fast-path: if the file is unchanged, line numbers are still valid and no anchor search is needed.
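A runnable Python sketch of the three anchor computations (the function name is illustrative; only the three output field names come from the spec):

```python
import hashlib

def compute_anchors(file_path: str, line_start: int, line_end: int) -> dict:
    """Compute the optional content-anchoring fields for a line annotation."""
    with open(file_path, "rb") as f:
        file_bytes = f.read()
    lines = file_bytes.decode("utf-8").splitlines()
    range_lines = lines[line_start - 1:line_end]  # 1-based, inclusive bounds
    annotated_content = "\n".join(range_lines)
    # First 3 lines of the range, truncated to 256 bytes (not characters).
    context = "\n".join(range_lines[:3]).encode("utf-8")[:256].decode("utf-8", "ignore")
    return {
        "file_content_hash": hashlib.sha256(file_bytes).hexdigest(),
        "anchor_context": context,
        "anchor_hash": hashlib.sha256(annotated_content.encode("utf-8")).hexdigest(),
    }
```

Truncating on the encoded bytes (with a lossy decode) keeps `anchor_context` within the 256-byte cap even for multi-byte UTF-8 content.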

Step 5: Compute PRISM Score (OPTIONAL)

If risk_scoring.enabled is set in config.json, gather available signals and compute a PRISM score:

// Optional: compute PRISM score (Provenance & Risk Intelligence Scoring Model)
config = read_json(".ai-audit/config.json")
if config.risk_scoring and config.risk_scoring.enabled:
    signals = gather_signals(annotation, context)
    // Reference: weighted-average algorithm or custom function
    risk_factors = []
    for signal in signals:
        risk_factors.append({
            "signal": signal.name,
            "value": signal.value,    // 0.0–1.0
            "weight": signal.weight,  // 0.0–1.0, sum to 1.0
            "reason": signal.reason   // optional explanation
        })
    annotation["risk_score"] = weighted_average(risk_factors)
    annotation["risk_factors"] = risk_factors

Store individual signal assessments in risk_factors for score transparency. Available signals depend on assurance level: Low has basic signals, Medium adds prompt_token_count, High adds model_capability_tier. For the full PRISM framework, see the PRISM specification. PRISM is a standalone extension on top of VIBES.
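The reference weighted-average aggregation can be sketched in a few lines of Python (the signal names in the example are illustrative, not part of the standard; normalizing by the weight sum is an assumption that keeps partial signal sets valid):

```python
def weighted_average(risk_factors: list) -> float:
    """Aggregate PRISM signal assessments into one 0.0-1.0 score.

    Each factor carries a value (0.0-1.0) and a weight; weights are
    normalized so a partial signal set still yields a valid score.
    """
    total_weight = sum(f["weight"] for f in risk_factors)
    if total_weight == 0:
        return 0.0
    return sum(f["value"] * f["weight"] for f in risk_factors) / total_weight

factors = [
    {"signal": "file_sensitivity", "value": 0.8, "weight": 0.5},
    {"signal": "change_size", "value": 0.2, "weight": 0.5},
]
# 0.8 * 0.5 + 0.2 * 0.5 = 0.5
```

Keeping the per-signal entries in `risk_factors` alongside the aggregate makes the score reproducible by any reader of the log.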

Adding Medium assurance (prompt capture)

Medium assurance adds prompt context. Before sending a prompt to the model, hash and record it:

// Before sending prompt to model
prompt_context = {
    "type": "prompt",
    "prompt_text": user_prompt,
    "prompt_type": "user_instruction",
    "prompt_context_files": ["src/auth.py", "src/models/user.py"],
    "created_at": now_iso8601()
}
prompt_hash = compute_context_hash(prompt_context)
write_to_manifest_if_new(prompt_hash, prompt_context)

// Then include prompt_hash in annotations
annotation["prompt_hash"] = prompt_hash

Prompt types: user_instruction, edit_command, chat_message, inline_completion, review_request, refactor_request, other.

Adding High assurance (reasoning capture)

High assurance adds chain-of-thought capture. After receiving the model's response, record its reasoning:

// After model response, capture reasoning
reasoning_context = {
    "type": "reasoning",
    "reasoning_text": model_thinking_output,
    "reasoning_token_count": count_tokens(model_thinking_output),
    "reasoning_model": "claude-opus-4-5",
    "created_at": now_iso8601()
}

// Size management: compress if > 10KB, external blob if > 100KB
if byte_size(reasoning_context.reasoning_text) > 102400:
    // Name the blob by a content hash of the raw reasoning text
    blob_name = sha256_hex(utf8_encode(reasoning_context.reasoning_text))
    write_gzip_blob(".ai-audit/blobs/" + blob_name + ".json.gz", reasoning_context)
    reasoning_context.external = true
    reasoning_context.blob_path = "blobs/" + blob_name + ".json.gz"
    delete reasoning_context.reasoning_text
elif byte_size(reasoning_context.reasoning_text) > 10240:
    reasoning_context.reasoning_text_compressed = gzip_base64(reasoning_context.reasoning_text)
    reasoning_context.compressed = true
    delete reasoning_context.reasoning_text

reasoning_hash = compute_context_hash(reasoning_context)
write_to_manifest_if_new(reasoning_hash, reasoning_context)

// Include reasoning_hash in annotations
annotation["reasoning_hash"] = reasoning_hash

Multi-agent implementation (delegation, edges, decisions)

Multi-agent workflows require delegation records, edge emission, and decision tracking. These are mandatory at all assurance levels.

// 1. On delegation: create child session, emit delegation record
child_session_id = generate_uuid()
delegation = {
    "type": "delegation",
    "parent_session_id": current_session_id,
    "child_session_id": child_session_id,
    "timestamp": now_iso8601(),
    "task_description": "Implement authentication module",
    "delegated_files": ["src/auth.rs", "src/middleware.rs"],
    "delegation_type": "task"
}
append_line(".ai-audit/annotations.jsonl", json_serialize(delegation))

// Start child session with parent_session_id
child_session = {
    "type": "session",
    "event": "start",
    "session_id": child_session_id,
    "parent_session_id": current_session_id,
    "environment_hash": env_hash,
    "timestamp": now_iso8601()
}
append_line(".ai-audit/annotations.jsonl", json_serialize(child_session))

// 2. On file read: emit informed_by edge
// Links the resulting annotation to the file read context
edge = {
    "type": "edge",
    "edge_type": "informed_by",
    "source_ref": annotation_id,        // the annotation that used this context
    "source_type": "annotation",
    "target_ref": command_context_hash, // the file read command context
    "target_type": "context",
    "timestamp": now_iso8601(),
    "session_id": current_session_id
}
append_line(".ai-audit/annotations.jsonl", json_serialize(edge))

// 3. On decision: create manifest entry, link via decision_hash
decision = {
    "type": "decision",
    "decision_point": "Which auth strategy to use",
    "options": [
        {"id": "A", "description": "JWT", "pros": ["Stateless"], "cons": ["Revocation"]},
        {"id": "B", "description": "Sessions", "pros": ["Simple"], "cons": ["State"]}
    ],
    "selected": "A",
    "rationale": "JWT chosen for horizontal scalability",
    "confidence": "high",
    "created_at": now_iso8601()
}
decision_hash = compute_context_hash(decision)
write_to_manifest_if_new(decision_hash, decision)

// Reference in subsequent annotations
annotation["decision_hash"] = decision_hash

Data Format Reference

VIBES audit data lives in three core files inside the .ai-audit/ directory. All three are JSON-based (config and manifest are JSON; the annotation log is JSONL) and designed for git tracking.

// .ai-audit/ directory structure
project/
  .ai-audit/
    config.json        // Project configuration (assurance level, extensions)
    manifest.json      // Hash-to-context mappings (environment, prompt, reasoning)
    annotations.jsonl  // Append-only annotation log (line, function, session)
    audit.db           // Generated query database (gitignored)
    blobs/             // External storage for large entries (High assurance)

Context Types

The manifest stores four types of context entries, each identified by its SHA-256 hash. Higher assurance levels capture more context types.

  • Environment (Low+): Tool name, version, model name, version, parameters. Required at all levels.
  • Command (Low+): Shell commands, file operations, API calls. Types: shell, file_write, api_call, etc.
  • Prompt (Medium+): Full prompt text, type classification, context files. Links code to the instruction that produced it.
  • Reasoning (High): Chain-of-thought trace, token count, reasoning model. Supports compression and external blob storage.

Hash computation algorithm

Context hashes are the primary keys linking manifest entries to annotations. The hash algorithm ensures deterministic, content-addressed identifiers.

// SHA-256 hash computation for context entries
function compute_context_hash(context):
    // 1. Remove the created_at field
    hashable = {k: v for k, v in context if k != "created_at"}
    // 2. Serialize with sorted keys, no whitespace
    //    Separators: "," and ":" (no spaces)
    canonical = json_dumps(hashable, sort_keys=true, separators=(",", ":"))
    // 3. Encode as UTF-8
    bytes = utf8_encode(canonical)
    // 4. SHA-256 hash
    // 5. Express as 64-char lowercase hex
    return sha256(bytes).hex()

The first 16 hex characters may be used for display. The full 64-character hash is always authoritative.
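The algorithm maps directly onto Python's standard library; a sketch (the example field values are illustrative, the canonicalization rules are from the spec):

```python
import hashlib
import json

def compute_context_hash(context: dict) -> str:
    """Deterministic SHA-256 content hash for a manifest context entry."""
    # Exclude created_at, sort keys, and use compact separators so the
    # serialized bytes are identical across tools and JSON libraries.
    hashable = {k: v for k, v in context.items() if k != "created_at"}
    canonical = json.dumps(hashable, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

env = {
    "type": "environment",
    "tool_name": "YourTool",
    "tool_version": "1.0.0",
    "created_at": "2026-02-03T10:00:00Z",
}
full_hash = compute_context_hash(env)
display = full_hash[:16]  # short form for display; the full hash stays authoritative
```

Because created_at is excluded, the same logical context recorded at different times hashes identically, which is what makes manifest deduplication work.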

Annotation record schema

The annotation log (annotations.jsonl) uses three core record types (edge and delegation records, described below, are appended to the same log):

// Line annotation (most common)
{
  "type": "line",
  "file_path": "src/auth.py",          // relative, forward slashes
  "line_start": 1,                     // 1-based, inclusive
  "line_end": 45,                      // 1-based, inclusive
  "environment_hash": "e7a3f1b2...",   // required
  "command_hash": "d4e5f6a7...",       // optional
  "prompt_hash": "a1b2c3d4...",        // Medium+ only
  "reasoning_hash": null,              // High only
  "action": "create",                  // create|modify|delete|review|rebase_remap|rebase_orphan
  "timestamp": "2026-02-03T10:05:00Z",
  "commit_hash": "abc123",             // required, backfilled on commit
  "session_id": "550e8400-...",        // optional UUID
  "assurance_level": "medium",
  "anchor_context": "def validate_signup(email, password):\n ...", // optional
  "anchor_hash": "a9b8c7d6...",        // optional
  "file_content_hash": "1a2b3c4d...",  // optional
  "risk_score": 0.42,                  // optional PRISM (0.0–1.0)
  "risk_factors": [...]                // optional [{signal, value, weight, reason?}]
}
// Function annotation (same fields, different identifiers)
{
  "type": "function",
  "file_path": "src/auth.py",
  "function_name": "validate_signup",
  "function_signature": "def validate_signup(email: str, password: str) -> bool",
  ...
}
// Session lifecycle records
{"type":"session", "event":"start", "session_id":"550e8400-...", "timestamp":"...", "environment_hash":"e7a3...", "assurance_level":"medium"}
{"type":"session", "event":"end", "session_id":"550e8400-...", "timestamp":"..."}

Rebase-related action types:

  • rebase_remap — Line/function annotation remapped after history rewrite. Must include supersedes edge to original.
  • rebase_orphan — Annotation could not be remapped — anchor not found in rewritten file.

Manifest file structure
// .ai-audit/manifest.json
{
  "standard": "VIBES",
  "version": "1.0",
  "entries": {
    "e7a3f1b2c4d5...": {
      "type": "environment",
      "tool_name": "Claude Code",
      "tool_version": "1.5.2",
      "model_name": "claude-opus-4-5",
      "model_version": "20251101",
      "created_at": "2026-02-03T10:00:00Z"
    },
    "a1b2c3d4e5f6...": {
      "type": "prompt",
      "prompt_text": "Add input validation...",
      "prompt_type": "user_instruction",
      "prompt_context_files": ["src/auth.py"],
      "created_at": "2026-02-03T10:05:00Z"
    }
  }
}

Entries are keyed by their SHA-256 hash. The hash is computed from the entry content (excluding created_at). Identical contexts produce identical hashes and are deduplicated automatically.
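The dedup-on-write behavior can be sketched in Python (the function name mirrors the pseudocode earlier in this guide; the temp-file-plus-rename atomic replace is our assumption about how write_json_atomic would behave, not a spec requirement):

```python
import json
import os
import tempfile

def write_to_manifest_if_new(manifest_path: str, entry_hash: str, entry: dict) -> bool:
    """Insert a context entry keyed by its hash; return True if newly added."""
    try:
        with open(manifest_path) as f:
            manifest = json.load(f)
    except FileNotFoundError:
        manifest = {"standard": "VIBES", "version": "1.0", "entries": {}}
    if entry_hash in manifest["entries"]:
        return False  # identical context already recorded; deduplicated
    manifest["entries"][entry_hash] = entry
    # Atomic replace: write to a temp file, then rename over the original,
    # so concurrent readers never see a half-written manifest.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(manifest_path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(manifest, f, indent=2)
    os.replace(tmp, manifest_path)
    return True
```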

JSONL format rules

The annotation log uses JSONL (JSON Lines) format for append-only efficiency:

  • Each record MUST be a single line (no line breaks within a record)
  • Records are separated by \n
  • null values may be omitted or explicitly included
  • You MUST append records — never modify or delete existing lines
  • Silently skip unrecognized record types when reading
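A tolerant reader following these rules might look like this in Python (the known-types set matches the record schemas in this section; unrecognized types are skipped, not rejected, so older readers survive newer logs):

```python
import json

KNOWN_TYPES = {"line", "function", "session", "edge", "delegation"}

def read_annotation_log(text: str) -> list:
    """Parse annotations.jsonl content, silently skipping unknown record types."""
    records = []
    for line in text.splitlines():
        if not line.strip():
            continue  # tolerate stray blank lines
        record = json.loads(line)
        if record.get("type") in KNOWN_TYPES:
            records.append(record)
    return records
```
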
Edge record schema

Edge records capture directed causal and dependency relationships between audit events. Edge tracking is mandatory at all assurance levels — tools MUST emit edge records to capture causal relationships.

// Edge record — appended to annotations.jsonl
{
  "type": "edge",
  "edge_type": "caused_by",            // caused_by | depends_on | informed_by |
                                       // delegated_to | supersedes | reviewed_by
  "source_ref": "<annotation_id>",     // annotation_id, context hash, or session_id
  "source_type": "annotation",         // annotation | context | session
  "target_ref": "<context_hash>",      // annotation_id, context hash, or session_id
  "target_type": "context",            // annotation | context | session
  "timestamp": "2026-02-18T14:00:00Z", // ISO-8601
  "session_id": "550e8400-...",        // optional UUID
  "metadata": {}                       // optional object
}

Edge types: caused_by (code generated from prompt), depends_on (explicit dependency), informed_by (context read), delegated_to (task delegation), supersedes (replacement), reviewed_by (review relationship).

Delegation record schema

Delegation records capture multi-agent orchestration metadata when a parent session spawns child sessions for task delegation.

// Delegation record — appended to annotations.jsonl
{
  "type": "delegation",
  "parent_session_id": "parent-uuid",          // required UUID
  "child_session_id": "child-uuid",            // required UUID
  "timestamp": "2026-02-18T14:00:00Z",         // ISO-8601
  "task_description": "Implement auth module", // optional string
  "delegated_files": ["src/auth.rs"],          // optional array of file paths
  "delegation_type": "task",                   // task | review | test | refactor | other
  "parent_environment_hash": "e7a3f1b2...",    // optional hash
  "child_environment_hash": "b2c3d4e5..."      // optional hash
}

Delegation types: task (implementation work), review (code review), test (test writing/running), refactor (refactoring), other (anything else).

Decision context entry schema

Decision context entries are structured records in the manifest created when the AI evaluates multiple approaches. Decision tracking is mandatory at all assurance levels when multiple options are considered.

// Decision context — stored in manifest.json entries
{
  "type": "decision",
  "decision_point": "Which authentication strategy to use",
  "options": [
    {
      "id": "A",
      "description": "JWT tokens",
      "pros": ["Stateless", "Horizontal scaling"],
      "cons": ["Token revocation complexity"]
    },
    {
      "id": "B",
      "description": "Server sessions",
      "pros": ["Simple revocation"],
      "cons": ["Requires session store"]
    }
  ],
  "selected": "A",                      // matches an option id
  "rationale": "JWT chosen for horizontal scalability",
  "confidence": "high",                 // high | medium | low
  "created_at": "2026-02-18T14:00:00Z"  // ISO-8601
}

The decision entry is stored in the manifest keyed by its content hash (excluding created_at). Subsequent annotations can reference it via the decision_hash field to link code changes back to the decision that motivated them.

Content Anchoring Fields (Optional)

Line and function annotations support optional content anchoring fields for rebase resilience:

  • anchor_context (optional): First 3 lines of the annotated range at annotation time (max 256 bytes).
  • anchor_hash (optional): SHA-256 of the annotated content at annotation time.
  • file_content_hash (optional): SHA-256 of the entire file at annotation time.

PRISM (Provenance & Risk Intelligence Scoring Model) — Optional Extension

Annotations may include a normalized risk score and structured risk factors. For full risk scoring implementation details, see the PRISM specification.

  • risk_score (optional): Aggregate risk score (0.0–1.0) computed from available signals.
  • risk_factors (optional): Array of [{signal, value, weight, reason?}] entries for score transparency.

For AI Agents

The fastest way to make an AI agent VIBES-compliant is to drop the agent instruction file into its system prompt. The file is self-contained — no external dependencies, no downloads.

One-File Integration

Download vibes-agent.md and add it to your agent's instruction set. The file contains complete specifications for all assurance levels, including hash computation, manifest format, annotation schema, and agent behavior hooks.

Compatible agent instruction systems:

  • CLAUDE.md (Claude Code)
  • .cursorrules (Cursor)
  • .windsurfrules (Windsurf)
  • copilot-instructions.md (GitHub Copilot)
  • Custom agent system prompts

Agent behavior hook points

VIBES defines eight hook points where an agent must perform audit operations:

// Agent lifecycle hooks
1. SESSION START
   Read config.json, compute environment hash, write to manifest,
   append session start record
2. PRE-GENERATION (Medium+)
   Compute prompt hash, write to manifest
3. COMMAND EXECUTION
   Compute command hash, write to manifest, include command_hash in annotations
4. POST-GENERATION
   Append line annotations; include environment_hash (always)
   and prompt_hash (Medium+); capture reasoning and include
   reasoning_hash (High); emit caused_by edge record (always)
5. DELEGATION
   Create child session with parent_session_id, emit delegation record,
   emit delegated_to edge record
6. FILE READ / CONTEXT GATHERING
   Emit informed_by edge record linking annotation to file read context
7. DECISION POINT
   Create decision manifest entry, compute decision_hash,
   reference via decision_hash in subsequent annotations
8. COMMIT-TIME (Required)
   Backfill commit_hash on annotations created since last commit
What the agent file contains

The vibes-agent.md file includes:

  • Assurance level definitions (Low, Medium, High)
  • Complete file layout (.ai-audit/ directory structure)
  • Configuration schema with all fields and defaults
  • SHA-256 hash canonicalization rules with reference implementation
  • Manifest specification (all four context types)
  • Annotation log format (line, function, session records)
  • Agent behavior hooks (session start through commit-time)
  • Hook points for edge emission, delegation recording, and decision tracking
  • Concurrency rules for parallel tool instances
  • Security considerations for prompt data in public repos
  • Complete Medium assurance example (config + manifest + annotations)

Tool Provider Cosigning

Optional in VIBES 1.0, tool provider cosigning lets your tool add a second signature to attestations, proving the audit data was generated by your tool in real time. This protects users against data fabrication, post-hoc editing, and tool impersonation.

For full attestation implementation details, see the VERIFY specification.

What Users Get

When your tool cosigns, attestations are classified as tool-corroborated instead of just self-attested. This is the strongest trust signal in the VIBES system — it means an independent party (your tool) confirms the data is genuine. Registries, badges, and verification reports all display this trust tier.

Implementation Steps

Generate a signing key pair

Create an Ed25519 key pair for your tool. The private key stays in your infrastructure (server-side or secure enclave) and is never distributed to users.

Publish your public key

Serve your public key at https://{your-domain}/vibes/vibes-signing-keys.json. Include your keyid, algorithm, PEM-encoded public key, validity window, and status. Verifiers will use this endpoint to look up your key automatically.

Create a signing endpoint (recommended)

Build an endpoint that accepts a 32-byte PAE hash and returns an Ed25519 signature. This is the privacy-preserving approach — your endpoint never sees the actual audit data, only its cryptographic hash. Users pass this URL via --cosign-url.

Sign at data-creation time

The cosignature must be generated when your tool produces the audit data, not after the fact. This timing constraint is the core anti-fabrication property — it prevents users from obtaining your signature for data you didn't generate.

Signing endpoint specification
// POST https://your-domain/api/vibes/cosign

// Request: PAE hash from the client tool
{
  "pae_hash": "<base64-encoded SHA-256 of the PAE bytes>"
}

// Response: signature and key metadata
{
  "keyid": "your-tool-keyid-2026-01",
  "sig": "<base64url-encoded Ed25519 signature over PAE bytes>",
  "provider_name": "YourCompany",
  "tool_name": "YourTool"
}

The endpoint receives only the 32-byte hash, not the full audit data. It should authenticate the request (e.g., via API key or session token) to prevent abuse. Rate limiting is recommended.
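On the client side, building the request body is a short hashing step; a Python sketch (the field name matches the endpoint example above, while the PAE bytes themselves come from the VERIFY attestation flow and are treated here as opaque input):

```python
import base64
import hashlib
import json

def build_cosign_request(pae_bytes: bytes) -> str:
    """Build the JSON body for the cosigning endpoint from raw PAE bytes."""
    digest = hashlib.sha256(pae_bytes).digest()  # exactly 32 bytes
    # The endpoint sees only this hash, never the audit data itself.
    return json.dumps({"pae_hash": base64.b64encode(digest).decode("ascii")})
```

Sending only the hash is what makes the remote approach privacy-preserving: the provider can sign without ever receiving prompts, reasoning, or file contents.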

Well-known key endpoint format
// https://your-domain/vibes/vibes-signing-keys.json
{
  "provider": "YourCompany",
  "tool_name": "YourTool",
  "keys": [
    {
      "keyid": "your-tool-keyid-2026-01",
      "algorithm": "Ed25519",
      "public_key_pem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
      "valid_from": "2026-01-01T00:00:00Z",
      "valid_until": "2027-01-01T00:00:00Z",
      "status": "active"
    }
  ],
  "rotation_policy": "Annual rotation. Retired keys remain listed for verification.",
  "updated_at": "2026-01-01T00:00:00Z"
}

Requirements: HTTPS only, stable URL for the lifetime of signed attestations, rotate keys at least annually, keep retired keys listed with status: "retired" for verifying historical attestations.

Alternative: local cosigning key

If you distribute a signing key with your tool binary (simpler but harder to rotate), users can pass it directly:

// User runs vibecheck with local cosigning key
$ vibecheck attest --cosign-key /path/to/provider.key --cosign-keyid your-tool-keyid

This approach is suitable for development/testing or tools that run fully offline. For production, the remote signing service is recommended because it keeps the private key server-side and supports key rotation without tool updates.

Testing Your Implementation

Use the vibecheck CLI to validate your VIBES implementation. It checks file structure, hash integrity, schema compliance, and assurance level requirements.

// Install and run vibecheck
$ npx vibecheck

// Expected output for a passing Low assurance project:
VIBES Standard Compliance Check
Project: my-project
Assurance Level: low
Files found: config.json, manifest.json, annotations.jsonl
Hash integrity: PASS
Schema compliance: PASS
Result: PASS

What vibecheck validates at each level
// Validation checks by assurance level
All levels:
  [PASS] .ai-audit/ directory exists
  [PASS] config.json valid (standard, version, assurance_level, project_name)
  [PASS] manifest.json valid (standard, version, entries)
  [PASS] annotations.jsonl valid JSONL
  [PASS] All environment_hash references resolve in manifest
  [PASS] Hash integrity: recomputed hashes match manifest keys

Medium+ adds:
  [PASS] Prompt entries present in manifest
  [PASS] All prompt_hash references resolve in manifest
  [PASS] prompt_text is non-empty

High adds:
  [PASS] Reasoning entries present in manifest
  [PASS] All reasoning_hash references resolve in manifest
  [PASS] reasoning_text or reasoning_text_compressed or external blob present
  [PASS] External blobs exist at declared blob_path

Get Started

VIBES is an open standard. The spec, reference implementation, and validation tools are all freely available.

Integrate VIBES Into Your Tool or Agent

Whether you're building an AI coding assistant, an autonomous agent framework, a CI/CD plugin, or an audit platform — VIBES gives your users verifiable transparency with a standard data format. Add tool provider cosigning to prove your tool generated the data.

Learn More

Understand the standard before implementing it. Start with the overview, then dive into the assurance level that matches your target audience.

Back to Home