Low Assurance

What tool and model touched this code?

Low Assurance is the entry point to VIBES. It records the AI tool name, version, model name, and model version for every AI-generated line or function — the minimum viable audit trail for transparency and provenance tracking.

Storage overhead: ~200 bytes per annotation

What This Level Captures

Low Assurance records the identity of the AI toolchain — enough to answer "which AI model and tool generated this function?" for every annotated line of code.

It is organized into three record categories:

- Environment Context — the identity of the AI tool and model
- Command Context — tool invocations and shell commands
- Annotations — per-line and per-function provenance records

For the full PRISM framework (Provenance & Risk Intelligence Scoring Model), see the PRISM extension. PRISM is a standalone extension on top of VIBES.

Session Record

Session records track the lifecycle of AI coding sessions. Beyond the core fields (session ID, timestamps, environment hash), session records support optional fields for multi-agent hierarchies:

Edge Records

Edge tracking is mandatory at all assurance levels. Edge records capture directed causal and dependency relationships between audit events, enabling graph-based provenance queries.

Delegation Records

Delegation records capture multi-agent orchestration metadata when a parent session spawns sub-sessions for task delegation.

For agent governance and learning built on delegation data, see EVOLVE.

Line Number Stability: line_start and line_end are best-effort temporal references, accurate relative to the recorded commit_hash. They may drift after rebases. Include anchor fields (anchor_context, anchor_hash, file_content_hash) to enable re-matching after line shifts.

Why This Data Matters

Even basic tool and model identification unlocks capabilities that are impossible without structured audit data. Low Assurance is the foundation that all higher levels build on.

Open Source Transparency

Contributors and users know exactly which AI tools were involved in a project. A contributor can see that validate_signup() was generated by Claude Code using claude-opus-4-5. Users can assess AI involvement before depending on a library.

Retroactive Risk Assessment

When a model is found to be compromised, biased, or producing systematically vulnerable code, organizations can immediately identify all code it generated. Without Low Assurance, this is a manual, error-prone audit across the entire codebase.

Compliance Baseline

For organizations beginning AI governance, Low Assurance provides the minimum viable audit trail. It answers the fundamental question procurement teams and compliance officers ask: "Do we know which parts of our code were written by AI?"

Badge Verification

Low Assurance data backs shields.io-style badges showing AI-generated percentage. The vibecheck CLI validates that badge claims match actual audit data — moving from self-reported claims to verifiable facts.
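As an illustration of the underlying arithmetic, here is a minimal sketch of computing an AI-generated percentage from line annotations. The function name and inputs are hypothetical; the vibecheck CLI's exact computation is defined by the specification.

```python
def ai_generated_percentage(annotations: list, total_line_counts: dict) -> float:
    """Count distinct AI-annotated lines per file, divide by total lines.

    total_line_counts maps file_path -> line count for the whole project.
    """
    ai_lines = {}  # file_path -> set of annotated line numbers
    for record in annotations:
        if record.get("type") == "line":
            lines = ai_lines.setdefault(record["file_path"], set())
            # Sets deduplicate overlapping annotations on the same lines.
            lines.update(range(record["line_start"], record["line_end"] + 1))
    total = sum(total_line_counts.values())
    if total == 0:
        return 0.0
    annotated = sum(len(lines) for lines in ai_lines.values())
    return round(100.0 * annotated / total, 1)

pct = ai_generated_percentage(
    [{"type": "line", "file_path": "src/auth.py", "line_start": 1, "line_end": 45}],
    {"src/auth.py": 100, "src/db.py": 100},
)  # 45 AI lines out of 200 total -> 22.5
```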

Supply Chain Visibility

SBOMs tell you what dependencies your software uses. Low Assurance tells you what AI tools built it. Together, they provide full supply chain transparency — from the libraries you import to the models that wrote your code.

Example Data

Here is what Low Assurance data looks like in practice. All context is stored once in the manifest and referenced by SHA-256 hash from annotations.

Environment Context Record

Stored in manifest.json — records the AI toolchain identity.

"e7a3f1b2c4d5e6f7...": { "type": "environment", "tool_name": "Claude Code", "tool_version": "1.5.2", "model_name": "claude-opus-4-5", "model_version": "20251101", "model_parameters": { "temperature": 0.7, "max_tokens": 4096 }, "tool_extensions": ["filesystem", "git"], "created_at": "2026-02-03T14:30:00Z" }

Command Context Record

Stored in manifest.json — records tool invocations and shell commands.

"d4e5f6a7b8c9d0e1...": { "type": "command", "command_text": "npm install express", "command_type": "shell", "command_exit_code": 0, "command_output_summary": "added 57 packages in 2.3s", "working_directory": "src/", "created_at": "2026-02-03T14:30:10Z" }

Line Annotation Record

Stored in annotations.jsonl — one line per annotation, referencing the environment hash.

{ "type": "line", "file_path": "src/auth.py", "line_start": 1, "line_end": 45, "environment_hash": "e7a3f1b2c4d5e6f7...", "action": "create", "timestamp": "2026-02-03T14:30:15Z", "commit_hash": "abc123def456", "session_id": "550e8400-e29b-41d4-a716-446655440000", "assurance_level": "low", "anchor_context": "def validate_signup(email, password):\n if not email or '@' not in email:\n raise ValueError('Invalid email')", "anchor_hash": "a9b8c7d6...", "file_content_hash": "1a2b3c4d..." }
Function Annotation Record

Stored in annotations.jsonl — annotates a whole function rather than a line range.

```json
{
  "type": "function",
  "file_path": "src/auth.py",
  "function_name": "validate_signup",
  "function_signature": "def validate_signup(email: str, password: str) -> bool",
  "environment_hash": "e7a3f1b2c4d5e6f7...",
  "action": "create",
  "timestamp": "2026-02-03T14:30:15Z",
  "commit_hash": "abc123def456",
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "assurance_level": "low",
  "anchor_context": "def validate_signup(email: str, password: str) -> bool:\n    if not email or '@' not in email:\n        raise ValueError('Invalid email')",
  "anchor_hash": "a9b8c7d6...",
  "file_content_hash": "1a2b3c4d..."
}
```
Edge Record

Stored in annotations.jsonl — captures a directed causal relationship between audit events.

```json
{
  "type": "edge",
  "edge_type": "caused_by",
  "source_ref": "<annotation_id>",
  "source_type": "annotation",
  "target_ref": "<context_hash>",
  "target_type": "context",
  "timestamp": "2026-02-18T14:00:00Z",
  "session_id": "550e8400-e29b-41d4-a716-446655440000"
}
```
Delegation Record

Stored in annotations.jsonl — records a parent session delegating a task to a sub-agent.

```json
{
  "type": "delegation",
  "parent_session_id": "parent-uuid",
  "child_session_id": "child-uuid",
  "timestamp": "2026-02-18T14:00:00Z",
  "task_description": "Implement authentication module",
  "delegated_files": ["src/auth.rs", "src/middleware.rs"],
  "delegation_type": "task"
}
```
Session Lifecycle Records

Session start — records when an AI coding session begins:

```json
{
  "type": "session",
  "event": "start",
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": "2026-02-03T14:00:00Z",
  "environment_hash": "e7a3f1b2c4d5e6f7...",
  "assurance_level": "low",
  "description": "Adding signup validation"
}
```

Session end — records when the session concludes:

```json
{
  "type": "session",
  "event": "end",
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": "2026-02-03T15:30:00Z"
}
```

Who Should Use This Level

Low Assurance is designed for teams and projects that want AI transparency without overhead. If you're unsure where to start, start here.

If you need to know what the AI was asked (prompt text, context files), upgrade to Medium Assurance. If you need to know how the AI reasoned (chain-of-thought traces), upgrade to High Assurance.

Beyond source code: While the examples here focus on software development, the Low Assurance data model applies to any AI-driven workflow. The same environment and annotation records can track AI involvement in content generation, data processing, or agent-executed tasks.

Implementation

A basic Low Assurance implementation takes approximately 200 lines of code. Here's how tools emit this data at each hook point.

On session start

Compute the environment context hash from tool name, version, model name, and version. If the hash doesn't exist in manifest.json, add it. Append a "session" / "start" record to annotations.jsonl with the environment hash.
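The steps above can be sketched in a few lines of Python. The canonicalization scheme (sorted-key JSON over the identity fields, hashed with SHA-256) and the function names are illustrative assumptions; the RFC defines the exact hash input.

```python
import hashlib
import json

def environment_hash(env: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so identical
    # environments always produce the same hash. (Assumed scheme.)
    canonical = json.dumps(env, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def on_session_start(manifest: dict, env: dict, session_id: str, timestamp: str) -> dict:
    """Register the environment context in the manifest (if new) and
    build the session-start record to append to annotations.jsonl."""
    env_hash = environment_hash(env)
    if env_hash not in manifest:
        manifest[env_hash] = {"type": "environment", **env}
    return {
        "type": "session",
        "event": "start",
        "session_id": session_id,
        "timestamp": timestamp,
        "environment_hash": env_hash,
        "assurance_level": "low",
    }

manifest = {}
start_record = on_session_start(
    manifest,
    {"tool_name": "Claude Code", "tool_version": "1.5.2",
     "model_name": "claude-opus-4-5", "model_version": "20251101"},
    "550e8400-e29b-41d4-a716-446655440000",
    "2026-02-03T14:00:00Z",
)
```

Because the hash is content-derived, repeated sessions with the same toolchain reuse the existing manifest entry instead of adding duplicates.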

On post-generation

After the AI model returns generated code, append line annotation records to annotations.jsonl. Each record includes the file path, line range, environment hash, action type (create, modify, delete), and timestamp.
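A minimal sketch of building and appending such a record, assuming the field set shown in the examples above (function names are illustrative):

```python
import io
import json

def line_annotation(file_path: str, line_start: int, line_end: int,
                    environment_hash: str, action: str,
                    timestamp: str, session_id: str) -> dict:
    # One record per contiguous generated region; action is create/modify/delete.
    return {
        "type": "line",
        "file_path": file_path,
        "line_start": line_start,
        "line_end": line_end,
        "environment_hash": environment_hash,
        "action": action,
        "timestamp": timestamp,
        "session_id": session_id,
        "assurance_level": "low",
    }

def append_jsonl(stream, record: dict) -> None:
    # annotations.jsonl is append-only: one JSON object per line.
    stream.write(json.dumps(record, sort_keys=True) + "\n")

log = io.StringIO()  # stands in for an open annotations.jsonl handle
record = line_annotation("src/auth.py", 1, 45, "e7a3f1b2c4d5e6f7...",
                         "create", "2026-02-03T14:30:15Z",
                         "550e8400-e29b-41d4-a716-446655440000")
append_jsonl(log, record)
```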

On tool invocation

When the AI executes a shell command, file operation, or API call, compute a command context hash and add it to manifest.json if new. Include the command_hash in associated annotation records.
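This mirrors the environment-hash flow; a sketch, assuming the same canonical-JSON hashing scheme (the RFC defines the exact input):

```python
import hashlib
import json

def command_hash(command: dict) -> str:
    # Same assumed canonicalization as environment hashes: sorted-key JSON, SHA-256.
    canonical = json.dumps(command, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def on_tool_invocation(manifest: dict, command: dict) -> str:
    """Deduplicate command contexts by hash; return the hash to embed
    in associated annotation records."""
    h = command_hash(command)
    if h not in manifest:
        manifest[h] = {"type": "command", **command}
    return h

manifest = {}
cmd = {"command_text": "npm install express", "command_type": "shell",
       "command_exit_code": 0}
first = on_tool_invocation(manifest, cmd)
second = on_tool_invocation(manifest, cmd)  # identical command: no new entry
```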

On delegation

When spawning a sub-agent, append a delegation record to annotations.jsonl and start a child session with parent_session_id set to the current session's ID. This preserves the multi-agent provenance chain.
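A sketch of the delegation step, assuming child session IDs are freshly generated UUIDs (the function name is illustrative):

```python
import uuid

def on_delegation(parent_session_id: str, task_description: str,
                  delegated_files: list, timestamp: str):
    """Build the delegation record and the child session ID that the
    sub-agent's session-start record will reuse as its session_id."""
    child_session_id = str(uuid.uuid4())
    record = {
        "type": "delegation",
        "parent_session_id": parent_session_id,
        "child_session_id": child_session_id,
        "task_description": task_description,
        "delegated_files": delegated_files,
        "delegation_type": "task",
        "timestamp": timestamp,
    }
    return record, child_session_id

record, child_id = on_delegation(
    "550e8400-e29b-41d4-a716-446655440000",
    "Implement authentication module",
    ["src/auth.rs", "src/middleware.rs"],
    "2026-02-18T14:00:00Z",
)
```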

On commit (required)

When changes are committed to git, backfill the commit_hash field on all annotation records created since the last commit. This field is required — every annotation must be linked to its git commit, enabling queries like "what AI tool generated code in commit abc123?"
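The backfill is a simple in-place update over the pending records; a sketch (function name is illustrative):

```python
def backfill_commit_hash(annotations: list, commit_hash: str) -> int:
    """Set commit_hash on every annotation that does not have one yet;
    return how many records were updated."""
    updated = 0
    for record in annotations:
        if record.get("type") in ("line", "function") and not record.get("commit_hash"):
            record["commit_hash"] = commit_hash
            updated += 1
    return updated

pending = [
    {"type": "line", "file_path": "src/auth.py"},                      # needs backfill
    {"type": "line", "file_path": "src/db.py", "commit_hash": "old1"}, # already linked
    {"type": "session", "event": "start"},  # session records carry no commit_hash
]
count = backfill_commit_hash(pending, "abc123def456")  # updates only the first record
```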

On rebase detection (recommended)

If git history is rewritten (rebase, squash, amend), scan annotations whose commit_hash matches rewritten commits. Use the file_content_hash fast path: if the file is unchanged, skip. Otherwise, search for the anchor_context in the rewritten file. Emit a rebase_remap record with updated line numbers, or a rebase_orphan record if the anchor is not found. Add a supersedes edge linking the new annotation to the original.
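The anchor re-matching step can be sketched as an exact substring search; a real implementation might add fuzzy matching, but a minimal version looks like this (function name is illustrative):

```python
def rematch_anchor(anchor_context: str, new_file_text: str):
    """Return the anchor's new 1-based (line_start, line_end) in the
    rewritten file, or None if the anchor is orphaned."""
    index = new_file_text.find(anchor_context)
    if index == -1:
        return None  # caller emits a rebase_orphan record
    # Lines before the match position give the new starting line.
    line_start = new_file_text.count("\n", 0, index) + 1
    line_end = line_start + anchor_context.count("\n")
    return line_start, line_end

anchor = "def validate_signup(email, password):\n    if not email"
# After a rebase, an import was inserted above the function:
rewritten = "import re\n\n" + anchor + " or '@' not in email:\n        raise ValueError\n"
```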

On session end

Append a "session" / "end" record to annotations.jsonl with the session ID and timestamp. This closes the session lifecycle and enables session-duration analysis.

Edge record emission: Edge records MUST be emitted after code generation (caused_by) to capture causal links between prompts and generated code. Edge records SHOULD be emitted after file reads (informed_by) to capture what context informed the AI's output.
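Emitting a caused_by edge is a matter of filling in the edge-record fields shown in the example above; a sketch (function name and the annotation ID value are illustrative):

```python
def caused_by_edge(annotation_id: str, context_hash: str,
                   timestamp: str, session_id: str) -> dict:
    # Links generated code (the annotation) back to the context that caused it.
    return {
        "type": "edge",
        "edge_type": "caused_by",
        "source_ref": annotation_id,
        "source_type": "annotation",
        "target_ref": context_hash,
        "target_type": "context",
        "timestamp": timestamp,
        "session_id": session_id,
    }

edge = caused_by_edge("ann-001", "e7a3f1b2c4d5e6f7...",
                      "2026-02-18T14:00:00Z",
                      "550e8400-e29b-41d4-a716-446655440000")
```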

For detailed implementation guidance including hash computation, file format specs, and concurrency handling, see the Implementors Guide or the full RFC specification.

Explore All Assurance Levels

- Low Assurance
- Medium Assurance
- High Assurance

Back to the VIBES Standard overview →

Low Assurance is the foundation for VERIFY attestation and PRISM risk scoring.
