Medium Assurance
What was the AI asked to do?
Medium Assurance captures the full prompt text, prompt classification, and context files provided to the AI — linking every generated line of code back to the exact instruction that produced it.
Storage overhead: ~2–10 KB per annotation
Medium Assurance adds prompt context to Low Assurance's environment and command tracking. It answers "what instruction produced this code?" for every annotated line or function.
• prompt_text — full prompt or instruction text sent to the AI model (required)
• prompt_type — classification of the prompt (see types below) (required)
• prompt_context_files — files provided as context to the model, as relative paths from the project root (optional)

The prompt_type field uses one of seven enumerated values:
• annotation_id — content-derived SHA-256 identifier for stable cross-referencing from edge records (required)
• prompt_hash — SHA-256 hash linking annotation records to the prompt context in manifest.json (required at Medium+)
• decision_hash — SHA-256 hash linking annotations to a decision context entry when code resulted from evaluating multiple approaches (optional)
• assurance_level — set to "medium" on all annotation records (required)
• risk_score — PRISM score (0.0–1.0); more signals are available at Medium than at Low (adds prompt_token_count) (optional)
• risk_factors — array of signal assessments with transparency into which factors drove the score (optional)

Structured decision records are mandatory at all assurance levels when the AI evaluates multiple approaches. They capture the options considered, the selection made, and the rationale behind it.
• decision_point — human-readable description of the decision being made (required)
• options — array of objects, each with id, description, pros, and cons (required)
• selected — the id of the chosen option (required)
• rationale — explanation of why the selected option was chosen (required)
• confidence — confidence level: high, medium, or low (required)
• tool_name, tool_version, model_name, model_version — environment context (inherited)
• command_text, command_type — command context (inherited)
• anchor_context, anchor_hash, file_content_hash — content anchoring fields inherited from Low; RECOMMENDED at Medium and above (optional)
• risk_score, risk_factors — PRISM extension; more signals available at Medium (prompt_token_count) (optional)

Knowing what the AI was asked is qualitatively different from knowing which AI was used. Prompt context transforms your audit trail from an inventory into an evidence record.
Given the same model, prompt, and context files, output can be reproduced and compared. If a function behaves unexpectedly, you can re-run the exact instruction that created it to verify whether the model produces consistent results or whether parameters have drifted.
Auditors can verify that the AI followed instructions correctly. Each code change links to the prompt that triggered it — a complete chain from developer intent to generated output. "Show me the instruction that produced validate_signup()" becomes a single query.
Regulated industries (finance, healthcare, automotive) can demonstrate that AI-generated code was produced from specific, documented instructions. Medium Assurance provides the evidence trail that compliance frameworks increasingly require for AI-assisted development.
Reviewers see not just what changed, but what the developer asked the AI to do. A diff showing a new rate limiter gains context when accompanied by the prompt: "Add rate limiting to the signup endpoint. Max 5 attempts per IP per hour." Review becomes verification, not guesswork.
When a bug or vulnerability is found in AI-generated code, trace back to the exact prompt that produced it. Was the instruction ambiguous? Did the prompt lack important constraints? Medium Assurance turns "what went wrong?" into "what was asked that led to this?"
Medium Assurance stores prompt context in the manifest and links it to annotations via SHA-256 hash — the same content-addressed pattern used for environment and command context.
Stored in manifest.json — records the exact instruction sent to the AI.
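A prompt context entry might look like the following sketch. The field names come from the specification above; the layout and the prompt_type value shown are illustrative placeholders, not normative (the prompt text is borrowed from the rate-limiter example later in this page):

```json
{
  "prompt_text": "Add rate limiting to the signup endpoint. Max 5 attempts per IP per hour.",
  "prompt_type": "feature",
  "prompt_context_files": ["src/routes/signup.py"]
}
```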
Stored in annotations.jsonl — note the prompt_hash field linking code to the instruction that produced it.
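An annotation line at Medium might look like this sketch (hashes truncated, values illustrative):

```json
{"annotation_id": "sha256:9f2b…", "assurance_level": "medium", "prompt_hash": "sha256:a3f1…", "risk_score": 0.34}
```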
Stored in manifest.json — records structured decisions when the AI evaluates multiple approaches. Referenced from annotations via decision_hash.
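A decision context entry, using the required fields listed above, might look like the following sketch (the scenario and option ids are invented for illustration):

```json
{
  "decision_point": "Choose a rate-limiting strategy for the signup endpoint",
  "options": [
    {"id": "fixed-window", "description": "Fixed window counter", "pros": ["simple"], "cons": ["bursts at window edges"]},
    {"id": "sliding-window", "description": "Sliding window log", "pros": ["smooth limiting"], "cons": ["more storage"]}
  ],
  "selected": "sliding-window",
  "rationale": "Smoother limiting is worth the extra storage for a low-volume endpoint.",
  "confidence": "medium"
}
```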
Medium Assurance is designed for teams that need to trace AI-generated code back to the instructions that produced it. If "what tool was used?" isn't enough and you need "what was it asked?", this is your level.
If you only need to know which tool and model were used, Low Assurance is simpler and has lower overhead. If you also need to know how the AI reasoned (chain-of-thought traces), upgrade to High Assurance.
Beyond source code: Medium Assurance prompt records are equally applicable to non-code workflows. Agents executing financial transactions, infrastructure changes, or content moderation decisions can use the same prompt capture model for audit compliance.
Medium Assurance captures prompt text, which may contain sensitive information. Review your audit data before committing to public repositories.
Prompts can inadvertently include API keys, internal documentation references, proprietary business logic, or personal information. Projects operating at Medium Assurance should:
• Review manifest.json before committing to public repositories
• Use .gitignore to exclude manifest.json if prompts contain confidential instructions
• Consider encrypting sensitive manifest entries at rest
• Establish prompt hygiene practices — avoid pasting credentials into AI chat sessions
Medium Assurance builds on Low with moderate additional effort. Tools must capture the prompt text before each generation event and compute its content-addressed hash.
Compute the environment context hash from tool name, version, model name, and version. If the hash doesn't exist in manifest.json, add it. Append a "session" / "start" record to annotations.jsonl.
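A session start record might look like the following sketch; the field names here are assumptions for illustration, since the specification text above only says a "session" / "start" record is appended:

```json
{"record_type": "session", "event": "start", "session_id": "…", "timestamp": "2025-01-15T09:30:00Z", "environment_hash": "sha256:c1d2…"}
```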
Before sending the request to the AI model, capture the full prompt text, classify the prompt_type, and record which files were provided as context. Compute the prompt hash using canonical JSON (sorted keys, no whitespace) → SHA-256. Write the prompt context entry to manifest.json if the hash is new.
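The canonical-JSON hashing step can be sketched in Python. The function name and the entry's values are illustrative, not part of the specification; only the hashing recipe (sorted keys, no whitespace, SHA-256) comes from the text above:

```python
import hashlib
import json

def content_hash(entry: dict) -> str:
    """Hash a manifest entry as canonical JSON: sorted keys, no whitespace."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

prompt_entry = {
    "prompt_text": "Add rate limiting to the signup endpoint.",
    "prompt_type": "feature",  # illustrative value; see the enumerated types
    "prompt_context_files": ["src/routes/signup.py"],
}
prompt_hash = content_hash(prompt_entry)
```

Because key order is normalized away, identical content always maps to the same manifest entry, which is what makes the hash a stable cross-reference.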
Append line annotation records to annotations.jsonl as in Low Assurance, but include the prompt_hash field. This links every generated line or function to the specific instruction that produced it.
If risk_scoring.enabled is true in config.json, compute a PRISM score from available signals. At Medium, you have access to prompt_token_count in addition to Low-level signals. Include risk_score and risk_factors in the annotation record. PRISM is a standalone extension on top of VIBES.
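PRISM's actual scoring formula is defined by the extension itself; purely as an illustration of how normalized signals might combine into a 0.0–1.0 risk_score with an accompanying risk_factors array, here is a weighted-average sketch (signal names, weights, and the token budget are invented):

```python
def prism_score(signals: dict, weights: dict):
    """Illustrative weighted average of normalized signals -- not the normative PRISM formula."""
    factors, weighted_sum, total_weight = [], 0.0, 0.0
    for name, value in signals.items():
        w = weights.get(name, 0.0)
        factors.append({"signal": name, "value": value, "weight": w})
        weighted_sum += w * value
        total_weight += w
    score = weighted_sum / total_weight if total_weight else 0.0
    return round(score, 3), factors

# At Medium, prompt_token_count becomes available as a signal
# (normalized here against an assumed budget of 4000 tokens).
signals = {"prompt_token_count": min(3200 / 4000, 1.0), "lines_generated": 0.4}
score, factors = prism_score(signals, {"prompt_token_count": 0.5, "lines_generated": 0.5})
```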
When the AI evaluates multiple approaches, create a decision context manifest entry recording the options, selection, and rationale. Compute its content-addressed hash using canonical JSON → SHA-256. Reference via decision_hash in subsequent annotations produced by that decision.
When the AI executes a shell command, file operation, or API call, compute a command context hash and add it to manifest.json if new. Include the command_hash in associated annotation records.
Backfill the commit_hash field on all annotation records created since the last commit. This field is required — every annotation must be linked to its git commit.
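The backfill step can be sketched as a small helper; the function name and file path are illustrative, and only records still missing a commit_hash are touched:

```python
import json

def backfill_commit_hash(path: str, commit_hash: str) -> None:
    """Set commit_hash on every annotation record that does not have one yet."""
    with open(path) as f:
        records = [json.loads(line) for line in f if line.strip()]
    for record in records:
        record.setdefault("commit_hash", commit_hash)
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record, sort_keys=True) + "\n")

# Typically run from a post-commit hook, e.g.:
#   import subprocess
#   head = subprocess.run(["git", "rev-parse", "HEAD"],
#                         capture_output=True, text=True, check=True).stdout.strip()
#   backfill_commit_hash("annotations.jsonl", head)
```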
Append a "session" / "end" record to annotations.jsonl with the session ID and timestamp.
The key addition is step 2: capturing prompt text before generation. The hash computation follows the same content-addressed pattern as environment and command hashes. For detailed implementation guidance, see the Implementors Guide or the full RFC specification.
Medium Assurance data is the foundation for VERIFY attestation (prompt-level traceability) and PRISM risk scoring (prompt complexity as a PRISM signal).