One foundation standard for AI assurance data. Four specifications that extend it into cryptographic attestation, risk scoring, and agent governance — addressing fundamentally different problem spaces from a single, shared data substrate.
AI is generating production code, executing workflows, and making architectural decisions at scale. Developers, agents, and autonomous systems produce artifacts daily with no standard way to record what was generated, by whom, or why.
This creates three compounding blind spots:
This is not theoretical. CrowdStrike researchers found that LLM vulnerability rates jump nearly 50% under certain prompt conditions. Without audit data, you cannot identify which code was affected.
See why attestation matters →
The VIBES ecosystem addresses these gaps with four complementary standards: VIBES captures the data, VERIFY proves it's authentic, PRISM scores the risk, and EVOLVE turns it into actionable intelligence.
Everything starts with data. VIBES (Verifiable Inventory of Bot-Engineered Signals) is the base standard — a structured, tool-agnostic format for recording AI involvement. It defines three assurance levels that progressively capture more about the context and execution of AI tools and agents. Start simple and increase detail as your needs grow.
"Which AI model and tool generated this function?"
Records the tool name, version, model name, and version for every AI-generated line or function. The minimum viable audit trail.
~200 bytes per annotation
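As a rough illustration of what a minimal record at this level might contain, here is a sketch in Python. The field names and values are hypothetical, chosen for illustration rather than taken from the VIBES schema itself:

```python
import json

# Hypothetical minimal annotation; field names are illustrative,
# not drawn from the VIBES spec.
annotation = {
    "tool": "example-ai-tool",       # tool name (hypothetical value)
    "tool_version": "1.4.2",
    "model": "example-model",
    "model_version": "2025-01",
    "file": "src/parser.py",
    "lines": [42, 57],               # span of AI-generated code
}

encoded = json.dumps(annotation).encode("utf-8")
print(len(encoded))  # falls roughly in the ~200-byte range cited above
```

Even at this level, each record answers the "which tool, which model" question for a specific span of code.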
Learn more →
"What prompt produced this code?"
Adds the full prompt text, prompt type, and context files to every annotation. Enables reproducibility and audit trails for regulated industries.
~2–10 KB per annotation
Learn more →
"What was the model thinking when it wrote this?"
Captures the full chain-of-thought and reasoning traces. For safety-critical systems, security forensics, and AI research.
~10–500 KB per annotation
Learn more →
VIBES also supports context graphs for tracking causal relationships between code changes, and multi-agent delegation hierarchies for orchestrated workflows.
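To make the context-graph idea concrete, a minimal sketch follows. The node and edge shapes here are assumptions for illustration; the actual VIBES graph schema may differ:

```python
# Illustrative context graph: nodes are change IDs, edges record
# "caused_by" relationships between changes. Schema is hypothetical.
context_graph = {
    "nodes": {
        "chg-001": {"desc": "add input validation"},
        "chg-002": {"desc": "refactor validator into helper"},
    },
    "edges": [
        {"from": "chg-002", "to": "chg-001", "type": "caused_by"},
    ],
}

def causes(change_id):
    """Return the change IDs that causally preceded change_id."""
    return [e["to"] for e in context_graph["edges"] if e["from"] == change_id]

print(causes("chg-002"))  # ['chg-001']
```

Walking edges like these lets an auditor trace why a given change exists, not just what it contains.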
Beyond Source Code: While the examples and tooling above focus on software development, VIBES is a general assurance data format. Its annotation model, context graphs, and attestation pipeline apply to any domain where AI decisions need to be recorded, verified, and audited.
VIBES is tool-agnostic — it works with Claude Code, Cursor, Windsurf, Copilot, CLINE, or any AI tool. All data lives in a .ai-audit/ directory alongside your code, tracked in git.
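One plausible way to persist a record into such a directory is content-addressed JSON files. The layout and filename scheme below are assumptions for illustration, not mandated by the spec (a temporary directory stands in for a repository root):

```python
import hashlib
import json
import pathlib
import tempfile

# Stand-in for a git repository root.
repo = pathlib.Path(tempfile.mkdtemp())
audit_dir = repo / ".ai-audit"
audit_dir.mkdir()

annotation = {"tool": "example-ai-tool", "model": "example-model",
              "file": "src/app.py", "lines": [10, 25]}

# Content-address the record so later tampering is detectable.
payload = json.dumps(annotation, sort_keys=True).encode("utf-8")
digest = hashlib.sha256(payload).hexdigest()
(audit_dir / f"{digest[:12]}.json").write_bytes(payload)

print(sorted(p.name for p in audit_dir.iterdir()))
```

Because the files live alongside the code and are tracked in git, the audit trail versions with the project itself.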
VIBES produces the base data. Each extension both consumes that data and defines additional fields of its own — VERIFY adds cryptographic signatures, PRISM adds risk scores, EVOLVE adds governance and decision records. They are true extensions from both a data-generation and use-case perspective. Adopt just VIBES, or layer on any combination as your needs evolve.
The base data standard. Defines what to record, how to hash it, and where to store it.
Read the VIBES spec →
The security attestation extension. Cryptographic proof that your audit data is authentic and untampered.
Read the VERIFY spec →
The risk scoring extension. Computes severity bands from audit data for CI/CD gating and security triage.
Read the PRISM spec →
The agent learning extension. Governance frameworks, decision records, and feedback loops for self-improving agents.
Read the EVOLVE spec →
Add transparency to your projects in minutes. Drop a badge in your README, run vibecheck to validate, and show the world how your code was built.
Get started with badges →
Track AI provenance across your codebase for compliance and security. Know which models generated which code — and reassess risk retroactively when threats emerge.
Explore the standard →
Integrate VIBES into your AI coding tool. A basic Low implementation takes about 200 lines of code. Add VERIFY for attestation, PRISM for risk scoring, and EVOLVE for agent learning. Be the tool that proves its work.
Implementation guide →
The VIBES ecosystem is built on a simple principle: one shared data substrate, extended for each problem space. Every extension reads from the base VIBES data and contributes new data of its own — so adopting one extension doesn't require the others, but combining them creates compounding value.
VERIFY adds cryptographic envelopes and signatures that prove VIBES data hasn't been tampered with. PRISM computes and stores risk scores derived from VIBES signals. EVOLVE introduces delegation records, decision graphs, and governance metadata that power agent learning. Each extension enriches the audit trail with its own data while building on the same foundation — and the ecosystem is open for future extensions we haven't imagined yet.
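A toy sketch of how two extensions might layer on the same record: a VERIFY-style integrity tag over a canonical annotation, and a PRISM-style severity band derived from a risk score. The HMAC key, the band names, and the thresholds are all invented for illustration; a real VERIFY deployment would likely use asymmetric signatures rather than a shared secret:

```python
import hashlib
import hmac
import json

record = {"file": "src/auth.py", "model": "example-model", "lines": [1, 80]}
payload = json.dumps(record, sort_keys=True).encode("utf-8")

# VERIFY-style attestation (toy): an HMAC tag over the canonical record.
key = b"demo-secret-key"  # hypothetical shared secret, illustration only
tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Any later verifier recomputes the tag and compares in constant time.
assert hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).hexdigest())

# PRISM-style scoring (toy): map a numeric risk score to a severity band.
# Band names and thresholds here are invented, not from the PRISM spec.
def severity_band(score: float) -> str:
    if score >= 0.8:
        return "critical"
    if score >= 0.5:
        return "high"
    if score >= 0.2:
        return "medium"
    return "low"

print(severity_band(0.65))  # high
```

The point of the sketch is the layering: both operations consume the same canonical VIBES record without modifying it, each contributing its own derived data.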
VIBES is open, free, and community-driven. Start by adding a badge to your project, submitting to the registry, or building tools on top of the standard.