These tools help you implement the VIBES ecosystem — from generating .ai-audit/ data during development, to attesting integrity with VERIFY, to scoring risk with PRISM. Whether you're adding transparency to a personal project or rolling out AI provenance tracking across an organization, these tools make adoption practical.
A Rust CLI tool, vibecheck, spanning all four standards — it validates .ai-audit/ data (VIBES), signs cryptographic attestations (VERIFY), and computes risk scores (PRISM). The reference implementation for the VIBES ecosystem.
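The validation step turns on hash integrity: each audit file must still match the digest recorded for it. A minimal sketch of that check, assuming a manifest laid out as a JSON map of relative path to hex digest (the manifest name and layout are illustrative assumptions, not the actual VIBES on-disk format):

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_audit_dir(audit_dir: Path, manifest_name: str = "manifest.json") -> list:
    """Compare each file's digest against the manifest; return mismatched paths.

    The manifest format (JSON map of relative path -> hex digest) is an
    assumption for illustration, not the VIBES v1.1 schema.
    """
    manifest = json.loads((audit_dir / manifest_name).read_text())
    mismatches = []
    for rel_path, expected in manifest.items():
        if sha256_file(audit_dir / rel_path) != expected:
            mismatches.append(rel_path)
    return mismatches
```

An empty result means every recorded file is intact; anything else names the files whose contents have drifted from what was attested.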
A development-only tool for generating synthetic .ai-audit/ directories and validating v1.1 spec conformance. Designed for tool implementors, spec maintainers, and CI pipelines — not for end users. Generates controlled test scenarios including rebase flows, PRISM severity spectrums, concurrency stress, and intentionally malformed data for negative testing.
A cross-platform desktop application for orchestrating multiple AI coding agents simultaneously. Maestro supports Claude Code, OpenAI Codex, OpenCode, and Factory Droid with parallel agent management, git worktree integration, auto-run playbooks, group chat coordination, and a mobile remote access interface. Designed for power users who run lengthy unattended automation sessions across parallel projects.
A VIBES-compliant hook for Claude Code that automatically generates .ai-audit/ data as you work. Captures environment context, prompt hashes, session boundaries, and line-level annotations in real time — no workflow changes required. Configurable assurance level from Low to High with optional chain-of-thought capture. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
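The prompt-hash idea above can be sketched simply: the hook stores a digest of each prompt, so the audit trail can later prove which prompt produced a change without retaining the prompt text itself. The field names and choice of SHA-256 below are assumptions for illustration; the VIBES spec defines the actual record format.

```python
import hashlib
import time

def prompt_record(prompt: str, session_id: str) -> dict:
    """Build an illustrative audit entry: a prompt digest plus session context.

    Field names here are hypothetical, not the VIBES v1.1 schema.
    """
    return {
        "session_id": session_id,
        "timestamp": time.time(),
        # Digest only -- the prompt text itself need not be stored.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
```

Anyone holding the original prompt can recompute the digest and confirm the match; anyone without it learns nothing about the prompt's contents.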
A VIBES integration for Google Gemini CLI that emits standards-compliant audit data during interactive and scripted sessions. Tracks Gemini model variants, tool use, and file modifications with automatic annotation generation. Supports all three assurance levels with configurable output to .ai-audit/. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
A VIBES plugin for OpenAI Codex CLI that records AI provenance data for every code generation and edit session. Captures OpenAI model metadata, prompt context, and file-level change tracking with automatic annotation output. Integrates with Codex's sandbox execution model to record tool invocations and shell commands alongside code changes. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
Source available at launch.

Integrate VIBES tooling into your development workflow for maximum value:
- vibecheck verify to catch schema violations before they enter history
- vibecheck anchors to detect and remap drifted line annotations
- vibecheck risk --ci to gate merges on PRISM thresholds
- vibecheck risk --format json and inject risk summaries into PR descriptions
- vibecheck attest to sign and submit a cryptographic attestation to the public registry

See the Implementors Guide for integration patterns, the VIBES standard for data format details, VERIFY for attestation integration, and PRISM for risk scoring setup.
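A merge gate around the risk-scoring step might look like the sketch below, which parses a JSON risk report and returns a CI exit code. The field name ("score") and the default threshold are assumptions for illustration; the actual vibecheck output schema is defined by the PRISM standard.

```python
import json

def gate(report_json: str, threshold: float = 7.0) -> int:
    """Return a CI exit code: 0 if the PRISM score is under threshold, 1 otherwise.

    Assumes a report shaped like {"score": 4.2, ...} -- a hypothetical
    layout, not the real vibecheck risk --format json output.
    """
    score = json.loads(report_json)["score"]
    if score >= threshold:
        print("PRISM score %.1f >= %.1f: blocking merge" % (score, threshold))
        return 1
    print("PRISM score %.1f < %.1f: ok" % (score, threshold))
    return 0
```

In a pipeline, a small wrapper like this would consume the command's JSON output and fail the job whenever the score crosses the agreed threshold, making the gate explicit and tunable per repository.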