These badges represent VIBES standard compliance levels — from simple AI usage percentages to cryptographically verified attestations. Add them to your README to communicate transparency about AI involvement in your codebase.
Use shields.io badges to signal AI usage in your projects; each badge tier corresponds to a different level of AI involvement.
Light AI assistance
Moderate AI assistance
Mixed development
Primarily AI-driven
Mostly AI-generated
Fully AI-generated
Additional badge styles:
General AI-Generated (blue)
AI-Assisted (flat)
General AI-Generated (dark)
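The styles above can be produced with shields.io's static badge URL format, `https://img.shields.io/badge/<label>-<message>-<color>` (a double hyphen encodes a literal hyphen, `%25` encodes the percent sign, and `?style=` selects a rendering style). The labels, percentages, and colors below are illustrative, not values mandated by the VIBES standard:

```markdown
![AI-Generated](https://img.shields.io/badge/AI--Generated-75%25-blue)
![AI-Assisted](https://img.shields.io/badge/AI--Assisted-30%25-green?style=flat)
![AI-Generated](https://img.shields.io/badge/AI--Generated-90%25-black)
```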
Every AI usage claim has a confidence level. Use these badges to signal how your AI percentage was verified. Attestation badges work alongside the Low, Medium, and High VIBES assurance levels to provide full transparency. Learn more about the cryptographic attestation system.
Attestation badges are validated through the VERIFY extension. See the VERIFY specification for details on cryptographic attestation, trust tiers, and the public registry.
Developer's own estimate
Confirmed by human review
Verified by automated tooling
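As a sketch, the three attestation levels could be rendered as static shields.io badges like the ones below. The label text ("Self-Attested", "Manually-Validated", "Machine-Validated") and colors are our assumptions for illustration; check the VERIFY specification for any canonical badge wording:

```markdown
![Self-Attested](https://img.shields.io/badge/VIBES-Self--Attested-lightgrey)
![Manually-Validated](https://img.shields.io/badge/VIBES-Manually--Validated-orange)
![Machine-Validated](https://img.shields.io/badge/VIBES-Machine--Validated-brightgreen)
```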
Combine a percentage badge with an attestation badge for full transparency:
Percentage + Self Attestation
Percentage + Manual Validation
Percentage + Machine Validation
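A combined pairing might look like this in a README — one percentage badge followed by one attestation badge. The specific labels and colors are illustrative:

```markdown
![AI-Generated](https://img.shields.io/badge/AI--Generated-60%25-blue)
![Machine-Validated](https://img.shields.io/badge/VIBES-Machine--Validated-brightgreen)
```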
Copy and paste the markdown snippet below into your README.md file. Replace the percentage with your project's AI usage level.
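A minimal snippet, using shields.io's static badge syntax (the badge label and color are illustrative; `%25` is the URL encoding for the percent sign):

```markdown
<!-- Replace 75 with your project's AI usage percentage -->
![AI-Generated](https://img.shields.io/badge/AI--Generated-75%25-blue)
```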
Help build a more transparent AI coding ecosystem by using our itsavibe.ai logo in your README.md files. Let others know when your project is AI-generated or AI-assisted.
400x400 version
Full size version
This project contains AI-generated or AI-assisted code. Learn more at itsavibe.ai
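The attribution above could be embedded as follows. The image path is a placeholder, not a confirmed asset URL — substitute the actual logo URL from itsavibe.ai:

```markdown
<!-- Placeholder path: replace with the real logo URL from itsavibe.ai -->
[![itsavibe.ai](PATH_TO_LOGO)](https://itsavibe.ai)

This project contains AI-generated or AI-assisted code. Learn more at [itsavibe.ai](https://itsavibe.ai)
```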
Every project in the registry declares an attestation level indicating how the AI usage percentage was determined. These levels combine with VIBES assurance levels (Low, Medium, High) to give a complete picture of AI provenance. See the attestation system for how cryptographic signing works.
The project maintainer estimates the AI usage percentage based on their own assessment. This is the default level and the easiest starting point. Consider factors like lines of code generated by AI, architectural decisions, and post-generation editing.
A human reviewer analyzes tool-generated audit output — provenance reports, code analysis results, and git history data — to confirm the AI usage claim. This adds credibility through informed third-party validation based on actual tool output rather than opinion alone.
Automated tooling has analyzed the codebase to determine the AI-generated percentage. This is the highest confidence level, using code analysis tools, git history analysis, and AI detection models to verify claims programmatically.
Consider these factors when estimating your AI percentage: the lines of code generated by AI, the architectural decisions made with AI involvement, and the amount of post-generation editing done by humans.
Remember, transparency builds trust in the AI coding ecosystem!