The practical kit for developers, contractors, and teams to find, document, and disclose AI-assisted contributions — before managers, clients, or compliance teams find them first.
Many client agreements require "original work." AI co-author tags in your commits can raise review questions if you do not have a clear, documented explanation.
AI disclosure expectations are changing. Git attribution artifacts are not proof of wrongdoing, but they are useful audit cues that should be documented before a review.
Proactive disclosure builds trust. Being caught undisclosed erodes it. The difference is who finds it first.
Step-by-step walkthrough: quick scan (15 min), deep audit (60 min), documentation (30 min). Tested shell commands included.
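A taste of the quick scan, as a minimal read-only sketch (the grep pattern is illustrative; the kit's tested commands cover more cases):

    # Count commits carrying a Co-authored-by trailer (case-insensitive, read-only)
    git log -i --grep="co-authored-by" --oneline | wc -l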
Structured checklist for each repository. Track findings, risk levels, and remediation actions in one place.
Copy-paste-ready git commands. Read-only — no destructive operations. Find Co-authored-by trailers, AI comments, config files, and attribution rates.
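For instance, a minimal attribution-rate sketch, assuming trailers that name Copilot or Claude (adjust the pattern to the tools you actually use):

    # Share of commits with an AI co-author trailer vs. all commits (read-only)
    total=$(git rev-list --count HEAD)
    ai=$(git log -i --extended-regexp --grep="co-authored-by:.*(copilot|claude)" --oneline | wc -l)
    echo "AI-attributed: $ai of $total commits"

Both counts come from commit metadata alone, so the working tree is never touched.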
Professional email templates for disclosing AI tool usage to clients and managers. Proactive disclosure beats reactive discovery.
Adoptable policy template covering disclosure standards, review requirements, and configuration hygiene for teams.
Self-assessment worksheet for freelancers. Score your contract risk, attribution rate, and mitigation readiness.
Template for documenting AI attribution findings, severity, and remediation. Creates the paper trail that protects you.
Is this anti-AI? No. This is about disclosure hygiene, not avoiding AI tools. Using AI coding assistants is fine — not documenting that use when required by contract or policy is the risk.
What if I already have AI commits? Most teams and clients respond well to proactive disclosure. The kit includes email templates for that conversation. The worst option is waiting for them to find it.
Can I remove all AI traces from my git history? You can clean current-tree comments and config files when appropriate. Rewriting shared history to hide attribution is risky and should require explicit team-owner and counsel approval.
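To be concrete about the safe path: a current-tree cleanup is an ordinary commit, not a history rewrite. A minimal sketch, assuming a leftover assistant config file (the filename is an example):

    # Removes the file going forward; past commits and their trailers stay intact
    git rm .aider.conf.yml
    git commit -m "chore: remove AI assistant config file"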
Does this cover local AI models too? Yes. Local models usually do not insert Co-authored-by trailers, but they can leave weak review cues such as comments or config artifacts. The audit commands surface those cues; they do not prove AI use.
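A read-only sketch of surfacing such cues, with illustrative marker strings and filename patterns (neither list is exhaustive):

    # Search tracked files for AI-style comment markers (read-only)
    git grep -inE "generated by|copilot|chatgpt|claude"
    # List tracked files whose names suggest assistant config artifacts
    git ls-files | grep -iE '\.aider|\.cursor|copilot|claude\.md'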
Is this legal advice? No. This is an educational resource about git attribution patterns and disclosure best practices. Consult a lawyer for contract-specific guidance.
How long does the audit take? Quick scan: 15 minutes, regardless of repository size. Deep audit with documentation: 60-90 minutes.
If the kit is not useful, reply to your Stripe receipt within 30 days for a full refund — no questions asked.