AI agents connect to external tools and data sources. Most are unvetted.
Check any AI tool before you install it. Credence scans the source, verifies the author, and publishes a signed trust score you can query from your AI agent, the terminal, or CI.
Look up any AI tool's trust score before you install it. Works from your AI agent or the terminal.
Browse the registry →

Submit your server for independent analysis. Get a signed attestation that developers can verify.

Submit for analysis →

Deterministic scanners plus five AI agents in structured adversarial debate. Open pipeline, open methodology. Review the code, read the research, verify the signatures.

Read the research →

Right now, people decide whether to install an AI tool based on GitHub stars, README quality, and gut feel. There's no npm audit for AI tools. No Sigstore. No Dependabot. You have no way to verify that a tool does what it claims — or that it hasn't been cloned and poisoned by someone else.
This isn't hypothetical. Straiker's STAR Labs recently documented a supply chain attack that cloned a real MCP server, created fake GitHub accounts, and published the poisoned version to tool registries. There was nothing to distinguish the real tool from the fake one. Credence fixes that.
Once Credence is installed, your AI agent can check any tool's trust score before connecting to it. Or use the CLI from the terminal. Either way, one step between you and a bad install.
You: Is the filesystem MCP server safe to install?

Agent: Let me check Credence.

    Filesystem MCP Server
    88/100 · VERIFIED
    Signed attestation at commit 618cf486

Agent: Trust score is 88. This server has been independently scanned and verified. ✓ Safe to install.
$ credence check modelcontextprotocol/servers/filesystem

Filesystem MCP Server
modelcontextprotocol/servers/filesystem

Score    88 / 100
Verdict  VERIFIED
Commit   618cf4867bca
Signed   ed25519 ✓

✓ VERIFIED — safe to install
Scanner score: 82/100 — zero findings, clean code

Adversarial Attacker · REJECT · confidence 0.85
"Clean scans with zero provenance data are a red flag, not a green light. 0-day account age, 0 contributors — this could be a newly created repo for distribution."

Devil's Advocate · APPROVE · confidence 0.75
"Zero findings across all severity levels. The repo is from the official org. Clean code should count."

··· 3 rounds later ···

Devil's Advocate · REJECT · confidence 0.45
"I concede. Provenance gaps are blocking regardless of code quality. Without verified ownership, clean scans are an unconfirmed trust signal."

Final: REJECTED · 98% confidence · Score adjusted 82 → 15
Both consumer interfaces hit the same signed registry data. The AI agent calls credence_check_server over MCP. The CLI verifies the signature locally. The deliberation panel shows what happens inside the pipeline — five agents debating the scan results across multiple rounds. Install guide →
Every server in the registry has been through the same pipeline. Trust scores are based on what's actually in the code, not stars or vibes.
Credence confirms the submitter is the repo owner, checks account age and contributor history, detects forks, and flags provenance anomalies. If someone clones a legitimate server, the attestation points to the real author — not the clone.
Static analysis, dependency CVE scanning, secrets detection, and MCP-specific checks — suspicious tool definitions, dynamic description loading, prompt injection patterns. Credence looks at what the server actually does, not what the README says it does.
Scanners are deterministic — they find what they're told to find. Five AI agents with competing security mandates debate the results across multiple rounds. Skeptics challenge. Defenders push back. A neutral agent synthesizes. The output is a confidence-scored verdict, not a binary pass/fail.
A cryptographically signed trust score pinned to the exact commit that was scanned. You can verify it from the CLI, from your AI agent, or in CI — before you install. The attestation covers only the scanned commit — new commits require a fresh scan.
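The identity and provenance checks above reduce to a scoring step over repository metadata. A minimal sketch in Python, using illustrative field and flag names rather than Credence's actual schema:

```python
from datetime import datetime, timezone

def provenance_flags(repo: dict) -> list[str]:
    """Derive provenance warning flags from repository metadata.

    `repo` loosely mirrors a subset of the GitHub API repo/user
    payload; the flag names are illustrative, not Credence's schema.
    """
    flags = []
    if repo.get("fork"):
        flags.append("fork-of-existing-repo")
    created = datetime.fromisoformat(repo["owner_created_at"])
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days < 30:
        flags.append("new-owner-account")
    if repo.get("contributor_count", 0) == 0:
        flags.append("no-contributor-history")
    if repo.get("submitter") != repo.get("owner"):
        flags.append("third-party-submission")
    return flags
```

A freshly created fork with no contributor history trips several flags at once, which is exactly the SmartLoader-style clone profile described earlier.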
Scanners find known patterns. They can't judge context. A hardcoded string might be a leaked credential or a test fixture. A fork might be a supply chain attack or a legitimate contribution. A server with zero findings might be clean — or the scanner might not know what to look for.
Credence runs five AI agents with competing security mandates through multiple rounds of structured debate. Skeptics probe for what the scanners missed. Defenders challenge false positives. A neutral agent synthesizes the positions. Rounds continue until positions stabilize or new evidence stops emerging.
In a recent scan, the scanner scored a server 82/100 — clean code, zero findings. The deliberation dropped it to 15. The provenance was unverifiable: empty owner field, zero-day account age, zero contributors. The adversarial agents caught what the scanner couldn't see: a clean scan means nothing if you can't verify who wrote the code. The defender agent started at APPROVE and flipped to REJECT by round three.
Two adversarial agents — one focused on attack vectors, one on supply chain integrity — look for what the scanners missed. They draw on real MCP incidents (SmartLoader, tool poisoning, rug pulls) and flag patterns that automated tools can't catch: manufactured credibility, missing provenance, suspicious contributor timelines.
Two counter-agents argue the other side — identifying when a flagged pattern is standard practice, when a fork has legitimate provenance, when a finding is noise. If they can't defend a finding, their confidence drops, and the record shows it. The Devil's Advocate in the example above started at 0.75 confidence in APPROVE and ended at 0.45 in REJECT.
A compliance-focused agent weighs both sides against SLSA framework requirements and produces the final confidence-scored verdict. The process terminates when positions converge or five rounds complete — whichever comes first. Every round is logged for audit.
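The round structure can be sketched as a simple convergence loop. The agent interface and stopping rule here are deliberate simplifications of the pipeline described above (the real pipeline also weighs new evidence, not just verdict stability):

```python
MAX_ROUNDS = 5

def deliberate(agents, evidence):
    """Run rounds of debate until positions stop changing or MAX_ROUNDS.

    Each agent is a callable (evidence, transcript) -> (verdict, confidence).
    Convergence here means an unchanged set of verdicts between rounds.
    """
    transcript = []
    previous = None
    for round_no in range(1, MAX_ROUNDS + 1):
        positions = [agent(evidence, transcript) for agent in agents]
        transcript.append((round_no, positions))
        verdicts = [verdict for verdict, _ in positions]
        if verdicts == previous:
            break  # positions converged
        previous = verdicts
    return transcript
```

An agent that concedes once it sees the other side's argument in the transcript, like the Devil's Advocate above, stabilizes the loop within a few rounds.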
The position paper covers the full methodology, threat model, and why adversarial deliberation outperforms single-pass analysis. Read the research →
Fake provenance, cloned repos, manufactured credibility. The SmartLoader pattern.
Vulnerable dependencies in the server's dependency tree.
Hidden directives, unicode tricks, schema manipulation in MCP tool definitions.
API keys, credentials, tokens committed to source.
Dynamic tool descriptions, version-gated behavior changes, environment-conditional logic.
Missing lockfiles and unpinned dependencies.
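A toy version of the tool-description checks, using assumed patterns rather than Credence's actual rule set:

```python
import re
import unicodedata

# Illustrative patterns only; the real scanner's rules are broader
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"do not (tell|inform|mention).{0,40}user",
    r"<(system|hidden|secret)>",
]

def scan_tool_description(description: str) -> list[str]:
    """Flag injection-style directives and invisible characters
    in an MCP tool description."""
    findings = []
    lowered = description.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            findings.append(f"injection-pattern: {pattern}")
    # Zero-width and other format characters can hide directives from review
    invisible = [c for c in description if unicodedata.category(c) == "Cf"]
    if invisible:
        findings.append(f"invisible-characters: {len(invisible)}")
    return findings
```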
Credence provides install-time trust data. It's complementary to runtime tools — Docker MCP Catalog, ToolHive, Solo.io Agent Mesh — that handle enforcement after installation. Defense in depth.
{
  "server_id": "modelcontextprotocol/servers/filesystem",
  "commit_sha": "618cf4867bca...",
  "source_hash": "sha256:a8c3f1...",
  "source_hash_method": "merkle-tree-sha256",
  "author_identity": {
    "repo_owner": "modelcontextprotocol",
    "identity_match": true,
    "provenance_flags": []
  },
  "trust_score": 88,
  "trust_dimensions": {
    "security": 70,
    "provenance": 100,
    "behavioral": 100
  },
  "thinktank_verdict": "APPROVED",
  "signature": "ed25519:..."  // verify with public key before install
}
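The source_hash field is reproducible locally. A sketch of one plausible merkle-tree-sha256 construction over file contents; the tree layout Credence actually uses may differ:

```python
import hashlib

def merkle_tree_sha256(leaves: list[bytes]) -> str:
    """Fold file contents into a single root hash.

    One plausible construction: sort leaves for order independence,
    hash each, then pairwise-combine levels until one root remains.
    """
    if not leaves:
        return "sha256:" + hashlib.sha256(b"").hexdigest()
    level = [hashlib.sha256(leaf).digest() for leaf in sorted(leaves)]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return "sha256:" + level[0].hex()
```

Any single-byte change to any file produces a different root, which is what lets the CLI detect tampering between attestation and install.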
Attestations are only useful if machines can read them. Credence gives you three ways to verify a server — so the trust check happens automatically, not manually on a website.
Add Credence as an MCP server in Claude Desktop, Claude Code, or any MCP client. Your AI agent calls credence_check_server before connecting to unknown tools. Attestation data flows through the same protocol your agent already speaks. Quick start guide →
Run credence check owner/server from the terminal. Exit codes support CI integration: 0 = safe, 1 = not attested, 2 = flagged, 3 = rejected. Verify local source hashes against attestations before install.
credence guard wraps any install command with a trust check. If the server isn't attested or the score is below your threshold, the install doesn't run. Exit codes work in shell scripts, CI gates, and orchestration tools.
# Install
$ pip install git+https://github.com/pestafford/credence-registry.git#subdirectory=mcp-server

# Add to claude_desktop_config.json
{
  "mcpServers": {
    "credence": {
      "command": "python3",
      "args": ["-m", "credence_mcp.server"]
    }
  }
}

# Or from the terminal
$ credence check modelcontextprotocol/servers/filesystem
Trust score: 88/100
Verdict: APPROVED
✔ VERIFIED — safe to install
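A CI gate over those exit codes might look like the sketch below. The exit-code meanings come from the CLI description above; the injectable runner parameter and the gate policy are this sketch's assumptions, not Credence features:

```python
import subprocess

EXIT_MEANINGS = {0: "safe", 1: "not attested", 2: "flagged", 3: "rejected"}

def gate(server: str, runner=None, allow=frozenset({0})) -> bool:
    """Run `credence check` and decide whether an install may proceed.

    `runner` maps a server id to an exit code; by default it shells
    out to the real CLI, and tests can inject a stub instead.
    """
    runner = runner or (
        lambda s: subprocess.run(["credence", "check", s]).returncode)
    code = runner(server)
    print(f"{server}: {EXIT_MEANINGS.get(code, 'unknown')}")
    return code in allow
```

A stricter or looser policy is one argument away, e.g. allow={0, 2} to let flagged servers through in a sandboxed environment.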
Submit an AI tool for Credence analysis. We'll clone the repo, run the pipeline, and publish the attestation. Submissions are tracked as GitHub Issues — you can follow progress there.
Opens a pre-filled GitHub Issue. You'll need a GitHub account.
A maintainer will review your submission and start the scan. Results get posted directly to your GitHub issue — watch it for updates.
Finding vulnerabilities is only useful if they get fixed. Credence gates publication on scan results and flags submissions for manual review when findings are serious.
The pipeline checks whether the submitter is the repo owner, a collaborator, or a contributor via GitHub API. Verified maintainers are labeled as such. Third-party submissions are scanned identically, but marked so reviewers know the context. Detailed scanner output is never published — only scores, verdicts, and provenance flags go into the public registry.
The trust score reflects any minor findings in the dimension breakdowns. The attestation is signed and published to the registry automatically, and results are posted to the GitHub issue.
When the verdict is FLAGGED or REJECTED, the attestation is held from publication and the issue is labeled disclosure-pending. A maintainer reviews the detailed findings (stored as ephemeral artifacts with 90-day retention) and decides next steps — which may include contacting the repo owner privately before publishing.
After remediation, maintainers can submit a fresh scan at the new commit. The updated attestation replaces the previous one in the registry. Previous scan results remain in their scan-results/ directory.
Credence is a security tool, not a shaming tool. The goal is fewer vulnerable AI tools in production — not public callouts.
Credence scans source code. Compiled binaries, obfuscated bundles, and private repos are out of scope. If we can't read the code, we can't attest it.
Each attestation covers a single commit. It doesn't monitor for future changes — if the code updates, a new scan is needed. Continuous monitoring is on the roadmap.
Credence analyzes code before installation, not behavior after. Runtime enforcement is handled by tools like Docker MCP Catalog and ToolHive. Credence is the install-time gate.
Credence is in active development. The core pipeline — identity verification, multi-scanner analysis, adversarial deliberation, and signed attestations — is live today. Here's what we're building next.
Automated severity classification (low/medium/high) with configurable remediation windows. Medium findings get a 30-day private disclosure period before attestation is published. High findings trigger immediate review.
When actionable vulnerabilities are found, contact the repo owner directly — before anything is published. Include specific remediation guidance: dependency upgrades, secret rotation steps, tool description rewrites.
Extend credence guard to clone, hash, and verify local source against the attestation before install. Catch tampering between attestation and installation.
Compare lockfile hashes across successive attestations for the same server. Flag when dependencies change between scans — even if the code hasn't.
Extend scanning to MCPB-bundled extensions and MCP Apps. Manifest validation, template injection detection, bundled dependency analysis, and source-vs-bundle verification for packages distributed outside Git.
Notify MCP registry operators (MCP Market, Glama, Smithery) when high-severity findings are confirmed. Report manufactured identity operations to the hosting platform.
Consumer-facing view of a server's full attestation timeline. See when issues were found, when they were fixed, and how trust scores evolved across versions.
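Of these roadmap items, the lockfile comparison is simple to sketch. The lockfile_hash field is an assumption for illustration only; it is roadmap work, not part of the published attestation schema:

```python
def lockfile_drift(prev: dict, curr: dict) -> list[str]:
    """Flag dependency changes between successive attestations.

    `lockfile_hash` is a hypothetical field; `source_hash` loosely
    follows the attestation sample shown earlier on this page.
    """
    flags = []
    if prev.get("lockfile_hash") != curr.get("lockfile_hash"):
        flags.append("dependencies-changed")
        if prev.get("source_hash") == curr.get("source_hash"):
            # Same source, different pinned deps: classic rug-pull shape
            flags.append("deps-changed-without-code-change")
    return flags
```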
Follow progress on GitHub.
Credence is open source and in active development. The pipeline, scoring methodology, and attestation schema are all public.
Credence is free and open source. Every scan runs paid API calls for the five-agent deliberation panel. If this project is useful to you, consider sponsoring to help keep it running.
Sponsor this project