Your proprietary code is flowing into frontier AI models in the cloud, undetected. Husn Canaries alert you the instant Claude, ChatGPT, Copilot, Gemini, or any other AI coding assistant analyzes your code. Know exactly when your intellectual property is exposed, whether by your team, contractors, or attackers.
This research proposes a new standard for AI governance. For it to work, we need frontier AI providers to integrate and the security community to advocate. If you believe in transparent, accountable AI, let's build this together.
IOActive is a global leader in security services, providing deep expertise in hardware, software, and AI security research. This research is part of our commitment to advancing security standards across the industry.
Learn more about IOActive | info@ioactive.com | LinkedIn

We've all been in that meeting.
"How do we actually know if our source code is being sent to AI tools?"
– Someone from legal, compliance, or the executive team
The room goes quiet. Security looks at IT. IT looks at DevOps. DevOps looks at their shoes.
Everyone knows the honest answer: we don't.
Sure, we have policies. We have endpoint controls and network proxies. We block certain URLs and deploy DLP solutions.
But what happens when...
A contractor copies a repository to their personal laptop and pastes it into Claude or Copilot at home?
A former employee who "forgot" to delete their local clone decides to explore it with Cursor?
An attacker exfiltrates source code and feeds it to AI tools to hunt for vulnerabilities?
We're blind.
They know they're analyzing our code. We have no idea. That's the problem we set out to solve.
Organizations have no visibility when their code is analyzed by AI assistants, internally or externally.
Security teams can't audit which developers use AI tools on which codebases. Compliance violations occur when regulated data is exposed to AI models.
When code is stolen, attackers use AI to rapidly find vulnerabilities. Organizations have zero visibility into this analysis.
AI bans on sensitive codebases are meaningless once code leaves the network. Client-side controls are trivially bypassed.
AI providers integrate Husn into their code analysis pipeline. When code is submitted, Husn checks it against all registered patterns and returns policy decisions in real-time.
Register function names, variables, honeypots, and code snippets unique to your codebase on Husn's central registry.
Claude, ChatGPT, Copilot, Gemini, and others call the Husn API during code indexing and analysis; integration takes only a few lines.
When patterns match, Husn alerts your organization instantly, even if the code was stolen and analyzed externally.
Configure per-pattern policies: notify silently, require approval, or block AI analysis entirely.
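The provider-side flow described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the endpoint URL, request shape, and policy names (`clear`, `notify`, `approve`, `block`) are hypothetical, not a published Husn API.

```python
# Hypothetical sketch of a provider calling a canary-check service during
# indexing. Endpoint, fields, and policy names are illustrative assumptions.
import json
import urllib.request

HUSN_CHECK_URL = "https://husn.example.test/v1/check"  # hypothetical endpoint

def build_check_request(files: dict[str, str], api_key: str) -> urllib.request.Request:
    """Package indexed file contents into a canary-check request."""
    payload = json.dumps({"files": files}).encode("utf-8")
    return urllib.request.Request(
        HUSN_CHECK_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

def apply_policy(decision: dict) -> bool:
    """Return True if the AI provider may proceed with analysis."""
    policy = decision.get("policy", "clear")
    if policy == "block":
        return False  # deny access; the organization is alerted
    if policy == "approve":
        return decision.get("approved", False)  # wait for authorization
    return True  # "clear" or "notify": analysis proceeds
```

The point of the sketch is the shape of the contract: the provider sends indexed content once per analysis and receives a single policy decision to enforce.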
```mermaid
flowchart TB
    subgraph USERS[1 - CODE ACCESS]
        DEV[Internal Developer]
        CON[External Contractor]
        ATK[Attacker with Stolen Code]
    end
    subgraph TOOLS[2 - AI TOOL SELECTION]
        AIP[AI Providers]
    end
    subgraph PROVIDER[3 - PROVIDER PROCESSING]
        RECEIVE[Receive Code Context]
        INDEX[Index Files]
        APICALL[Call Husn API]
        RECEIVE --> INDEX --> APICALL
    end
    subgraph HUSN[4 - HUSN CANARIES SERVICE]
        REGISTRY[(Pattern Registry)]
        MATCHER[Pattern Matching Engine]
        DECISION{Match Found?}
        REGISTRY --> MATCHER --> DECISION
    end
    subgraph ENFORCE[5 - POLICY ENFORCEMENT]
        NOTIFY[NOTIFY: Allow and Log]
        APPROVE[APPROVE: Wait for Authorization]
        BLOCK[BLOCK: Deny Access]
    end
    subgraph ORGRESPONSE[6 - ORGANIZATION RESPONSE]
        WEBHOOK[Webhook Notification]
        SIEM[SIEM / Slack / PagerDuty]
        INCIDENT[Incident Response Team]
        WEBHOOK --> SIEM --> INCIDENT
    end
    CLEAR[CLEAR: Proceed Normally]
    DEV --> AIP
    CON --> AIP
    ATK --> AIP
    AIP --> RECEIVE
    APICALL --> MATCHER
    DECISION -->|NO| CLEAR
    DECISION -->|YES| NOTIFY
    DECISION -->|YES| APPROVE
    DECISION -->|YES| BLOCK
    NOTIFY --> WEBHOOK
    APPROVE --> WEBHOOK
    BLOCK --> WEBHOOK
    CLEAR --> RETURN[Return to AI Provider]
    BLOCK --> DENY[Return Block to AI Provider]
```
```mermaid
flowchart TB
    subgraph USERS2[1 - CODE ACCESS]
        DEV2[Internal Developer]
        CON2[External Contractor]
        ATK2[Attacker with Stolen Code]
    end
    subgraph TOOLS2[2 - AI TOOL SELECTION]
        AIP2[AI Providers]
    end
    subgraph AIPROVIDER[3 - AI PROVIDER INFRASTRUCTURE]
        subgraph ONPREM[On-Prem Husn Node]
            LOCALREG[(Local Pattern Cache)]
            LOCALMATCH[Local Matching Engine]
            LOCALDEC{Match Found?}
            LOCALREG --> LOCALMATCH --> LOCALDEC
        end
        RECEIVE2[Receive Code Context]
        INDEX2[Index Files]
        LOCALCHECK[Local Canary Check]
        RECEIVE2 --> INDEX2 --> LOCALCHECK
        LOCALCHECK --> LOCALMATCH
    end
    subgraph HUSNCLOUD[HUSN CLOUD - Pattern Sync]
        MASTERREG[(Master Pattern Registry)]
        SYNC[Periodic Sync Service]
        MASTERREG --> SYNC
    end
    subgraph ENFORCE2[4 - POLICY ENFORCEMENT]
        NOTIFY2[NOTIFY: Allow and Log]
        APPROVE2[APPROVE: Wait for Authorization]
        BLOCK2[BLOCK: Deny Access]
    end
    subgraph ORGRESPONSE2[5 - ORGANIZATION RESPONSE]
        WEBHOOK2[Webhook Notification]
        SIEM2[SIEM / Slack / PagerDuty]
        INCIDENT2[Incident Response Team]
        WEBHOOK2 --> SIEM2 --> INCIDENT2
    end
    CLEAR2[CLEAR: Proceed Normally]
    DEV2 --> AIP2
    CON2 --> AIP2
    ATK2 --> AIP2
    AIP2 --> RECEIVE2
    SYNC -.->|Encrypted Sync| LOCALREG
    LOCALDEC -->|NO| CLEAR2
    LOCALDEC -->|YES| NOTIFY2
    LOCALDEC -->|YES| APPROVE2
    LOCALDEC -->|YES| BLOCK2
    NOTIFY2 --> WEBHOOK2
    APPROVE2 --> WEBHOOK2
    BLOCK2 --> WEBHOOK2
    CLEAR2 --> RETURN2[Return to AI Provider]
    BLOCK2 --> DENY2[Return Block to AI Provider]
```
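The on-prem node's local matching step can be sketched as below. This is a naive illustration with made-up pattern names; a production engine would use a multi-pattern automaton (e.g. Aho-Corasick) over the synced cache rather than a linear substring scan.

```python
# Sketch of the local canary check in the on-prem architecture: submitted
# code is matched against a locally cached pattern set, so it never leaves
# the provider's infrastructure. Pattern names and metadata are illustrative.
LOCAL_PATTERN_CACHE = {
    "calc_risk_v7_internal": {"org": "acme", "policy": "block"},
    "hx_billing_canary_token": {"org": "acme", "policy": "notify"},
}

def local_canary_check(code: str) -> list[dict]:
    """Return every cached canary pattern found in the submitted code."""
    hits = []
    for pattern, meta in LOCAL_PATTERN_CACHE.items():
        if pattern in code:  # naive scan; production would use Aho-Corasick
            hits.append({"pattern": pattern, **meta})
    return hits
```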
Organizations register code patterns through the Husn admin console. No code changes required.
When any AI tool reads files, it calls the Husn API to check for registered patterns.
On match, Husn alerts the organization and enforces the configured policy.
Organizations receive real-time alerts with user identity for rapid incident response.
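On the organization side, the real-time alerts above arrive as webhooks. The following sketch assumes a JSON payload with `pattern_id`, `user`, `tool`, and `policy` fields; that schema is an illustration, not a documented format.

```python
# Hypothetical organization-side webhook receiver for canary alerts.
# The payload fields and routing targets are assumptions for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def route_alert(alert: dict) -> str:
    """Pick a destination for an alert based on the enforced policy."""
    policy = alert.get("policy", "notify")
    if policy == "block":
        return "pagerduty"  # stolen-code analysis was denied: page on-call
    if policy == "approve":
        return "slack"      # a human must authorize the AI analysis
    return "siem"           # silent notify: log for audit

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        destination = route_alert(alert)
        print(f"canary hit on {alert.get('pattern_id')} "
              f"by {alert.get('user')} via {alert.get('tool')} -> {destination}")
        self.send_response(204)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```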
Watch a complete demonstration of Husn Canaries detecting and blocking AI analysis of protected code.
Watch on YouTube
| Capability | Client Hooks | Network Proxies | Husn Canaries |
|---|---|---|---|
| Bypass resistant | ✗ | ✗ | ✓ |
| Works across all AI clients | ✗ | ✗ | ✓ |
| Detects external threats | ✗ | ✗ | ✓ |
| Works for web UI | ✗ | ✗ | ✓ |
| No client configuration | ✗ | ✗ | ✓ |
| Detects stolen code analysis | ✗ | ✗ | ✓ |
The future of AI coding assistants depends on trust. Organizations need assurance that their intellectual property is protected. Husn Canaries offers a lightweight, privacy-preserving integration that transforms your platform into a governance-aware solution enterprises can confidently adopt.
Large organizations are reluctant to adopt AI coding assistants without governance guarantees. Husn integration opens the door to enterprise contracts that require IP protection assurances.
Providers face reputational and legal risk if their platforms are used to analyze stolen intellectual property. Husn provides a defense mechanism and demonstrates due diligence.
Emerging frameworks such as the EU AI Act increasingly expect platforms to implement content governance mechanisms. Early adoption positions you ahead of regulatory requirements.
The technical burden is modest: a small number of API calls during indexing and request handling. Privacy-preserving design means you never need to expose user code to Husn servers.
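One way the privacy-preserving claim above could work is sketched below, purely as an assumption: the organization registers only salted digests of its canary identifiers, and the provider compares digests of tokens it observes, so neither side exposes raw source code or raw patterns. This is an illustration, not the actual Husn protocol.

```python
# Assumed hashed-pattern scheme: neither raw code nor raw canary names
# cross the wire, only salted SHA-256 digests. All names are hypothetical.
import hashlib
import re

SALT = b"husn-demo-salt"  # per-registry salt, hypothetical

def digest(token: str) -> str:
    return hashlib.sha256(SALT + token.encode("utf-8")).hexdigest()

# Organization side: register digests, never the identifiers themselves.
registered = {digest("calc_risk_v7_internal")}

# Provider side: tokenize code locally and compare digests only.
def matches(code: str) -> bool:
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))
    return any(digest(t) in registered for t in tokens)
```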
Join us in building a more trustworthy AI ecosystem. We're actively seeking partnerships with frontier AI providers to pilot Husn Canaries integration.
Get in Touch

Our paper presents the full conceptual design, threat model, security analysis, and proof-of-concept implementation. We invite the security community, researchers, and AI providers to collaborate with us in building a safer AI-assisted development ecosystem.
For the best reading experience on your device, open or download the PDF directly.
Download the formal specification with API definitions, JSON schemas, and compliance requirements.