AI Detection
SnapBack’s Guardian AI provides real-time risk analysis that catches issues before they ship. With 94% accuracy, our system detects secrets, mocks, and phantom dependencies that AI coding assistants might introduce.
How It Works
Guardian AI continuously monitors your code as you work, analyzing patterns that indicate potential risks:
Secrets Detection
Identifies API keys, JWT tokens, database connection strings, and other sensitive values.
Mocks Detection
Catches test artifacts and mock data that accidentally make it into production code.
Phantom Dependencies
Detects missing package.json entries for modules used in your code.
Real-Time Analysis
Guardian AI performs real-time analysis on every save, providing immediate feedback:
Detection Speed
- Average Detection Time: <50ms
- Complex Pattern Analysis: <200ms
- Full File Scan: <1 second
What Gets Detected
Secrets
Guardian AI identifies various types of secrets that should never be committed:
// ❌ Detected as secret risk
const config = {
  apiKey: "sk-abcdefghijklmnopqrstuvwxyz1234567890", // API key
  dbPassword: "supersecretpassword123",              // database password
  jwtSecret: "my-jwt-secret-key",                    // JWT secret
  awsAccessKey: "AKIAIOSFODNN7EXAMPLE",              // AWS access key
};
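The usual fix is to load these values from the environment at runtime instead of hard-coding them. A minimal sketch, assuming a Node.js project where the variables are supplied by your deployment environment or a local .env file (the variable names are illustrative):

// ✅ Safer: read secrets from the environment at runtime
const config = {
  apiKey: process.env.API_KEY,
  dbPassword: process.env.DB_PASSWORD,
  jwtSecret: process.env.JWT_SECRET,
  awsAccessKey: process.env.AWS_ACCESS_KEY_ID,
};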
Mocks
AI assistants sometimes generate test artifacts that shouldn’t be in production code:
// ❌ Detected as mock risk
const userData = {
  id: 1,
  name: "John Doe",
  email: "john.doe@example.com", // realistic but fake data
  role: "admin"
};

// This looks like mock data that shouldn't be in production
function processPayment() {
  console.log("Processing payment of $100.00"); // mock payment
  return { success: true, transactionId: "txn_1234567890" }; // fake transaction
}
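One way to keep fixtures like this out of shipped code is to confine them to test-only files (which you can also exclude via ignorePatterns; see the Configuration section) and have production paths fetch real data. A minimal sketch, assuming a REST endpoint at /api/users/:id — the route is illustrative, not part of SnapBack:

// ✅ Production path fetches real data; fixtures live in *.test.js files
export async function getUser(id) {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Failed to load user ${id}`);
  return res.json();
}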
Phantom Dependencies
Missing dependencies that are used in code but not declared in package.json:
// ❌ Detected as phantom dependency risk
import { format } from 'date-fns'; // used but not in package.json
import lodash from 'lodash';      // used but not declared

export function formatDate(date) {
  return format(date, 'yyyy-MM-dd');
}

export function deepClone(obj) {
  return lodash.cloneDeep(obj);
}
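The fix is to declare every imported package. Running npm install date-fns lodash adds both entries, and the resulting package.json would include something like the following (the version ranges are illustrative):

// package.json (excerpt)
{
  "dependencies": {
    "date-fns": "^3.6.0",
    "lodash": "^4.17.21"
  }
}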
False Positive Handling
We understand that not every detection is a real risk. Guardian AI is designed to minimize false positives while maintaining high accuracy:
💡 False Positive Management: If Guardian AI flags something that isn’t actually a risk, you can:
- Add it to your ignore list in .snapbackignore (see the sample file after this list)
- Adjust sensitivity levels for specific file patterns
- Provide feedback through the VS Code extension
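For the first option, an ignore file might look like the sketch below. This assumes .snapbackignore follows .gitignore-style glob syntax, matching the patterns shown under ignorePatterns in the Configuration section; the exact syntax is an assumption, so check the extension docs for your version:

# .snapbackignore — illustrative, assuming .gitignore-style globs
**/*.test.js
**/mocks/**
docs/examples/**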
Configuration
You can customize Guardian AI’s behavior to match your project’s needs:
Sensitivity Levels
// .snapbackconfig
{
  "aiDetection": {
    "sensitivity": "high", // low, medium, high
    "ignorePatterns": [
      "**/*.test.js",
      "**/mocks/**"
    ],
    "customRules": [
      {
        "pattern": "API_KEY",
        "type": "secret",
        "severity": "high"
      }
    ]
  }
}
VS Code Integration
In the VS Code extension, you can:
- View real-time detection results in the Problems panel
- See inline decorations for detected risks
- Configure detection settings through the UI (an illustrative settings.json sketch follows this list)
- Provide feedback on detections
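If you prefer editor-level configuration, the same options could also live in your workspace settings. The setting keys below are assumptions for illustration, not confirmed extension options:

// .vscode/settings.json — hypothetical keys; the actual SnapBack keys may differ
{
  "snapback.aiDetection.sensitivity": "medium",
  "snapback.aiDetection.inlineDecorations": true
}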
Accuracy Metrics
Our continuous testing shows Guardian AI maintains high accuracy across different codebases:
Precision
The share of flagged detections that are actual risks requiring attention (true positives divided by all detections).
Recall
The share of actual risks that are successfully detected (true positives divided by all real risks).
Best Practices
🔍 Review Detections Promptly
Don’t ignore Guardian AI warnings. Take time to understand what’s being flagged and why.
⚙️ Customize for Your Project
Adjust sensitivity levels and ignore patterns based on your specific project needs.
📚 Educate Your Team
Make sure all team members understand what Guardian AI detects and how to respond.
🔄 Provide Feedback
Help us improve by reporting false positives or missed detections.