Linux Users: Pre-built Linux packages are not currently provided in releases. See the Linux Build Instructions below to build your own package.
Copyright (c) 2025 Preamble, Inc. All rights reserved.
An AI security testing platform for detecting and mitigating prompt injection vulnerabilities in AI agent solutions. This open-source desktop application provides comprehensive vulnerability assessment and advanced attack simulation capabilities for AI security researchers and developers.
Prompt Injector is an open-source desktop application for AI security testing, designed specifically for security researchers, penetration testers, and AI developers. The platform provides comprehensive prompt injection detection, jailbreak testing, and vulnerability assessment capabilities for popular LLMs.
- Advanced Attack Engine: Core testing engine with 100+ OWASP LLM01-LLM10 attack payloads
- Professional Desktop UI: Modern Electron-based application with React frontend
- Real-time Testing: Create and execute security tests with live results
- Comprehensive Payload Library: Browse and select from categorized attack payloads
- Detailed Results Analysis: View comprehensive test results with success/failure indicators
- Multi-Provider Support: Configure multiple AI model providers (OpenAI, Anthropic, Google, Ollama)
- Payload Creation Model: Set default models specifically for payload generation and mutation
- Complete AI Model Integration: Full integration with all major AI providers
- Advanced ML Detection: Machine learning-based attack detection (Semantic Guardian)
- Agent Framework Testing: Deep integration with LangChain, AutoGen, CrewAI, and other frameworks
- MCP Testing Environment: Model Context Protocol testing capabilities
- Red Team Campaign Orchestration: Advanced campaign management and automation
- Adaptive Payload Generation: AI-driven attack payload creation and mutation
- Advanced Analytics: Comprehensive security reporting and analysis
- Plugin System: Extensible architecture for community contributions
- Research Integration: Academic benchmark integration and research paper implementations
- Community Features: Payload sharing, template library, and collaborative testing
Note: For detailed implementation status of all features, see the Product Requirements Document.
- Frontend: React 18 + TypeScript + Tailwind CSS
- Desktop Framework: Electron 25
- Backend: Node.js with Express
- Build System: Vite + TypeScript
- UI Components: Lucide React Icons, React Hot Toast
- AI Integration: OpenAI, Anthropic, Google Gemini, Ollama APIs
- Database: SQLite for local storage
- Security: JWT authentication, bcrypt encryption, helmet security headers
Choose your AI provider before running security tests:
- Install Ollama: Download from ollama.com
- Start Ollama service:
ollama serve
- Install a model:
ollama pull phi4:latest
(orllama2
,mistral
, etc.) - Configure in app: No API key required - works automatically
Get API keys from one or more providers:
Configure API keys in the app's Settings page after installation.
- macOS: Download DMG
- Windows: Download EXE
-
Clone the repository
git clone https://github.com/preambleai/prompt-injector.git cd prompt-injector
-
Install dependencies
npm install
-
Start development server
npm run dev
-
Build for distribution
# Build for all platforms npm run dist:all # Or build for specific platform npm run dist:mac # macOS npm run dist:win # Windows
Since pre-built Linux packages are not currently provided in releases due to system dependency variations, Linux users can build their own packages:
-
Install system dependencies (Ubuntu/Debian):
sudo apt-get update sudo apt-get install -y libgtk-3-dev libwebkit2gtk-4.0-dev \ libayatana-appindicator3-dev librsvg2-dev libdrm-dev \ libxss1 libgconf-2-4 libxrandr2 libasound2-dev
-
Follow the "Build from Source" steps above, then:
# Build Linux packages npm run dist:linux
-
Find your packages in the
dist-electron/
directory:*.AppImage
- Portable Linux application*.deb
- Debian/Ubuntu package
Note: For other Linux distributions, install the equivalent packages for your package manager.
This project uses GitHub Actions to automatically build installers for all platforms:
- π Automatic Release Builds: When you create a new release on GitHub, installers are automatically built and attached to the release
- π§ͺ Manual Test Builds: You can manually trigger builds from the Actions tab to test specific platforms
- π Multi-platform Support: Builds simultaneously for macOS, Windows, and Linux
- Push your changes to the main branch
- Go to the GitHub repository
- Click "Releases" β "Create a new release"
- Tag the release (e.g.,
v1.0.0
) - Add release notes
- Click "Publish release"
GitHub Actions will automatically build installers for all platforms and attach them to the release within ~10-15 minutes.
From the GitHub repository:
- Go to the "Actions" tab
- Select "Manual Build Test"
- Click "Run workflow"
- Choose the platform to build for (or "all" for all platforms)
- Download the artifacts when the build completes
- Configure AI Models: Navigate to Settings and add your AI model providers
- Set Default Payload Model: Choose which model to use for payload generation by clicking the star icon
- Verify Connection: Test your model connections to ensure proper setup
- Create Test Campaign: Use the Testing interface to create a new security test
- Select Attack Payloads: Choose from OWASP LLM01-LLM10 categories or create custom payloads
- Execute Tests: Run tests against your configured AI models
- Analyze Results: Review detailed results with success/failure indicators and security recommendations
Our comprehensive payload library includes:
- OWASP LLM01 - Prompt Injection: System prompt extraction and role confusion attacks
- OWASP LLM02 - Insecure Output Handling: Output manipulation and injection techniques
- OWASP LLM03 - Training Data Poisoning: Training data extraction and poisoning attempts
- OWASP LLM04 - Model Denial of Service: Resource exhaustion and performance degradation
- OWASP LLM05 - Supply Chain Vulnerabilities: Third-party integration and dependency attacks
- OWASP LLM06 - Sensitive Information Disclosure: Data extraction and privacy violations
- OWASP LLM07 - Insecure Plugin Design: Plugin and extension security testing
- OWASP LLM08 - Excessive Agency: Permission escalation and unauthorized actions
- OWASP LLM09 - Overreliance: Trust manipulation and decision-making attacks
- OWASP LLM10 - Model Theft: Model extraction and intellectual property theft
All attack payloads follow a standardized schema to ensure consistency and extensibility:
interface AttackPayload {
id: string
name: string
nameUrl?: string
description: string
category?: string
payload: string
tags: string[]
source: string
severity?: string
owasp?: string[] // OWASP LLM categories
mitreAtlas?: string[] // MITRE ATLAS framework
aiSystem?: string[] // AI system components
technique?: string
successRate?: number
bypassMethods?: string[]
successIndicators?: string[] // Success detection keywords
failureIndicators?: string[] // Failure detection keywords
// ... additional metadata fields
}
Contributing Payloads: All payloads are stored in
public/assets/payloads/all-attack-payloads.json
and must conform to this schema.
- Core attack engine implementation
- Desktop application framework
- Professional UI with modern design
- Comprehensive payload library (100+ payloads)
- Basic model configuration and management
- Real-time test execution and results
- Complete AI model API integration
- Machine learning-based detection (Semantic Guardian)
- Advanced payload mutation and generation
- Performance optimization (<15 second response times)
- Enhanced analytics and reporting
- LangChain, AutoGen, CrewAI framework integration
- Model Context Protocol (MCP) testing environment
- Multi-agent system testing capabilities
- Real-time monitoring and instrumentation
- Advanced campaign orchestration
- Academic benchmark integration (INJECAGENT, AdvBench)
- Research paper reproduction capabilities
- Community plugin system
- Advanced local ML capabilities
- Industry standard contributions
For detailed roadmap information, see ROADMAP.md.
We welcome contributions from the AI security community! Whether you're interested in adding new attack payloads, improving detection algorithms, or enhancing the user experience, there are many ways to contribute.
- Fork the repository
- Create a feature branch:
git checkout -b feature/your-feature-name
- Make your changes and add tests
- Follow our coding standards: Run
npm run lint
andnpm test
- Commit with conventional format:
git commit -m 'feat: add new attack payload'
- Push and create a Pull Request
- π― Attack Payloads: New prompt injection techniques and zero-day attacks
- π€ AI Model Integration: Support for new AI providers and models
- π§ Agent Framework Support: Integration with additional agent frameworks
- π¨ UI/UX Improvements: Enhanced user interface and experience
- π Documentation: Improved guides, examples, and API documentation
- π§ͺ Testing: Test coverage and quality improvements
- π Security: Security audits and vulnerability assessments
See our Contributing Guidelines for detailed information.
- Product Requirements Document: Comprehensive feature requirements and implementation status
- Technical Architecture: System architecture and design diagrams
- Development Roadmap: Detailed development phases and milestones
- Build Instructions: Build and release process documentation
- Contributing Guidelines: How to contribute to the project
- API Documentation: Generated from code comments
- Code Examples: Located in
/examples
directory - Testing Guide: Unit and integration testing best practices
- Security Guidelines: Security-focused development practices
- Local-First Architecture: All data processing happens locally
- Encrypted Storage: AES-256 encryption for sensitive data
- No Telemetry: No data collection without explicit consent
- Secure Communication: HTTPS for all external API calls
- Regular Security Audits: Community-driven security reviews
If you discover a security vulnerability, please:
- Do not create a public GitHub issue
- Email security@preamble.com with details
- Include reproduction steps and impact assessment
- Allow reasonable time for response before public disclosure
prompt-injector/
βββ assets/ # Brand assets and images
βββ docs/ # Documentation files
βββ public/
β βββ assets/
β βββ payloads/ # Attack payload definitions
βββ src/
β βββ components/ # React UI components
β βββ pages/ # Application pages/routes
β βββ services/ # Core business logic
β βββ main/ # Electron main process
β βββ types/ # TypeScript type definitions
β βββ __tests__/ # Test files
βββ scripts/ # Build and utility scripts
βββ package.json # Project configuration
βββ README.md # This file
# Development
npm run dev # Start development environment
npm run dev:renderer # Start Vite dev server only
npm run dev:main # Start Electron main process only
# Building
npm run build # Build for production
npm run build:renderer # Build React application
npm run build:main # Build Electron main process
# Distribution
npm run dist # Create installable packages
npm run dist:mac # Build for macOS
npm run dist:win # Build for Windows
npm run dist:linux # Build for Linux
# Testing & Quality
npm test # Run test suite
npm run test:watch # Run tests in watch mode
npm run lint # Run ESLint
npm run lint:fix # Fix linting issues
npm run type-check # TypeScript type checking
Development Phase: Phase 1 (Foundation & Core Infrastructure) - β Complete Next Phase: Phase 2 (Advanced Detection & AI Integration) - π§ In Progress Current Focus: Complete AI model integration and advanced detection capabilities
- β Modern, professional UI with improved user experience
- β Comprehensive model configuration with default payload model selection
- β Enhanced attack payload library with standardized schema
- β Real-time test execution and results visualization
- β Multi-provider AI model support infrastructure
- π― Complete AI model API integration
- π― Implement machine learning-based detection
- π― Add agent framework testing capabilities
- π― Launch community plugin system
- GitHub Issues: Bug reports and feature requests
- GitHub Discussions: Questions and community discussions
- Documentation: Comprehensive guides and examples
- Contributing: Join our development community
We recognize contributors through:
- Contributors List: Featured in README and releases
- Community Spotlight: Highlighted in project updates
- Swag Program: Rewards for significant contributions
- Mentorship: Guidance for new contributors
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Special thanks to:
- The AI security research community for their invaluable insights
- Contributors who have helped improve the platform
- Academic institutions supporting AI security research
- Open source projects that make this work possible
Prompt Injector - Advanced AI Security Testing Platform by Preamble, Inc.
Building the future of AI security, one test at a time.
If you see the error "Prompt Injector.app is damaged and canβt be opened. You should move it to the Trash." when opening the DMG on macOS, follow these steps:
-
Right-click Method (Recommended):
- Right-click (or Control-click) on
Prompt Injector.app
in your Applications folder. - Select "Open" from the context menu.
- In the dialog, click "Open" again. The app should now launch and be trusted for future use.
- Right-click (or Control-click) on
-
Remove Quarantine Attribute (Terminal):
- Open Terminal (Applications β Utilities β Terminal)
- Run:
xattr -d com.apple.quarantine "/Applications/Prompt Injector.app"
- Try opening the app again.
-
System Settings Override:
- Ventura/Sonoma: Go to System Settings β Privacy & Security β Developer Tools. Add the app.
- Monterey/Big Sur/Catalina: Go to System Preferences β Security & Privacy. Click the lock, then "Open Anyway" if it appears.
These steps are required for unsigned, open-source Electron apps distributed outside the Mac App Store. This is a security warning, not a real corruption.