Skip to content

preambleai/prompt-injector

Repository files navigation

Prompt Injector

πŸ“₯ Download

Download for macOS Download for Windows

Linux Users: Pre-built Linux packages are not currently provided in releases. See the Linux Build Instructions below to build your own package.

License: Apache 2.0 Contributions Welcome

Copyright (c) 2025 Preamble, Inc. All rights reserved.

An AI security testing platform for detecting and mitigating prompt injection vulnerabilities in AI agent solutions. This open-source desktop application provides comprehensive vulnerability assessment and advanced attack simulation capabilities for AI security researchers and developers.

🎯 About Prompt Injector

Prompt Injector is an open-source desktop application for AI security testing, designed specifically for security researchers, penetration testers, and AI developers. The platform provides comprehensive prompt injection detection, jailbreak testing, and vulnerability assessment capabilities for popular LLMs.

πŸš€ Key Features

βœ… Currently Implemented

  • Advanced Attack Engine: Core testing engine with 100+ OWASP LLM01-LLM10 attack payloads
  • Professional Desktop UI: Modern Electron-based application with React frontend
  • Real-time Testing: Create and execute security tests with live results
  • Comprehensive Payload Library: Browse and select from categorized attack payloads
  • Detailed Results Analysis: View comprehensive test results with success/failure indicators
  • Multi-Provider Support: Configure multiple AI model providers (OpenAI, Anthropic, Google, Ollama)
  • Payload Creation Model: Set default models specifically for payload generation and mutation

🚧 In Development

  • Complete AI Model Integration: Full integration with all major AI providers
  • Advanced ML Detection: Machine learning-based attack detection (Semantic Guardian)
  • Agent Framework Testing: Deep integration with LangChain, AutoGen, CrewAI, and other frameworks
  • MCP Testing Environment: Model Context Protocol testing capabilities
  • Red Team Campaign Orchestration: Advanced campaign management and automation

πŸ“‹ Planned Features

  • Adaptive Payload Generation: AI-driven attack payload creation and mutation
  • Advanced Analytics: Comprehensive security reporting and analysis
  • Plugin System: Extensible architecture for community contributions
  • Research Integration: Academic benchmark integration and research paper implementations
  • Community Features: Payload sharing, template library, and collaborative testing

Note: For detailed implementation status of all features, see the Product Requirements Document.

πŸ› οΈ Technology Stack

  • Frontend: React 18 + TypeScript + Tailwind CSS
  • Desktop Framework: Electron 25
  • Backend: Node.js with Express
  • Build System: Vite + TypeScript
  • UI Components: Lucide React Icons, React Hot Toast
  • AI Integration: OpenAI, Anthropic, Google Gemini, Ollama APIs
  • Database: SQLite for local storage
  • Security: JWT authentication, bcrypt encryption, helmet security headers

πŸš€ Quick Start

AI Provider Setup

Choose your AI provider before running security tests:

Option A: Local Models (Free) - Ollama

  1. Install Ollama: Download from ollama.com
  2. Start Ollama service: ollama serve
  3. Install a model: ollama pull phi4:latest (or llama2, mistral, etc.)
  4. Configure in app: No API key required - works automatically

Option B: Cloud APIs (Paid)

Get API keys from one or more providers:

Configure API keys in the app's Settings page after installation.

Installation

Option 1: Download Pre-built Binaries (Recommended)

Option 2: Build from Source

  1. Clone the repository

    git clone https://github.com/preambleai/prompt-injector.git
    cd prompt-injector
  2. Install dependencies

    npm install
  3. Start development server

    npm run dev
  4. Build for distribution

    # Build for all platforms
    npm run dist:all
    
    # Or build for specific platform
    npm run dist:mac     # macOS
    npm run dist:win     # Windows

🐧 Linux Build Instructions

Since pre-built Linux packages are not currently provided in releases due to system dependency variations, Linux users can build their own packages:

  1. Install system dependencies (Ubuntu/Debian):

    sudo apt-get update
    sudo apt-get install -y libgtk-3-dev libwebkit2gtk-4.0-dev \
      libayatana-appindicator3-dev librsvg2-dev libdrm-dev \
      libxss1 libgconf-2-4 libxrandr2 libasound2-dev
  2. Follow the "Build from Source" steps above, then:

    # Build Linux packages
    npm run dist:linux
  3. Find your packages in the dist-electron/ directory:

    • *.AppImage - Portable Linux application
    • *.deb - Debian/Ubuntu package

Note: For other Linux distributions, install the equivalent packages for your package manager.

Automated Builds with GitHub Actions

This project uses GitHub Actions to automatically build installers for all platforms:

  • πŸš€ Automatic Release Builds: When you create a new release on GitHub, installers are automatically built and attached to the release
  • πŸ§ͺ Manual Test Builds: You can manually trigger builds from the Actions tab to test specific platforms
  • 🌐 Multi-platform Support: Builds simultaneously for macOS, Windows, and Linux

Creating a Release

  1. Push your changes to the main branch
  2. Go to the GitHub repository
  3. Click "Releases" β†’ "Create a new release"
  4. Tag the release (e.g., v1.0.0)
  5. Add release notes
  6. Click "Publish release"

GitHub Actions will automatically build installers for all platforms and attach them to the release within ~10-15 minutes.

Manual Build Testing

From the GitHub repository:

  1. Go to the "Actions" tab
  2. Select "Manual Build Test"
  3. Click "Run workflow"
  4. Choose the platform to build for (or "all" for all platforms)
  5. Download the artifacts when the build completes

🎯 Usage Guide

Initial Setup

  1. Configure AI Models: Navigate to Settings and add your AI model providers
  2. Set Default Payload Model: Choose which model to use for payload generation by clicking the star icon
  3. Verify Connection: Test your model connections to ensure proper setup

Running Security Tests

  1. Create Test Campaign: Use the Testing interface to create a new security test
  2. Select Attack Payloads: Choose from OWASP LLM01-LLM10 categories or create custom payloads
  3. Execute Tests: Run tests against your configured AI models
  4. Analyze Results: Review detailed results with success/failure indicators and security recommendations

Attack Categories

Our comprehensive payload library includes:

  • OWASP LLM01 - Prompt Injection: System prompt extraction and role confusion attacks
  • OWASP LLM02 - Insecure Output Handling: Output manipulation and injection techniques
  • OWASP LLM03 - Training Data Poisoning: Training data extraction and poisoning attempts
  • OWASP LLM04 - Model Denial of Service: Resource exhaustion and performance degradation
  • OWASP LLM05 - Supply Chain Vulnerabilities: Third-party integration and dependency attacks
  • OWASP LLM06 - Sensitive Information Disclosure: Data extraction and privacy violations
  • OWASP LLM07 - Insecure Plugin Design: Plugin and extension security testing
  • OWASP LLM08 - Excessive Agency: Permission escalation and unauthorized actions
  • OWASP LLM09 - Overreliance: Trust manipulation and decision-making attacks
  • OWASP LLM10 - Model Theft: Model extraction and intellectual property theft

πŸ“¦ Payload Schema

All attack payloads follow a standardized schema to ensure consistency and extensibility:

interface AttackPayload {
  id: string
  name: string
  nameUrl?: string
  description: string
  category?: string
  payload: string
  tags: string[]
  source: string
  severity?: string
  owasp?: string[]           // OWASP LLM categories
  mitreAtlas?: string[]      // MITRE ATLAS framework
  aiSystem?: string[]        // AI system components
  technique?: string
  successRate?: number
  bypassMethods?: string[]
  successIndicators?: string[]  // Success detection keywords
  failureIndicators?: string[]  // Failure detection keywords
  // ... additional metadata fields
}

Contributing Payloads: All payloads are stored in public/assets/payloads/all-attack-payloads.json and must conform to this schema.

πŸ—ΊοΈ Development Roadmap

Phase 1: Foundation & Core Infrastructure βœ…

  • Core attack engine implementation
  • Desktop application framework
  • Professional UI with modern design
  • Comprehensive payload library (100+ payloads)
  • Basic model configuration and management
  • Real-time test execution and results

Phase 2: Advanced Detection & AI Integration 🚧

  • Complete AI model API integration
  • Machine learning-based detection (Semantic Guardian)
  • Advanced payload mutation and generation
  • Performance optimization (<15 second response times)
  • Enhanced analytics and reporting

Phase 3: Agent Framework & MCP Testing πŸ“‹

  • LangChain, AutoGen, CrewAI framework integration
  • Model Context Protocol (MCP) testing environment
  • Multi-agent system testing capabilities
  • Real-time monitoring and instrumentation
  • Advanced campaign orchestration

Phase 4: Research & Innovation πŸ”¬

  • Academic benchmark integration (INJECAGENT, AdvBench)
  • Research paper reproduction capabilities
  • Community plugin system
  • Advanced local ML capabilities
  • Industry standard contributions

For detailed roadmap information, see ROADMAP.md.

🀝 Contributing

We welcome contributions from the AI security community! Whether you're interested in adding new attack payloads, improving detection algorithms, or enhancing the user experience, there are many ways to contribute.

Quick Contribution Guide

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature-name
  3. Make your changes and add tests
  4. Follow our coding standards: Run npm run lint and npm test
  5. Commit with conventional format: git commit -m 'feat: add new attack payload'
  6. Push and create a Pull Request

Areas for Contribution

  • 🎯 Attack Payloads: New prompt injection techniques and zero-day attacks
  • πŸ€– AI Model Integration: Support for new AI providers and models
  • πŸ”§ Agent Framework Support: Integration with additional agent frameworks
  • 🎨 UI/UX Improvements: Enhanced user interface and experience
  • πŸ“š Documentation: Improved guides, examples, and API documentation
  • πŸ§ͺ Testing: Test coverage and quality improvements
  • πŸ”’ Security: Security audits and vulnerability assessments

See our Contributing Guidelines for detailed information.

πŸ“š Documentation

Core Documentation

Development Resources

  • API Documentation: Generated from code comments
  • Code Examples: Located in /examples directory
  • Testing Guide: Unit and integration testing best practices
  • Security Guidelines: Security-focused development practices

πŸ”’ Security & Privacy

Security Features

  • Local-First Architecture: All data processing happens locally
  • Encrypted Storage: AES-256 encryption for sensitive data
  • No Telemetry: No data collection without explicit consent
  • Secure Communication: HTTPS for all external API calls
  • Regular Security Audits: Community-driven security reviews

Reporting Security Issues

If you discover a security vulnerability, please:

  1. Do not create a public GitHub issue
  2. Email security@preamble.com with details
  3. Include reproduction steps and impact assessment
  4. Allow reasonable time for response before public disclosure

πŸ—οΈ Project Structure

prompt-injector/
β”œβ”€β”€ assets/                    # Brand assets and images
β”œβ”€β”€ docs/                      # Documentation files
β”œβ”€β”€ public/
β”‚   └── assets/
β”‚       └── payloads/          # Attack payload definitions
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ components/            # React UI components
β”‚   β”œβ”€β”€ pages/                 # Application pages/routes
β”‚   β”œβ”€β”€ services/              # Core business logic
β”‚   β”œβ”€β”€ main/                  # Electron main process
β”‚   β”œβ”€β”€ types/                 # TypeScript type definitions
β”‚   └── __tests__/             # Test files
β”œβ”€β”€ scripts/                   # Build and utility scripts
β”œβ”€β”€ package.json               # Project configuration
└── README.md                  # This file

πŸ”§ Development Commands

# Development
npm run dev                    # Start development environment
npm run dev:renderer          # Start Vite dev server only
npm run dev:main              # Start Electron main process only

# Building
npm run build                 # Build for production
npm run build:renderer        # Build React application
npm run build:main            # Build Electron main process

# Distribution
npm run dist                  # Create installable packages
npm run dist:mac              # Build for macOS
npm run dist:win              # Build for Windows
npm run dist:linux            # Build for Linux

# Testing & Quality
npm test                      # Run test suite
npm run test:watch            # Run tests in watch mode
npm run lint                  # Run ESLint
npm run lint:fix              # Fix linting issues
npm run type-check            # TypeScript type checking

πŸ“Š Current Status

Development Phase: Phase 1 (Foundation & Core Infrastructure) - βœ… Complete Next Phase: Phase 2 (Advanced Detection & AI Integration) - 🚧 In Progress Current Focus: Complete AI model integration and advanced detection capabilities

Recent Accomplishments

  • βœ… Modern, professional UI with improved user experience
  • βœ… Comprehensive model configuration with default payload model selection
  • βœ… Enhanced attack payload library with standardized schema
  • βœ… Real-time test execution and results visualization
  • βœ… Multi-provider AI model support infrastructure

Upcoming Milestones

  • 🎯 Complete AI model API integration
  • 🎯 Implement machine learning-based detection
  • 🎯 Add agent framework testing capabilities
  • 🎯 Launch community plugin system

🌟 Community & Support

Getting Help

  • GitHub Issues: Bug reports and feature requests
  • GitHub Discussions: Questions and community discussions
  • Documentation: Comprehensive guides and examples
  • Contributing: Join our development community

Community Recognition

We recognize contributors through:

  • Contributors List: Featured in README and releases
  • Community Spotlight: Highlighted in project updates
  • Swag Program: Rewards for significant contributions
  • Mentorship: Guidance for new contributors

πŸ“„ License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

πŸ™ Acknowledgments

Special thanks to:

  • The AI security research community for their invaluable insights
  • Contributors who have helped improve the platform
  • Academic institutions supporting AI security research
  • Open source projects that make this work possible

Prompt Injector - Advanced AI Security Testing Platform by Preamble, Inc.

Building the future of AI security, one test at a time.

πŸ›‘οΈ macOS: Troubleshooting "App is Damaged and Can't Be Opened"

If you see the error "Prompt Injector.app is damaged and can’t be opened. You should move it to the Trash." when opening the DMG on macOS, follow these steps:

  1. Right-click Method (Recommended):

    • Right-click (or Control-click) on Prompt Injector.app in your Applications folder.
    • Select "Open" from the context menu.
    • In the dialog, click "Open" again. The app should now launch and be trusted for future use.
  2. Remove Quarantine Attribute (Terminal):

    • Open Terminal (Applications β†’ Utilities β†’ Terminal)
    • Run:
      xattr -d com.apple.quarantine "/Applications/Prompt Injector.app"
    • Try opening the app again.
  3. System Settings Override:

    • Ventura/Sonoma: Go to System Settings β†’ Privacy & Security β†’ Developer Tools. Add the app.
    • Monterey/Big Sur/Catalina: Go to System Preferences β†’ Security & Privacy. Click the lock, then "Open Anyway" if it appears.

These steps are required for unsigned, open-source Electron apps distributed outside the Mac App Store. This is a security warning, not a real corruption.


Packages

No packages published

Languages