A general-purpose, customizable WhatsApp AI chatbot in Python that can understand text, audio and images, and reply to your clients about anything related to your business directly on WhatsApp. Powered by OpenAI GPT-4o (other models can be used too) and the Wassenger WhatsApp API.

Now supports GPT-4o with text + audio + image input, audio responses, and improved RAG with MCP tools and external functions for calling third-party APIs.
Find other AI Chatbot implementations in Node.js and PHP
Get started for free with the Wassenger WhatsApp API in minutes by connecting your existing WhatsApp number and obtaining your API key.
- Fully featured chatbot for your WhatsApp number connected to Wassenger
- Automatic replies to incoming messages from users
- Multi-language support: understands and replies in 90+ languages
- Audio input/output: transcription and text-to-speech
- Image processing: can analyze and understand images
- Human handoff: lets users request human assistance
- Customizable AI behavior and instructions
- Function calling for external data integration
- Memory management with conversation history and rate limiting
- Smart routing with webhook handling and error management
- Secure, with proper error handling and logging
- Features
- Quick Start
- Requirements
- Configuration
- Usage
- Deployment
- Architecture
- Testing
- Development
- Customization
- API Endpoints
- Troubleshooting
- Resources
- Contributing
- License
- Clone the repository:

  ```bash
  git clone https://github.com/wassengerhq/whatsapp-chatgpt-bot-python.git
  cd whatsapp-chatgpt-bot-python
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  # Edit the .env file with your API keys (see Configuration section)
  ```

- Run the bot (development mode):

  ```bash
  uvicorn src.main:app --reload --port 8080
  # Or use the provided scripts for an Ngrok tunnel
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/wassengerhq/whatsapp-chatgpt-bot-python.git
  cd whatsapp-chatgpt-bot-python
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  # Edit the .env file with your API keys (see Configuration section)
  ```

- Run with Docker Compose:

  ```bash
  # Production mode
  docker-compose up chatbot

  # Development mode with hot reloading
  docker-compose --profile development up chatbot-dev
  ```
For local development and testing using Docker:
- Build the Docker image locally:

  ```bash
  # Build production image
  docker build -t whatsapp-chatbot-local:latest .

  # Or build development image with debugging tools
  docker build --target development -t whatsapp-chatbot-local:dev .
  ```
- Run for local testing:

  ```bash
  # Run production build locally
  docker run -d \
    --name whatsapp-chatbot-test \
    -p 8080:8080 \
    --env-file .env \
    whatsapp-chatbot-local:latest

  # Run development build with volume mounting for live code changes
  docker run -d \
    --name whatsapp-chatbot-dev-test \
    -p 8080:8080 \
    --env-file .env \
    -v $(pwd)/src:/app/src:ro \
    whatsapp-chatbot-local:dev
  ```
- Test the local container:

  ```bash
  # Check if the container is running
  docker ps | grep whatsapp-chatbot

  # View logs
  docker logs whatsapp-chatbot-test

  # Test the API endpoint
  curl http://localhost:8080/

  # Stop and remove when done
  docker stop whatsapp-chatbot-test
  docker rm whatsapp-chatbot-test
  ```
- Python 3.9 or higher
- WhatsApp Personal or Business number
- Wassenger API key - Sign up for free
- OpenAI API key - Sign up for free
- Ngrok account (for local development) - Sign up for free
Edit the `.env` file with your API credentials:
```bash
# Required: Wassenger API key
API_KEY=your_wassenger_api_key_here

# Required: OpenAI API key
OPENAI_API_KEY=your_openai_api_key_here

# OpenAI model to use (gpt-4o, gpt-4, gpt-3.5-turbo)
OPENAI_MODEL=gpt-4o

# Required for local development: Ngrok auth token
NGROK_TOKEN=your_ngrok_token_here

# Optional: Specific WhatsApp device ID
DEVICE=

# Optional: Webhook URL for production deployment
WEBHOOK_URL=https://yourdomain.com/webhook

# Server configuration
PORT=8080
LOG_LEVEL=info
```
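The variables above can be loaded and validated once at startup so misconfiguration fails fast instead of surfacing mid-conversation. A minimal sketch using only the standard library (the function name `load_config` and the returned dict shape are illustrative, not the project's actual loader):

```python
import os

# Keys the bot cannot run without, per the .env sample above
REQUIRED = ("API_KEY", "OPENAI_API_KEY")

def load_config() -> dict:
    """Read bot settings from the environment, failing fast on missing keys."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return {
        "api_key": os.environ["API_KEY"],
        "openai_api_key": os.environ["OPENAI_API_KEY"],
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o"),
        "device": os.environ.get("DEVICE", ""),
        "port": int(os.environ.get("PORT", "8080")),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
    }
```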
- Wassenger API Key
- OpenAI API Key
- Ngrok Token (for local development):
  - Sign up at Ngrok
  - Get your auth token from the dashboard
  - Copy it to `NGROK_TOKEN` in `.env`
Edit `src/config/bot_config.py` to customize:
- Bot instructions and personality
- Welcome and help messages
- Supported features (audio, images, etc.)
- Rate limits and quotas
- Whitelisted/blacklisted numbers
- Labels and metadata settings
- Start the development server:

  ```bash
  uvicorn src.main:app --reload --port 8080
  ```

- The bot will:
  - Start a local HTTP server on port 8080
  - Optionally create an Ngrok tunnel automatically
  - Register the webhook with Wassenger
  - Begin processing WhatsApp messages

- Send a message to your WhatsApp number connected to Wassenger to test the bot.
- Set environment variables on your server:

  ```bash
  export WEBHOOK_URL=https://yourdomain.com/webhook
  export API_KEY=your_wassenger_api_key
  export OPENAI_API_KEY=your_openai_api_key
  ```

- Deploy to your web server (e.g., Gunicorn/Uvicorn, Docker, or a cloud platform)
- Make sure your server can receive POST requests at `/webhook`
The project includes a multi-stage Dockerfile optimized for both development and production environments.
- Build and run with Docker Compose (easiest):

  ```bash
  # Production deployment
  docker-compose up -d chatbot

  # Development with hot reloading
  docker-compose --profile development up chatbot-dev
  ```

- Build the Docker image:

  ```bash
  # Production build
  docker build -t whatsapp-chatbot:latest .

  # Development build
  docker build --target development -t whatsapp-chatbot:dev .
  ```

- Run the container:

  ```bash
  # Production mode
  docker run -d \
    --name whatsapp-chatbot \
    -p 8080:8080 \
    --env-file .env \
    whatsapp-chatbot:latest

  # Development mode with volume mounting
  docker run -d \
    --name whatsapp-chatbot-dev \
    -p 8080:8080 \
    --env-file .env \
    -v $(pwd)/src:/app/src:ro \
    whatsapp-chatbot:dev
  ```
When deploying with Docker, ensure these environment variables are set in your `.env` file:
```bash
# Required
API_KEY=your_wassenger_api_key_here
OPENAI_API_KEY=your_openai_api_key_here

# Production webhook URL (required for production)
WEBHOOK_URL=https://yourdomain.com/webhook

# Optional
OPENAI_MODEL=gpt-4o
DEVICE=
PORT=8080
LOG_LEVEL=info
```
Deploy to any cloud platform that supports Docker:
Docker Hub:

```bash
# Build and tag for your registry
docker build -t your-username/whatsapp-chatbot:latest .
docker push your-username/whatsapp-chatbot:latest
```

Heroku:

```bash
# Using the Heroku Container Registry
heroku container:push web -a your-app-name
heroku container:release web -a your-app-name
```

Google Cloud Run:

```bash
# Build and deploy
gcloud builds submit --tag gcr.io/PROJECT-ID/whatsapp-chatbot
gcloud run deploy --image gcr.io/PROJECT-ID/whatsapp-chatbot --platform managed
```
Render:

```yaml
# Create render.yaml in the project root
services:
  - type: web
    name: whatsapp-chatbot
    env: docker
    dockerfilePath: ./Dockerfile
    envVars:
      - key: API_KEY
        value: your_wassenger_api_key_here
      - key: OPENAI_API_KEY
        value: your_openai_api_key_here
      - key: WEBHOOK_URL
        value: https://your-app-name.onrender.com/webhook
```

Then deploy via the Render dashboard or CLI.
Railway:

```bash
# Install the Railway CLI
npm install -g @railway/cli

# Login and deploy
railway login
railway link
railway up

# Set environment variables in the Railway dashboard or via the CLI:
railway variables set API_KEY=your_wassenger_api_key_here
railway variables set OPENAI_API_KEY=your_openai_api_key_here
railway variables set WEBHOOK_URL=https://your-app.railway.app/webhook
```
Fly.io:

```bash
# Install flyctl and initialize
fly auth login
fly launch --no-deploy
```

Configure `fly.toml`:

```toml
[env]
  PORT = "8080"

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    force_https = true
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443
```

```bash
# Set secrets and deploy
fly secrets set API_KEY=your_wassenger_api_key_here
fly secrets set OPENAI_API_KEY=your_openai_api_key_here
fly secrets set WEBHOOK_URL=https://your-app.fly.dev/webhook
fly deploy
```
You can also deploy this bot without Docker to any cloud platform that supports Python and FastAPI. Alternatively, a minimal single-stage Dockerfile is enough for simple deployments:

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8080"]
```
The Python implementation follows a modern, modular architecture:
```
src/
├── api/        # API clients (OpenAI, Wassenger)
├── bot/        # Core bot logic (ChatBot, FunctionHandler)
├── config/     # Configuration management
├── http/       # HTTP routing and webhook handling
├── storage/    # In-memory storage
├── utils/      # Utilities (logging, Ngrok)
└── main.py     # FastAPI app entrypoint
```
The project includes several test utilities to validate your setup:
- Configuration Test: Validate that all configuration files load correctly and dependencies are installed.
- API Connection Test: Test connectivity to Wassenger and OpenAI APIs with your configured keys.
- Webhook Test: Simulate a webhook request to test the message processing pipeline.
Example (using HTTPie):
```bash
http POST http://localhost:8080/webhook event=message:in:new data:='{"chat": {"id": "test", "fromNumber": "123", "type": "chat"}, "fromNumber": "123", "body": "Hello"}'
```
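The same webhook test can be driven from Python without HTTPie installed. A minimal sketch using only the standard library; the payload mirrors the example above, while the helper names (`build_test_event`, `send_test_event`) are illustrative:

```python
import json
import urllib.request

def build_test_event(body: str = "Hello") -> dict:
    """Build the same message:in:new payload as the HTTPie example."""
    return {
        "event": "message:in:new",
        "data": {
            "chat": {"id": "test", "fromNumber": "123", "type": "chat"},
            "fromNumber": "123",
            "body": body,
        },
    }

def send_test_event(base_url: str = "http://localhost:8080") -> int:
    """POST the test event to the local webhook and return the HTTP status."""
    req = urllib.request.Request(
        f"{base_url}/webhook",
        data=json.dumps(build_test_event()).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```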
```
chatgpt-python/
├── src/
│   ├── api/              # API integrations
│   ├── bot/              # Core bot logic
│   ├── config/           # Configuration
│   ├── http/             # HTTP handling
│   ├── storage/          # Data storage
│   ├── utils/            # Utility classes
│   └── main.py           # FastAPI app entry point
├── tests/                # Test utilities
├── .env.example          # Environment template
├── .dockerignore         # Docker ignore file
├── Dockerfile            # Multi-stage Docker build
├── docker-compose.yml    # Docker Compose configuration
├── requirements.txt      # Python dependencies
├── run.py                # Application startup script
└── README.md
```
- `ChatBot` - Main bot processing logic
- `OpenAIClient` - OpenAI API integration with chat, audio, and image support
- `WassengerClient` - Wassenger API integration for WhatsApp messaging
- `FunctionHandler` - AI function calling system
- `Router` - HTTP request routing and webhook handling
- `BotConfig` - Centralized configuration management
- `MemoryStore` - In-memory caching and conversation state management
- `AppLogger` - Logging system
- `NgrokTunnel` - Development tunneling
Edit the AI behavior in `src/config/bot_config.py`:

```python
BOT_INSTRUCTIONS = 'You are a helpful assistant...'
```
Add custom functions in `src/bot/function_handler.py`:

```python
def get_business_hours():
    return {
        'monday': '9:00 AM - 6:00 PM',
        'tuesday': '9:00 AM - 6:00 PM',
        # ... more days
    }
```
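For the model to be able to call such a function, it also needs a JSON schema describing it and a way to map the model's tool call back to the Python implementation. A minimal sketch using the OpenAI tools format (the registry and `dispatch` helper are illustrative; the project's `FunctionHandler` may wire this up differently):

```python
def get_business_hours():
    """Example custom function: static opening hours."""
    return {
        'monday': '9:00 AM - 6:00 PM',
        'tuesday': '9:00 AM - 6:00 PM',
    }

# JSON schema advertised to the model (OpenAI tools format)
BUSINESS_HOURS_TOOL = {
    "type": "function",
    "function": {
        "name": "get_business_hours",
        "description": "Return the business opening hours per weekday.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}

# Simple name -> callable registry for resolving tool calls
FUNCTIONS = {"get_business_hours": get_business_hours}

def dispatch(name: str, arguments: dict):
    """Resolve a tool call returned by the model to its Python implementation."""
    return FUNCTIONS[name](**arguments)
```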
Adjust limits in `src/config/bot_config.py`:

```python
LIMITS = {
    'maxInputCharacters': 1000,
    'maxOutputTokens': 1000,
    'chatHistoryLimit': 20,
    # ... more limits
}
```
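These limits are typically enforced before each model call: user input gets truncated to the character budget and older messages are dropped beyond the history window. A small sketch of how that enforcement could look (helper names are illustrative):

```python
LIMITS = {
    'maxInputCharacters': 1000,
    'chatHistoryLimit': 20,
}

def clamp_input(text: str) -> str:
    """Truncate user input to the configured character budget."""
    return text[:LIMITS['maxInputCharacters']]

def trim_history(history: list) -> list:
    """Keep only the most recent messages within the history window."""
    return history[-LIMITS['chatHistoryLimit']:]
```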
- `GET /` - Bot information and status
- `POST /webhook` - Webhook for incoming WhatsApp messages
- `POST /message` - Send message endpoint
- `GET /sample` - Send a sample message
- `GET /files/{id}` - Temporary file downloads
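A quick way to exercise the send-message endpoint from Python, using only the standard library. The request body shape here is an assumption for illustration; check the handlers in `src/http` for the actual field names:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # adjust to your deployment

def build_message_payload(phone: str, text: str) -> dict:
    """Assumed request body for POST /message; verify against src/http."""
    return {"phone": phone, "message": text}

def post_message(phone: str, text: str) -> bytes:
    """POST a message to the bot's /message endpoint and return the raw response."""
    req = urllib.request.Request(
        f"{BASE_URL}/message",
        data=json.dumps(build_message_payload(phone, text)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```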
- "No active WhatsApp numbers"
  - Verify your Wassenger API key
  - Check that you have a connected WhatsApp device in Wassenger

- "WhatsApp number is not online"
  - Ensure your WhatsApp device is connected and online in the Wassenger dashboard

- Webhook not receiving messages
  - Check that your webhook URL is accessible from the internet
  - Verify the Ngrok tunnel is running (development mode)
  - Check firewall settings

- OpenAI API errors
  - Verify your OpenAI API key is valid
  - Check that your OpenAI account has sufficient credits
  - Ensure the model name is correct
- Container fails to start:

  ```bash
  # Check container logs
  docker logs whatsapp-chatbot

  # Check if environment variables are set
  docker exec whatsapp-chatbot env | grep -E "(API_KEY|OPENAI_API_KEY)"
  ```

- Port already in use:

  ```bash
  # Use a different port
  docker run -p 8081:8080 whatsapp-chatbot:latest

  # Or find conflicting services
  docker ps | grep 8080
  ```

- Permission denied errors:

  ```bash
  # Check if files are accessible
  ls -la .env

  # Fix permissions if needed
  chmod 644 .env
  ```

- Container exits immediately:

  ```bash
  # Run interactively to debug
  docker run -it --env-file .env whatsapp-chatbot:latest /bin/bash

  # Check health status
  docker inspect whatsapp-chatbot | grep Health
  ```
Enable detailed logging by setting in `.env`:

```bash
LOG_LEVEL=debug
```
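Internally, mapping `LOG_LEVEL` onto Python's logging levels can be done in a few lines. A sketch using the standard `logging` module (the function name `configure_logging` and logger name are illustrative, not necessarily what `src/utils` uses):

```python
import logging
import os

def configure_logging() -> logging.Logger:
    """Map the LOG_LEVEL environment variable onto Python's logging levels."""
    name = os.environ.get("LOG_LEVEL", "info").upper()
    level = getattr(logging, name, logging.INFO)  # fall back to INFO on typos
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        force=True,  # reconfigure even if handlers already exist
    )
    return logging.getLogger("chatbot")
```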
Then check the logs in your console or configured log output.
- Wassenger Documentation
- OpenAI API Documentation
- FastAPI Documentation
- Pyngrok Documentation
- GitHub Issues
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License - see LICENSE file for details.
Built with ❤️ using Python and the Wassenger API.