Demonstration of an RPC-based messaging system using RabbitMQ, Node.js, Redis, and an Express.js HTTP server. It supports multi-core scalability via the Node.js cluster module and uses Redis to manage pending request state across distributed workers.
```
+-------------+      +---------------------+      +--------------+
| HTTP Client | ---> | HTTP Server (Node)  | ---> |   RabbitMQ   |
|             |      |  (Cluster Workers)  |      |  (RPC Queue) |
+-------------+      +---------------------+      +--------------+
                                |
                                v
                          +------------+
                          |   Redis    |
                          |  (Request  |
                          |  Tracking) |
                          +------------+
```
- Asynchronous RPC Pattern: Processes messages via RabbitMQ and responds to HTTP requests (see the sketch after this list).
- Redis Integration: Centralized state management using Redis for pending request tracking.
- Cluster Mode: Leverages all available CPU cores for concurrent HTTP request handling.
- Request-Response Handling: Correlates each RabbitMQ response with the HTTP request that triggered it, with fault tolerance across workers.
- Scalable Design: Supports multiple RabbitMQ consumers and concurrent HTTP requests across multiple workers.
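The HTTP side of the request path could look roughly like the sketch below. This is illustrative only, not the repo's actual `http_server.js`: it assumes `amqplib`, the `node-redis` (v4) client, a request queue named `rpc_queue`, and one Redis key per correlation ID. Note that the live Express `res` handle can only stay in the worker that received the request; Redis holds the shared pending-state entry.

```js
// Illustrative sketch only -- not the repo's actual http_server.js.
// Assumes `amqplib` and `node-redis` v4; queue and key names are placeholders.
const express = require('express');
const amqp = require('amqplib');
const { createClient } = require('redis');
const { randomUUID } = require('crypto');

async function main() {
  const redis = createClient({ url: 'redis://localhost:6379' });
  await redis.connect();

  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('rpc_queue', { durable: false });

  // Exclusive, server-named queue where replies for this worker arrive.
  const { queue: replyQueue } = await ch.assertQueue('', { exclusive: true });

  // The live Express `res` handle must stay in this worker's memory;
  // Redis holds the shared "request is pending" entry (with a TTL).
  const pending = new Map(); // correlationId -> res

  ch.consume(replyQueue, async (msg) => {
    if (!msg) return;
    const id = msg.properties.correlationId;
    const res = pending.get(id);
    if (res) {
      pending.delete(id);
      await redis.del(id); // drop the shared tracking entry
      res.json(JSON.parse(msg.content.toString()));
    }
  }, { noAck: true });

  const app = express();
  app.use(express.json());

  app.post('/process', async (req, res) => {
    const correlationId = randomUUID();
    pending.set(correlationId, res);
    await redis.set(correlationId, 'pending', { EX: 30 }); // auto-expire stale entries
    ch.sendToQueue('rpc_queue', Buffer.from(JSON.stringify(req.body)), {
      correlationId,
      replyTo: replyQueue,
    });
  });

  app.listen(3000);
}

main().catch(console.error);
```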
- Docker and Docker Compose installed.
- Node.js (v14 or later) and npm installed.
- Clone the repository:

  ```
  git clone <repository-url>
  cd <repository-folder>
  ```

- Start RabbitMQ and Redis using Docker Compose:

  ```
  make docker-up
  ```

- Install Node.js dependencies:

  ```
  npm install
  ```

- Create a `.env` file for environment variables:

  ```
  cp .env.example .env
  ```
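The exact keys live in `.env.example`; the variable names below are hypothetical placeholders, shown only to illustrate the kind of configuration involved:

```
# Hypothetical keys -- see .env.example for the ones the project actually uses
RABBITMQ_URL=amqp://localhost:5672
REDIS_URL=redis://localhost:6379
HTTP_PORT=3000
```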
RabbitMQ management UI:

- URL: http://localhost:15672
- Username: `guest`
- Password: `guest`
To interact with Redis:

```
docker exec -it redis redis-cli
```

Example commands:

```
127.0.0.1:6379> keys *
127.0.0.1:6379> get <correlation_id>
```
- Start RPC Server:

  ```
  make start-rpc
  ```

- Start HTTP Server in Cluster Mode: the server utilizes all CPU cores:

  ```
  make start-http
  ```
Send a test HTTP request to the `/process` endpoint:

```
curl -X POST -H "Content-Type: application/json" -d '{"input": "foo"}' http://localhost:3000/process
```

Expected response:

```json
{
  "result": "foo bar"
}
```
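For context, the consumer side of this exchange follows the standard RabbitMQ RPC pattern: read the message, compute a result, and publish the reply to the queue named in `replyTo`, echoing the `correlationId`. Below is a minimal sketch assuming `amqplib` and the `rpc_queue` name; the appended `" bar"` mirrors the example above, but the repo's real `rpc_server.js` may do more:

```js
// Illustrative consumer -- the repo's rpc_server.js may differ.
// Assumes `amqplib` and the request queue name `rpc_queue`.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('rpc_queue', { durable: false });

  ch.consume('rpc_queue', (msg) => {
    if (!msg) return;
    const { input } = JSON.parse(msg.content.toString());
    const result = `${input} bar`; // "foo" -> "foo bar", as in the curl example

    // Publish the reply to the queue named in `replyTo`, echoing the
    // correlationId so the HTTP worker can match it to the waiting request.
    ch.sendToQueue(
      msg.properties.replyTo,
      Buffer.from(JSON.stringify({ result })),
      { correlationId: msg.properties.correlationId }
    );
    ch.ack(msg);
  });
}

main().catch(console.error);
```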
Run the load test script to benchmark the system:

```
node load_test.js
```

Example output:

```
Starting load test...
Load test completed.
========== RESULTS ==========
Requests per second: 500
Average latency (ms): 25
Failed requests: 0
=============================
```
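`load_test.js` ships with the repository; the sketch below is not its actual source, just one way such a benchmark could be written with Node's built-in `http` module (the request count and concurrency are illustrative):

```js
// Illustrative benchmark -- not the repo's load_test.js.
const http = require('http');

const TOTAL = 1000;     // total requests to send (illustrative)
const CONCURRENCY = 50; // requests in flight per batch (illustrative)
const body = JSON.stringify({ input: 'foo' });

function fire() {
  return new Promise((resolve) => {
    const start = Date.now();
    const req = http.request({
      host: 'localhost', port: 3000, path: '/process', method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    }, (res) => {
      res.resume(); // drain the response body
      res.on('end', () => resolve({ ok: res.statusCode === 200, ms: Date.now() - start }));
    });
    req.on('error', () => resolve({ ok: false, ms: Date.now() - start }));
    req.end(body);
  });
}

async function main() {
  console.log('Starting load test...');
  const results = [];
  const t0 = Date.now();
  for (let i = 0; i < TOTAL; i += CONCURRENCY) {
    const batch = Array.from({ length: Math.min(CONCURRENCY, TOTAL - i) }, fire);
    results.push(...await Promise.all(batch));
  }
  const elapsed = (Date.now() - t0) / 1000;
  const failed = results.filter((r) => !r.ok).length;
  const avg = results.reduce((s, r) => s + r.ms, 0) / results.length;
  console.log('Load test completed.');
  console.log(`Requests per second: ${(TOTAL / elapsed).toFixed(0)}`);
  console.log(`Average latency (ms): ${avg.toFixed(1)}`);
  console.log(`Failed requests: ${failed}`);
}

main();
```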
Stop all services and clean up resources:

```
make clean
```
Redis integration:

- Replaces the in-memory `Map` with Redis for tracking pending requests.
- Ensures state persistence and fault tolerance, even in distributed setups.
- Uses a TTL (time-to-live) of 30 seconds for automatic cleanup of stale requests (see the sketch after this list).
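A minimal sketch of what the Redis-backed tracking helpers could look like, assuming the `node-redis` (v4) client; the key layout and function names are placeholders, not the repo's actual API:

```js
// Illustrative Redis-backed tracking helpers (node-redis v4); names and key
// layout are placeholders, not the repo's actual API.
const { createClient } = require('redis');

const redis = createClient({ url: 'redis://localhost:6379' });
// Call `await redis.connect()` once at startup before using these helpers.

// Record a pending request under its correlation ID. The 30-second TTL gives
// the automatic cleanup described above: if no reply arrives in time, Redis
// expires the key on its own.
async function trackRequest(correlationId, meta) {
  await redis.set(correlationId, JSON.stringify(meta), { EX: 30 });
}

// Claim a pending request when its reply arrives; returns null if the entry
// already expired. (A production version might use GETDEL for atomicity.)
async function resolveRequest(correlationId) {
  const raw = await redis.get(correlationId);
  if (raw === null) return null;
  await redis.del(correlationId);
  return JSON.parse(raw);
}

module.exports = { redis, trackRequest, resolveRequest };
```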
Cluster mode:

- Uses the Node.js cluster module to utilize all available CPU cores.
- Automatically restarts workers in case of failure.
- Ensures high availability and scalability for HTTP request handling.
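A minimal sketch of this supervision loop, using Node's built-in `cluster` module; the `./http_server` module path in the worker branch is hypothetical:

```js
// Illustrative supervision loop using Node's built-in cluster module;
// './http_server' is a hypothetical module path.
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(() => cluster.fork());

  // Restart any worker that dies so HTTP handling stays available.
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} exited (${signal || code}); restarting...`);
    cluster.fork();
  });
} else {
  // Each worker runs its own HTTP server; the cluster module shares port 3000.
  require('./http_server');
}
```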
- Increase RabbitMQ Consumers: Run multiple `rpc_server.js` instances to scale message processing (see the prefetch sketch after this list).
- Load Balancing: Use a load balancer (e.g., NGINX) in front of the HTTP servers to handle increased traffic.
- Monitoring:
  - Monitor RabbitMQ queues using the management UI.
  - Monitor Redis key usage and expiration using `redis-cli`.
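On the first point: when several consumers share `rpc_queue`, RabbitMQ round-robins deliveries among them, and a prefetch of 1 keeps dispatch fair by never sending a consumer a new message until it acks the previous one. A sketch of the relevant setting, assuming `amqplib`:

```js
// Fair dispatch across multiple rpc_server.js instances: with prefetch(1),
// RabbitMQ will not deliver a new message to a consumer until the previous
// one is acked, so extra instances share the backlog evenly.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('rpc_queue', { durable: false });
  ch.prefetch(1); // at most one unacked message per consumer

  ch.consume('rpc_queue', (msg) => {
    if (!msg) return;
    // ...process and reply as in the RPC server sketch above...
    ch.ack(msg);
  });
}

main().catch(console.error);
```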