# 💬 Scalable WebSocket Chat Application

Made with Turborepo, Node.js, Next.js, Redis, Apache Kafka, and PlanetScale.

A high-performance, scalable chat application built with modern technologies supporting 10,000+ concurrent users.

## 🏗️ Architecture

```mermaid
graph TB
    subgraph "Frontend Clients"
        C1[Client 1]
        C2[Client 2]
        C3[Client n...]
    end

    subgraph "Load Balancer"
        LB[NGINX]
    end

    subgraph "WebSocket Servers"
        WS1[WebSocket Server 1]
        WS2[WebSocket Server 2]
        WS3[WebSocket Server n...]
    end

    subgraph "Message Queue"
        K[Apache Kafka]
    end

    subgraph "Pub/Sub Layer"
        R1[Redis Pub/Sub]
    end

    subgraph "Database"
        DB[PlanetScale]
    end

    C1 & C2 & C3 --> LB
    LB --> WS1 & WS2 & WS3
    WS1 & WS2 & WS3 <--> R1
    WS1 & WS2 & WS3 <--> K
    K --> DB
```

### WebSocket Scaling Strategy

```mermaid
sequenceDiagram
    participant Client
    participant LoadBalancer
    participant WebSocketServer1
    participant WebSocketServer2
    participant RedisPubSub
    participant Kafka
    participant PlanetScale

    Client->>LoadBalancer: Connect via WebSocket
    LoadBalancer->>WebSocketServer1: Route to available server
    Client->>WebSocketServer1: Send message
    WebSocketServer1->>RedisPubSub: Publish message
    RedisPubSub->>WebSocketServer2: Broadcast to all servers
    WebSocketServer2->>Client: Deliver to other clients
    WebSocketServer1->>Kafka: Queue message for persistence
    Kafka->>PlanetScale: Store message in database
```
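
Taken together, the publish path in these diagrams can be sketched roughly as below. This is a minimal illustration, not the repo's actual code: the `ws`, `ioredis`, and `kafkajs` packages, the port, and the `chat-messages` channel/topic names are assumptions.

```typescript
// Sketch of the publish path: WebSocket in → Redis Pub/Sub + Kafka out.
import { WebSocketServer } from 'ws';
import Redis from 'ioredis';
import { Kafka } from 'kafkajs';

const pub = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const kafka = new Kafka({
  clientId: 'websocket-server',
  brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
});
const producer = kafka.producer();

async function main() {
  await producer.connect();
  const wss = new WebSocketServer({ port: 8080 });

  wss.on('connection', (socket) => {
    socket.on('message', async (raw) => {
      const message = raw.toString();
      // Fan out to every WebSocket server instance via Redis Pub/Sub (real-time path).
      await pub.publish('chat-messages', message);
      // Queue for persistence; a separate consumer writes it to PlanetScale (durable path).
      await producer.send({ topic: 'chat-messages', messages: [{ value: message }] });
    });
  });
}

main().catch(console.error);
```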

## 🚀 Features

- **Real-time Communication**: Low-latency messaging using the WebSocket protocol
- **High Scalability**: Supports 10,000+ concurrent users
- **Message Persistence**: Reliable message storage using PlanetScale
- **Load Balancing**: Efficient request distribution with NGINX
- **Microservices Architecture**: Built with Turborepo for better modularity
- **Message Queueing**: Kafka integration for reliable message processing
- **Pub/Sub System**: Redis for real-time message broadcasting
- **High Availability**: 99.9% uptime during peak traffic

## 🛠️ Technical Stack

### Frontend

- Next.js for server-side rendering
- TypeScript for type safety
- WebSocket client implementation
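
On the client, the WebSocket wrapper can be as small as the sketch below; the `wss://chat.example.com/ws` endpoint is a placeholder, and a production client would reconnect with exponential backoff rather than a fixed delay.

```typescript
// Minimal browser-side client sketch with a naive reconnect.
function connect(url: string, onMessage: (text: string) => void): WebSocket {
  const socket = new WebSocket(url);
  socket.addEventListener('message', (event) => onMessage(String(event.data)));
  // Reconnect after a fixed 1 s delay when the connection drops.
  socket.addEventListener('close', () => {
    setTimeout(() => connect(url, onMessage), 1000);
  });
  return socket;
}

const socket = connect('wss://chat.example.com/ws', (text) => console.log('received:', text));
// socket.send(JSON.stringify({ room: 'general', text: 'hello' }));
```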

### Backend

- Node.js microservices
- WebSocket server implementation
- Turborepo for monorepo management

### Infrastructure

- Redis Pub/Sub for message broadcasting
- Apache Kafka for message queuing
- PlanetScale for a scalable database
- NGINX for load balancing

## 📦 Project Structure

```text
├── apps/
│   ├── web/                 # Next.js frontend
│   └── websocket-server/    # Node.js WebSocket server
├── packages/
│   ├── shared/              # Shared utilities
│   ├── database/            # Database schemas
│   └── config/              # Configuration
└── turbo.json               # Turborepo configuration
```

## 🚦 Getting Started

1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/scalable-chat-app.git
   cd scalable-chat-app
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Set up environment variables (see the config sketch after this list)

   ```bash
   cp .env.example .env
   ```

4. Start development servers

   ```bash
   npm run dev
   ```
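
The snippet below is a hypothetical sketch of the configuration the servers might read from `.env`; the variable names are assumptions, and the repo's `.env.example` is authoritative.

```typescript
// Hypothetical config loader; variable names are assumptions, not the repo's actual keys.
export const config = {
  redisUrl: process.env.REDIS_URL ?? 'redis://localhost:6379',
  kafkaBrokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
  databaseUrl: process.env.DATABASE_URL ?? 'mysql://root@localhost:3306/chat', // PlanetScale connection string
  wsPort: Number(process.env.WS_PORT ?? 8080),
};
```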

## 💻 Development

### Prerequisites

- Node.js 18+
- Redis
- Apache Kafka
- PlanetScale account

### Running Locally

1. Start the Redis server

   ```bash
   redis-server
   ```

2. Start Kafka

   ```bash
   # Start ZooKeeper
   bin/zookeeper-server-start.sh config/zookeeper.properties

   # Start the Kafka broker
   bin/kafka-server-start.sh config/server.properties
   ```

3. Run the application

   ```bash
   turbo dev
   ```

## 🔄 Scalability Features

### WebSocket Scaling

- Multiple WebSocket servers handle client connections
- Redis Pub/Sub ensures message delivery across all servers
- NGINX load balancer distributes client connections
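
A minimal sketch of the fan-out side of this strategy, assuming the `ws` and `ioredis` packages and the hypothetical `chat-messages` channel: each server subscribes to Redis and relays every published message to the sockets it holds locally.

```typescript
// Sketch of the fan-out path: Redis Pub/Sub → local WebSocket clients.
import { WebSocketServer, WebSocket } from 'ws';
import Redis from 'ioredis';

const wss = new WebSocketServer({ port: 8080 });
const sub = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

sub.subscribe('chat-messages').catch(console.error);
sub.on('message', (_channel, payload) => {
  // Every server instance receives the publish and delivers it to its own clients.
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
});
```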

### Database Scaling

- PlanetScale handles database scaling automatically
- Kafka manages high-throughput message persistence
- Efficient connection pooling and query optimization
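
The persistence worker implied here might look like the sketch below; `kafkajs`, `mysql2`, the topic name, and the `messages` table schema are all assumptions rather than the repo's actual code.

```typescript
// Sketch of the persistence path: Kafka consumer → PlanetScale (MySQL) via a connection pool.
import { Kafka } from 'kafkajs';
import mysql from 'mysql2/promise';

const kafka = new Kafka({
  clientId: 'persistence-worker',
  brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
});
const consumer = kafka.consumer({ groupId: 'chat-persistence' });
// Pooled connections to the PlanetScale database.
const pool = mysql.createPool(process.env.DATABASE_URL ?? 'mysql://root@localhost:3306/chat');

async function main() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'chat-messages' });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const { roomId, senderId, text } = JSON.parse(message.value?.toString() ?? '{}');
      await pool.execute(
        'INSERT INTO messages (room_id, sender_id, body) VALUES (?, ?, ?)',
        [roomId, senderId, text],
      );
    },
  });
}

main().catch(console.error);
```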

### Performance Optimizations

- Message batching for bulk operations
- Connection pooling for database efficiency
- Caching frequently accessed data in Redis
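
As an illustration of the caching point, recent room history could be held in a capped Redis list so reads skip the database; the key layout and the 50-message cap are assumptions.

```typescript
// Sketch of caching recent room history in Redis.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

// Keep the latest 50 messages per room in a capped Redis list.
export async function cacheMessage(roomId: string, message: string): Promise<void> {
  const key = `room:${roomId}:recent`;
  await redis.lpush(key, message);
  await redis.ltrim(key, 0, 49);
}

// Serve history from the cache; fall back to the database only on a miss.
export async function recentMessages(roomId: string): Promise<string[]> {
  return redis.lrange(`room:${roomId}:recent`, 0, -1);
}
```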

## 📊 Performance Metrics

- **Concurrent Users**: 10,000+
- **Message Latency**: <100 ms
- **Uptime**: 99.9%
- **Message Throughput**: 1,000+ messages/second

## 🔐 Security Features

- WebSocket connection authentication
- Rate limiting
- Input validation
- SQL injection prevention
- XSS protection
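
A sketch of the first two items, authenticating the WebSocket upgrade handshake with a JWT and applying a simple per-connection rate limit; `jsonwebtoken`, the `?token=` query parameter, and the 20 messages/second cap are assumptions.

```typescript
// Sketch: verify a JWT during the HTTP upgrade, then rate-limit each connection.
import { createServer } from 'http';
import { WebSocketServer } from 'ws';
import jwt from 'jsonwebtoken';

const server = createServer();
const wss = new WebSocketServer({ noServer: true });

server.on('upgrade', (req, socket, head) => {
  const token = new URL(req.url ?? '', 'http://localhost').searchParams.get('token');
  try {
    const user = jwt.verify(token ?? '', process.env.JWT_SECRET ?? '');
    wss.handleUpgrade(req, socket, head, (ws) => wss.emit('connection', ws, req, user));
  } catch {
    // Reject unauthenticated handshakes before a WebSocket is ever created.
    socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
    socket.destroy();
  }
});

wss.on('connection', (ws) => {
  // Fixed-window rate limit: at most 20 messages per second per connection.
  let count = 0;
  const window = setInterval(() => (count = 0), 1000);
  ws.on('message', () => {
    if (++count > 20) ws.close(1008, 'rate limit exceeded');
    // ...validate and handle the message...
  });
  ws.on('close', () => clearInterval(window));
});

server.listen(8080);
```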

## 🚧 Future Improvements

- Implement message encryption
- Add file sharing capabilities
- Enhance monitoring and alerting
- Add support for voice/video calls
- Implement message search functionality

## 🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

## 📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 👨‍💻 Author

Mohd Jami Khan


Made with ❤️ by Mohd Jami Khan