
Real-Time Streaming Systems

Batch is a special case of streaming. When your business needs to react in seconds instead of hours, you need an architecture built for continuous data flow.

May 2, 2026

Category: Data | Complexity: Enterprise | Industries: Financial Services, Logistics | Technologies: 3+

When You Need This

Your dashboards are stale by the time anyone looks at them. Fraud detection runs as an overnight batch job, catching fraud the next morning. Inventory counts are updated hourly, causing overselling. Sensor data is collected but not acted on until it's analyzed in a nightly ETL. You need a system where data flows continuously from sources through processing to consumers with sub-second latency — real-time analytics, live notifications, streaming AI inference, and instant synchronization between systems.

Pattern Overview

Real-time streaming architecture processes data as a continuous, unbounded flow rather than discrete batches. Event producers publish to a streaming platform (Kafka, Kinesis, Pulsar). Stream processors (Flink, Kafka Streams, custom consumers) transform, enrich, filter, and aggregate events in-flight. Processed results are pushed to consumers: real-time dashboards (WebSocket), search indices (Elasticsearch), analytics databases (ClickHouse), and downstream services. Change Data Capture (CDC) enables existing databases to participate as event sources without application changes.

Reference Architecture

The architecture has four layers. Event sources produce data — application events, database CDC streams, IoT telemetry, user clickstreams, external API webhooks. The streaming platform (Kafka) provides durable, ordered, replayable event storage. Stream processors consume from topics, apply transformations (filtering, enrichment, windowed aggregation, joins), and produce to output topics or sinks. Consumers subscribe to processed streams — WebSocket servers push to browsers, connectors sink to databases, alerting engines evaluate rules and fire notifications.
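The four layers can be sketched in plain Python as a chain of generators: a source yields raw events, a processor filters and enriches them in-flight, and a sink collects the results. The event shapes and field names below are illustrative, not from any specific system.

```python
# Minimal sketch of the source -> processor -> consumer flow.

def source():
    """Event sources: application events, CDC rows, telemetry, clickstreams."""
    yield {"type": "order_placed", "order_id": 1, "amount": 250}
    yield {"type": "heartbeat"}                      # operational noise
    yield {"type": "order_placed", "order_id": 2, "amount": 75}

def process(events):
    """Stream processor: drop irrelevant events, enrich the rest in-flight."""
    for event in events:
        if event["type"] != "order_placed":
            continue                                  # filter step
        event["priority"] = "high" if event["amount"] >= 100 else "normal"
        yield event

# Consumer: in a real deployment this is a dashboard, index, or database sink.
sink = list(process(source()))
print(sink)
```

In the real architecture, the durable streaming platform sits between each of these stages, so producers and consumers scale and fail independently.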

Core Components
  • Streaming Platform (Kafka): Multi-broker cluster with topic-per-event-type organization. Partitioned for parallelism (partition key = entity ID for ordering guarantees). Retention configured per topic — 7 days for operational events, 30+ days for audit/replay. Schema Registry (Confluent or Apicurio) enforces event schema compatibility across producers and consumers
  • Change Data Capture: Debezium connectors capture row-level changes from PostgreSQL, MySQL, or MongoDB and publish them as events to Kafka. This turns existing databases into event sources without modifying application code — essential for incremental migration to event-driven architectures
  • Stream Processing Engine: Apache Flink for complex event processing — windowed aggregations, stream-stream joins, pattern detection. Kafka Streams for simpler transformations that don't need a separate processing cluster. Custom Node.js/Python consumers for lightweight event handling
  • Real-Time Delivery: WebSocket server (Socket.io, native WS) for pushing live updates to browser clients. Server-Sent Events (SSE) for one-directional streaming. GraphQL Subscriptions for type-safe real-time queries. Fan-out architecture that decouples producer throughput from consumer connection count
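The ordering guarantee mentioned above (partition key = entity ID) falls out of deterministic hashing: every event with the same key lands on the same partition, and each partition is consumed in order. Kafka's default partitioner uses murmur2; the CRC32 stand-in below is purely for illustration.

```python
# Sketch of key-based partition assignment, assuming a 6-partition topic.
import zlib

NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # Deterministic hash: the same key always maps to the same partition.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

events = [("user-42", "login"), ("user-7", "click"), ("user-42", "logout")]
placements = [(key, partition_for(key)) for key, _ in events]

# Both "user-42" events share a partition, so login is consumed before logout.
print(placements)
```

This is also why choosing the partition key matters: a low-cardinality key (e.g. region) concentrates traffic on few partitions and limits parallelism.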

Design Decisions & Trade-offs

Kafka vs. Kinesis vs. Pulsar
Kafka for teams that need the most mature ecosystem, highest throughput, and full control (self-managed or Confluent Cloud). Kinesis for AWS-native teams wanting zero operational burden with lower throughput requirements. Pulsar for multi-tenant streaming with built-in tiered storage and geo-replication. MW defaults to Kafka (MSK or Confluent Cloud) for most streaming architectures — the ecosystem of connectors, tooling, and operational knowledge is unmatched.
Flink vs. Kafka Streams vs. Custom Consumers
Flink for complex streaming logic — windowed aggregations, stream joins, CEP (complex event processing), exactly-once semantics. Kafka Streams when processing is simpler and you want to avoid running a separate Flink cluster. Custom consumers (Node.js, Python) for straightforward event handling that doesn't need stream processing primitives. MW uses Flink for analytics-heavy pipelines and Kafka Streams or custom consumers for event-driven microservice communication.
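A windowed aggregation, the canonical example of the stream processing primitives Flink and Kafka Streams provide, can be sketched in plain Python. This tumbling-window count groups events into fixed, non-overlapping 60-second windows; the timestamps and window size are illustrative.

```python
# Tumbling-window event count keyed by window start time.
from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    """Assign each (timestamp, payload) event to a fixed 60s window and count."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % WINDOW_SECONDS)   # align down to the boundary
        counts[window_start] += 1
    return dict(counts)

events = [(100, "a"), (119, "b"), (121, "c"), (185, "d")]
print(tumbling_window_counts(events))   # {60: 2, 120: 1, 180: 1}
```

What the engines add on top of this logic is the hard part: handling out-of-order events with watermarks, checkpointing window state, and emitting results fault-tolerantly.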
Exactly-Once vs. At-Least-Once
Exactly-once semantics (Kafka transactions + Flink checkpointing) guarantee no duplicates but add latency and complexity. At-least-once with idempotent consumers is simpler and sufficient for most use cases — if processing the same event twice produces the same result, you don't need exactly-once. MW defaults to at-least-once with idempotent handlers and reserves exactly-once for financial transactions and billing events where duplicates have monetary impact.
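The idempotent-handler approach can be sketched as follows: the consumer records each event ID it has processed and treats a redelivery as a no-op, so at-least-once delivery produces exactly-once effects. The event shape and in-memory stores are illustrative; in production the dedup set would live in a database or Redis with a TTL.

```python
# Idempotent consumer sketch for at-least-once delivery.
balances = {}
seen_event_ids = set()   # stand-in for a durable dedup store

def handle(event):
    if event["event_id"] in seen_event_ids:
        return                                   # duplicate delivery: no-op
    seen_event_ids.add(event["event_id"])
    acct = event["account"]
    balances[acct] = balances.get(acct, 0) + event["amount"]

deposit = {"event_id": "evt-1", "account": "A", "amount": 100}
handle(deposit)
handle(deposit)    # broker redelivers after a consumer restart
print(balances)    # {"A": 100} -- applied once despite two deliveries
```

Note the subtlety: the dedup-store write and the state update should be atomic (same transaction), or a crash between them reintroduces the duplicate/loss problem.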
WebSocket Scaling
Each WebSocket connection holds a persistent TCP connection, limiting how many clients a single server can handle (~50K-100K connections per server). MW scales WebSocket delivery through: (a) a fan-out architecture where Kafka consumers push to a Redis Pub/Sub layer that distributes to multiple WebSocket servers, (b) horizontal scaling with sticky sessions for reconnection, and (c) graceful degradation to polling for clients behind restrictive firewalls.
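The fan-out layer in (a) can be sketched with in-memory stand-ins: one consumer publishes each event to a pub/sub channel, and every subscribed WebSocket server relays it to its own connected clients. Redis Pub/Sub and real sockets are replaced here with plain Python objects purely for illustration.

```python
# In-memory sketch of Kafka consumer -> pub/sub -> N WebSocket servers.

class PubSub:
    """Stand-in for Redis Pub/Sub: broadcast each message to all subscribers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

class WebSocketServer:
    """Stand-in for one WS server holding many client connections."""
    def __init__(self, name):
        self.name, self.delivered = name, []
    def on_message(self, message):
        self.delivered.append(message)   # would push to each open socket here

bus = PubSub()
servers = [WebSocketServer("ws-1"), WebSocketServer("ws-2")]
for server in servers:
    bus.subscribe(server.on_message)

bus.publish({"type": "price_update", "symbol": "ACME", "price": 101.5})
```

The producer publishes once regardless of how many WebSocket servers (and therefore clients) exist, which is the decoupling of producer throughput from connection count described above.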
System Architecture Overview

[Architecture diagram: event sources -> streaming platform (Kafka) -> stream processors -> consumers]

Technology Choices

  • Streaming: Apache Kafka (MSK, Confluent), Kinesis, Apache Pulsar, Redpanda
  • CDC: Debezium, AWS DMS, Maxwell
  • Processing: Apache Flink, Kafka Streams, Benthos, custom consumers
  • Real-Time Delivery: WebSocket (Socket.io), SSE, GraphQL Subscriptions
  • Analytics: ClickHouse, Apache Druid, Elasticsearch, TimescaleDB
  • Observability: Kafka lag monitoring (Burrow), Flink metrics, custom latency tracking
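The core metric behind the observability row is consumer lag: the gap between the newest offset on each partition and the consumer group's committed offset, which tools like Burrow track per partition. The offsets below are illustrative numbers.

```python
# Per-partition consumer lag computation.

def consumer_lag(log_end_offsets, committed_offsets):
    """lag = log end offset - committed offset, per partition."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

log_end = {0: 1_000, 1: 1_500, 2: 900}    # newest offset per partition
committed = {0: 990, 1: 1_500, 2: 400}    # consumer group's committed position

lag = consumer_lag(log_end, committed)
print(lag)                                # {0: 10, 1: 0, 2: 500}
total_lag = sum(lag.values())             # alert before this becomes user-visible
```

A steadily growing lag on one partition (like partition 2 here) usually points to a hot key or a slow handler, not overall under-provisioning.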

When to Use / When to Avoid

Use When
  • Business decisions need sub-second data freshness (fraud, monitoring, trading)
  • Multiple consumers need the same event stream (fan-out, decoupled systems)
  • You need event replay for debugging, reprocessing, or building new consumers
  • CDC is needed to sync existing databases to downstream systems without code changes

Avoid When
  • Batch processing with hourly/daily freshness meets the business need
  • You have a single producer and single consumer — a simple queue suffices
  • The data volume is low (< 1K events/min) and doesn't justify streaming infrastructure
  • The team lacks experience with distributed systems — streaming adds significant operational complexity

Our Approach

MW designs streaming systems with the "replay principle" — every stream should be replayable from a point in time, enabling new consumers to backfill historical data and existing consumers to reprocess after bug fixes. Our Kafka deployments include schema evolution policies (backward-compatible by default), consumer lag alerting (before it becomes a business-visible delay), and dead-letter topics with automated retry. We've built streaming pipelines processing 500K+ events/second for video analytics, IoT telemetry, and real-time dashboards.
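The mechanics of "replayable from a point in time" come down to mapping a timestamp to an offset and reconsuming from there; Kafka exposes this natively (e.g. `offsetsForTimes` in the Java client). The bisect version below shows the idea in plain Python over an illustrative, offset-ordered log.

```python
# Find the earliest offset whose record timestamp >= a target time.
import bisect

# (timestamp, offset) pairs, ordered by offset; values are illustrative.
log = [(1000, 0), (1005, 1), (1010, 2), (1020, 3), (1030, 4)]

def offset_for_time(log, target_ts):
    """Offset to replay from for a given point in time, or None if past the end."""
    timestamps = [ts for ts, _ in log]
    i = bisect.bisect_left(timestamps, target_ts)
    return log[i][1] if i < len(log) else None

replay_from = offset_for_time(log, 1010)
print(replay_from)   # 2 -- a new consumer backfills from this offset
```

This only works while the topic's retention still covers the target time, which is why replay-oriented topics in this pattern keep 30+ days of history.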


Need Help Implementing This Architecture?

Our architects can help design and build systems using this pattern for your specific requirements.
