Stock Trading App System Design
Functional Requirements
- Users can search and fetch stock price details
- System connects with Stock Exchange to fetch stock prices
- Users should see real-time price updates
- Users can place buy/sell orders
- Orders are forwarded to Stock Exchange
- Support for:
- LIMIT orders
- MARKET orders
- No need to handle:
- Order matching
- Trade execution (handled by exchange)
- Users can view past order history
Non-Functional Requirements
- High Availability for:
- Price search
- Price fetch APIs
- Strong Consistency for:
- Order placement
- Highly Scalable:
- Must handle millions of users
- Read-heavy system:
- Price views >> Order placements
Scale Estimation
Users
- Total users: 100 million
- Daily Active Users (DAU, 10%): 10 million
Stock Price Views
- Each user:
- Views 10 stocks
- Refreshes 20 times/day
Total Views:
10M users × 10 stocks × 20 views = 2 billion views/day
QPS:
2 billion views / ~100K seconds per day ≈ 20K QPS
Orders
Assume 10% of views convert into orders:
20K × 10% = 2K orders/sec = ~2K TPS
Peak Load
- Assume a 5× spike
Price QPS (Normal) = 20K/sec, (Peak) = 100K/sec
Order TPS (Normal) = 2K/sec, (Peak) = 10K/sec
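The arithmetic above can be checked with a quick back-of-envelope script (rounding a day to ~100K seconds, as is common in capacity estimation):

```python
# Back-of-envelope check of the estimates above.
DAU = 10_000_000                 # 10% of 100M total users
views_per_user = 10 * 20         # 10 stocks, refreshed 20 times/day
total_views = DAU * views_per_user            # 2 billion views/day

SECONDS_PER_DAY = 100_000        # ~86,400, rounded for easy math
price_qps = total_views // SECONDS_PER_DAY    # ~20K QPS
order_tps = price_qps * 10 // 100             # 10% conversion -> ~2K TPS

PEAK = 5
peak_qps, peak_tps = price_qps * PEAK, order_tps * PEAK  # 100K QPS, 10K TPS
```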
APIs
View Stock Prices
Fetch latest or historical stock prices.
GET /stockPrices
GET /stockPrices?symbol={symbol}&timerange={timerange}
symbol → Stock symbol (e.g., AAPL, TSLA)
timerange → Time window (e.g., 1d, 1w, 1m)
Place Order
Place a buy or sell order.
POST /orders
Request Body
{
"orderType": "buy | sell",
"type": "LIMIT | MARKET",
"symbol": "string",
"quantity": number,
"amount": number
}
LIMIT → Executes at the specified price
MARKET → Executes at the current market price
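As an illustrative sketch, a client call to this endpoint might look like the following; the base URL is a placeholder and no auth scheme is assumed:

```python
import json
import urllib.request

# Hypothetical base URL; the real host and auth scheme are not defined above.
BASE_URL = "https://api.example-trading.com"

order = {
    "orderType": "buy",
    "type": "LIMIT",
    "symbol": "AAPL",
    "quantity": 10,
    "amount": 150.25,  # the limit price for a LIMIT order
}

req = urllib.request.Request(
    f"{BASE_URL}/orders",
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit it; skipped here since the host is fictional.
```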
View Past Orders
Retrieve order history.
GET /orders
GET /orders?symbol={symbol}
symbol → Filter orders by stock symbol
High Level Design
Here we'll walk through a high-level design (HLD) of a stock trading system that allows users to:
- View stock prices
- Place buy/sell orders
- Track order history
We will focus on the core components and system interactions. Improvements will be discussed in later iterations.
At a high level, the system consists of:
- API Gateway (entry point)
- Price Service (read-heavy operations)
- Order Service (write-heavy operations)
- Stock DB and Order DB (data storage)
- Exchange Gateway (external integration)
- Stock Exchange System (external dependency)

API Gateway
The API Gateway is the single entry point for all client requests.
Responsibilities:
- Route requests to appropriate services
- Handle:
- View prices
- Place orders
- View orders
It simplifies communication between clients and backend services.
Price Service
This service handles all stock price related operations.
Responsibilities:
- Fetch stock prices from the exchange (via gateway)
- Store/update price data in Stock DB
- Serve price queries to users
Why separate?
- The system is read-heavy
- Price requests are high in volume and need independent scaling
Stock DB
Stores stock price data:
- stockId
- symbol
- timestamp
- price
Purpose:
- Fast access to price data
- Reduce calls to external exchange
Acts as a cache as well as a historical store.
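A minimal sketch of this schema, using SQLite purely for illustration (a production Stock DB would likely be a time-series or replicated store, as discussed later). The composite index supports the symbol + timerange queries from the API section:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stock_prices (
        stock_id  INTEGER,
        symbol    TEXT NOT NULL,
        timestamp INTEGER NOT NULL,  -- epoch millis of the tick
        price     REAL NOT NULL
    )
""")
# GET /stockPrices?symbol=...&timerange=... is a range scan,
# so index on (symbol, timestamp).
conn.execute("CREATE INDEX idx_symbol_ts ON stock_prices (symbol, timestamp)")
```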
Order Management Service
Handles all order-related business logic.
Responsibilities:
- Create new orders
- Store orders in Order DB
- Forward orders to the exchange
- Update order status
Order DB
Stores all order information:
- id
- symbol
- order_type (buy/sell)
- type (LIMIT/MARKET)
- quantity
- amount
- status
- exchange_order_id
Acts as the source of truth for all orders. The Stock Exchange maintains its own order ID, so we store that as well (exchange_order_id).
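A minimal sketch of this table (SQLite for illustration only); the `symbol` column and the CHECK constraints are assumptions inferred from the POST /orders request body above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id                INTEGER PRIMARY KEY,
        symbol            TEXT NOT NULL,
        order_type        TEXT CHECK (order_type IN ('buy', 'sell')),
        type              TEXT CHECK (type IN ('LIMIT', 'MARKET')),
        quantity          INTEGER NOT NULL,
        amount            REAL,
        status            TEXT NOT NULL DEFAULT 'PENDING',
        exchange_order_id TEXT  -- populated once the exchange responds
    )
""")
```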
Stock Exchange Gateway
This component integrates with the external stock exchange.
Responsibilities:
- Fetch stock prices
- Forward orders to exchange
- Receive updates like order status and exchange order ID
This layer decouples internal services from the external system.
Stock Exchange System
External system responsible for:
- Order matching
- Trade execution
- Market data
Our system does not handle matching logic. You can refer to our Stock Exchange System Design for more details.
High-Level Flows
1. View Stock Prices
- Client sends request to API Gateway
- API Gateway forwards request to Price Service
- Price Service fetches data from Stock DB
- (Optional) If data is stale, fetch from Exchange via Gateway
- Response is returned to the client

2. Place Order
- Client sends order request to API Gateway
- API Gateway forwards request to Order Service
- Order Service creates and stores order in Order DB
- Order Service sends order to Stock Exchange via Exchange Gateway
- Exchange processes the order
3. Order Status Update
- Stock Exchange sends order updates
- Exchange Gateway receives updates
- Gateway forwards update to Order Service
- Order Service updates Order DB with:
- status
- exchange_order_id
4. View Past Orders
- Client sends request to API Gateway
- API Gateway routes request to Order Service
- Order Service fetches data from Order DB
- Response is returned to the client
Deep Dive - View Price
In the initial high-level design, price fetching was handled synchronously. But there are two ways to get prices from the exchange:
- Gateway polls the exchange
- Exchange pushes updates (WebSocket/Webhook)
Push-based systems are more efficient and have lower latency. Instead of fetching prices per request, the system can continuously stream price updates and push them to clients.
We introduce a message broker (e.g., Kafka) to decouple components:
- Price updates are produced asynchronously
- Consumers process updates independently
- Clients receive real-time updates via streaming

Price Flow (Step-by-Step)
- Stock Exchange Gateway receives price updates from the exchange
- Gateway publishes events to the Price Topic
- Price Service consumes events from the topic
- Price Service updates the Stock DB
- Price Service pushes updates via SSE
- Clients receive real-time stock prices
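The consumer side of this flow can be sketched with in-memory stand-ins for Kafka, the Stock DB, and the SSE connections (all names and field shapes here are illustrative):

```python
from collections import deque

# Stand-ins: a deque for the Price Topic, a dict for the Stock DB,
# and per-client lists for pushed SSE events.
price_topic = deque([{"symbol": "AAPL", "price": 189.5, "ts": 1}])
stock_db = {}
sse_clients = [[]]  # each client is a list that receives pushed events

def consume_one():
    event = price_topic.popleft()       # 3. consume from the Price Topic
    stock_db[event["symbol"]] = event   # 4. update the Stock DB
    for client in sse_clients:          # 5. push the update over SSE
        client.append(event)

consume_one()
```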
Why Event-Driven?
This approach improves the system in multiple ways:
- Decoupling → Producers and consumers are independent
- Scalability → Services can scale horizontally
- Real-time updates → No polling required
- Reliability → Message broker provides durability
Partitioning Strategy
Partition the price topic using symbol as the key:
- Ensures ordering per stock
- Enables parallel processing
Consumer Scaling
- Run multiple instances of Price Service
- Each instance consumes a subset of partitions
This allows the system to handle high throughput.
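A stand-in for the key-based partitioner shows why keying by symbol preserves per-stock ordering (Kafka's real partitioner hashes the message key in a similar spirit):

```python
# Hashing the symbol maps every tick for a given stock to the same
# partition, so one consumer sees that stock's ticks in order, while
# different symbols spread across partitions for parallelism.
NUM_PARTITIONS = 12  # illustrative partition count

def partition_for(symbol: str) -> int:
    return hash(symbol) % NUM_PARTITIONS
```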
Idempotency
Duplicate events can occur:
- Use timestamps or versioning
- Ensure updates are idempotent
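A timestamp-guarded update makes replayed or duplicate events harmless; the `ts` field name is an assumption:

```python
# Apply an event only if its timestamp is newer than what is stored,
# so duplicates and out-of-order deliveries become no-ops.
stock_db = {}

def apply_price_event(event):
    current = stock_db.get(event["symbol"])
    if current is not None and current["ts"] >= event["ts"]:
        return False                    # duplicate or stale event: skip
    stock_db[event["symbol"]] = event
    return True
```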
Real-Time Streaming
SSE is used to push updates:
- Efficient for one-way streaming
- Works well for real-time price updates
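For reference, the SSE wire format itself is simple: each message is a `data:` line terminated by a blank line, which the browser's EventSource API parses natively. A minimal sketch of the stream body:

```python
import json

def sse_stream(events):
    # Yield each price update framed as an SSE message.
    for event in events:
        yield f"data: {json.dumps(event)}\n\n"

chunks = list(sse_stream([{"symbol": "AAPL", "price": 189.5}]))
```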
For very large scale, consider WebSockets or a Pub/Sub fan-out layer.
By introducing an event-driven architecture, the price flow becomes scalable, real-time, and decoupled. This design avoids polling, handles high throughput efficiently, and provides a strong foundation for production systems.
Next, we can apply similar principles to improve the order flow, which involves stricter consistency requirements.
Deep Dive - Order Processing Flow
The initial order processing design looks simple and intuitive:
Client → API Gateway → Order Management Service → Stock Exchange Gateway → Exchange
It works, and it's easy to reason about. But under real-world conditions (scale, failures, retries) this design starts to break down.
Problems in Old Design
1. Tight Coupling to Exchange
Order Service → Exchange Gateway → Exchange
- If the exchange is slow → your API becomes slow
- If the exchange is down → order placement fails
- No failure isolation
2. Fully Synchronous Flow
- Waits for exchange response
- High latency
- Poor UX
- Timeout risk
3. No Event-Driven Layer
- No queue between components
- No replay capability
- Tight coupling
- Limited scalability
4. Weak Order State Model
{ "status": "...", "exchange_order_id": "..." }
Missing:
- Lifecycle states
- Partial fills
- Retry states
- Failure reasons
5. No Reliability Guarantees
- Exchange succeeds, DB fails → inconsistent state
- API crashes after send → unknown outcome
Improved Architecture (Conceptual Shift)
Instead of the old design, let's improve it.
New Components
- Order Topic → queue for incoming order events
- Order Status Topic → queue for order lifecycle updates
- Stock Exchange Gateway → async layer to interact with the exchange (with retries)

Flow
1. Client sends order request
2. Order Management Service:
- Saves order as PENDING
- Publishes event to Order Topic
3. Stock Exchange Gateway:
- Consumes from Order Topic
- Sends order to exchange
- Handles retries if needed
4. Exchange responds (sync/webhook)
5. Gateway publishes update to Order Status Topic
6. Order Management Service:
- Consumes Order Status Topic
- Updates order state (PLACED, FILLED, etc.)
7. Client receives real-time updates (SSE/WebSocket)
8. Background processes:
- Retry failures (with backoff + DLQ)
- Run reconciliation with exchange
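The order lifecycle implied by this flow can be modeled as a small state machine; states beyond PENDING/PLACED/FILLED are assumptions about what a fuller model might include:

```python
# Allowed transitions; FAILED, PARTIALLY_FILLED, and CANCELLED are
# assumed states, not defined in the text above.
ALLOWED = {
    "PENDING": {"PLACED", "FAILED"},
    "PLACED": {"PARTIALLY_FILLED", "FILLED", "CANCELLED"},
    "PARTIALLY_FILLED": {"FILLED", "CANCELLED"},
}

def transition(current, new):
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = transition("PENDING", "PLACED")
state = transition(state, "FILLED")
```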
The Dual Write Problem
Even with an event-driven flow, a critical issue remains: dual writes. Here is where it can happen.
In the Order Management Service:
- Save order to DB
- Publish event to Order Topic
We are writing to two separate systems (database + message broker), and these operations are not atomic. Consider these scenarios:
1. DB write succeeds, event publish fails
→ Order exists but never reaches the exchange
2. Event publish succeeds, DB write fails
→ Order gets executed but never recorded
Result: inconsistent system state
Common (But Flawed) Fixes
- Try-catch with retries → still unreliable
- Reordering operations → just shifts the problem
- Manual reconciliation → reactive, not preventive
The Right Approach: Transactional Outbox
Instead of writing to the DB and broker separately:
- Save the order (PENDING)
- Save the event in an Outbox Table in the same DB transaction
- The write to the Orders Table + Outbox Table happens in a single atomic DB transaction
- A background worker reads from the Outbox table, publishes to the Order Topic, and marks the event as processed

Why This Works
- DB + event write is atomic
- No event is lost
- Safe retries (idempotent publishing)
- Guarantees eventual delivery
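A minimal sketch of the outbox pattern, using SQLite and a plain list in place of the real broker; table and column names are illustrative:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, symbol TEXT, status TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)"
)

# The order row and its event commit in ONE transaction, or neither does.
with conn:
    cur = conn.execute(
        "INSERT INTO orders (symbol, status) VALUES (?, 'PENDING')", ("AAPL",)
    )
    event = {"order_id": cur.lastrowid, "symbol": "AAPL", "action": "CREATE"}
    conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

# The background worker publishes pending events, then marks them processed.
def outbox_worker(publish):
    rows = conn.execute("SELECT id, payload FROM outbox WHERE processed = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))    # e.g. produce to the Order Topic
        conn.execute("UPDATE outbox SET processed = 1 WHERE id = ?", (row_id,))
    conn.commit()

published = []
outbox_worker(published.append)         # the list stands in for the broker
```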
Potential Considerations for Improvement
While the design is robust, here are a few things a final production system would need to ensure:
Idempotency: Since Kafka guarantees "at-least-once" delivery, the Order Management Service and Stock Gateway must be idempotent. If they consume the same "Create Order" message twice (due to a rebalance or retry), they must not place duplicate orders or double-update statuses.
The fix is simple: Generate a unique idempotency_key at the client, store it in Redis before processing, and include it in every Kafka message. If the same key appears again, return the cached response. Your users will never know the difference.
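A sketch of that idempotency check, with a plain dict standing in for Redis (production code would use an atomic SET ... NX with a TTL); all names here are illustrative:

```python
cache = {}  # stand-in for Redis: idempotency_key -> cached response

def place_order(idempotency_key, order):
    if idempotency_key in cache:
        return cache[idempotency_key]   # duplicate delivery: replay the response
    response = {"order_id": len(cache) + 1, **order}  # pretend we processed it
    cache[idempotency_key] = response
    return response

first = place_order("abc-123", {"symbol": "AAPL", "quantity": 10})
second = place_order("abc-123", {"symbol": "AAPL", "quantity": 10})  # retried message
```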
Dead Letter Queue (DLQ): If the Stock Gateway repeatedly fails to send an order to the exchange, the system needs a DLQ to handle "poison pill" messages so they don't block the rest of the Order Topic.
Database Scaling: The Stock DB (under Price Service) likely experiences high write throughput. In a final design, this would likely be a time-series database or a read-replica setup to handle the high velocity of price ticks without locking up.
CDC Instead of Polling: The current outbox worker polls the database every second for new events. That's a full second of latency per order, plus unnecessary database load. In trading, a second is an eternity: prices change, opportunities vanish. Replace polling with Debezium, which reads the database transaction log (WAL/binlog) and publishes changes to Kafka instantly. Latency drops from 1000ms to < 10ms. No polling, no wasted queries, and your orders hit the exchange while the price is still relevant.
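As a rough sketch, registering such a connector might look like the following, assuming a Postgres Order DB and Debezium's outbox EventRouter transform; hostnames, database names, and the table name are placeholders:

```json
{
  "name": "orders-outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "orders-db",
    "database.dbname": "orders",
    "table.include.list": "public.outbox",
    "transforms": "outbox",
    "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter"
  }
}
```

With this in place, the outbox worker's polling loop goes away entirely: committed outbox rows flow to the Order Topic via the transaction log.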
The final design is a scalable, resilient, event-driven microservices system. It prioritizes data consistency (via Outbox), loose coupling (via Kafka Topics), and real-time capabilities (via SSE), making it suitable for a financial trading platform where reliability and speed are critical.