Auction Portal System Design
An auction portal is a digital marketplace where users can list items for sale and other participants compete to purchase them by placing bids. Unlike fixed-price systems, auctions introduce dynamic pricing driven by user competition, time constraints, and bidding strategies.
Auction platforms are widely used across domains such as collectibles, advertising exchanges, financial markets, and consumer marketplaces. The defining characteristic of such systems is that item value emerges in real time as users continuously react to competing bids.
From a system design perspective, auction portals are particularly interesting because they combine:
- High write contention (concurrent bids)
- Strong consistency requirements (no lost bids)
- Real-time updates (live price discovery)
- Burst traffic patterns (bidding wars)
- Extremely read-heavy workloads (auction status, timers)
Designing such a platform requires careful handling of concurrency, latency, correctness, and scalability, especially near auction closing windows where system stress peaks.
Functional Requirements
- Users can create auctions
- Users can place bids
- Users can view the highest bid
- Auctions close based on rules:
  - Fixed close time
  - Dynamic close, e.g., 10 minutes after the last bid
Non-Functional Requirements
- Highly available system
- Strong consistency for bids (no lost bids)
- High write contention (concurrent bidding)
- Scalable for viral auctions
- Real-time bid visibility
- Read-heavy workload (auction status, bids, timers)
Scale Estimation
Assumptions:
- 1 million auctions / day
- 100 bids per auction / day
Auction Creation Rate
1,000,000 auctions/day ÷ ~100,000 seconds ≈ 10 TPS (assuming approximately 100,000 seconds in a day for round numbers)
Bid Throughput
1,000,000 auctions × 100 bids/day = 100,000,000 bids/day ÷ ~100,000 seconds ≈ 1,000 bids/sec (average)
Peak traffic (10x) ≈ 10,000 bids/sec (a huge bidding TPS)
Read Traffic
Assuming 10k views per auction / day:
1,000,000 auctions × 10,000 views = 10,000,000,000 views/day ÷ ~100,000 seconds ≈ 100,000 reads/sec
Auction systems are overwhelmingly read heavy, with sharp spikes during popular auctions.
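These back-of-envelope numbers can be sanity-checked with a few lines of Python (the 100,000 seconds/day figure is the usual round-number approximation of 86,400):

```python
SECONDS_PER_DAY = 100_000  # round-number approximation of 86,400

auctions_per_day = 1_000_000
bids_per_auction = 100
views_per_auction = 10_000

auction_tps = auctions_per_day / SECONDS_PER_DAY                     # 10 TPS
avg_bid_tps = auctions_per_day * bids_per_auction / SECONDS_PER_DAY  # 1,000 bids/sec
peak_bid_tps = 10 * avg_bid_tps                                      # 10,000 bids/sec
read_tps = auctions_per_day * views_per_auction / SECONDS_PER_DAY    # 100,000 reads/sec
```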
Core Entities
- User
- Auction
- Bid
- Leaderboard / Winner State
API Design
The auction platform exposes a minimal set of APIs to support auction creation, bidding, and real-time state retrieval.
Create an Auction
POST /auctions
{
"title": "Vintage Camera Auction",
"itemName": "Polaroid SX-70",
"startTime": "2026-02-23T10:00:00Z",
"startPrice": 5000,
"closingType": "CLOSETIME",
"closeTime": "2026-02-23T12:00:00Z",
"description": "Rare collectible camera in good condition"
}
Fields
- title → Auction display title
- itemName → Item being auctioned
- startTime → Auction start timestamp
- startPrice → Minimum starting bid
- closingType → CLOSETIME | DYNAMIC
- closeTime → Closing timestamp (if fixed)
- description → Additional details
For dynamic auctions, closing time may be extended based on bid activity (e.g., 10 minutes after the last valid bid).
Place a Bid
POST /auctions/{auctionId}/bids
{
"price": 6200
}
Behavior
- Validates bid against current highest bid
- Rejects stale or invalid bids
- Updates leaderboard atomically
- Triggers real-time updates
View Auction Status
GET /auctions/{auctionId}
Response
{
"id": "auction123",
"winningBid": 6200,
"currentWinner": "user456",
"lastBidTime": "2026-02-23T10:45:00Z",
"bidsCount": 37,
"status": "ACTIVE"
}
Returned Data
- winningBid → Current highest bid
- currentWinner → Leading bidder
- lastBidTime → Timestamp of latest bid
- bidsCount → Total bids placed
- status → ACTIVE | CLOSED | EXPIRED
This API should be heavily cached due to high read frequency.
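As a sketch of that caching layer, even a tiny TTL cache in front of the status endpoint absorbs most of the read traffic; the class name and one-second TTL below are illustrative assumptions, not part of the original design:

```python
import time

class TTLCache:
    """Minimal TTL cache sketch for GET /auctions/{id} responses."""

    def __init__(self, ttl_seconds=1.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expiry_time, cached_value)

    def get(self, key):
        item = self.store.get(key)
        if item and item[0] > time.monotonic():
            return item[1]   # fresh hit
        return None          # miss or expired: caller falls back to DB

    def put(self, key, value):
        self.store[key] = (time.monotonic() + self.ttl, value)
```

With 100k reads/sec against ~1 write/sec per auction, even a one-second TTL turns almost every read into a cache hit.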
High Level Design
An auction portal is a real-time marketplace where users create auctions, place bids, and observe dynamic price discovery.
At a high level, the architecture revolves around a centralized Auction Service, supported by fast-access storage and background processes.
Clients (Users / Bidders / Sellers)
Users interact with the system through APIs:
- createAuction() → List items
- placeBid() → Compete for items
- getAuctionStatus() → View live state
These operations are highly latency-sensitive, especially bidding.
Auction Service
The Auction Service is the primary orchestrator responsible for:
- Managing auction lifecycle
- Validating bids
- Updating winner state
- Enforcing closing rules
- Serving auction status queries
This service sits on the critical path of both reads and writes.

Database
Stores durable auction state, metadata, bid details, user details, etc.:
- Auction details (sellerId, startPrice, closeTime)
- Winner information
- Current and Historical bid states
Data Model
At the core of an auction portal lie two fundamental entities: Auction and Bid.
These entities define how auction state evolves and how the system maintains correctness under heavy concurrency.
Auction Data Model
An auction data model typically will contain these attributes:
- id → Unique auction identifier
- sellerId → User who created the auction
- itemId → Item being auctioned
- startTime → Auction activation time
- closingType → Fixed or dynamic closing logic
- closeTime → Scheduled closing timestamp
- startPrice → Minimum acceptable bid
- winningBid → Current highest bid
- winner → Current leading bidder
- createdAt → Creation timestamp
Bid Data Model
A bid represents a user's attempt to outbid others for an auctioned item. Bid Data Model typically includes these attributes:
- id → Unique bid identifier
- auctionId → Associated auction
- userId → Bidding user
- price → Bid value
- timestamp → Time of submission
Bids are modeled as immutable records. Once created, they are never modified or deleted.
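One way to sketch these two entities, with field names following the attribute lists above (Python dataclasses are an illustrative choice, not prescribed by the design; `frozen=True` enforces the immutability of bid records):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Auction:
    id: str
    seller_id: str
    item_id: str
    start_time: datetime
    closing_type: str            # "CLOSETIME" | "DYNAMIC"
    close_time: datetime
    start_price: int
    winning_bid: Optional[int] = None   # derived: current highest bid
    winner: Optional[str] = None        # derived: current leading bidder
    created_at: datetime = field(default_factory=datetime.utcnow)

@dataclass(frozen=True)  # frozen: bids are immutable, append-only records
class Bid:
    id: str
    auction_id: str
    user_id: str
    price: int
    timestamp: datetime
```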
Event Sourcing Perspective
Auction systems naturally align with an event sourcing model, where bids are treated as append-only events rather than state overwrites.
Instead of updating the auction directly, the system:
- Appends a new bid event
- Recomputes derived auction state
- Updates winner and highest bid
Why Event Sourcing Works Well for Auctions
✅ Prevents lost updates during concurrent bidding
✅ Preserves complete bidding history
✅ Enables deterministic state reconstruction
✅ Simplifies failure recovery
✅ Improves auditability & correctness guarantees
Derived fields such as:
- Current highest bid
- Current winner
- Last bid time
- Bid count
can be recomputed at any time by replaying the bid log.
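A minimal sketch of that recomputation: folding over the append-only bid log yields every derived field (the dict shapes here are assumptions for illustration):

```python
def derive_auction_state(start_price, bids):
    """Recompute derived auction state by replaying the bid log in order."""
    state = {"winning_bid": start_price, "winner": None,
             "last_bid_time": None, "bid_count": 0}
    for bid in bids:                       # bids in append (log) order
        state["bid_count"] += 1
        if bid["price"] > state["winning_bid"]:
            state["winning_bid"] = bid["price"]
            state["winner"] = bid["user_id"]
        state["last_bid_time"] = bid["ts"]
    return state
```

Because the log is the source of truth, a crashed node can rebuild this state deterministically from scratch.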
Background Cron Job (Auction Closing)
Auctions close based on rules:
- Fixed closing time
- Dynamic closing (e.g., extend on recent bids)
A background job periodically scans auctions:
- Detect expired auctions
- Compute the final winner
- Transition state → CLOSED
Decoupling closure avoids blocking bid paths.
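A sketch of one sweep of that job, including the dynamic-close extension rule (function names and dict shapes are illustrative assumptions):

```python
from datetime import datetime, timedelta

def close_expired_auctions(auctions, now):
    """One sweep of the background job: find ACTIVE auctions past their
    close_time, transition them to CLOSED, and report which ones closed."""
    closed = []
    for a in auctions:
        if a["status"] == "ACTIVE" and now >= a["close_time"]:
            a["status"] = "CLOSED"   # final winner is already the derived state
            closed.append(a["id"])
    return closed

def extend_on_bid(auction, bid_time, window=timedelta(minutes=10)):
    """Dynamic close rule: each valid bid pushes close_time out to
    10 minutes after that bid (never pulling it earlier)."""
    if auction["closing_type"] == "DYNAMIC":
        auction["close_time"] = max(auction["close_time"], bid_time + window)
```

Running the sweep on a short interval keeps closure out of the hot bid path.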
Deep Dive - Microservice Architecture
The refined architecture introduces clearer separation of concerns, better scalability characteristics, and improved handling of read/write asymmetry, all of which are crucial for an auction platform.
Rather than relying on a single centralized service, responsibilities are now decomposed into specialized services aligned with workload patterns.
Auction CRUD Service
The Auction CRUD Service is dedicated to managing auction metadata and lifecycle operations:
- Auction creation
- Auction updates
- Auction state transitions
- Auction configuration
This prevents auction management logic from being coupled with high-contention bid processing.
Bid Service
Bid placement introduces extreme write contention and strict correctness requirements.
Extracting bidding logic into a dedicated Bid Service enables:
- Isolated concurrency control
- Optimized write paths
- Atomic bid validation
- Leaderboard updates
Because bids are latency-sensitive and burst-heavy, this service can be tuned independently.
View Service
Auction systems are overwhelmingly read-heavy. Users frequently query:
- Current highest bid
- Auction status
- Time remaining
- Bid counts
A dedicated View Service allows read optimization strategies such as:
- Aggressive caching
- Precomputed views
- Read replicas
- Denormalized models

Independent Data Stores
The updated design separates storage based on access patterns:
Auction DB
Stores durable auction state and aggregated fields:
- Auction metadata
- Current winner
- Winning bid
- Closing configuration
Optimized for lifecycle management and reads.
Bids DB
Stores immutable bid records:
- Append-only writes
- High write throughput
- Historical correctness
Optimized for write-heavy workloads.
API Gateway Introduction
The API Gateway centralizes cross-cutting concerns:
- Authentication & authorization
- Rate limiting
- Request routing
- Load balancing
This prevents business services from handling infrastructural logic.
By decomposing services:
✅ Read-heavy and write-heavy paths scale independently
✅ Hot auctions affect fewer system components
✅ Failures remain isolated
✅ Operational tuning becomes simpler
This design better reflects real-world auction traffic patterns where bid contention and read amplification behave very differently.
Deep Dive - Placing a Bid Under High Contention
Bid placement is the most write-intensive and contention-heavy operation in an auction system.
Popular auctions can attract thousands of concurrent bids, all competing to update the same logical state: the current highest bid.
This creates a classic distributed systems challenge:
- Multiple clients attempt updates simultaneously
- Strict ordering is required
- No bids can be lost
- Users expect immediate feedback
The Contention Problem
In a naïve architecture, concurrent bid requests may hit multiple bid service instances:
- Servers race to update the highest bid
- Database locks increase
- Latency spikes
- Lost updates become possible
Traditional locking mechanisms do not scale well under burst-heavy traffic patterns typical of auctions.
Introducing a Queue for Bid Serialization
Rather than allowing all service instances to directly compete for state mutation, we introduce a streaming log / queue to serialize bid updates.
Important distinction:
The queue is not used to delay validation; it is used to enforce deterministic ordering.
This preserves correctness while allowing the system to scale. A small pool of workers reads from Kafka and writes to the database.
Why Kafka Fits Auction Workloads
Kafka is a strong candidate for handling bid streams because it provides:
✅ Extremely high write throughput
✅ Durability and fault tolerance
✅ Horizontal scalability
✅ Strict ordering within partitions
The ordering guarantee is the critical property for auctions.
Partitioning Strategy
Kafka guarantees ordering only within a partition, therefore:
partition_key = auctionId
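The property that matters is that the partition assignment is a stable function of the key, so every bid for one auction lands on one partition and is totally ordered there. Kafka's default partitioner hashes the key internally (murmur2); this md5-based sketch is only an illustration of the same idea, and the partition count is an assumption:

```python
import hashlib

NUM_PARTITIONS = 12  # assumed topic partition count

def partition_for(auction_id: str) -> int:
    """Stable key -> partition mapping: every bid for the same auction
    hashes to the same partition, so Kafka totally orders them."""
    digest = hashlib.md5(auction_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS
```

A hot auction therefore concentrates on a single partition, which is exactly the skew concern raised below.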

However, this design is not completely correct so far, because:
- Every bid (valid or invalid) must pass through Kafka, increasing broker load and infrastructure usage.
- Bid validation becomes slower since Kafka is not designed for fast state lookups.
- User-facing latency increases because even losing bids require full stream processing.
- Hot auctions can overload Kafka partitions more easily.
What if we read the latest bid from a cache (Redis)?
We can store the latest bid information for each auction in the cache:
auctionId → {highestBid, highestBidder, auctionStatus} ≈ 32 bytes (approx.)
1 million auctions per day × 32 bytes = 32 MB (can easily fit in cache)
So from the Bid Service we can perform fast checks:
- Is the auction still active?
- Is the bid greater than highestBid?
If not, reject immediately. Only valid bids are pushed to Kafka, and a Kafka consumer/worker processes them sequentially. We also need to update the latest bid in the cache.
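That early-rejection check can be sketched as below; a plain dict stands in for Redis here, and the function name is illustrative:

```python
def fast_check(cache, auction_id, bid_price):
    """Early rejection against the Redis-style cache.
    Returns True only if the bid is worth forwarding to Kafka."""
    entry = cache.get(auction_id)
    if entry is None or entry["status"] != "ACTIVE":
        return False                  # unknown or closed auction: reject
    if bid_price <= entry["highest_bid"]:
        return False                  # stale bid: reject without touching Kafka
    return True                       # candidate bid: enqueue for ordering
```

This shed-load-early pattern keeps losing bids off the broker entirely.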
Why Do We Still Need Kafka After the Cache Check?
The cache is only an optimization, never the authority.
Consider this race: two users read the cache and see highest bid = 100.
Both bid 120 and both pass the cache check. Now how do we decide the winner?
Kafka partitioning solves this:
✅ Deterministic ordering → exactly one winner
The worker performs the final validation.
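A sketch of that final validation: the worker consumes its partition sequentially, so of the two racing 120-bids, whichever the broker appended first wins and the duplicate is rejected (function name and event shapes are illustrative):

```python
def process_bids_in_order(events, start_price):
    """Worker consuming one partition sequentially: a bid is accepted
    only if it strictly exceeds the running highest bid."""
    highest, winner, accepted = start_price, None, []
    for e in events:                 # partition order = broker append order
        if e["price"] > highest:     # final, authoritative validation
            highest, winner = e["price"], e["user_id"]
            accepted.append(e["id"])
    return highest, winner, accepted
```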
Why This Works Well
Redis handles:
- Ultra-fast state lookups
- Early rejection & load shedding
- Low-latency reads
Kafka handles:
- Ordering
- Concurrency resolution
- Durable event stream
Deep Dive - Viewing a Bid: Real-Time Updates
An auction system is highly latency-sensitive. Whenever a new bid is placed, all other participants must immediately see the latest state:
- Current highest bid
- Leading bidder
- Auction status changes
Essentially, every bid has to be broadcast to all other users. This creates a classic one write → many readers real-time distribution problem.
There are different approaches for this:
Polling (Naïve Approach)
Clients repeatedly request auction state:
GET /auctions/{auctionId}
- Poll every few seconds
- Server queries database
- Return latest bid data
Issues with polling:
- Read amplification at scale
- Unnecessary network traffic
- Increased backend load
- Poor real-time experience
Polling becomes inefficient for popular auctions.
Server Push Models - Server-Sent Events (SSE)
Instead of clients asking for updates, the server pushes updates when bids occur.
Unidirectional stream (Server → Client):
- Persistent HTTP connection
- Server emits bid events
- Clients auto-receive updates
Advantages:
- Simple over HTTP
- Built-in reconnection
- Efficient for broadcast-style updates
- Works well behind load balancers
Ideal when clients mainly consume updates.
One downside is slightly higher latency than WebSockets.
WebSockets
Bidirectional persistent connection:
- Full-duplex communication
- Very low latency updates
- Suitable for interactive systems
Advantages:
- Very low per-message latency
- Rich interaction support
- Real-time responsiveness
Tradeoff:
- Higher infra complexity
- Connection management overhead
Preferred for competitive bidding environments.
Broadcasting Across Servers - Redis Pub/Sub
WebSockets or push protocols alone are insufficient in a distributed deployment: a bid handled by one server must also reach clients connected to other servers. We need to broadcast the bids across servers.
Redis Pub/Sub provides a lightweight and efficient mechanism for distributing bid updates across distributed servers.
- Each auction is mapped to a dedicated Redis Pub/Sub channel. For auction 123, we have a channel 'auction_123'.
- Each WebSocket server subscribes to the Redis channel
- A client connects to a WebSocket server and joins the 'auction_123' channel
- When a new bid is placed for auction 123, the Bid Service publishes an event to Redis on the channel 'auction_123'
- Redis pushes the message to all subscribed WebSocket servers
- The WebSocket servers receive the event and forward it to connected clients via WebSocket
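The fan-out above can be simulated end to end with an in-memory stand-in for Redis Pub/Sub; the `PubSubBroker` class is purely illustrative, and a real deployment would use a Redis client's publish/subscribe commands instead:

```python
from collections import defaultdict

class PubSubBroker:
    """In-memory stand-in for Redis Pub/Sub (fire-and-forget fan-out)."""

    def __init__(self):
        self.channels = defaultdict(list)  # channel -> subscriber handlers

    def subscribe(self, channel, handler):
        self.channels[channel].append(handler)

    def publish(self, channel, message):
        for handler in self.channels[channel]:  # fan out to every
            handler(message)                    # subscribed WebSocket server

broker = PubSubBroker()
received = []
broker.subscribe("auction_123", received.append)  # WebSocket server A
broker.subscribe("auction_123", received.append)  # WebSocket server B
broker.publish("auction_123", {"winningBid": 6200, "currentWinner": "user456"})
```

Note the fire-and-forget semantics: a server that is disconnected at publish time simply misses the message, which is exactly the caveat discussed below.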
Final Flow
Bid Placed → Persisted → Publish Event → Broadcast Layer → Clients Updated
Why Redis Pub/Sub:
- Real-Time Broadcast
- Extremely low latency
- Simple fan-out model
- Highly Scalable - Scales with Multiple Push Servers
- Simple and Lightweight
Caveats of Redis Pub/Sub
Redis Pub/Sub follows a fire-and-forget model:
- No message persistence
- Disconnected subscribers miss messages
- Not suitable as a durable event store
Because of this, Redis Pub/Sub is used for live updates, while durable systems (Kafka / DB) handle persistence. On reconnect, the client fetches the latest state from Redis/DB.

When Do We Publish Pub/Sub Events?
A common design question in real-time systems is when to publish the Pub/Sub event for a new bid.
Publish Before Persistence ❌
Flow: Bid Service → Publish Event → Persist to DB
Problem:
- Users may see a bid that never actually gets stored
- DB write / worker failure causes ghost bids
- Leads to state inconsistency and trust issues
This is dangerous for correctness-sensitive systems like auctions.
Publish After Persistence ✅ (Preferred)
Flow: Worker → Persist to DB → Update Cache → Publish Event
Benefits:
- Database remains the source of truth
- Only valid, durable bids are broadcast
- Prevents phantom updates and UI rollback issues
Although this introduces a tiny latency, it is typically negligible and not user-perceptible.
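The preferred worker flow can be sketched in a few lines; the function name and dict shapes are illustrative assumptions, and the ordering of side effects is the whole point:

```python
def handle_bid(bid, db, cache, publish):
    """Publish-after-persistence: DB write first (source of truth),
    then the cache refresh, then the real-time broadcast."""
    db.append(bid)                                   # 1. durable write
    cache[bid["auction_id"]] = {                     # 2. refresh hot state
        "highest_bid": bid["price"],
        "highest_bidder": bid["user_id"],
        "status": "ACTIVE",
    }
    publish("auction_" + bid["auction_id"], bid)     # 3. broadcast last
```

If the process dies between steps 1 and 3, clients miss an update but never see a bid that the database does not hold.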
Why This Tradeoff Makes Sense
In distributed systems:
- Correctness is more important than micro-latency
- A slightly delayed but accurate update is better UX than an instant but incorrect one
- Redis publish overhead is extremely small compared to network/UI delays

Practical Optimization
To keep UI responsive while preserving correctness:
- Acknowledge the bid quickly after validation
- Show an optimistic state ("Processing bid...")
- Let Pub/Sub events deliver the final authoritative update
This balances speed and consistency, which is the real goal of system design.