Request pipelining is a technique to dramatically improve Redis performance by sending multiple commands to the server without waiting for individual responses. This reduces network round-trip latency and increases throughput.
## How Pipelining Works
Without pipelining, each command follows this pattern:

- Client sends command
- Client waits for network round-trip
- Server processes command
- Client receives response
- Repeat for next command
With pipelining, the pattern becomes:

- Client sends command 1
- Client sends command 2
- Client sends command 3
- ...
- Client sends command N
- Client reads all responses in order
## Protocol Implementation
Pipelining works naturally with the RESP protocol because:

- Commands and responses are self-delimiting
- The server processes commands in a queue (networking.c:172-175)
- Responses are buffered in the client’s output buffer (networking.c:213-217)
- Multiple commands can be in the query buffer simultaneously
### Query Buffer Processing
When multiple pipelined commands arrive, they’re stored in the client’s query buffer (c->querybuf) and processed sequentially: the processInputBuffer() function handles this loop, parsing and executing commands one at a time from the buffer.
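A simplified model of that loop, sketched in Python (the function names are illustrative; the real implementation is the C code in networking.c):

```python
# Sketch of draining a query buffer that holds several pipelined RESP
# commands. Real Redis does this in C inside processInputBuffer().

def parse_resp_command(buf: bytes):
    """Parse one RESP array-of-bulk-strings command from the front of
    buf. Returns (args, remaining) or (None, buf) if incomplete."""
    if not buf.startswith(b"*"):
        return None, buf
    head, sep, rest = buf.partition(b"\r\n")
    if not sep:
        return None, buf
    args = []
    for _ in range(int(head[1:])):      # argument count after '*'
        if not rest.startswith(b"$"):
            return None, buf
        lenline, sep, rest = rest.partition(b"\r\n")
        if not sep:
            return None, buf
        n = int(lenline[1:])            # bulk string length after '$'
        if len(rest) < n + 2:
            return None, buf            # payload not fully received yet
        args.append(rest[:n])
        rest = rest[n + 2:]             # skip payload + trailing \r\n
    return args, rest

def drain_query_buffer(querybuf: bytes):
    """Parse and collect commands one at a time until the buffer is
    exhausted or only a partial command remains."""
    commands = []
    while querybuf:
        args, querybuf = parse_resp_command(querybuf)
        if args is None:
            break                       # partial command stays buffered
        commands.append(args)
    return commands, querybuf

pipelined = (b"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"
             b"*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n")
cmds, leftover = drain_query_buffer(pipelined)
# cmds -> [[b"SET", b"foo", b"bar"], [b"GET", b"foo"]]; leftover -> b""
```

Note how a partial trailing command simply stays in the buffer until more bytes arrive, mirroring the server’s behavior.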
## Performance Benefits
### Latency Reduction
Consider executing 10,000 commands with a 0.5ms network round-trip time.

Without pipelining:

- Time = 10,000 commands × 0.5ms = 5 seconds
- Most time is spent waiting for the network

With pipelining (batches of 100 commands):

- Time = 100 batches × 0.5ms = 50ms + processing time
- 100x faster for network-bound operations
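The arithmetic above can be checked directly (0.5 ms RTT, ignoring server processing time):

```python
# Back-of-envelope latency model for the numbers above.
rtt_ms = 0.5
commands = 10_000
batch = 100

no_pipeline_ms = commands * rtt_ms           # one round trip per command
pipelined_ms = (commands // batch) * rtt_ms  # one round trip per batch

assert no_pipeline_ms == 5_000               # 5 seconds
assert pipelined_ms == 50                    # 50 ms: 100x fewer round trips
```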
### Throughput Increase
Pipelining also improves throughput by:

- Reducing syscall overhead: fewer socket read/write operations
- Better CPU utilization: the server spends more time processing, less time waiting
- Efficient buffer usage: batched operations use memory more efficiently
The redis-benchmark tool exposes this directly: with its -P flag, you can specify the pipeline size.
## Pipelining Example
Here’s what pipelined commands look like at the protocol level when the client sends several commands at once:

- All commands are sent without waiting for responses
- Responses arrive in the same order as requests
- The protocol format is identical to non-pipelined requests
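For instance, three pipelined SET commands could look like this on the wire (key and value names are illustrative; line breaks added for readability, with \r\n shown as escapes):

```
Client sends (all at once):
*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n
*3\r\n$3\r\nSET\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n
*3\r\n$3\r\nSET\r\n$4\r\nkey3\r\n$6\r\nvalue3\r\n

Server replies (in order):
+OK\r\n
+OK\r\n
+OK\r\n
```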
## Client Implementation Pattern
A typical pipelining implementation in a client buffers commands locally, writes them to the socket in one batch, and then reads all the responses. Responses always arrive in the same order as requests; the server guarantees this ordering even when using pipelining.
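One way to sketch this pattern (a hypothetical Pipeline class; the transport callable is a stub standing in for the socket, so the example runs without a server):

```python
# Client-side pipelining sketch: queue commands locally, send them in
# one write, then read the replies back in order.

def encode(args):
    """Encode one command as a RESP array of bulk strings."""
    out = b"*%d\r\n" % len(args)
    for a in args:
        a = a if isinstance(a, bytes) else str(a).encode()
        out += b"$%d\r\n%s\r\n" % (len(a), a)
    return out

class Pipeline:
    def __init__(self, transport):
        self.transport = transport   # callable: (payload, count) -> replies
        self.buffer = bytearray()
        self.pending = 0

    def command(self, *args):
        self.buffer += encode(args)  # queue locally, send nothing yet
        self.pending += 1
        return self

    def execute(self):
        # One write for the whole batch, then read replies in order.
        replies = self.transport(bytes(self.buffer), self.pending)
        self.buffer.clear()
        self.pending = 0
        return replies

# Stub transport standing in for a Redis server: acknowledges each
# queued command with OK (a real transport parses RESP off the wire).
fake_server = lambda payload, n: [b"+OK"] * n

pipe = Pipeline(fake_server)
pipe.command("SET", "a", 1).command("SET", "b", 2)
print(pipe.execute())   # [b'+OK', b'+OK']
```

A real client would write the buffer to a TCP socket and parse RESP replies from it; the server’s ordering guarantee is what lets execute() match the i-th reply to the i-th queued command.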
## Buffer Management
The server manages pipelined commands through several buffer mechanisms.

### Input Buffering
Pipelined commands accumulate in the client’s query buffer (c->querybuf). The buffer automatically grows to accommodate large pipelines, up to the configured limit.
Constants from server.h:188-193:

- PROTO_IOBUF_LEN (16KB) - initial I/O buffer size
- PROTO_INLINE_MAX_SIZE (64KB) - maximum inline protocol size
- PROTO_MBULK_BIG_ARG (32KB) - large argument threshold
### Output Buffering
Responses are queued in the client’s output buffer. The server uses two mechanisms:

- Static buffer (c->buf) - fast, fixed-size buffer (networking.c:135)
- Reply list (c->reply) - dynamic list for large responses (networking.c:213)
### Pending Write Queue
Clients with pending output are placed in a write queue. Before returning to the event loop, the server attempts to flush these buffers synchronously (networking.c:282-299). If the complete response cannot be written immediately, a write handler is installed to continue when the socket becomes writable (networking.c:258-273).

## Limitations and Considerations
### Order Dependency
Pipelining works best with independent commands. If later commands depend on earlier results, you cannot use pipelining effectively; for such cases, use transactions (MULTI/EXEC) or Lua scripts.
### Memory Usage
Large pipelines consume memory for buffering:

- Client query buffer holds incoming commands
- Server output buffer holds pending responses
### Timeout Behavior
When pipelining many commands, be aware that:

- The client timeout clock starts when the connection becomes idle
- While processing a pipeline, the connection is not idle
- Very large pipelines could trigger a timeout if processing takes too long
## Best Practices
- Use moderate batch sizes: 100-1000 commands per pipeline is usually optimal
- Pipeline independent commands: Commands that don’t depend on each other’s results
- Handle partial failures: Even in a pipeline, individual commands can fail. Check each response.
- Monitor memory: Large pipelines increase server memory usage temporarily
- Consider transactions: For related commands that must execute atomically, use MULTI/EXEC instead
- Avoid blocking commands: Don’t pipeline blocking commands like BLPOP, as they’ll stall the pipeline
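To illustrate the partial-failure point: RESP error replies begin with a - byte, so a client should check each response in the batch individually (the reply values below are made up for illustration):

```python
# Replies arrive one per pipelined command, in order. One failing
# command does not abort the others (illustrative reply values).
replies = [b"+OK", b"-ERR value is not an integer or out of range", b":42"]

failed = [i for i, reply in enumerate(replies) if reply.startswith(b"-")]
print(failed)   # indexes of commands whose reply was an error -> [1]
```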
## Pipelining in Redis Tools
Redis’s own tools demonstrate pipelining.

### redis-benchmark

The benchmark tool uses the -P flag to set pipeline depth (redis-benchmark.c:80, 1434-1435):
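For example, a comparison run might look like this (the -t, -n, and -P flags are real redis-benchmark options; the specific values are illustrative, and a local Redis server is assumed):

```shell
# Baseline: one request per round trip (-P defaults to 1)
redis-benchmark -t set,get -n 100000

# Pipelined: 16 requests per round trip
redis-benchmark -t set,get -n 100000 -P 16
```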
### redis-cli with --pipe

The redis-cli --pipe mode implements mass insertion using pipelining:
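For example (data.txt is an illustrative file name; Redis’s mass-insertion guide recommends pre-encoding its contents in RESP protocol):

```shell
# Stream a file of commands through redis-cli in pipe mode
cat data.txt | redis-cli --pipe
```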
## Measuring Pipeline Performance
To measure pipelining benefits, run the same workload with and without the -P flag in redis-benchmark and compare the reported throughput.

## Implementation Notes
The server’s pipeline handling is integrated throughout the networking layer:

- Command parsing: loops through the query buffer until exhausted (networking.c:2942-3048)
- Execution: each parsed command is executed immediately (processCommand)
- Response queueing: responses accumulate in the output buffer (networking.c:485-520)
- Async writes: output is flushed asynchronously when the socket becomes writable (networking.c:2783-2826)