API Reference
This section provides detailed API documentation for all modules, classes, and functions in the Cache Middleware package.
Core Components
FastAPI/Starlette caching middleware implementation.
This module provides HTTP response caching capabilities through a middleware that integrates with configurable cache backends.
- class cache_middleware.middleware.CacheMiddleware(app: FastAPI, backend: CacheBackend)[source]
Bases:
BaseHTTPMiddleware
HTTP caching middleware for FastAPI/Starlette applications.
This middleware intercepts HTTP requests and provides response caching using a configurable backend. It works in conjunction with the @cache decorator to determine which endpoints should be cached.
- Parameters:
app (FastAPI) – The FastAPI application instance
backend (CacheBackend) – A fully initialized cache backend instance that implements the CacheBackend interface
Examples
>>> from cache_middleware.backends.redis_backend import RedisBackend
>>> redis_backend = RedisBackend(url="redis://localhost:6379")
>>> app.add_middleware(CacheMiddleware, backend=redis_backend)
- __init__(app: FastAPI, backend: CacheBackend)[source]
Initialize the cache middleware.
- Parameters:
app (FastAPI) – The FastAPI application instance
backend (CacheBackend) – A fully initialized cache backend instance
- async dispatch(request: Request, call_next)[source]
Process HTTP requests and apply caching logic.
This method intercepts incoming requests, checks if the endpoint has caching enabled via the @cache decorator, and either returns a cached response or caches the response from the endpoint.
- Parameters:
request (Request) – The incoming HTTP request
call_next (callable) – The next middleware or endpoint in the chain
- Returns:
Either a cached JSONResponse or the response from the endpoint
- Return type:
Response
Notes
The caching logic follows these steps:
1. Find the endpoint function from the application routes.
2. Check whether the endpoint has the _use_cache attribute (set by the @cache decorator).
3. Handle Cache-Control headers (no-store, no-cache).
4. Generate a unique cache key based on method, path, params, and body.
5. Return the cached response if one is available.
6. Otherwise call the endpoint and cache successful JSON responses.
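Step 4 above can be sketched as follows. This is an illustrative assumption: the helper name build_cache_key and the exact key layout are not the package's actual implementation, but they show how method, path, query parameters, and body can be combined into a stable key.

```python
import hashlib

def build_cache_key(method: str, path: str, query: str, body: bytes) -> str:
    """Hypothetical sketch: hash the request parts into a fixed-length key."""
    raw = f"{method}:{path}:{query}:".encode() + body
    return "cache:" + hashlib.sha256(raw).hexdigest()

key = build_cache_key("GET", "/items", "page=1", b"")
```

Hashing keeps keys bounded in length regardless of query or body size, and identical requests always map to the same key.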
Cache decorators for FastAPI/Starlette endpoints.
This module provides decorators to mark endpoints for caching. The actual caching logic is implemented in the CacheMiddleware.
- cache_middleware.decorators.cache(timeout: int = 300)[source]
Decorator to enable caching for a FastAPI endpoint.
This decorator marks a function as cacheable by setting internal attributes that the CacheMiddleware will inspect. The decorator itself doesn’t perform any caching; it only provides metadata to the middleware.
- Parameters:
timeout (int, default=300) – Cache expiration timeout in seconds
- Returns:
The decorator function that marks the endpoint for caching
- Return type:
callable
Examples
>>> @app.get("/items")
... @cache(timeout=600)  # Cache for 10 minutes
... async def get_items():
...     return {"items": [1, 2, 3]}

>>> @app.post("/calculate")
... @cache(timeout=120)  # Cache for 2 minutes
... async def calculate(data: dict):
...     return {"result": sum(data.get("numbers", []))}
Notes
The decorator sets two attributes on the function:
- _use_cache: Boolean flag indicating caching is enabled
- _cache_timeout: Integer timeout value in seconds
The CacheMiddleware looks for these attributes to determine which endpoints should be cached and for how long.
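A minimal sketch of such a decorator, assuming the attribute-tagging approach described above (the async wrapper and exact internals are illustrative, not the package's actual code):

```python
import asyncio
import functools

def cache(timeout: int = 300):
    """Illustrative sketch: tag the endpoint; perform no caching here."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            return await func(*args, **kwargs)
        wrapper._use_cache = True          # flag inspected by the middleware
        wrapper._cache_timeout = timeout   # expiration in seconds
        return wrapper
    return decorator

@cache(timeout=600)
async def get_items():
    return {"items": [1, 2, 3]}

print(get_items._use_cache, get_items._cache_timeout)
print(asyncio.run(get_items()))
```

Because the decorator only sets attributes, endpoints behave identically with or without the middleware installed.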
Backend Implementations
Redis/ValKey Backend
Redis cache backend implementation.
This module provides a Redis-based cache backend for production use. It supports connection pooling, automatic reconnection, and comprehensive error handling.
- class cache_middleware.backends.redis_backend.RedisBackend(url: str = 'redis://localhost:6379', **kwargs)[source]
Bases:
CacheBackend
Redis cache backend for production deployments.
This backend uses Redis as the caching layer, providing persistence, clustering support, and high performance. It implements lazy connection initialization and comprehensive error handling.
- Parameters:
url (str, default="redis://localhost:6379") – Redis connection URL (e.g., “redis://localhost:6379”, “rediss://secure:6380”)
**kwargs – Additional keyword arguments passed to redis.asyncio.from_url(). Common options include:
- max_connections: Maximum connections in the pool
- retry_on_timeout: Whether to retry on timeout
- password: Redis password
- socket_keepalive: Enable TCP keepalive
- redis
Redis client instance (initialized lazily)
- Type:
Optional[aioredis.Redis]
Examples
>>> # Basic usage
>>> backend = RedisBackend(url="redis://localhost:6379")

>>> # With custom configuration
>>> backend = RedisBackend(
...     url="redis://localhost:6379",
...     max_connections=20,
...     retry_on_timeout=True,
...     password="secret"
... )
- __init__(url: str = 'redis://localhost:6379', **kwargs)[source]
Initialize the Redis cache backend.
- Parameters:
url (str, default="redis://localhost:6379") – Redis connection URL
**kwargs – Additional connection parameters for Redis client
- async close() → None [source]
Close the Redis connection and clean up resources.
This method should be called during application shutdown to properly close the Redis connection pool.
- async delete(key: str) → None [source]
Delete a key from Redis.
- Parameters:
key (str) – The cache key to delete
The Redis backend is fully compatible with both Redis and ValKey databases. ValKey is a high-performance data structure server that provides 100% compatibility with Redis APIs.
Memory Backend
In-memory cache backend implementation.
This module provides a simple in-memory cache backend suitable for development, testing, and single-instance deployments where Redis is not available or needed.
- class cache_middleware.backends.memory_backend.MemoryBackend(max_size: int = 1000)[source]
Bases:
CacheBackend
In-memory cache backend for development and testing.
This backend stores cache entries in memory using a Python dictionary. It includes automatic expiration of entries and simple LRU eviction when the cache reaches its maximum size.
- Parameters:
max_size (int, default=1000) – Maximum number of entries to store in the cache
- _cache
Internal cache storage mapping keys to (value, expiry_time) tuples
Examples
>>> backend = MemoryBackend(max_size=500)
>>> await backend.set("key1", "value1", 300)
>>> value = await backend.get("key1")
- __init__(max_size: int = 1000)[source]
Initialize the in-memory cache backend.
- Parameters:
max_size (int, default=1000) – Maximum number of cache entries to store
- async close() → None [source]
Close the cache backend and clean up resources.
For the in-memory backend, this clears all cached entries.
- async delete(key: str) → None [source]
Delete a key from the cache.
- Parameters:
key (str) – The cache key to delete
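The (value, expiry_time) storage scheme and simple LRU eviction described above can be sketched as follows. TinyMemoryBackend is an illustrative stand-in, not the package's MemoryBackend.

```python
import asyncio
import time
from collections import OrderedDict

class TinyMemoryBackend:
    """Illustrative sketch of (value, expiry_time) storage with LRU eviction."""

    def __init__(self, max_size: int = 1000):
        self.max_size = max_size
        self._cache: "OrderedDict[str, tuple[str, float]]" = OrderedDict()

    async def get(self, key: str):
        entry = self._cache.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # expired: drop and report a miss
            del self._cache[key]
            return None
        self._cache.move_to_end(key)        # mark as recently used
        return value

    async def set(self, key: str, value: str, timeout: int) -> None:
        if key not in self._cache and len(self._cache) >= self.max_size:
            self._cache.popitem(last=False)  # evict the least recently used
        self._cache[key] = (value, time.monotonic() + timeout)

async def demo():
    backend = TinyMemoryBackend(max_size=2)
    await backend.set("a", "1", 300)
    await backend.set("b", "2", 300)
    await backend.set("c", "3", 300)  # evicts "a"
    return await backend.get("a"), await backend.get("c")

print(asyncio.run(demo()))  # (None, '3')
```

An OrderedDict makes the LRU bookkeeping explicit: reads move keys to the end, and eviction pops from the front.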
Helper Functions
Helper functions for common backend configurations.
This module provides convenience functions to create and configure cache backends using environment variables and common patterns. It simplifies backend setup for different deployment environments.
- cache_middleware.helpers.auto_configure_backend() → CacheBackend [source]
Auto-configure backend based on environment variables.
This function automatically selects and configures the appropriate backend based on the CACHE_BACKEND environment variable.
- Returns:
CacheBackend – Configured backend instance
Environment Variables
CACHE_BACKEND (str, default="memory") – Backend type to use ("redis" or "memory")
- Raises:
ValueError – If an unknown backend type is specified
Examples
>>> os.environ["CACHE_BACKEND"] = "redis"
>>> backend = auto_configure_backend()  # Returns RedisBackend
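The selection logic can be sketched as follows; select_backend_type is a hypothetical helper illustrating the documented CACHE_BACKEND dispatch and the ValueError raised for unknown types:

```python
import os

def select_backend_type(env=None) -> str:
    """Hypothetical sketch of CACHE_BACKEND dispatch (default: memory)."""
    env = os.environ if env is None else env
    backend = env.get("CACHE_BACKEND", "memory").lower()
    if backend not in ("redis", "memory"):
        raise ValueError(f"Unknown cache backend: {backend}")
    return backend

print(select_backend_type({}))                          # memory (default)
print(select_backend_type({"CACHE_BACKEND": "redis"}))  # redis
```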
- cache_middleware.helpers.create_development_backend() → MemoryBackend [source]
Create in-memory backend optimized for development.
This function creates a small in-memory cache suitable for development and testing environments.
- Returns:
Development-optimized memory backend
- Return type:
MemoryBackend
- cache_middleware.helpers.create_memory_backend_from_env() → MemoryBackend [source]
Create in-memory backend using environment variables.
- Returns:
MemoryBackend – Configured in-memory backend instance
Environment Variables
MEMORY_CACHE_SIZE (int, default=1000) – Maximum number of cache entries to store
Examples
>>> os.environ["MEMORY_CACHE_SIZE"] = "500"
>>> backend = create_memory_backend_from_env()
- cache_middleware.helpers.create_production_redis_backend() → RedisBackend [source]
Create Redis backend optimized for production.
This function creates a Redis backend with production-ready settings including connection pooling, keepalive, and health checks.
- Returns:
Production-optimized Redis backend
- Return type:
RedisBackend
- Raises:
ImportError – If redis backend dependencies are not installed
Environment Variables
REDIS_URL (str, default="redis://localhost:6379") – Redis connection URL
- cache_middleware.helpers.create_redis_backend_from_env() → RedisBackend [source]
Create Redis backend using environment variables.
This function reads Redis configuration from environment variables, making it easy to configure the backend for different deployment environments without code changes.
- Returns:
Configured Redis backend instance
- Return type:
RedisBackend
- Raises:
ImportError – If redis backend dependencies are not installed
Environment Variables
REDIS_URL (str, default="redis://localhost:6379") – Redis connection URL
REDIS_MAX_CONNECTIONS (int, default=10) – Maximum number of connections in the pool
REDIS_PASSWORD (str, optional) – Redis authentication password
REDIS_RETRY_ON_TIMEOUT (bool, default=True) – Whether to retry operations on timeout
Examples
>>> # Set environment variables
>>> os.environ["REDIS_URL"] = "redis://prod-redis:6379"
>>> os.environ["REDIS_MAX_CONNECTIONS"] = "20"
>>> backend = create_redis_backend_from_env()
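How these variables might map to backend keyword arguments, as a hedged sketch (redis_settings_from_env is a hypothetical name; the defaults follow the documentation above):

```python
import os

def redis_settings_from_env(env=None) -> dict:
    """Hypothetical sketch: translate env vars into RedisBackend kwargs."""
    env = os.environ if env is None else env
    settings = {
        "url": env.get("REDIS_URL", "redis://localhost:6379"),
        "max_connections": int(env.get("REDIS_MAX_CONNECTIONS", "10")),
        "retry_on_timeout": env.get("REDIS_RETRY_ON_TIMEOUT", "true").lower() == "true",
    }
    if env.get("REDIS_PASSWORD"):  # only pass a password when one is set
        settings["password"] = env["REDIS_PASSWORD"]
    return settings

print(redis_settings_from_env({"REDIS_URL": "redis://prod-redis:6379",
                               "REDIS_MAX_CONNECTIONS": "20"}))
```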
- cache_middleware.helpers.get_backend_for_environment(env: str = None) → CacheBackend [source]
Get backend configured for a specific environment.
This function provides environment-specific backend configurations with sensible defaults for common deployment scenarios.
- Parameters:
env (str, optional) – Environment name (“development”, “production”, “testing”). If None, uses the ENVIRONMENT environment variable.
- Returns:
CacheBackend – Backend configured for the specified environment
Environment Variables
ENVIRONMENT (str, default="development") – Deployment environment used when the env parameter is None
Examples
>>> # Explicit environment
>>> backend = get_backend_for_environment("production")

>>> # From environment variable
>>> os.environ["ENVIRONMENT"] = "production"
>>> backend = get_backend_for_environment()
Logger Configuration
Logger configuration for the cache middleware.
This module provides logger setup using loguru, which replaces the standard Python logging module for better performance and ease of use. It’s specifically named logger_config.py to avoid circular import issues with the stdlib logging module.
- cache_middleware.logger_config.configure_logger()[source]
Configure the logger for the application.
This function sets up the loguru logger with default settings. It can be extended to add custom log handlers, formatters, and output destinations.
Examples
>>> configure_logger()
>>> logger.info("Application started")
Notes
Common configuration options that can be added:
- File rotation: logger.add("app.log", rotation="1 MB", level="INFO")
- Console output: logger.add(sys.stderr, level="DEBUG")
- JSON formatting: logger.add("app.json", serialize=True)
Type Definitions
The Cache Middleware uses type hints throughout the codebase. Here are the key types:
from typing import Optional, Any, Dict, List, Union, Callable
from starlette.requests import Request
from starlette.responses import Response
# Cache backend type hint
CacheBackendType = CacheBackend
# Cache decorator type hint
CacheDecoratorType = Callable[[Callable], Callable]
# Request/Response types from Starlette
RequestType = Request
ResponseType = Response
# Configuration types
CacheConfigType = Dict[str, Any]
BackendConfigType = Dict[str, Union[str, int, bool]]
Class Hierarchy
Backend Classes
CacheBackend (ABC)
├── MemoryBackend
├── RedisBackend (compatible with Redis and ValKey)
└── Custom backends (user-defined)
Middleware Classes
BaseHTTPMiddleware
└── CacheMiddleware
Exception Classes
Exception
├── CacheMiddlewareError (future)
│ ├── BackendError (future)
│ ├── ConfigurationError (future)
│ └── SerializationError (future)
└── ValueError (built-in)
└── InvalidTimeoutError (future)
Usage Examples
Basic API Usage
Creating and configuring backends:
from cache_middleware import RedisBackend, MemoryBackend, CacheMiddleware, cache
# Create Redis/ValKey backend
redis_backend = RedisBackend(
    url="redis://localhost:6379",  # Or use port 6380 for ValKey
    max_connections=10
)
# Create Memory backend
memory_backend = MemoryBackend(max_size=1000)
# Register middleware with FastAPI
app.add_middleware(CacheMiddleware, backend=redis_backend)
# Use cache decorator
@app.get("/data")
@cache(timeout=300)
async def get_data():
    return {"data": "cached_response"}
Advanced API Usage
Custom backend implementation:
from cache_middleware.backends.base import CacheBackend
from typing import Optional
class CustomBackend(CacheBackend):
    async def get(self, key: str) -> Optional[str]:
        # Custom implementation
        pass

    async def set(self, key: str, value: str, timeout: int) -> None:
        # Custom implementation
        pass

    async def delete(self, key: str) -> None:
        # Custom implementation
        pass

    async def close(self) -> None:
        # Custom implementation
        pass
Helper Functions API
Configuration helpers:
import os

from cache_middleware.helpers import (
    auto_configure_backend,
    get_backend_for_environment,
    create_redis_backend_from_env,
    create_memory_backend_from_env
)
# Auto-configure from environment
backend = auto_configure_backend()
# Environment-specific backend
backend = get_backend_for_environment("production")
# Redis backend configured via environment variables
os.environ["REDIS_URL"] = "redis://localhost:6379"
os.environ["REDIS_MAX_CONNECTIONS"] = "20"
redis_backend = create_redis_backend_from_env()
Configuration Parameters
CacheMiddleware Parameters
| Parameter | Type | Description |
|---|---|---|
| backend | CacheBackend | Cache backend instance (required) |
| exclude_paths | List[str] | Paths to exclude from caching |
| include_paths | List[str] | Only cache these specific paths |
| cache_header_name | str | HTTP header for cache status (default: "X-Cache-Status") |
@cache Decorator Parameters
| Parameter | Type | Description |
|---|---|---|
| timeout | int | Cache timeout in seconds (default: 300) |
| cache_control | bool | Respect HTTP Cache-Control headers (default: True) |
| exclude_headers | List[str] | Headers to exclude from the cache key |
| include_headers | List[str] | Headers to include in the cache key |
| vary_on | List[str] | Additional parameters for cache key variation |
RedisBackend Parameters
| Parameter | Type | Description |
|---|---|---|
| url | str | Redis connection URL (default: "redis://localhost:6379") |
| max_connections | int | Maximum connections in pool (default: 10) |
| retry_on_timeout | bool | Retry operations on timeout (default: True) |
| socket_keepalive | bool | Enable TCP keepalive (default: True) |
| socket_keepalive_options | Dict[int, int] | TCP keepalive options |
| health_check_interval | int | Health check interval in seconds (default: 30) |
| password | str | Redis password (optional) |
| db | int | Redis database number (default: 0) |
| ssl | bool | Enable SSL connection (default: False) |
| ssl_keyfile | str | SSL key file path |
| ssl_certfile | str | SSL certificate file path |
| ssl_cert_reqs | str | SSL certificate requirements |
| ssl_ca_certs | str | SSL CA certificates file path |
MemoryBackend Parameters
| Parameter | Type | Description |
|---|---|---|
| max_size | int | Maximum number of cached items (default: 1000) |
| cleanup_interval | int | Cleanup interval in seconds (default: 300) |
| default_timeout | int | Default timeout for items (default: 3600) |
Environment Variables
The Cache Middleware supports configuration via environment variables:
| Variable | Description |
|---|---|
| CACHE_BACKEND | Backend type: "redis" or "memory" |
| REDIS_URL | Redis connection URL |
| REDIS_MAX_CONNECTIONS | Maximum Redis connections |
| REDIS_PASSWORD | Redis password |
| REDIS_DB | Redis database number |
| REDIS_SSL | Enable Redis SSL ("true"/"false") |
| MEMORY_CACHE_SIZE | Memory backend max size |
| CACHE_DEFAULT_TIMEOUT | Default cache timeout in seconds |
| CACHE_CLEANUP_INTERVAL | Cleanup interval in seconds |
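A hedged example of setting these variables for a Redis-backed deployment; all values below are placeholders, not recommended settings:

```shell
# Example environment for a Redis-backed deployment (placeholder values)
export CACHE_BACKEND=redis
export REDIS_URL=redis://prod-redis:6379
export REDIS_MAX_CONNECTIONS=20
export REDIS_PASSWORD=change-me
export REDIS_DB=0
export REDIS_SSL=false
export CACHE_DEFAULT_TIMEOUT=300
```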
Error Handling
The Cache Middleware provides built-in error handling. Custom exception classes will be implemented in future versions:
CacheMiddlewareError: Base exception for all cache-related errors
BackendError: Raised when backend operations fail
ConfigurationError: Raised for invalid configuration
SerializationError: Raised when response serialization fails
Error Codes
| Code | Description |
|---|---|
| BACKEND_ERROR | Backend operation failed |
| CONFIGURATION_ERROR | Invalid configuration provided |
| SERIALIZATION_ERROR | Failed to serialize/deserialize response |
| TIMEOUT_ERROR | Operation timed out |
| CONNECTION_ERROR | Failed to connect to backend service |
HTTP Headers
Cache Status Headers
The middleware adds cache status information via HTTP headers:
| Header | Description |
|---|---|
| X-Cache-Status | "HIT" or "MISS" indicating cache status |
| X-Cache-Key | The cache key used (debug mode only) |
| X-Cache-Timeout | Cache timeout value used |
| X-Cache-Backend | Backend type used for caching |
Request Headers
Headers that affect caching behavior:
| Header | Description |
|---|---|
| Cache-Control | Standard HTTP cache control directives |
| If-None-Match | ETag validation (future feature) |
| Pragma | HTTP/1.0 cache control (no-cache support) |
Cache-Control Directives
Supported Cache-Control directives:
| Directive | Description |
|---|---|
| no-cache | Bypass cache for this request |
| no-store | Do not cache this response |
| max-age=seconds | Override default cache timeout |
| must-revalidate | Force revalidation on expired cache |
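The no-cache and no-store handling can be sketched as follows. cache_control_flags is a hypothetical helper, and the (skip read, skip write) split is an illustrative interpretation of the directives above.

```python
def cache_control_flags(header):
    """Hypothetical sketch: return (skip_cache_read, skip_cache_write)
    for a Cache-Control header value (or None when absent)."""
    if not header:
        return (False, False)
    directives = {part.strip().lower() for part in header.split(",")}
    skip_write = "no-store" in directives               # never store the response
    skip_read = skip_write or "no-cache" in directives  # bypass any stored copy
    return (skip_read, skip_write)

print(cache_control_flags("no-cache"))    # (True, False)
print(cache_control_flags("no-store"))    # (True, True)
print(cache_control_flags("max-age=60"))  # (False, False)
```

Note that no-store implies skipping both the read and the write, while no-cache only bypasses the stored copy.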
Performance Metrics
The Cache Middleware provides built-in performance monitoring:
Timing Metrics
# Example metrics collected
{
    "cache_get_time": 0.001,      # Time to retrieve from cache
    "cache_set_time": 0.002,      # Time to store in cache
    "backend_latency": 0.0015,    # Backend operation latency
    "total_request_time": 0.150   # Total request processing time
}
Hit Rate Metrics
# Cache performance metrics
{
    "cache_hits": 850,
    "cache_misses": 150,
    "hit_rate": 0.85,      # 85% hit rate
    "total_requests": 1000
}
Backend Health
# Backend health status
{
    "backend_type": "redis",
    "status": "healthy",
    "connection_pool_size": 10,
    "active_connections": 3,
    "last_health_check": "2024-01-15T10:30:00Z"
}
Development Tools
Testing Utilities
from cache_middleware.testing import (
    MockCacheBackend,
    TestCacheMiddleware,
    assert_cached,
    assert_not_cached
)
# Mock backend for testing
mock_backend = MockCacheBackend()
# Test assertions
async def test_caching():
    await assert_cached("/api/data", timeout=300)
    await assert_not_cached("/api/dynamic")
Debug Mode
Enable debug mode for development:
# Enable debug logging
import logging
logging.getLogger("cache_middleware").setLevel(logging.DEBUG)
# Add debug headers
app.add_middleware(
    CacheMiddleware,
    backend=backend,
    debug=True  # Adds X-Cache-Key header
)
Benchmarking
Built-in benchmarking tools:
from cache_middleware.benchmark import benchmark_backend
# Benchmark backend performance
results = await benchmark_backend(
    backend=redis_backend,
    operations=1000,
    concurrency=10
)
print(f"Operations per second: {results['ops_per_second']}")
Migration Guide
From Version 1.x to 2.x
Breaking changes and migration path:
# Version 1.x (factory pattern)
from cache_middleware import CacheMiddleware, RedisBackendFactory
factory = RedisBackendFactory(url="redis://localhost:6379")
app.add_middleware(CacheMiddleware, backend_factory=factory)
# Version 2.x (dependency injection)
from cache_middleware.middleware import CacheMiddleware
from cache_middleware.backends.redis_backend import RedisBackend
backend = RedisBackend(url="redis://localhost:6379")
app.add_middleware(CacheMiddleware, backend=backend)
Configuration Changes
Updated configuration format:
# Old configuration
cache_config = {
    "backend_type": "redis",
    "redis_url": "redis://localhost:6379"
}
# New configuration
backend = RedisBackend(url="redis://localhost:6379")
app.add_middleware(CacheMiddleware, backend=backend)
Compatibility Notes
Python Version Support
Python 3.8+: Minimum supported version
Python 3.12: Recommended version
Python 3.13: Full support
Framework Compatibility
FastAPI: Full support (recommended)
Starlette: Full support
Django: Limited support via ASGI
Flask: Not supported (use Flask-Caching instead)
Redis Version Support
Redis 5.0+: Minimum supported version
Redis 6.x: Full support
Redis 7.x: Full support with enhanced features
Dependencies
Core dependencies and their versions:
fastapi>=0.68.0
starlette>=0.14.0
redis[hiredis]>=4.0.0
loguru>=0.6.0
pydantic>=1.8.0
Contributing
API Design Guidelines
When contributing to the API:
Type Hints: All public functions must have complete type hints
Docstrings: Use NumPy-style docstrings for all public APIs
Async/Await: All I/O operations must be async
Error Handling: Fail gracefully, log errors appropriately
Testing: Maintain 100% test coverage for new APIs
Code Style
Formatter: Use ruff format for code formatting
Linter: Use ruff check for code linting
Type Checker: Use mypy for type checking
Documentation: Use sphinx for API documentation
Pull Request Process
Fork the repository
Create a feature branch
Add tests for new functionality
Update documentation
Run the full test suite
Submit pull request with clear description
See Also
What is Cache Middleware? - Introduction to Cache Middleware
Installation - Installation instructions
User Guide - Usage examples and tutorials
Middleware Configuration - Configuration options
Extending Backends - Custom backend development