- Vulnerability Name: Inadequate Rate Limiting in Multi-Process Environments with LocMemCache
- Description:
  - An attacker targets a Django application that uses `django-ratelimit` for rate limiting.
  - The application is configured to use `django.core.cache.backends.locmem.LocMemCache` as the cache backend.
  - The application is deployed in a multi-process environment, such as a WSGI server (Gunicorn or uWSGI) with multiple worker processes, or an ASGI server with multiple processes.
  - The attacker sends requests to a rate-limited endpoint.
  - Because `LocMemCache` is not shared between processes, each worker process maintains its own independent rate-limit counters.
  - If the attacker's requests are distributed across multiple processes (which happens naturally through load balancing and request distribution in multi-process deployments), the rate limit is applied per process, not globally.
  - As a result, the attacker can send more requests in total than the intended global rate limit, because each process counts only the requests it happens to serve.
- Impact: Rate limits can be circumvented, allowing attackers to bypass intended restrictions. This can lead to abuse of application resources, such as excessive API calls, brute-force attacks, or other actions that rate limiting is meant to prevent. This undermines the security functionality of the rate limiter.
- Vulnerability Rank: High
- Currently Implemented Mitigations:
  - The project includes a system check (`django_ratelimit.E003`) that flags `django.core.cache.backends.locmem.LocMemCache` as a broken backend because it is not a shared cache.
  - This system check raises an error during Django's `check` command, warning developers about the unsuitable cache backend.
  - However, the check is advisory only and does not prevent the use of `LocMemCache`; developers can silence it, as demonstrated in the provided `test_settings.py` file.
- Missing Mitigations:
  - Prevent or strongly discourage the use of `LocMemCache` or other non-shared cache backends in production environments. This could involve:
    - Promoting the system check to a critical error that cannot be easily silenced, especially in production settings.
    - Emitting runtime warnings or raising exceptions when `LocMemCache` is detected in a multi-process environment (a sketch of such a guard follows this list).
  - Improve the documentation to clearly highlight the limitations of `LocMemCache` in multi-process environments and strongly recommend shared cache backends such as Memcached or Redis for production deployments.
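  A minimal sketch of such a runtime guard, assuming it would be invoked from an application's `AppConfig.ready()` hook; `guard_ratelimit_cache` is a hypothetical helper, not part of django-ratelimit:

  ```python
  import warnings

  from django.conf import settings
  from django.core.cache import caches
  from django.core.cache.backends.locmem import LocMemCache
  from django.core.exceptions import ImproperlyConfigured


  def guard_ratelimit_cache():
      # Cache alias used by django-ratelimit; defaults to 'default' if unset.
      alias = getattr(settings, 'RATELIMIT_USE_CACHE', 'default')
      if isinstance(caches[alias], LocMemCache):
          if not settings.DEBUG:
              # Hypothetical hard failure in production-like settings.
              raise ImproperlyConfigured(
                  f"Cache '{alias}' is a LocMemCache; rate limits will not "
                  f"be shared across worker processes."
              )
          warnings.warn(
              f"Cache '{alias}' is process-local; rate limiting is per-worker only.",
              RuntimeWarning,
          )
  ```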
- Preconditions:
  - The Django application is deployed in a multi-process environment.
  - `django-ratelimit` is configured to use `django.core.cache.backends.locmem.LocMemCache`.
- Source Code Analysis:
  - `/code/django_ratelimit/checks.py`: The `check_caches` function explicitly lists `django.core.cache.backends.locmem.LocMemCache` in `KNOWN_BROKEN_CACHE_BACKENDS` with the reason `CACHE_NOT_SHARED`. This results in a `checks.Error` with `id='django_ratelimit.E003'`.
  - `/code/test_settings.py`: The `SILENCED_SYSTEM_CHECKS = ['django_ratelimit.E003', 'django_ratelimit.W001']` setting in the test configuration silences this error, indicating awareness of the issue but no active prevention in the library itself beyond the system check.
  - `/code/django_ratelimit/core.py`: The `is_ratelimited` and `get_usage` functions rely on Django's cache framework to store and retrieve rate-limit counters. When `LocMemCache` is configured, each process operates on its own isolated in-memory cache.
  - In multi-process environments, requests can be routed to different processes, and each process independently manages rate limits against its local `LocMemCache`. This process-local rate limiting fails to provide a global rate limit across the entire application instance (the simulation below illustrates the undercount).
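  The following self-contained simulation (illustrative only, not project code) shows the undercount: with `workers` isolated counters, each enforcing the limit independently, up to `workers * rate_limit` requests pass the check:

  ```python
  # W isolated per-process counters let roughly W * R requests through,
  # where R is the intended global limit.
  workers = 4        # e.g. gunicorn -w 4
  rate_limit = 2     # intended global limit: 2 requests per period

  per_worker_counts = [0] * workers  # each worker's private LocMemCache counter
  allowed = 0

  for i in range(workers * rate_limit):
      w = i % workers                # load balancer spreads requests round-robin
      per_worker_counts[w] += 1      # each worker increments only its own counter
      if per_worker_counts[w] <= rate_limit:
          allowed += 1               # every request passes its local check

  print(allowed)  # 8 == workers * rate_limit, not the intended global limit of 2
  ```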
- Security Test Case:
  - Setup:
    - Deploy a Django application using `django-ratelimit`.
    - Configure `settings.py` to use `LocMemCache`:

      ```python
      RATELIMIT_USE_CACHE = 'default'
      CACHES = {
          'default': {
              'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
              'LOCATION': 'ratelimit-tests',
          }
      }
      ```

    - Configure a view to be rate-limited, for example:

      ```python
      from django_ratelimit.decorators import ratelimit
      from django.http import HttpResponse

      @ratelimit(key='ip', rate='2/m')
      def my_view(request):
          return HttpResponse("View Accessed")
      ```

    - Deploy the application with a multi-process server such as Gunicorn (e.g., `gunicorn -w 2 your_project.wsgi`).
  - Attack Simulation:
    - From an attacker's machine, use a tool such as `curl` or a script (see the probe sketch after this list) to send HTTP requests to the rate-limited view.
    - Send 3 or more requests within a minute from the same IP address. Spacing the requests out slightly increases the chance of hitting different worker processes, but even rapid requests are typically distributed across workers by the server.
    - Observe the HTTP responses.
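    A minimal probe script, using only the Python standard library; the URL is a hypothetical route for `my_view` and should be adjusted to the actual deployment:

    ```python
    # Hypothetical probe: send 6 requests (above the 2/m limit), print statuses.
    import urllib.error
    import urllib.request

    URL = "http://127.0.0.1:8000/my-view/"  # hypothetical route for my_view

    for i in range(6):
        try:
            with urllib.request.urlopen(URL) as resp:
                print(f"request {i + 1}: HTTP {resp.status}")  # 200 -> allowed
        except urllib.error.HTTPError as e:
            print(f"request {i + 1}: HTTP {e.code} (rate limited)")
    ```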
- Expected Result (Vulnerable Case - LocMemCache):
- All requests (including the 3rd and subsequent requests within the minute) will likely return a successful "View Accessed" response (HTTP 200). This indicates that the rate limit of 2 requests per minute is not being enforced globally across processes and is being bypassed.
  - Mitigation Test:
    - Modify `settings.py` to use a shared cache backend such as Redis (ensure `django-redis` is installed and a Redis server is running):

      ```python
      RATELIMIT_USE_CACHE = 'default'
      CACHES = {
          'default': {
              'BACKEND': 'django_redis.cache.RedisCache',
              'LOCATION': 'redis://127.0.0.1:6379/1',  # replace with your Redis connection details
              'OPTIONS': {
                  'CLIENT_CLASS': 'django_redis.client.DefaultClient',
              },
          }
      }
      ```

    - Redeploy the Django application.
    - Repeat the attack simulation (send 3 or more requests within a minute).
  - Expected Result (Mitigated Case - Redis):
    - The first two requests will return a successful "View Accessed" response (HTTP 200).
    - The third and subsequent requests within the minute should be blocked by the rate limiter, raising a `Ratelimited` exception (e.g., HTTP 403 with default exception handling, or a custom response if middleware is configured). This confirms that the rate limit is now enforced globally.
- Vulnerability Name: Rate Limit Bypass via Cache Failure (Fail Open Behavior)
- Description:
  - An external attacker targets an endpoint protected with rate-limiting logic (e.g., a login or password-reset endpoint decorated with `@ratelimit`).
  - The attacker induces cache backend unreachability or unresponsiveness (e.g., network issues, flooding, misconfiguration).
  - When a request is processed, `get_usage` attempts to use the cache (via `cache.add` or `cache.incr`), and the cache call fails or returns no value.
  - The code catches the error and, if `RATELIMIT_FAIL_OPEN=True`, returns `None`.
  - `is_ratelimited` interprets `None` usage as "not rate limited", permitting the request.
  - The attacker sends many requests, bypassing rate limits and enabling brute-force attacks or abuse.
- Impact: Bypassing rate limiting allows brute-force attacks, resource exhaustion, and abuse of protected functionality, with severe security and operational consequences in scenarios that require strict rate limits.
- Vulnerability Rank: High
- Currently Implemented Mitigations:
  - `get_usage` catches cache connection errors (e.g., `socket.gaierror`) and respects the `RATELIMIT_FAIL_OPEN` setting.
  - The "fail open" approach prevents blocking legitimate traffic during cache outages.
- Missing Mitigations:
- No built-in "fail closed" alternative for enforcing rate limiting during cache unreachability.
- No fallback mechanisms (local counters, alerting) for cache connectivity issues.
- Lack of enhanced monitoring and automatic stricter rate limiting during cache failures.
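  A sketch of such a local-counter fallback, as an assumed design rather than anything django-ratelimit provides; `local_fallback_should_limit` is a hypothetical helper:

  ```python
  import time
  from collections import defaultdict, deque

  # key -> timestamps (monotonic seconds) of recent requests in this process
  _local_hits = defaultdict(deque)


  def local_fallback_should_limit(key, limit, period_seconds):
      """Per-process sliding-window check used only while the shared cache is down.

      Undercounts across workers, but strictly better than failing open.
      """
      now = time.monotonic()
      hits = _local_hits[key]
      while hits and now - hits[0] > period_seconds:
          hits.popleft()  # drop timestamps outside the window
      hits.append(now)
      return len(hits) > limit
  ```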
- Preconditions:
  - The application is deployed with a network-reachable cache backend that is susceptible to disruption.
  - `RATELIMIT_FAIL_OPEN=True` is configured (the setting defaults to `False`, so fail-open behavior must be explicitly enabled).
  - The attacker can induce cache connectivity failures or otherwise influence cache behavior.
- Source Code Analysis:
  - In `django_ratelimit/core.py`, the `get_usage` function first tries to initialize the counter:

    ```python
    try:
        added = cache.add(cache_key, initial_value, period + EXPIRATION_FUDGE)
    except socket.gaierror:  # for redis
        added = False
    ```

    then increments it:

    ```python
    try:
        count = cache.incr(cache_key)
    except ValueError:
        pass
    ```

    and finally applies the fail-open setting when no count could be obtained:

    ```python
    if count is None or count is False:
        if getattr(settings, 'RATELIMIT_FAIL_OPEN', False):
            return None
        return {
            'count': 0,
            ...
            'should_limit': True,
            ...
        }
    ```

  - Returning `None` from `get_usage` bypasses the rate limit in `is_ratelimited` (see the sketch below).
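  A simplified sketch (an assumed reduction, not the library's actual code) of why a `None` usage value means the request is allowed:

  ```python
  # Assumed simplification of the None-handling in is_ratelimited: when
  # get_usage() returns None (cache failed, RATELIMIT_FAIL_OPEN=True),
  # there is no count to compare against, so the request passes through.
  def is_ratelimited_sketch(usage):
      if usage is None:       # fail-open path: cache was unreachable
          return False        # -> request is not rate limited
      return usage['should_limit']

  print(is_ratelimited_sketch(None))                    # False -> allowed
  print(is_ratelimited_sketch({'should_limit': True}))  # True  -> blocked
  ```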
- Security Test Case:
  - Setup:
    - Deploy the application with `django-ratelimit` and a rate-limited login view (`@ratelimit(key='ip', rate='1/m')`).
    - Configure a cache backend (Redis or Memcached) and set `RATELIMIT_FAIL_OPEN=True`.
  - Simulate Cache Failure:
    - Make the cache backend unreachable, either by:
      - (a) pointing `RATELIMIT_USE_CACHE` at a non-existent server (see the settings sketch after this list), or
      - (b) blocking cache access with a network partition or firewall rule.
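    A hypothetical `settings.py` fragment for option (a); the hostname is deliberately unreachable so every cache operation fails at connect time:

    ```python
    # Hypothetical test settings: the Redis LOCATION points at a host that
    # does not exist, so cache.add/cache.incr fail and the fail-open path
    # in get_usage is taken.
    RATELIMIT_USE_CACHE = 'default'
    RATELIMIT_FAIL_OPEN = True

    CACHES = {
        'default': {
            'BACKEND': 'django_redis.cache.RedisCache',
            'LOCATION': 'redis://nonexistent-host:6379/1',  # unreachable on purpose
        }
    }
    ```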
  - Execution:
    - Send an excessive number of HTTP requests from an external client to the protected endpoint.
  - Verification:
    - Observe that the endpoint does not throttle requests after the rate limit is exceeded.
    - Confirm that cache operations fail and `get_usage` returns `None`, bypassing rate limiting (a shell check is sketched below).
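    One possible spot check from `python manage.py shell`; the `group` and `key` arguments are illustrative, and the call assumes the `get_usage` signature found in recent django-ratelimit releases:

    ```python
    # Illustrative check run inside a Django shell against the broken cache.
    from django.test import RequestFactory
    from django_ratelimit.core import get_usage

    request = RequestFactory().get('/login/')
    usage = get_usage(request, group='login', key='ip', rate='1/m', increment=True)
    print(usage)  # None -> cache failed and RATELIMIT_FAIL_OPEN let it through
    ```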
  - Conclusion:
    - Attacker-induced cache failures bypass rate-limiting protections, confirming the vulnerability.
- Vulnerability Name: No High-Rank Vulnerabilities Found
- Description: After a thorough review of the `django-ratelimit` project, from the perspective of an external attacker targeting a publicly available instance and applying the specified exclusion and inclusion criteria, no further vulnerabilities of high or critical rank were identified that are inherent to the library itself and exploitable in its intended usage. Potential misconfiguration or misuse of the library in specific application contexts might render rate limiting ineffective, but such cases are not considered vulnerabilities within the scope of this analysis of the `django-ratelimit` project's code and design.
- Impact: N/A
- Vulnerability Rank: N/A
- Currently Implemented Mitigations:
  - CodeQL static analysis.
  - Comprehensive test suite.
  - Configurable cache backend, with system checks for known-problematic backends.
  - SHA-256 as the default cache-key hashing algorithm.
- Missing Mitigations: N/A
- Preconditions: N/A
- Source Code Analysis: N/A
- Security Test Case: N/A