Vulnerability List for django-simple-captcha

This document consolidates identified vulnerabilities in the django-simple-captcha project into a single list, removing duplicates and providing detailed descriptions for each.

  • Vulnerability: Insecure Django Settings – DEBUG Enabled and Weak SECRET_KEY

    • Description: An external attacker may exploit the fact that the project’s settings file sets DEBUG = True and uses a hardcoded, trivial secret key (SECRET_KEY = "empty"). By intentionally causing an error (for example, by visiting a non‐existent URL), the attacker can trigger Django’s debug error page. This page will reveal sensitive internal configuration details (such as settings, file paths, installed apps, and even portions of the source code), providing the attacker with a roadmap of the application’s inner workings.
      • Step by step triggering:
        1. Deploy the application with the provided testproject/settings.py configuration.
        2. Visit an invalid URL or deliberately cause an unhandled exception.
        3. View the resulting debug error page showing detailed environment and configuration information.
    • Impact: Exposure of sensitive configuration data (including the weak secret key) can lead to session tampering, compromised signing of cookies and tokens, and further attacks against cryptographic functions. In a production environment, this is critical because it greatly assists an attacker in mounting additional exploits.
    • Vulnerability Rank: Critical
    • Currently Implemented Mitigations: There are no mitigations in the project. The settings explicitly set DEBUG = True and SECRET_KEY = "empty".
    • Missing Mitigations:
      • Disable debug mode (i.e. set DEBUG = False) in any publicly deployed instance.
      • Generate and use a secure, random secret key in production.
      • Properly configure ALLOWED_HOSTS to restrict accepted host headers.
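      • A minimal hardened configuration might look like the following sketch (the environment-variable name and hostname are illustrative, not part of the project):
        import os

        DEBUG = False
        SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # e.g. generated once with django.core.management.utils.get_random_secret_key()
        ALLOWED_HOSTS = ["captcha.example.com"]  # only the real public hostname(s)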
    • Preconditions: The application must be deployed using the current testproject/settings.py configuration (i.e. with debug mode enabled and the weak secret key in use).
    • Source Code Analysis:
      • In /code/testproject/settings.py, the following lines are present:
        DEBUG = True
        SECRET_KEY = "empty"
        These settings are intended only for testing purposes but are extremely dangerous if the application is accessible publicly.
    • Security Test Case:
      1. Deploy the application to a publicly accessible environment using the provided settings.
      2. Navigate to a deliberately invalid URL (for example, by entering a wrong path in the browser).
      3. Observe that Django’s detailed error/debug page is displayed, showing sensitive settings, stack traces, and the insecure SECRET_KEY.
      4. Confirm that the debug output contains the hardcoded values and internal configuration details.
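      Steps 2–4 can be automated with a small probe along these lines (a sketch; the target URL is hypothetical):
        import urllib.error
        import urllib.request

        try:
            urllib.request.urlopen("http://target.example/this-page-does-not-exist/")
        except urllib.error.HTTPError as e:
            body = e.read().decode(errors="replace")
            # Django's technical 404 page (rendered when DEBUG = True) names the
            # URLconf and lists every URL pattern it tried to match.
            print("Debug page exposed:", "URLconf" in body)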
  • Vulnerability: Insecure Temporary File Handling in Audio Captcha Generation

    • Description: The captcha_audio view builds temporary file paths for processing audio captchas by concatenating a valid captcha key with a short random token generated via secrets.token_urlsafe(6), using the system’s temporary directory (obtained from tempfile.gettempdir()). These paths are then passed to external subprocess calls (e.g. to generate or process audio files) without using an atomic, secure file‐creation function. An attacker who can write to the shared temporary directory may attempt a race (or symlink) attack by preemptively creating a symbolic link at the predicted file path, causing the subprocess to write its output to an attacker‑controlled location.
      • Step by step triggering:
        1. An attacker (with the ability to write to the system’s temporary folder) monitors or predicts the file‑naming pattern such as <key>_<randomtoken>.wav.
        2. The attacker creates a symbolic link at a probable target name, redirecting the output to a file of their choice.
        3. The attacker then triggers the captcha_audio endpoint with a valid captcha key.
        4. The subprocess writes its output to the symlink target, thereby overwriting or corrupting an arbitrary file.
    • Impact: If exploited, this flaw could allow an attacker to overwrite sensitive files or inject unwanted data into critical files. In a worst-case scenario (for example, on a misconfigured or shared system), this might lead to privilege escalation or arbitrary code execution.
    • Vulnerability Rank: High
    • Currently Implemented Mitigations: The code does use secrets.token_urlsafe(6) to add randomness to the filename; however, the randomness does not guarantee that file creation happens atomically or that the file is safe from race conditions.
    • Missing Mitigations:
      • Use secure temporary file creation functions (such as tempfile.NamedTemporaryFile with proper flags or equivalent atomic file‑creation methods) to avoid TOCTOU race conditions.
      • Validate that any temporary file created is not a symbolic link (by checking file status before writing).
      • Enforce strict file permissions on temporary files.
    • Preconditions:
      • The attacker must have (or be able to induce) write access to the shared temporary directory (e.g. /tmp on Unix systems).
      • The attacker must be able to predict or influence the file‑naming scheme by repeatedly triggering the endpoint.
    • Source Code Analysis:
      • In /code/captcha/views.py, the audio captcha function contains the following key snippet:
        path = os.path.join(tempfile.gettempdir(), f"{key}_{secrets.token_urlsafe(6)}.wav")
        subprocess.run([settings.CAPTCHA_FLITE_PATH, "-t", text, "-o", path])
        There is no use of an atomic file‑creation call (e.g. no use of NamedTemporaryFile with delete=False and proper flags), leaving a window for a race condition.
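      • A safer variant could look like the following sketch (a drop-in replacement for the two lines above; it assumes a sticky-bit temporary directory, so the created file cannot be swapped out before flite opens it):
        import os
        import subprocess
        import tempfile

        # mkstemp creates the file atomically (O_CREAT | O_EXCL, mode 0600) and
        # returns an open descriptor, so a pre-planted symlink at the chosen path
        # causes creation to fail instead of redirecting the write.
        fd, path = tempfile.mkstemp(prefix=f"{key}_", suffix=".wav")
        os.close(fd)  # flite reopens the now-guaranteed-regular file by name
        subprocess.run([settings.CAPTCHA_FLITE_PATH, "-t", text, "-o", path])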
    • Security Test Case:
      1. Identify (or obtain) a valid captcha key by first triggering a captcha image request.
      2. In a controlled test environment, pre-create a symbolic link in the system’s temporary directory following the naming convention <key>_<suffix>.wav, linking it to a harmless test file (or, in a real exploit scenario, to a sensitive file).
      3. Trigger the captcha_audio endpoint with the identified key and observe whether the subprocess call writes to the attacker‑controlled destination.
      4. Verify that the file being overwritten (or written to) is not an intended temporary file.
      5. Repeating this under concurrent conditions can help confirm the presence of a race condition.
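      In a controlled test, step 2 can pin the random suffix rather than guess it, for example (a sketch; the key value, target path, and the use of unittest.mock are test scaffolding, not part of the project):
        import os
        import tempfile
        from unittest import mock

        key = "..."  # a valid hashkey obtained in step 1
        # Pin the suffix so the output path is predictable for the test
        # (token_urlsafe(6) normally yields 8 random URL-safe characters).
        with mock.patch("secrets.token_urlsafe", return_value="AAAAAAAA"):
            predicted = os.path.join(tempfile.gettempdir(), f"{key}_AAAAAAAA.wav")
            os.symlink("/tmp/symlink-target-for-test", predicted)
            # ...request /captcha/audio/<key>.wav here (e.g. via Django's test
            # client) while the patch is active, then check whether the symlink
            # target now contains WAV data written by flite.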
  • Vulnerability: Global Random State Manipulation in Captcha Image Generation

    • Description: To ensure that the same captcha image is rendered for a given key, the captcha_image view seeds Python’s process-wide global random state with the user‐supplied key by calling random.seed(key). Although the code later attempts to “reset” the state by calling random.seed() with no argument, it still mutates the global random number generator shared by every thread in the process, which is unsafe when requests are handled concurrently. An attacker who can reliably trigger the view with a known key may be able to predict the sequence of random numbers used soon after, thereby undermining the randomness that other security functions (or even parts of the captcha generation) depend on.
      • Step by step triggering:
        1. The attacker calls the captcha_image endpoint with a known (or captured) captcha key, causing random.seed(key) to be invoked.
        2. In a race or if the application processes multiple requests concurrently, the modified random state may be used in subsequent operations.
        3. The attacker then observes (or deduces) the outputs of functions relying on Python’s global random state (for example, the letter color generation or even parts of a newly generated captcha challenge).
    • Impact: Predictability of the random state can weaken the captcha challenge itself or any other feature relying on randomness (such as token generation elsewhere in the application). This predictability can help an attacker craft input that bypasses the intended randomness of the captcha or other defenses.
    • Vulnerability Rank: High
    • Currently Implemented Mitigations: The function calls random.seed() (with no arguments) after generating the image in an attempt to reset the random state. However, this does not prevent concurrent requests from observing the seeded, attacker-influenced state in the window between the two calls.
    • Missing Mitigations:
      • Use an isolated instance of a random number generator (for example, instantiate a local random.Random(key) object) rather than reseeding the global random module.
      • Ensure that any deterministic behavior required for image generation does not affect the global state.
    • Preconditions: The attacker must be able to trigger the captcha_image view with a known key and then observe other functionality that relies on Python’s global random generator in a concurrent or closely timed scenario.
    • Source Code Analysis:
      • In /code/captcha/views.py, the function begins with:
        random.seed(key)  # Do not generate different images for the same key
        # After captcha image generation:
        random.seed()
        This practice temporarily overrides the global random state based on externally provided data (the captcha key).
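      • The isolated-generator fix suggested above could look like this sketch (the rotation setting is used only as an illustrative consumer of the generator):
        rng = random.Random(key)  # private generator: deterministic per key, global state untouched
        angle = rng.randrange(*settings.CAPTCHA_LETTER_ROTATION)  # illustrative use
        # ...pass rng to any helper that previously used the module-level functions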
    • Security Test Case:
      1. Trigger the captcha_image endpoint with a known captcha key and capture details of the generated image (such as letter rotation angles or colors, if observable).
      2. Immediately invoke another functionality that also uses the global random module (for instance, requesting a new captcha challenge or any debug feature that exposes random-based output).
      3. Compare the outputs to see if they can be predicted based on the known seeding value.
      4. If the outputs are predictable (or correlate with the attacker‑controlled key), then the vulnerability is confirmed.
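      The premise behind step 3, that seeding with a known key makes subsequent draws predictable, is easy to confirm in isolation:
        import random

        random.seed("known-captcha-key")
        print([random.randrange(256) for _ in range(3)])  # same three numbers on every run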
  • Vulnerability: Command Injection in Audio CAPTCHA Generation

    • Description: The captcha_audio view in captcha/views.py uses subprocess.run to execute external commands, specifically flite for text-to-speech conversion and optionally sox for audio processing. The text that is converted to speech is derived from the challenge field of the CaptchaStore model. If an attacker can manipulate the challenge field to contain shell commands, these commands could be executed by the server when the captcha_audio view is accessed.

      • Step by step triggering:
        1. An attacker needs to find a way to inject a malicious payload into the challenge field of a CaptchaStore record. This might be possible through other vulnerabilities or by directly manipulating the database if access is gained. For demonstration purposes, assume an attacker can create a CaptchaStore entry with a crafted challenge.
        2. The attacker crafts a malicious challenge string containing shell commands, for example: test$(whoami).
        3. The attacker obtains the hashkey of the CaptchaStore record with the malicious challenge.
        4. The attacker requests the audio CAPTCHA by accessing the URL /captcha/audio/<hashkey>.wav, replacing <hashkey> with the obtained hash key.
        5. The captcha_audio view retrieves the malicious challenge from the database.
        6. The view then uses subprocess.run([settings.CAPTCHA_FLITE_PATH, "-t", text, "-o", path]) where text is the malicious challenge.
        7. Because subprocess.run is called with an argument list (and shell=False by default), Python itself does not spawn a shell; the $(whoami) payload is executed only if a shell ends up processing it, for example when CAPTCHA_FLITE_PATH points at a wrapper script that interpolates its arguments into a shell command line. In that case, the output of the command might be included in the audio generated by flite or have other unintended consequences depending on the context and the presence of sox.
    • Impact: Successful command injection can lead to arbitrary code execution on the server. The severity of the impact depends on the permissions of the user running the web application. At a minimum, it can lead to information disclosure (e.g., output of whoami or other commands being spoken in the audio or logged). In a worst-case scenario, an attacker could gain full control of the server.

    • Vulnerability Rank: High

    • Currently Implemented Mitigations: None. The code does not sanitize or escape the challenge string before passing it to subprocess.run.

    • Missing Mitigations:

      • Input sanitization: The challenge string generated and stored in CaptchaStore should be strictly controlled to prevent injection of shell-sensitive characters. Implement input validation to ensure that only expected characters are used in the challenge.
      • Command parameterization: Instead of passing the entire challenge string as a single argument to flite, explore whether the flite or sox APIs allow safer ways to provide input, possibly through file streams or other mechanisms that avoid shell interpretation. If direct command execution is necessary, ensure proper escaping of shell arguments.
      • Principle of least privilege: Ensure that the web server process runs with the minimum necessary privileges to reduce the impact of successful command injection.
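      • The input-sanitization item could be implemented along these lines (a sketch; the character allowlist must be adjusted to whatever the configured challenge function actually emits):
        import re

        # Allowlist covering typical char and math challenges; adjust as needed.
        SAFE_CHALLENGE = re.compile(r"[A-Za-z0-9 ,+\-*=]+")

        def validated_challenge(challenge):
            if not SAFE_CHALLENGE.fullmatch(challenge):
                raise ValueError("challenge contains unexpected characters")
            return challenge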
    • Preconditions:

      • The application must be using the audio CAPTCHA feature (CAPTCHA_FLITE_PATH configured).
      • An attacker must be able to inject malicious content into the challenge field of the CaptchaStore model. This is the primary precondition that needs further investigation to assess the real exploitability.
    • Source Code Analysis:

      • File: /code/captcha/views.py
      def captcha_audio(request, key):
          if settings.CAPTCHA_FLITE_PATH:
              try:
                  store = CaptchaStore.objects.get(hashkey=key)
              except CaptchaStore.DoesNotExist:
                  # HTTP 410 Gone status so that crawlers don't index these expired urls.
                  return HttpResponse(status=410)
      
              text = store.challenge
              if "captcha.helpers.math_challenge" == settings.CAPTCHA_CHALLENGE_FUNCT:
                  text = text.replace("*", "times").replace("-", "minus").replace("+", "plus")
              else:
                  text = ", ".join(list(text))
              path = str(
                  os.path.join(tempfile.gettempdir(), f"{key}_{secrets.token_urlsafe(6)}.wav")
              )
              subprocess.run([settings.CAPTCHA_FLITE_PATH, "-t", text, "-o", path])
              # ... (sox processing) ...
      • The vulnerability lies in the line subprocess.run([settings.CAPTCHA_FLITE_PATH, "-t", text, "-o", path]). The variable text, which comes directly from store.challenge, is passed as an argument to the -t option of the flite command. Because the arguments are supplied as a list without shell=True, Python does not interpret shell metacharacters itself; however, if the configured flite binary is (or is wrapped by) a script that re-interprets its arguments through a shell, injection payloads in text would be executed.
      • The code does perform some string replacements for math_challenge, but this is not related to shell escaping and does not prevent command injection if the original challenge itself is malicious.
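      • The distinction can be demonstrated directly (a sketch; the flite path and output path are illustrative):
        import subprocess

        text = "test$(whoami)"
        # List form (what the view uses): no shell is involved, so flite receives
        # the literal string "test$(whoami)" as its -t argument.
        subprocess.run(["/usr/bin/flite", "-t", text, "-o", "/tmp/out.wav"])
        # Contrast (NOT used by the project): with shell=True a shell expands
        # $(whoami) before flite ever runs.
        subprocess.run(f'/usr/bin/flite -t "{text}" -o /tmp/out.wav', shell=True)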
    • Security Test Case:

      1. Setup:
        • Ensure CAPTCHA_FLITE_PATH is configured in settings.py to enable audio CAPTCHA.
        • Modify the challenge generation function temporarily to allow setting a custom challenge. For example, in captcha/helpers.py, if using random_char_challenge, temporarily change it to accept a parameter to set the challenge directly for testing. Or, directly insert a CaptchaStore record into the database.
      2. Inject Malicious Challenge:
        • Create a CaptchaStore record directly in the database or through a modified challenge generation process, setting the challenge field to test$(whoami). Note the hashkey of this record (a Django-shell sketch for this step follows the test case).
      3. Trigger Audio CAPTCHA Generation:
        • Access the audio CAPTCHA URL in a browser or using curl: /captcha/audio/<hashkey>.wav, replacing <hashkey> with the hashkey from the previous step.
      4. Analyze Response/Server Behavior:
        • Listen to the generated audio file. Check if the audio output contains the result of the whoami command. This would confirm command injection.
        • Examine server logs for any unusual activity or errors that might indicate command execution.
        • (Optional, for more in-depth testing): Use network monitoring tools to observe any outbound network connections initiated by the flite or sox processes, which could be a sign of successful command injection leading to further exploitation.
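      A Django-shell sketch for step 2 (based on a reading of the published CaptchaStore model, which fills hashkey and expiration in save(); verify the field handling against the installed version):
        from captcha.models import CaptchaStore

        store = CaptchaStore.objects.create(
            challenge="test$(whoami)",
            response="test$(whoami)",  # save() lower-cases the response and fills hashkey/expiration
        )
        print(store.hashkey)  # substitute into /captcha/audio/<hashkey>.wav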