Combined Vulnerability List for django-bootstrap3

1. Cross-Site Scripting (XSS) Vulnerability in Bootstrap Alert Rendering (bootstrap_alert and bootstrap_messages template tags)

  • Description:

    1. An attacker injects arbitrary HTML through the content parameter of the bootstrap_alert template tag, or through a malicious message that is rendered by the bootstrap_messages template tag.
    2. The render_alert function, used by both tags, wraps the provided content in mark_safe and therefore renders it as raw HTML without any sanitization.
    3. If an attacker controls the content parameter of bootstrap_alert (e.g., because it is derived from user input) or can inject a message displayed by bootstrap_messages, they can embed malicious JavaScript.
    4. When a user views the page containing the injected script, the script executes in their browser, potentially leading to session hijacking, cookie theft, or redirection to malicious websites.
  • Impact:

    • Cross-site scripting (XSS).
    • Account takeover if session cookies are stolen.
    • Redirection to malicious websites.
    • Defacement of the web page.
    • Potential data theft.
  • Vulnerability Rank: high

  • Currently implemented mitigations:

    • None. The render_alert function in src/bootstrap3/components.py uses mark_safe on the content, which explicitly tells Django not to escape the HTML, assuming it's already safe. This applies to both bootstrap_alert and bootstrap_messages template tags.
  • Missing mitigations:

    • Escaping or sanitizing the content parameter of the bootstrap_alert template tag and the messages rendered by bootstrap_messages, especially when the content originates from user input, the messages framework, or any other untrusted source (a sketch follows this list).
    • Avoiding mark_safe on potentially untrusted content.
    • For bootstrap_messages, ensuring that messages added to the Django messages framework are sanitized before they are rendered.
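
    A minimal sketch of the escaping mitigation, built on Django's format_html helper; the function name render_alert_escaped and the simplified markup are illustrative, not the library's actual API:

    from django.utils.html import format_html
    from django.utils.safestring import mark_safe


    def render_alert_escaped(content, alert_type="info", dismissable=True):
        """Sketch of an escaping variant of render_alert (hypothetical name).

        format_html() escapes every interpolated value unless it is already
        marked safe, so a '<script>' payload in content is rendered inert.
        """
        css_classes = ["alert", "alert-" + alert_type]
        button = ""
        if dismissable:
            css_classes.append("alert-dismissable")
            # The close button is trusted static markup, so mark_safe is
            # acceptable here; the user-supplied content is not.
            button = mark_safe(
                '<button type="button" class="close" data-dismiss="alert" '
                'aria-hidden="true">&#215;</button>'
            )
        return format_html(
            '<div class="{}">{}{}</div>', " ".join(css_classes), button, content
        )
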
  • Preconditions:

    • The application uses the bootstrap_alert or bootstrap_messages template tags to display content or messages.
    • For bootstrap_alert, an attacker can influence the content parameter passed to the tag, directly or indirectly.
    • For bootstrap_messages, an attacker can inject a malicious message into the Django messages framework, which will be rendered by the bootstrap_messages tag.
  • Source code analysis:

    1. File: src/bootstrap3/templatetags/bootstrap3.py
    @register.simple_tag
    def bootstrap_alert(content, alert_type="info", dismissable=True):
        """
        ...
        """
        return render_alert(content, alert_type, dismissable)
    
    @register.simple_tag(takes_context=True)
    def bootstrap_messages(context, *args, **kwargs):
        ...
        return render_template_file("bootstrap3/messages.html", context=context)

    The bootstrap_alert template tag directly calls render_alert. The bootstrap_messages tag renders a template, which is assumed to use render_alert for displaying individual messages.

    2. File: src/bootstrap3/components.py
    from django.utils.safestring import mark_safe
    from .text import text_value
    ...
    def render_alert(content, alert_type=None, dismissable=True):
        ...
        return mark_safe(
            render_tag(
                "div", attrs={"class": " ".join(css_classes)}, content=mark_safe(button_placeholder) + text_value(content)
            ).replace(button_placeholder, button)
        )

    The render_alert function uses mark_safe(button_placeholder) + text_value(content) to construct the alert content. mark_safe marks the entire constructed HTML as safe, including the potentially attacker-controlled content, bypassing HTML escaping and leading to XSS. This affects both bootstrap_alert and bootstrap_messages which rely on render_alert.
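
    The effect is easy to reproduce in isolation; a standalone sketch (not project code) showing how mark_safe bypasses Django's template auto-escaping:

    from django.conf import settings
    from django.template import Context, Engine
    from django.utils.safestring import mark_safe

    settings.configure()  # minimal standalone configuration for this demo

    engine = Engine()  # autoescape=True, matching Django's default behavior
    payload = '<script>alert("XSS")</script>'
    template = engine.from_string("{{ value }}")

    # With auto-escaping, the payload is rendered inert:
    print(template.render(Context({"value": payload})))
    # -> &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;

    # mark_safe() tells the engine the string is already safe, so it is
    # emitted verbatim -- exactly what render_alert does with its content:
    print(template.render(Context({"value": mark_safe(payload)})))
    # -> <script>alert("XSS")</script>
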

    graph LR
        subgraph bootstrap_alert
            A[Template using bootstrap_alert tag] --> B(bootstrap_alert template tag in templatetags/bootstrap3.py);
        end
        subgraph bootstrap_messages
            C[Template using bootstrap_messages tag] --> D(bootstrap_messages template tag in templatetags/bootstrap3.py);
            D --> E[bootstrap3/messages.html template]
            E --> F["render_alert (for each message)"]
        end
        B --> G(render_alert function in components.py);
        F --> G;
        G --> H["mark_safe(content)"];
        H --> I[HTML Output with potentially malicious content];
    
  • Security test case:

    1. Create a Django template, for example, test_xss_alert.html, and load the bootstrap3 template tags.
    2. In the template, use the bootstrap_alert tag and pass a crafted JavaScript payload as the content parameter. For example:
    {% load bootstrap3 %}
    {% bootstrap_alert content='<script>alert("XSS in bootstrap_alert");</script>' alert_type='danger' %}
    3. Create another Django template, for example, test_xss_messages.html, and load the bootstrap3 template tags.
    4. In a Django view, add a message to the messages framework with malicious JavaScript:
      from django.contrib import messages
      from django.shortcuts import render
      
      def test_messages_view(request):
          messages.info(request, '<script>alert("XSS in bootstrap_messages");</script>')
          return render(request, 'test_xss_messages.html')
    5. In the test_xss_messages.html template, use the bootstrap_messages tag:
    {% load bootstrap3 %}
    <!DOCTYPE html>
    <html>
    <head>
        <title>XSS Test in bootstrap_messages</title>
        {% bootstrap_css %}
    </head>
    <body>
        <div class="container">
        {% bootstrap_messages %}
        </div>
        {% bootstrap_javascript jquery=1 %}
    </body>
    </html>
    6. Create Django views that render test_xss_alert.html and test_xss_messages.html.
    7. Access both views in a web browser.
    8. Observe that alert boxes with "XSS in bootstrap_alert" and "XSS in bootstrap_messages" pop up, demonstrating successful execution of the injected JavaScript in both cases.
    9. To further validate, try more harmful payloads, such as redirecting to an attacker's website or stealing cookies, in both test cases. An automated variant of this check is sketched below.
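
    The manual check can also be automated; a sketch of a Django test, where the URLs are hypothetical and assumed to route to the two views above:

    from django.test import TestCase


    class XssRenderingTests(TestCase):
        def test_bootstrap_alert_renders_payload_unescaped(self):
            # Assumes /test-xss-alert/ renders test_xss_alert.html.
            response = self.client.get("/test-xss-alert/")
            # If render_alert escaped its content, this assertion would fail.
            self.assertContains(response, '<script>alert("XSS in bootstrap_alert");</script>')

        def test_bootstrap_messages_renders_payload_unescaped(self):
            # Assumes /test-xss-messages/ routes to test_messages_view.
            response = self.client.get("/test-xss-messages/")
            self.assertContains(response, '<script>alert("XSS in bootstrap_messages");</script>')
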

2. Hardcoded SECRET_KEY in Settings Files

  • Description: The project’s sample settings (found in both the test and example configurations) define a fixed, hard-coded SECRET_KEY: in tests/app/settings.py the key is set to "bootstrap3isawesome", and example/settings.py uses a similarly fixed value. If a publicly available instance of the application is deployed with one of these default configurations, an attacker who knows the key can forge or tamper with any data Django signs with it, including session cookies.

    How to trigger:

    1. Deploy the application in a public environment using the unmodified example or test settings from the repository.
    2. Note that the SECRET_KEY is exposed in the settings file.
    3. An attacker can use this known key to generate a valid session cookie or tamper with any data signed by Django.
    4. Submit the forged session with crafted credentials to achieve session hijacking or impersonation.
  • Impact:

    • Session hijacking
    • User impersonation or escalation of privileges
    • Tampering with cryptographically signed data
  • Vulnerability Rank: critical

  • Currently implemented mitigations:

    • The project provides these keys only in sample/test/example settings with the expectation that a production deployment will override them.
    • No runtime or code‐based check prevents deployment with these defaults.
  • Missing mitigations:

    • Use of environment variables (or a dedicated secrets management system) to supply the SECRET_KEY in production (a sketch follows this list).
    • Clear documentation or startup warnings stating that the hardcoded key must be changed for any production deployment.
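
    A minimal sketch of the environment-variable mitigation; the variable name DJANGO_SECRET_KEY is a common convention, not something the project prescribes:

    # settings.py -- assumes the key is provided via the environment
    import os

    from django.core.exceptions import ImproperlyConfigured

    SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")
    if not SECRET_KEY:
        # Fail fast instead of silently falling back to a known default.
        raise ImproperlyConfigured("The DJANGO_SECRET_KEY environment variable is not set.")
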
  • Preconditions:

    • A publicly accessible deployment is made using the default settings from tests/app/settings.py or example/settings.py without overriding the SECRET_KEY.
    • The attacker has network access to the deployed instance and knowledge of Django’s signing mechanism.
  • Source Code Analysis:

    • In tests/app/settings.py, the line SECRET_KEY = "bootstrap3isawesome" clearly hard‑codes the signing key.
    • In example/settings.py, a fixed secret key is similarly provided.
    • There is no dynamic retrieval or obfuscation of this secret, meaning that if the file is used unmodified, the key is trivially known.
  • Security test case:

    1. Deploy the example application (or tests) as provided without modifying the SECRET_KEY.
    2. From an external machine, capture the session cookie (or craft one) using the known key value.
    3. Use tools or custom scripts to generate a counterfeit session cookie (or signed data) and submit it to the application; a sketch of such a forgery follows this list.
    4. Verify that the application accepts the forged cookie, resulting in unauthorized access or session takeover.
    5. Document that the use of a known, hardcoded value allowed the attacker to bypass authentication integrity.
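
    Step 3 can be illustrated with Django's signing primitives; a standalone sketch (a full session-cookie forgery additionally requires knowledge of the session serialization format) of how a known key lets an attacker produce signatures the server accepts:

    from django.conf import settings
    from django.core import signing

    # The attacker configures their own Django process with the leaked key.
    settings.configure(SECRET_KEY="bootstrap3isawesome")

    # Any value signed this way verifies on a server that uses the same key.
    forged = signing.dumps({"user_id": 1, "is_admin": True})
    print(forged)

    # The victim server's signing.loads() accepts the payload as authentic:
    print(signing.loads(forged))
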

3. Insecure Dependabot Auto-Approve and Merge Workflow

  • Description: The repository’s GitHub Actions workflow (.github/workflows/dependabot-auto-approve-and-merge.yml) is set up to automatically approve and merge pull requests generated by dependabot[bot]. The workflow is triggered by the pull_request_target event and proceeds only if the actor is exactly dependabot[bot]. In certain scenarios, an attacker with sufficient knowledge of PR metadata, or with the ability to create pull requests from forks, could spoof aspects of the Dependabot metadata and get a malicious pull request auto-approved and merged without manual review.

    How to trigger:

    1. An attacker creates a pull request from a fork and manipulates the metadata (or leverages a misconfiguration) so that the PR appears to come from dependabot[bot].
    2. Because the workflow checks only that the actor’s username is dependabot[bot], the pull request passes the condition.
    3. The workflow then automatically approves and (if it is not a semver-major update) auto-merges the pull request.
  • Impact:

    • Unauthorized merging of code changes
    • Injection of malicious code into the main branch
    • Compromise of the repository’s integrity and potential downstream supply‑chain issues
  • Vulnerability Rank: high

  • Currently implemented mitigations:

    • The workflow condition includes if: ${{ github.actor == 'dependabot[bot]' }} to limit automatic action only to Dependabot’s official PRs.
    • The workflow uses the dependabot/fetch-metadata action (pinned in the workflow file) to help validate PR metadata.
  • Missing mitigations:

    • Additional verification (for example, checking commit signatures or stricter metadata attributes) to ensure that a PR truly originates from Dependabot rather than from an attacker able to forge minimal metadata.
    • Further restrictions on the trigger (avoiding the more-privileged pull_request_target where possible), or a review step before auto-merging that requires manual approval for non-standard dependency updates; a hardened sketch follows this list.
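
    A sketch of the hardening direction, following GitHub's documented Dependabot-automation pattern; the trigger, permissions, and update-type gate are illustrative, not the repository's actual workflow:

    name: dependabot-auto-merge (hardened sketch)

    # pull_request avoids the elevated context of pull_request_target;
    # Dependabot-triggered runs receive the permissions declared below.
    on: pull_request

    permissions:
      contents: write
      pull-requests: write

    jobs:
      auto-merge:
        runs-on: ubuntu-latest
        # Gate on the PR author, not only the triggering actor.
        if: github.event.pull_request.user.login == 'dependabot[bot]'
        steps:
          - name: Fetch Dependabot metadata
            id: metadata
            uses: dependabot/fetch-metadata@v2
            with:
              github-token: ${{ secrets.GITHUB_TOKEN }}

          - name: Enable auto-merge for non-major updates only
            # --auto defers the merge until branch protection checks and
            # any required reviews pass, rather than bypassing them.
            if: steps.metadata.outputs.update-type != 'version-update:semver-major'
            run: gh pr merge --auto --squash "$PR_URL"
            env:
              PR_URL: ${{ github.event.pull_request.html_url }}
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
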
  • Preconditions:

    • The repository is configured to allow pull requests from forks in combination with the auto‑approve workflow.
    • An attacker is able to manipulate or spoof the PR metadata to satisfy the condition github.actor == 'dependabot[bot]' (or exploit any shortcomings in how Dependabot metadata is verified).
  • Source Code Analysis:

    • In .github/workflows/dependabot-auto-approve-and-merge.yml, the workflow is triggered on pull_request_target which provides a higher privilege context.
    • The job includes an if: ${{ github.actor == 'dependabot[bot]' }} check with no additional authentication of the PR’s origin beyond that actor name.
    • The automated steps (approval and merging) are executed using the gh CLI with repository tokens available, meaning that if an attacker can spoof the actor check, they gain the ability to merge arbitrary code without manual review.
  • Security test case:

    1. In a controlled test environment, create a pull request from a fork with modifications that include well‑crafted (but clearly malicious) code changes.
    2. Attempt to modify or inject metadata (using available CI build parameters or through controlled fork behavior) such that the PR’s event payload meets the condition github.actor == 'dependabot[bot]'.
    3. Observe the workflow execution: if it auto‑approves and merges the PR without manual intervention, this demonstrates that the protection based solely on the actor name is insufficient.
    4. Confirm that the merged code reflects the malicious changes and that there is no additional safeguard rejecting the spoofed PR.
    5. Document the successful exploitation of the workflow auto‑merge feature under the preconditions described.