Vulnerability Name: Insecure GitHub Actions Workflow – Auto‐Approve and Auto‐Merge on Pull Request Target

Description: The workflow file “.github/workflows/dependabot-auto-approve-and-merge.yml” triggers on the pull_request_target event and checks only that the actor is exactly "dependabot[bot]" before approving and automatically merging pull requests (except major version updates). An external attacker able to submit a pull request (for example, from a fork or via a compromised dependent repository) might manipulate the PR metadata to impersonate Dependabot. If the actor check is bypassed or spoofed, the workflow auto-approves and merges unauthorized, malicious code changes into the main branch.

Impact: Critical. Unauthorized code merge may lead to code injection, compromise of CI/CD pipelines, and ultimately execution of malicious code in the production environment.

Vulnerability Rank: Critical

Currently Implemented Mitigations:

  • The workflow uses an if condition: if: ${{ github.actor == 'dependabot[bot]' }} to check that the pull request comes from Dependabot.
  • It invokes the official Dependabot metadata action (dependabot/fetch-metadata) to help distinguish update types.

Missing Mitigations:

  • No further validation is done on the origin of the pull request (for example, checking commit signatures or verifying that the head branch lives in the base repository).
  • Reliance on the single github.actor check may be insufficient if an attacker can manipulate the PR metadata.
  • The pull_request_target event runs in the context of the base repository, with elevated permissions and access to secrets, so any bypass of the actor check is amplified; a hardening sketch follows this list.
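
As an illustration, a minimal hardening sketch (the job name and the permissions block are assumptions; the context paths are standard GitHub Actions expressions). Requiring the head branch to live in the base repository rules out fork-originated PRs, since Dependabot opens its PRs from branches in the base repository:

```yaml
permissions:
  contents: read        # grant write only where the merge step needs it
  pull-requests: write

jobs:
  dependabot:
    runs-on: ubuntu-latest
    # Gate on both the triggering actor and a same-repository head branch.
    if: >-
      ${{ github.actor == 'dependabot[bot]' &&
          github.event.pull_request.head.repo.full_name == github.repository }}
```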

Preconditions:

  • The repository must accept pull requests from external sources (e.g. from forks).
  • The attacker must be able to submit a pull request that can manipulate (or spoof) the metadata so that the actor appears as "dependabot[bot]".
  • There is no additional verification (such as commit signature verification) or branch restrictions enforced by the workflow.

Source Code Analysis:

  1. The workflow is triggered on the pull_request_target event.
  2. In the job’s condition, the workflow checks: if: ${{ github.actor == 'dependabot[bot]' }} before running the approval step.
  3. The job then runs gh pr review --approve "$PR_URL" and, if the update type is not a major version update, gh pr merge --auto --squash "$PR_URL", with the PR URL taken from the event context; the sketch below reconstructs this shape.
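
A minimal reconstruction of the workflow described above (step names, the action version, and the env wiring are assumptions based on GitHub's documented Dependabot auto-merge pattern, not the verbatim file):

```yaml
name: Dependabot auto-approve and auto-merge
on: pull_request_target

permissions:
  contents: write
  pull-requests: write

jobs:
  dependabot:
    runs-on: ubuntu-latest
    if: ${{ github.actor == 'dependabot[bot]' }}  # the only gate
    steps:
      - name: Fetch Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2  # version assumed
      - name: Approve the PR
        run: gh pr review --approve "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Enable auto-merge for non-major updates
        if: ${{ steps.metadata.outputs.update-type != 'version-update:semver-major' }}
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```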

Security Test Case:

  1. From an external account (or by using a fork), craft a pull request that attempts to update a dependency.
  2. Modify the PR metadata (where possible in a test environment, or simulate the behavior by forking the repository and locally replaying the workflow’s event payload) so that the actor field reads dependabot[bot] even though the creator is not Dependabot.
  3. Open the pull request against the repository and observe whether the automated workflow approves and auto-merges it.
  4. Verify (by checking the commit history) whether a malicious change was merged.

Vulnerability Name: Improper HTML Escaping in Bootstrap Button Rendering

Description: The function responsible for rendering buttons (defined as render_button in “src/bootstrap4/forms.py”) wraps the provided button content with mark_safe without applying any additional escaping. Because the function unconditionally marks the caller’s content as safe, if a developer inadvertently passes untrusted (or attacker‐controlled) data to the {% bootstrap_button %} template tag, any malicious HTML or JavaScript contained in the input will be rendered verbatim on the page.

Impact: High. This can lead to cross‐site scripting (XSS) attacks if user input or other untrusted content is used as the button label. An attacker could inject scripts that run in the context of the application’s domain, potentially stealing credentials or session data.

Vulnerability Rank: High

Currently Implemented Mitigations:

  • There is an attempt to “format” the tag using render_tag (which internally calls Django’s format_html), but for button content the library bypasses escaping by wrapping the content with mark_safe.
  • The library’s tests use only static strings for button labels, so no dynamic (and potentially unsafe) input is demonstrated.

Missing Mitigations:

  • The call to mark_safe(content) in render_button should be removed or replaced with proper HTML escaping (for example, applying format_html to the input, or accepting only known-safe data), as sketched after this list.
  • Documentation should warn developers that passing user-controlled data to the button tag requires extra sanitization.
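
A minimal sketch of the escape-by-default behavior such a fix would rely on (the function name is hypothetical; only the escaping mechanism is the point):

```python
from django.utils.html import format_html
from django.utils.safestring import mark_safe


def render_button_content(content):
    # format_html escapes `content` unless the caller has already marked
    # it safe, restoring Django's escape-by-default behavior.
    return format_html("<button>{}</button>", content)


# Untrusted input is escaped:
render_button_content('<script>alert("XSS")</script>')
# -> <button>&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</button>

# Deliberately trusted markup still passes through:
render_button_content(mark_safe("<em>Save</em>"))
# -> <button><em>Save</em></button>
```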

Preconditions:

  • A developer passes user-supplied (or otherwise untrusted) input to the bootstrap button template tag (for example, through a template variable that isn’t pre-sanitized).

Source Code Analysis:

  1. In “src/bootstrap4/forms.py”, the render_button function is defined.
  2. After setting up the HTML tag and merging various attributes, the function returns: return render_tag(tag, attrs=attrs, content=mark_safe(content))
  3. Unlike the other rendering functions, which let Django’s autoescaping (via format_html) work normally, here the content is explicitly marked safe without any filtering; the sketch below condenses the pattern.
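
A condensed reconstruction of that pattern (the real render_tag merges attributes and handles more cases; this stand-in keeps only the escaping behavior):

```python
from django.utils.html import format_html
from django.utils.safestring import mark_safe


def render_tag(tag, attrs=None, content=""):
    # Simplified stand-in: format_html escapes any argument that is not
    # already a SafeString.
    attr_str = mark_safe(
        "".join(format_html(' {}="{}"', k, v) for k, v in (attrs or {}).items())
    )
    return format_html("<{}{}>{}</{}>", tag, attr_str, content, tag)


def render_button(content, attrs=None):
    # The flagged line: marking the caller's content safe disables the
    # escaping that format_html would otherwise apply.
    return render_tag("button", attrs=attrs, content=mark_safe(content))


print(render_button('<script>alert("XSS");</script>'))
# -> <button><script>alert("XSS");</script></button>  (emitted verbatim)
```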

Security Test Case:

  1. Create a test Django view and template that renders a button using the {% bootstrap_button %} tag.
  2. Pass a template variable (for example, coming from a GET parameter) that contains malicious HTML or JavaScript code such as <script>alert("XSS");</script>
  3. Load the page as an external attacker and inspect the rendered HTML.
  4. Verify that the injected script tag appears unescaped and that interacting with the page triggers the malicious JavaScript; the sketch below automates these steps.
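
A hypothetical automated version of those steps (assumes a configured Django test environment with bootstrap4 in INSTALLED_APPS; the positional-content form of the tag follows the library's documented usage):

```python
from django.template import Context, Template
from django.test import SimpleTestCase


class BootstrapButtonXSSTest(SimpleTestCase):
    def test_button_content_is_rendered_unescaped(self):
        payload = '<script>alert("XSS");</script>'
        template = Template("{% load bootstrap4 %}{% bootstrap_button content %}")
        rendered = template.render(Context({"content": payload}))
        # Documents the vulnerability: the payload should be escaped,
        # but render_button emits it verbatim.
        self.assertIn(payload, rendered)
```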

Vulnerability Name: Unescaped Addon Content in Input Groups

Description: In the process of rendering form fields with input group addons, the FieldRenderer’s method make_input_group_addon (located in “src/bootstrap4/renderers.py”) inserts the provided addon content (from widget attributes such as addon_before and addon_after) directly into the HTML using f-string interpolation without proper escaping. If an attacker can cause the widget’s addon content to be set from untrusted data, the raw HTML will be inserted into the page, potentially allowing script injection.

Impact: High. This may enable a cross‐site scripting (XSS) attack where an attacker supplies a malicious payload (for example, a <script> tag) via a dynamic setting for the input addon. The malicious payload would be executed in the browser of any user viewing the form.

Vulnerability Rank: High

Currently Implemented Mitigations:

  • There is no explicit escaping or sanitization performed on the addon content in the method, and the code assumes that addon values come from trusted (developer‐defined) widget attributes.
  • The tests provided in the project do not cover a scenario where addon content is user-controlled.

Missing Mitigations:

  • The addon content should be passed through proper HTML escaping (for example, Django’s escape or format_html) before being interpolated into the HTML string; a sketch follows this list.
  • Documentation should warn that if addon content is derived from user input, it must be sanitized prior to being set on widget attributes.
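
A minimal sketch of the escaped variant (the signature approximates the method named above; this is an illustration, not the library's code):

```python
from django.utils.html import format_html


def make_input_group_addon(inner_class, outer_class, content):
    # Escaped variant: format_html escapes `content` and the class values
    # before interpolation; the nested SafeString is not double-escaped.
    if not content:
        return ""
    if inner_class:
        content = format_html('<span class="{}">{}</span>', inner_class, content)
    return format_html('<div class="{}">{}</div>', outer_class, content)
```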

Preconditions:

  • A form field’s widget is configured with an addon_before or addon_after value that can be influenced by untrusted, attacker-controlled input.
  • The form renders this field, and the malicious value is output in the HTML markup without escaping.

Source Code Analysis:

  1. In “src/bootstrap4/renderers.py”, the method make_input_group_addon is defined as follows:
    • It checks if the provided content is nonempty.
    • If an inner class is provided, it wraps the content as: <span class="{inner_class}">{content}</span>
    • Then the function returns a <div> containing this content.
  2. The function uses an f-string to interpolate the raw content without any escaping or filtering, making the output susceptible to injection if the content is attacker-controlled; condensed, the pattern looks like the sketch below.
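
A condensed reconstruction of that pattern (not the verbatim source; the Bootstrap class values are illustrative):

```python
def make_input_group_addon(inner_class, outer_class, content):
    # Raw f-string interpolation: whatever HTML `content` carries is
    # emitted as-is.
    if not content:
        return ""
    if inner_class:
        content = f'<span class="{inner_class}">{content}</span>'
    return f'<div class="{outer_class}">{content}</div>'


print(make_input_group_addon("input-group-text", "input-group-prepend",
                             '<img src=x onerror=alert("XSS")>'))
# -> the <img> payload lands inside the markup unescaped
```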

Security Test Case:

  1. Create or modify a test form such that one of the input field widgets has its attribute (for example, addon_before) set to an attacker-controlled string such as: <img src=x onerror=alert("XSS")>
  2. Render the form in a view that is publicly accessible.
  3. Inspect the rendered HTML of the input group; verify that the malicious payload appears unescaped within a <span> or <div> tag.
  4. Open the page in a browser and observe whether the JavaScript payload executes; the sketch below automates these steps.
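
A hypothetical automated version of those steps (the form and field names are invented; assumes the same Django test setup as above):

```python
from django import forms
from django.template import Context, Template
from django.test import SimpleTestCase


class AddonXSSTest(SimpleTestCase):
    def test_addon_before_is_rendered_unescaped(self):
        payload = '<img src=x onerror=alert("XSS")>'

        class DemoForm(forms.Form):
            name = forms.CharField(
                widget=forms.TextInput(attrs={"addon_before": payload})
            )

        template = Template("{% load bootstrap4 %}{% bootstrap_form form %}")
        rendered = template.render(Context({"form": DemoForm()}))
        # Documents the vulnerability: the payload should be escaped.
        self.assertIn(payload, rendered)
```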