- Vulnerability Name: Command Injection in SubProcessCompressor via Django Settings
- Description: An attacker with administrative privileges can modify Django settings such as `CSSTIDY_BINARY`, `CSSTIDY_ARGUMENTS`, `YUI_BINARY`, `YUI_JS_ARGUMENTS`, `YUI_CSS_ARGUMENTS`, `TERSER_BINARY`, `TERSER_ARGUMENTS`, `UGLIFYJS_BINARY`, `UGLIFYJS_ARGUMENTS`, `CLOSURE_BINARY`, and `CLOSURE_ARGUMENTS`. By injecting malicious commands into these settings, they can achieve arbitrary command execution on the server when the corresponding compressor is used. To trigger this vulnerability, an attacker needs administrative access to the Django application settings.
- Impact: Arbitrary command execution on the server, potentially leading to full server compromise, data breach, and other malicious activities.
- Vulnerability Rank: Critical
- Currently implemented mitigations: None in the project code itself. Mitigation relies on secure Django settings management and restricting administrative access in Django applications that use django-pipeline.
- Missing mitigations: Input validation and sanitization for the settings used in `SubProcessCompressor.execute_command`. While it might be argued that sanitizing Django settings within django-pipeline is outside the library's scope, clear documentation highlighting the security implications of directly using user-provided data in compressor settings is crucial. Specifically, the documentation should strongly advise against allowing user-controlled values to populate these settings and recommend secure configuration practices. Implement strict validation and sanitization for all settings related to compressor binaries and arguments: ensure that binary paths are absolute and point to trusted executables, and sanitize arguments to prevent injection of malicious commands. Consider using `shlex.quote` to properly escape arguments passed to the subprocess (a sketch appears after the source code analysis below).
- Preconditions:
  - Administrative access to the Django application to modify settings.
  - The application uses a compressor that extends `SubProcessCompressor` (such as `CSSTidyCompressor`, `YUICompressor`, `TerserCompressor`, `UglifyJSCompressor`, or `ClosureCompressor`).
  - CSS or JavaScript compression is triggered, which invokes the configured compressor.
  - An attacker needs to achieve a state where they can manipulate Django settings related to compressor configuration (e.g., `CSSTIDY_BINARY`, `CSSTIDY_ARGUMENTS`, etc.). This is typically not possible directly from the outside unless another vulnerability enables settings injection or configuration compromise.
  - One of the compressors that uses `SubProcessCompressor` must be configured and used in the Django application.
- Source code analysis:
  - File: `/code/pipeline/compressors/__init__.py`

    ```python
    class SubProcessCompressor(CompressorBase):
        def execute_command(self, command, content):
            argument_list = []
            for flattening_arg in command:
                if isinstance(flattening_arg, (str,)):
                    argument_list.append(flattening_arg)
                else:
                    argument_list.extend(flattening_arg)

            pipe = subprocess.Popen(
                argument_list,
                stdout=subprocess.PIPE,
                stdin=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            # ...
    ```

    The `SubProcessCompressor.execute_command` method uses `subprocess.Popen` to execute external commands. The `command` argument is constructed in the compressor classes that inherit from `SubProcessCompressor`.
  - File: `/code/pipeline/compressors/csstidy.py` (example compressor)

    ```python
    class CSSTidyCompressor(SubProcessCompressor):
        def compress_css(self, css):
            output_file = tempfile.NamedTemporaryFile(suffix=".pipeline")
            command = (
                settings.CSSTIDY_BINARY,
                "-",
                settings.CSSTIDY_ARGUMENTS,
                output_file.name,
            )
            self.execute_command(command, css)
            # ...
    ```

    In `CSSTidyCompressor`, the `command` tuple is constructed using `settings.CSSTIDY_BINARY` and `settings.CSSTIDY_ARGUMENTS`. Similar patterns exist in the other compressors (`YUICompressor`, `TerserCompressor`, `UglifyJSCompressor`, and `ClosureCompressor`), each using its respective settings. If an attacker can modify these settings to inject malicious commands, `subprocess.Popen` will execute them, leading to command injection.
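  A minimal sketch of the argument validation suggested under missing mitigations, assuming illustrative names (`ALLOWED_BINARIES`, `build_compressor_argv`) that are not part of django-pipeline:

  ```python
  # Hedged sketch: validate compressor settings before they reach subprocess.Popen.
  import os
  import shlex

  ALLOWED_BINARIES = {"/usr/bin/csstidy", "/usr/bin/yuicompressor"}  # assumed trusted executables


  def build_compressor_argv(binary, arguments):
      """Validate the binary and split the argument string into a list for Popen(shell=False)."""
      if not os.path.isabs(binary) or binary not in ALLOWED_BINARIES:
          raise ValueError(f"Untrusted compressor binary: {binary!r}")
      argv = [binary] + shlex.split(arguments)
      # If a command string must ever be built for a shell, quote each part instead:
      #     " ".join(shlex.quote(part) for part in argv)
      return argv
  ```

  A compressor subclass could pass `settings.CSSTIDY_BINARY` and `settings.CSSTIDY_ARGUMENTS` through such a helper before handing the list to `execute_command`; this is a sketch of the missing mitigation, not existing library behavior.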
- Security test case:
  - Set up a Django project, install `django-pipeline`, and configure `pipeline` to use `CSSTidyCompressor` for CSS compression.
  - Gain administrative access to the Django admin panel of the test project (or use any other method to modify Django settings, e.g., editing `settings.py` directly if accessible).
  - Modify the `CSSTIDY_ARGUMENTS` entry of the `PIPELINE` setting to include a malicious command, for example `--optimise=2 & touch /tmp/pwned &`. The complete setting might look like:

    ```python
    PIPELINE = {
        'CSS_COMPRESSOR': 'pipeline.compressors.csstidy.CSSTidyCompressor',
        'CSSTIDY_BINARY': '/usr/bin/csstidy',  # Replace with actual path if needed
        'CSSTIDY_ARGUMENTS': '--optimise=2 & touch /tmp/pwned &',
        # ... other pipeline settings
    }
    ```

    Alternatively, set `PIPELINE['CSSTIDY_BINARY'] = "sh"` and `PIPELINE['CSSTIDY_ARGUMENTS'] = ("-c", "touch /tmp/pwned")`.
  - Trigger CSS compression. This can be done by:
    - Running `python manage.py collectstatic` (pipeline compression is typically triggered during `collectstatic` if configured), or
    - Accessing a page in the Django application that uses pipeline-compressed CSS (e.g., via `{% stylesheet %}`).
  - After triggering compression, log into the server where the Django application is running and check whether the file `/tmp/pwned` has been created.
  - If `/tmp/pwned` exists, command injection through the compressor settings was successful. This demonstrates that by manipulating compressor settings, an attacker could execute arbitrary commands on the server (a verification sketch follows this list).
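  A hedged verification sketch for the final two steps; it assumes the malicious `CSSTIDY` settings above are in place and that it is run inside the test project (for example via `python manage.py shell`):

  ```python
  # Hedged sketch: trigger pipeline compression via collectstatic and check for the marker file.
  import os

  from django.core.management import call_command

  call_command("collectstatic", interactive=False, verbosity=0)  # triggers pipeline compression

  if os.path.exists("/tmp/pwned"):
      print("Command injection confirmed: /tmp/pwned was created.")
  else:
      print("No marker file found; the injection did not occur.")
  ```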
- Vulnerability Name: Sensitive Information Disclosure via Inline Compilation Error Details
- Description: When asset compilation (for CSS or JavaScript) fails in django-pipeline, the error is caught and rendered inline by the pipeline template tag code. In particular, the `render_error` method (in `pipeline/templatetags/pipeline.py`) passes the full compiler error, including the command line that was executed and its error output, to the template `pipeline/compile_error.html` with only minimal filtering (removing ANSI escape sequences). An external attacker who is able to trigger a compiler error (for example, by requesting a package that is built from a malformed static asset) may force the application to reveal sensitive internal details. To trigger this vulnerability, an attacker would need to:
  1. Identify a static asset that is processed by django-pipeline.
  2. Manipulate or request an asset in a way that causes the compilation process to fail (e.g., by requesting a non-existent or malformed source file if possible, or by exploiting other application vulnerabilities to corrupt source files).
  3. Access a page that includes the pipeline template tag referencing the problematic asset when `DEBUG=True` or `SHOW_ERRORS_INLINE=True`.
- Impact: Sensitive details such as absolute filesystem paths, command-line arguments, and environment details may be disclosed in the HTTP response. This information can aid an attacker in further reconnaissance and exploitation of the host system.
- Vulnerability Rank: High
- Currently implemented mitigations: The library uses the Django setting `SHOW_ERRORS_INLINE` (which by default is tied to the value of Django's `DEBUG`), so that in a properly configured production environment (with `DEBUG=False`) inline error details will not be rendered.
- Missing mitigations:
  - No explicit enforcement exists within django-pipeline to disable inline error detail output in production regardless of the `DEBUG` or `SHOW_ERRORS_INLINE` settings.
  - There is no additional sanitization (beyond stripping certain ANSI sequences) of the error output before it is inserted into HTML.
- Preconditions:
  - The application must be deployed with `DEBUG=True` (or with `SHOW_ERRORS_INLINE` inadvertently enabled) in production.
  - An attacker must be able to trigger a compiler error (for example, by referencing a static asset that fails to compile).
- Source code analysis:
  - File: `pipeline/templatetags/pipeline.py`

    ```python
    def render_error(self, e, template_name="pipeline/compile_error.html"):
        error_message = re.sub(r'\x1b[^m]*m', '', force_str(e))
        command_line = subprocess.list2cmdline(e.command)
        return mark_safe(render_to_string(template_name, {
            'error': error_message,
            'command': command_line,
            'verbose': settings.DEBUG,
        }))
    ```

    The `render_error` method receives a `CompilerError` instance and applies a regex substitution to remove ANSI escape sequences. It then calls `render_to_string("pipeline/compile_error.html", ...)`, passing along the error message and the invoked command (via `subprocess.list2cmdline(e.command)`). No further sanitization is performed, so if an attacker can force a compiler error, the HTTP response will include potentially sensitive internal data.
  - File: `pipeline/compile_error.html`

    ```html
    {% if verbose %}
    <div class="pipeline-error">
      <pre>
        Command: {{ command }}
        Error: {{ error }}
      </pre>
    </div>
    {% endif %}
    ```

    The `pipeline/compile_error.html` template then displays this information.
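  A project-level hardening step (outside django-pipeline itself) is to pin the flag off explicitly so inline error details never render, regardless of `DEBUG`; a minimal settings sketch, assuming the dict-style `PIPELINE` setting:

  ```python
  # settings.py -- hedged hardening sketch: keep inline compile errors disabled in production.
  PIPELINE = {
      # ... existing PIPELINE configuration ...
      "SHOW_ERRORS_INLINE": False,
  }
  ```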
- Security test case:
  - Configure a Django installation with django-pipeline and set `DEBUG=True` (or explicitly enable `SHOW_ERRORS_INLINE`).
  - Configure a pipeline asset (for example, a stylesheet) whose source file is replaced or malformed so that the underlying compiler (e.g., a SASS or CoffeeScript compiler) fails. For example, modify the source file of a CSS asset to contain invalid syntax (see the helper sketch after this list).
  - Load a page that uses a `{% stylesheet ... %}` or `{% javascript ... %}` tag referencing this asset in a browser.
  - Inspect the rendered HTML source in the browser.
  - Verify that the rendered HTML contains an error block (within a `div` with class `pipeline-error`) that includes detailed information, such as absolute paths in the `command` section and error details in the `error` section.
  - Confirm that such details are no longer rendered when `DEBUG=False` (and hence when `SHOW_ERRORS_INLINE` is disabled).
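  A hedged helper for the "malformed source file" step; the asset path is an assumption and should point at a real `source_filenames` entry in the test project:

  ```python
  # break_asset.py -- hedged sketch: overwrite a pipeline source file with invalid SCSS.
  from pathlib import Path

  asset = Path("static/css/screen.scss")  # assumed source file of a pipeline stylesheet package
  asset.write_text(".header { color: $undefined-variable \n")  # unclosed rule, undefined variable
  ```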
- Vulnerability Name: Potential Path Traversal in Static File Serving When DEBUG Is Enabled in Production
- Description: The view function `serve_static` (in `pipeline/views.py`) is intended for development use only. It first calls `default_collector.collect(request, files=[path])` on the requested file and then passes the request along to Django's built-in static file server. Although Django's static file view uses a safe join to combine the document root with the requested path, no additional validation is performed in `serve_static` to reject path-traversal patterns (e.g., "../"). An attacker can exploit this if `DEBUG=True` is enabled in production. To trigger this vulnerability, an attacker would need to:
  1. Ensure the Django application is running with `DEBUG=True` or the `--insecure` flag enabled in production.
  2. Craft a URL to the `/static/pipeline/` endpoint that includes path traversal elements (e.g., `..`), for example `/static/pipeline/..%2F..%2F..%2Fetc%2Fpasswd`.
  3. Access this crafted URL through a web browser.
- Impact: If the application is misconfigured and deployed with `DEBUG=True` (or run with the insecure option), an attacker may be able to craft URLs with path traversal elements that could allow reading files outside the intended static directory. This could lead to exposure of sensitive files on the filesystem.
- Vulnerability Rank: High
- Currently implemented mitigations: The function immediately raises an `ImproperlyConfigured` exception when Django's `DEBUG` is `False` (unless the `--insecure` flag is explicitly used), thereby discouraging its use in production. Also, Django's own static file serving view uses `safe_join` to limit file access to Django's `STATIC_ROOT`.
- Missing mitigations:
  - There is no explicit check within `serve_static` to sanitize or normalize the incoming `path` parameter beyond relying on Django's static file serving logic (a wrapper sketch appears after the source code analysis below).
  - The library depends entirely on developers ensuring that `DEBUG` is disabled in production for security.
- Preconditions:
  - The application is deployed with `DEBUG=True` (or with the `--insecure` flag) in production.
  - An attacker can craft a URL with path traversal sequences in the `path` parameter.
- Source code analysis:
  - File: `pipeline/views.py`

    ```python
    def serve_static(request, path, insecure=False, **kwargs):
        if not settings.DEBUG and not insecure:
            raise ImproperlyConfigured(
                "The staticfiles serve view can only be used in debug mode or "
                "with the --insecure option."
            )
        normalized_path = safe_join(settings.STATIC_ROOT, path)
        if normalized_path is None:
            raise Http404("'%s' could not be located in STATIC_ROOT" % path)
        default_collector.collect(request, files=[path])  # Potential issue: path is not sanitized
        return static.serve(
            request, path, document_root=settings.STATIC_ROOT, **kwargs
        )  # Django's serve uses safe_join
    ```

    In `pipeline/views.py`, the function `serve_static` accepts a `path` parameter from the request URL and passes it to `default_collector.collect` (which uses the raw file name in its logic) and then to Django's static file server. Although Django's implementation of `serve()` uses `safe_join` to combine `STATIC_ROOT` and `path`, the lack of additional validation (or explicit rejection) in django-pipeline before calling `default_collector.collect(request, files=[path])` means that an attacker exploiting a misconfigured environment might attempt traversal. `default_collector.collect` may operate on the path without proper sanitization before Django's `static.serve` handles the actual file serving.
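  A minimal sketch of the explicit rejection described under missing mitigations, implemented as a project-level wrapper rather than a change to django-pipeline itself (`safe_serve_static` is an illustrative name):

  ```python
  # Project-level sketch: reject traversal before delegating to pipeline's development view.
  import posixpath

  from django.http import Http404
  from pipeline.views import serve_static


  def safe_serve_static(request, path, **kwargs):
      # Normalize the requested path and refuse anything absolute or escaping the static root.
      normalized = posixpath.normpath(path)
      if normalized.startswith(("..", "/")):
          raise Http404("Invalid static file path")
      return serve_static(request, normalized, **kwargs)
  ```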
- Security test case:
  - In a test Django installation with django-pipeline, deliberately set `DEBUG=True` in `settings.py` to simulate a production misconfiguration.
  - Ensure some static files are deployed and `STATIC_ROOT` is configured to a directory containing them.
  - Craft a URL to the static serve view using a path that includes "../" (for example, if your static URL prefix is `/static/`, try `/static/pipeline/..%2Fmanage.py`). You may need to URL-encode the path traversal characters (e.g., `%2F` for `/`).
  - Send a GET request to this crafted URL using a browser or `curl` (see the sketch after this list).
  - Observe whether the response reveals any file contents that lie outside the intended static directory (e.g., the content of `manage.py` or other sensitive files, depending on your server's filesystem structure relative to `STATIC_ROOT`).
  - Verify that if `DEBUG` is later set to `False`, accessing the same URL results in an `ImproperlyConfigured` exception.
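  A hedged sketch of the crafted request using only the standard library; the host, port, and URL prefix are assumptions for the test setup:

  ```python
  # probe_traversal.py -- hedged sketch: send the crafted URL and print the response.
  from urllib.error import HTTPError
  from urllib.request import urlopen

  # %2F is the URL-encoded "/" so the traversal sequence survives URL routing.
  url = "http://127.0.0.1:8000/static/pipeline/..%2Fmanage.py"
  try:
      with urlopen(url) as response:
          print(response.status)
          print(response.read().decode(errors="replace"))  # file contents if traversal succeeded
  except HTTPError as exc:
      print("Blocked or not found:", exc.code)
  ```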
- Vulnerability Name: Potential Command Execution via Malicious File Names in SubProcessCompiler
- Description: In several compiler classes (which inherit from `SubProcessCompiler` in modules such as `sass.py`, `less.py`, `coffee.py`, etc.), the command to be executed is built by "flattening" a tuple from settings (for example, settings like `SASS_BINARY`, `LESS_BINARY`, or `COFFEE_SCRIPT_BINARY`). The flattening process does little or no validation of the individual components. If an attacker can somehow cause a static asset file name, or a related setting value, to include unexpected characters (such as spaces or shell metacharacters), the resulting command-line argument list may be altered. Even though subprocess execution is done with `shell=False` (thus largely preventing shell injection), malformed or malicious file names or configuration values could theoretically cause unexpected behavior or, in conjunction with another vulnerability (for example, an insecure file-upload facility), lead to arbitrary command execution. To trigger this vulnerability, an attacker would need to:
  1. Find a way to introduce a malicious filename into the static asset processing pipeline. This could be through an independent file upload vulnerability, an application that dynamically generates filenames based on user input without proper sanitization, or some other way to modify files in the static files storage.
  2. Craft a malicious filename containing shell metacharacters or unexpected spaces that, when processed by a compiler, could alter the intended command execution, for example a filename like `test; touch /tmp/pwned.coffee`.
  3. Trigger the compilation process for the asset with the malicious filename. This might involve requesting a page that uses the asset, or manually triggering asset compilation tasks if such functionality is exposed.
- Impact: If exploited, this vulnerability could allow an attacker who can control file names to influence the command arguments passed to external compilers. This might enable the execution of unintended commands, potentially leading to full system compromise.
- Vulnerability Rank: High
- Currently implemented mitigations:
  - The command is executed via `subprocess.Popen` with `shell=False`, so the arguments are passed as a list rather than a concatenated string, which mitigates many shell-injection risks.
  - Under normal operation, asset file names come from a static files system that is not under attacker control.
- Missing mitigations:
  - There is no explicit sanitization or validation of file names or settings values when constructing the command argument list within `SubProcessCompiler.execute_command` (a sketch appears after the source code analysis below).
  - The design assumes that static asset file names (and related settings values) are trusted. An additional layer of input validation would reduce potential risk, especially in environments where file uploads or dynamic file naming are possible.
- Preconditions:
  - An attacker must have an independent means (such as an insecure file upload vulnerability or a way to manipulate static file storage) to upload or modify static asset file names so that they contain malicious content.
  - The affected compiler must be triggered during asset processing so that the file name is included in the flattened command argument list.
- Source code analysis:
  - File: `pipeline/compilers/__init__.py`

    ```python
    class SubProcessCompiler(Compiler):
        # ...
        def execute_command(self, command, cwd=None, display_stdout=False,
                            display_stderr=False, content=None, filename=None,
                            **kwargs):
            if filename:
                command = tuple(command) + (filename,)  # Filename appended without sanitization
            command = list(self.flatten_command(command))  # Flattening happens here
            command = list(filter(None, command))
            if not command:
                return '', ''
            # ...
            process = Popen(
                command, cwd=cwd, env=env,
                stdout=PIPE, stderr=PIPE, stdin=PIPE,
                shell=False)  # shell=False is good, but the filename is unsanitized
            # ...
    ```

    The `SubProcessCompiler.execute_command` method appends the filename to the command without sanitization. Each element of the command tuple is checked: strings are added directly, while lists and tuples are iterated over and flattened recursively. The final list of arguments is filtered with a simple `filter(None, ...)` and passed directly to `subprocess.Popen`. No escaping or sanitization is applied to arbitrary file name inputs before they are appended to the command. Although using a list of arguments (with `shell=False`) avoids classic shell injection, an attacker's control over the file name could cause unexpected behavior in the called compiler because the filename is passed as a command-line argument without proper quoting or sanitization.
  - File: `pipeline/compilers/__init__.py`

    ```python
    def flatten_command(self, command):
        for arg in command:
            if isinstance(arg, (list, tuple)):
                for sub_arg in self.flatten_command(arg):  # Recursive flattening
                    yield sub_arg
            else:
                yield arg
    ```

    The `flatten_command` method handles lists and tuples within the command definition.
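  A minimal sketch of the filename validation referenced in the mitigation-test step below; the regex and helper name are assumptions, not existing django-pipeline code:

  ```python
  # Hedged sketch: validate an asset filename before appending it to the compiler command.
  import re

  SAFE_FILENAME_RE = re.compile(r"^[\w@./-]+$")  # letters, digits and _ @ . / - only (assumed policy)


  def validate_asset_filename(filename):
      """Reject filenames containing spaces, shell metacharacters, or traversal sequences."""
      if ".." in filename or not SAFE_FILENAME_RE.match(filename):
          raise ValueError(f"Refusing to compile suspicious filename: {filename!r}")
      return filename
  ```

  In `execute_command`, such a check could run before `command = tuple(command) + (filename,)`; this is a sketch of the idea, not a drop-in patch.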
- Security test case:
  - Set up a test environment with django-pipeline and a compiler enabled (e.g., CoffeeScript).
  - Create or simulate a scenario where a static asset filename can be controlled or influenced by an attacker. For a simplified test, you might manually create a file with a malicious name directly in your static files directory, for example `test;touch /tmp/pwned.coffee` (or similar, adapted to your OS).
  - Configure a pipeline asset to process this malicious file. For example, if you are using CoffeeScript, create a pipeline configuration that includes `test;touch /tmp/pwned.coffee`.
  - Trigger asset compilation, either by running `python manage.py collectstatic` or by accessing a page that uses the relevant static asset if compilation happens on demand.
  - Monitor the command executed by the compiler. You can achieve this by:
    - Replacing the actual compiler binary (e.g., `coffee`) with a simple script that logs the command-line arguments it receives and then exits (see the sketch after this list), or
    - Using system monitoring tools to observe the processes spawned by Python and their arguments.
  - Verify whether the executed command includes the malicious filename directly and unsanitized, e.g., you might see something like `coffee test;touch /tmp/pwned.coffee`.
  - Check whether the side effect of the malicious command was executed. In the `test;touch /tmp/pwned.coffee` example, check whether the file `/tmp/pwned` was created. If it was, command injection through malicious filenames is possible.
  - (Mitigation test) Implement filename sanitization in `SubProcessCompiler.execute_command` to only allow safe characters in filenames before appending them to the command. Retest with the same malicious filename and verify that the vulnerability is no longer exploitable. For example, you could sanitize the filename to remove or escape shell-sensitive characters before command execution.
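  A hedged sketch of the argument-logging stand-in for the real compiler binary mentioned above; pointing the binary setting (e.g., `COFFEE_SCRIPT_BINARY`) at this script and the log path are assumptions for the test setup:

  ```python
  #!/usr/bin/env python3
  # fake_coffee.py -- hedged sketch: a stand-in "compiler" that records the argv it receives.
  import sys

  with open("/tmp/compiler_args.log", "a") as log:
      log.write(repr(sys.argv) + "\n")

  sys.exit(0)  # pretend compilation succeeded
  ```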
- Vulnerability Name: Path Traversal via Glob Pattern in Package Sources
- Description: The `pipeline.glob.glob` function is used to locate source files for CSS and JavaScript packages based on patterns specified in the Django settings under `PIPELINE['STYLESHEETS']` and `PIPELINE['JAVASCRIPT']`. The `Package.sources` property, defined in `pipeline/packager.py`, employs `glob.glob` to resolve the `source_filenames` configured for each package. If these `source_filenames` patterns are not properly validated, and an attacker gains control or influence over them, a malicious pattern could be crafted to traverse directories beyond the intended static file locations. By exploiting this, an attacker could potentially include arbitrary files from the server's filesystem in the pipeline's processing flow. This could lead to information disclosure or other vulnerabilities, contingent on how those included files are subsequently processed. To trigger this vulnerability, an attacker needs to achieve a state where they can manipulate the `source_filenames` patterns within the Django settings for `PIPELINE['STYLESHEETS']` or `PIPELINE['JAVASCRIPT']`.
- Impact: Successful exploitation could allow an attacker to read sensitive files from the server, potentially exposing source code, configuration files, or confidential data.
- Vulnerability Rank: High
- Currently implemented mitigations: No direct mitigations are implemented within the project code to sanitize glob patterns in package configurations. The project assumes that developers will provide secure patterns in their settings.
- Missing mitigations: Implement robust input validation and sanitization for glob patterns defined in the `PIPELINE['STYLESHEETS']` and `PIPELINE['JAVASCRIPT']` settings. Restrict patterns to paths within designated static file directories, explicitly preventing directory traversal sequences such as `..` (see the sketch after the source code analysis below).
- Preconditions:
  - An attacker needs to achieve a state where they can manipulate the `source_filenames` patterns within the Django settings for `PIPELINE['STYLESHEETS']` or `PIPELINE['JAVASCRIPT']`. In typical Django deployments, direct external manipulation of settings is not feasible unless a separate configuration vulnerability exists or there is an indirect mechanism for settings injection.
  - The `pipeline` module must be actively enabled and in use within the Django application.
- Source code analysis:
  - File: `/code/pipeline/packager.py`

    ```python
    class Package:
        # ...
        @property
        def sources(self):
            if not self._sources:
                paths = []
                for pattern in self.config.get("source_filenames", []):  # Pattern from config
                    for path in glob(pattern):  # glob.glob is called here
                        if path not in paths and find(path):
                            paths.append(str(path))
                self._sources = paths
            return self._sources
    ```

    The `Package.sources` property retrieves the `source_filenames` patterns from the package configuration and passes them to the `glob(pattern)` function. These patterns originate from the Django settings (`css_packages`, `js_packages`).
  - File: `/code/pipeline/glob.py`

    ```python
    def glob(pathname):
        """Return a list of paths matching a pathname pattern."""
        return sorted(list(iglob(pathname)))


    def iglob(pathname):
        """Return an iterator which yields the paths matching a pathname pattern."""
        if not has_magic(pathname):
            yield pathname
            return
        dirname, basename = os.path.split(pathname)
        # ... recursive glob logic ...
    ```

    `glob.glob` and `glob.iglob` rely on the standard library modules `os.path` and `fnmatch`, which place no restriction on path traversal if the input patterns are not properly sanitized.
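  A minimal sketch of the pattern validation described under missing mitigations; `validate_source_pattern` is an illustrative name, not part of django-pipeline:

  ```python
  # Hedged sketch: validate source_filenames patterns before they reach glob().
  import os


  def validate_source_pattern(pattern):
      """Reject absolute paths and directory traversal in a source_filenames pattern."""
      normalized = os.path.normpath(pattern)
      if os.path.isabs(normalized) or normalized == ".." or normalized.startswith(".." + os.sep):
          raise ValueError(f"Unsafe source_filenames pattern: {pattern!r}")
      return pattern
  ```

  `Package.sources` could apply such a check to each configured pattern before calling `glob(pattern)`; this is a sketch of the idea rather than a patch.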
- Security test case:
  - Setup: Configure a Django project using `django-pipeline`.
  - Vulnerable configuration: Modify the Django settings to include a malicious path traversal pattern in `PIPELINE['STYLESHEETS']` or `PIPELINE['JAVASCRIPT']`. For example, alter `PIPELINE['STYLESHEETS']['screen']['source_filenames']` to `("../../../../../../../etc/passwd",)`.
  - Trigger the pipeline: Access any webpage that uses the `{% stylesheet "screen" %}` template tag (or the package you modified). This initiates the pipeline's processing of the "screen" CSS package.
  - Observe attempted file access: Monitor the application's behavior and logs. Observe whether the pipeline attempts to access or process the `/etc/passwd` file. In a real-world scenario, this might manifest as errors in the application logs related to file access, or, depending on error handling, the contents of `/etc/passwd` could be inadvertently included in the processed output.
  - Verify path traversal: If the pipeline attempts to access `/etc/passwd` (or a similar sensitive file outside the static file directory), the path traversal vulnerability is confirmed. This indicates that `glob.glob` processes the malicious pattern and that a settings compromise could lead to unauthorized file access.

  Note: This test case demonstrates the vulnerability under the hypothetical assumption that settings can be modified. In a standard deployment, direct external manipulation of Django settings is not typically possible. However, the analysis shows that django-pipeline does not sanitize glob patterns, making it potentially vulnerable if a method to compromise settings exists, or if user-provided data is unsafely incorporated into these patterns.