Mitigation Strategy: Limit Decompressed Size (Progressive Check within compressor)
Description:
- Integrate with compressor: Ideally, this check should be integrated within the zetbaitsu/compressor library itself, or through a very tightly coupled wrapper. This ensures the check happens as early as possible in the decompression process.
- Chunk-Based Decompression: The compressor library (or the wrapper) should be configured or modified to decompress data in chunks. This might involve using a streaming API or manually managing buffers.
- Size Check After Each Chunk: After each chunk is decompressed, the compressor (or wrapper) should check the cumulative decompressed size against the predetermined limit (see the sketch after this list).
- Immediate Termination: If the limit is exceeded, the compressor should immediately stop further decompression, release any allocated resources, and raise an exception or return an error code. It should not return any partially decompressed data to the calling application.
- Configuration Option (Ideal): The best approach would be for zetbaitsu/compressor to provide a configuration option that sets a maximum decompressed size, handling the chunking and checks internally.
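A minimal sketch of such a wrapper, assuming the compressed payload is GZIP data read through the standard java.util.zip streams. The function name, limit, and chunk size are placeholders for illustration; zetbaitsu/compressor's actual API may differ.

    import java.io.ByteArrayOutputStream
    import java.io.InputStream
    import java.util.zip.GZIPInputStream

    class DecompressedSizeLimitExceeded(limit: Long) :
        RuntimeException("Decompressed size exceeded the limit of $limit bytes")

    // Decompresses the stream in fixed-size chunks and aborts as soon as the
    // cumulative decompressed size exceeds maxBytes. No partial data is returned.
    fun decompressWithLimit(input: InputStream, maxBytes: Long, chunkSize: Int = 8192): ByteArray {
        GZIPInputStream(input).use { gzip ->
            val output = ByteArrayOutputStream()
            val buffer = ByteArray(chunkSize)
            var total = 0L
            while (true) {
                val read = gzip.read(buffer)
                if (read == -1) break
                total += read
                if (total > maxBytes) {
                    // Immediate termination: stop reading and discard partial output.
                    throw DecompressedSizeLimitExceeded(maxBytes)
                }
                output.write(buffer, 0, read)
            }
            return output.toByteArray()
        }
    }

The caller rejects oversized payloads by catching DecompressedSizeLimitExceeded; memory use is bounded because decompression stops as soon as the running total crosses the limit.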
Threats Mitigated:
- Compression Bombs (DoS): Severity: High. Prevents resource exhaustion.
- Excessive Memory Consumption (DoS): Severity: High. Limits memory usage.
Impact:
- Compression Bombs (DoS): Risk reduced from High to Very Low.
- Excessive Memory Consumption (DoS): Risk reduced from High to Low.
Currently Implemented: Assuming zetbaitsu/compressor does not have built-in support for this, it is likely not implemented.
Missing Implementation: This requires either modifying zetbaitsu/compressor directly (if open-source and feasible), creating a wrapper around it that manages the chunking and size checks, or requesting this feature from the library maintainers.
Mitigation Strategy: Algorithm Restriction (Configuration within compressor)
Description:
- Whitelist Approach: Configure zetbaitsu/compressor to allow only a specific set of whitelisted compression algorithms. This is the most secure approach (a sketch of such a guard follows this list).
- Configuration Mechanism: Use the library's provided configuration mechanism (if any) to specify the allowed algorithms. This might involve a list of allowed algorithm names, constants, or a configuration file.
- Disable Custom Compressors (If Applicable): If zetbaitsu/compressor allows users to define custom compression algorithms, disable this feature unless it is absolutely necessary and rigorously controlled. Custom compressors introduce significant security risks.
- Error Handling: Ensure that if the application attempts to use an unsupported algorithm, zetbaitsu/compressor raises a clear exception or returns an error.
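A hypothetical whitelist guard placed in front of the decompression call might look like the sketch below. The algorithm names and the decompress hook are assumptions, since the exact configuration surface depends on how the application and zetbaitsu/compressor select an algorithm.

    // Hypothetical set of supported algorithms; only a subset is whitelisted.
    enum class Algorithm { GZIP, DEFLATE, BZIP2, LZMA }

    private val allowedAlgorithms = setOf(Algorithm.GZIP, Algorithm.DEFLATE)

    // Rejects any algorithm outside the whitelist before the real decompression
    // routine (passed in as `decompress`) is ever invoked.
    fun decompressChecked(
        algorithm: Algorithm,
        payload: ByteArray,
        decompress: (Algorithm, ByteArray) -> ByteArray
    ): ByteArray {
        require(algorithm in allowedAlgorithms) {
            "Compression algorithm $algorithm is not whitelisted"
        }
        return decompress(algorithm, payload)
    }

The require check also covers the error-handling point above: a request for a disallowed algorithm fails immediately with an IllegalArgumentException instead of being silently processed.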
Threats Mitigated:
- Excessive CPU Consumption (DoS): Severity: Medium. Restricting use to less computationally intensive algorithms limits the CPU cost an attacker can impose.
- Vulnerabilities in Specific Algorithms: Severity: Variable. Eliminates the risk from known vulnerabilities in disallowed algorithms.
Impact:
- Excessive CPU Consumption (DoS): Risk reduction depends on the algorithms restricted (Low to Medium).
- Vulnerabilities in Specific Algorithms: Risk reduced to Very Low for the disallowed algorithms.
Currently Implemented: Likely not implemented; the application probably uses the library's default settings, allowing all supported algorithms.
Missing Implementation: Requires configuring zetbaitsu/compressor (through its API or configuration files) to restrict the allowed algorithms. The specific implementation depends on how the library handles configuration.
Mitigation Strategy: CPU Time Limit (Integration with compressor, if possible)
Description:
- Ideal Integration: The ideal scenario is that zetbaitsu/compressor provides a built-in mechanism to set a CPU time limit for decompression operations. This would be the most reliable and efficient approach.
- Wrapper with Timeouts (If Necessary): If the library doesn't offer built-in timeouts, a wrapper function could be created (see the sketch after this list). This wrapper would:
  - Start a timer before calling zetbaitsu/compressor's decompression function.
  - Call the decompression function.
  - Periodically check the elapsed time (using a separate thread or asynchronous checks, if possible, to avoid blocking the main thread).
  - If the time limit is exceeded, attempt to forcefully terminate the decompression process. This might involve using signals, killing a subprocess, or other OS-specific mechanisms. This is the least reliable part, as forcefully terminating a library might lead to instability.
- Configuration (Ideal): If built-in support exists, a configuration option within zetbaitsu/compressor would be used to set the time limit.
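The following sketch shows the wrapper variant using a wall-clock timeout via Java's ExecutorService, which only approximates a CPU time limit. The timeout value and the decompress callback are placeholders, and as noted above, cancellation is only best-effort if the underlying library ignores thread interruption.

    import java.util.concurrent.Callable
    import java.util.concurrent.Executors
    import java.util.concurrent.TimeUnit
    import java.util.concurrent.TimeoutException

    // Runs the supplied decompression call on a worker thread and gives up
    // after timeoutMillis of wall-clock time. Best-effort only: cancel(true)
    // merely interrupts the worker; a call that ignores interruption keeps running.
    fun <T> decompressWithTimeout(timeoutMillis: Long, decompress: () -> T): T {
        val executor = Executors.newSingleThreadExecutor()
        try {
            val future = executor.submit(Callable { decompress() })
            return try {
                future.get(timeoutMillis, TimeUnit.MILLISECONDS)
            } catch (e: TimeoutException) {
                future.cancel(true)
                throw RuntimeException("Decompression exceeded $timeoutMillis ms", e)
            }
        } finally {
            executor.shutdownNow()
        }
    }

For example, decompressWithTimeout(5_000) { someDecompressCall(payload) } (with someDecompressCall standing in for the real library call) bounds the operation at roughly five seconds of elapsed time.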
Threats Mitigated:
- Excessive CPU Consumption (DoS): Severity: Medium.
Impact:
- Excessive CPU Consumption (DoS): Risk reduced from Medium to Low.
Currently Implemented: Highly unlikely to be implemented directly within zetbaitsu/compressor. A wrapper might exist, but it is probably not robust.
Missing Implementation: Requires either a feature request to the zetbaitsu/compressor maintainers for built-in time limits, or the creation of a robust wrapper function that handles timeouts and (attempts to) safely terminate the decompression process. The wrapper approach is significantly more complex and potentially less reliable.