Mitigation Strategies Analysis for google/leveldb

Mitigation Strategy: LevelDB Cache Size Limit

  • Mitigation Strategy: LevelDB Cache Size Limit
  • Description:
    1. Analyze Memory Usage: Monitor the application's memory usage and LevelDB's cache behavior under typical and peak loads. Understand how much data your application frequently accesses from LevelDB.
    2. Configure the Block Cache: During LevelDB database initialization, create a fixed-capacity cache with leveldb::NewLRUCache() and assign it to the block_cache field of the leveldb::Options struct passed to leveldb::DB::Open(). (LevelDB has no CacheSize option; block_cache is the knob, and it defaults to an internal 8 MB cache when left null.) Choose a capacity that balances performance and memory consumption: a larger cache can improve read performance but increases memory usage. See the sketch after this strategy.
    3. Performance Testing: Conduct performance testing with different cache capacities to determine the optimal setting for your application's workload. Observe read latency and overall application performance.
    4. Consider Workload: If your application has predictable access patterns, a well-tuned cache size can significantly improve performance and prevent excessive memory usage. If access patterns are random or unpredictable, a very large cache might not be as effective.
  • Threats Mitigated:
    • Memory Exhaustion DoS (Medium Severity): Prevents LevelDB from consuming excessive memory, which could lead to application crashes or system instability under heavy load or malicious attempts to exhaust resources.
  • Impact:
    • Memory Exhaustion DoS (Medium Impact): Reduces the risk of memory exhaustion DoS attacks specifically related to LevelDB's memory usage.
  • Currently Implemented: Implemented in the application's LevelDB initialization code within the DatabaseManager.cpp file. The block cache is currently hardcoded to 512 MB via leveldb::NewLRUCache() in the leveldb::Options struct before opening the database.
  • Missing Implementation: The cache capacity is statically configured. Consider making it configurable via an application setting or environment variable so it can be adjusted without recompiling the application. Dynamic adjustment based on system memory pressure is not implemented and could be explored as a future enhancement.
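
A minimal sketch of the configuration described above, assuming a hypothetical LEVELDB_CACHE_MB environment variable and an illustrative database path; leveldb::NewLRUCache(), Options::block_cache, and leveldb::DB::Open() are the stock LevelDB C++ API:

```cpp
#include <cstddef>
#include <cstdlib>

#include "leveldb/cache.h"
#include "leveldb/db.h"
#include "leveldb/options.h"

int main() {
  // Read the cache capacity from the environment so it can be tuned
  // without recompiling; fall back to the current 512 MB default.
  // LEVELDB_CACHE_MB is a hypothetical application setting.
  const char* env = std::getenv("LEVELDB_CACHE_MB");
  std::size_t cache_mb = env ? std::strtoul(env, nullptr, 10) : 512;

  leveldb::Options options;
  options.create_if_missing = true;
  // block_cache caps LevelDB's block cache; if left null, LevelDB
  // creates an internal 8 MB cache instead.
  options.block_cache = leveldb::NewLRUCache(cache_mb * 1024 * 1024);

  leveldb::DB* db = nullptr;
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/appdb", &db);
  if (!status.ok()) {
    delete options.block_cache;
    return 1;
  }
  // ... application work ...
  delete db;                   // close the database before freeing the cache
  delete options.block_cache;  // the application owns the cache object
  return 0;
}
```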

Mitigation Strategy: Checksum Verification

  • Mitigation Strategy: Checksum Verification
  • Description:
    1. Understand the Defaults: LevelDB always computes and stores CRC32C checksums for data blocks and log records, but it does not verify block checksums on ordinary reads by default: ReadOptions::verify_checksums defaults to false, as does Options::paranoid_checks.
    2. Enable Read Verification: Set ReadOptions::verify_checksums = true on read operations (Get() and iterators) so that all data read from storage is checked against its stored checksum. Note that this flag lives in ReadOptions, not Options.
    3. Consider the paranoid_checks Option: For applications with extremely high data integrity requirements and tolerance for potential performance impact, set Options::paranoid_checks = true. This makes LevelDB aggressively check the data it processes and stop early on detected errors, but it can reduce performance. Evaluate the trade-off between performance and stronger data integrity guarantees. See the sketch after this strategy.
  • Threats Mitigated:
    • Data Corruption (Medium Severity): Detects data corruption caused by hardware failures (disk errors, memory issues), software bugs within LevelDB or the application, or potentially malicious manipulation of data files. Checksums help ensure data read from LevelDB is consistent with what was written.
  • Impact:
    • Data Corruption (Medium Impact): Provides a mechanism to detect data corruption during read operations. This allows the application to handle corruption gracefully (e.g., retry read, report error, use backup data) instead of silently using corrupted data, which could lead to unpredictable application behavior or data inconsistencies.
  • Currently Implemented: LevelDB writes checksums unconditionally, but the application reads with default ReadOptions, so block checksums are not verified on reads: the code sets neither ReadOptions::verify_checksums nor Options::paranoid_checks.
  • Missing Implementation: Set ReadOptions::verify_checksums = true on the read paths in DatabaseManager.cpp so that data read from disk is always validated against its stored checksums. Evaluate and potentially enable Options::paranoid_checks = true for critical data paths after performance testing.
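
A minimal sketch of both options, reusing the illustrative database path from the previous sketch; ReadOptions::verify_checksums, Options::paranoid_checks, and Status::IsCorruption() are the stock LevelDB C++ API:

```cpp
#include <string>

#include "leveldb/db.h"
#include "leveldb/options.h"

// At open time: paranoid_checks makes LevelDB aggressively check the
// data it processes and stop early on errors (default: false).
leveldb::Options options;
options.paranoid_checks = true;  // measure the performance cost first

leveldb::DB* db = nullptr;
leveldb::Status s = leveldb::DB::Open(options, "/tmp/appdb", &db);

// At read time: verify_checksums lives in ReadOptions and defaults to
// false; setting it checks every block read against its stored CRC32C.
leveldb::ReadOptions read_options;
read_options.verify_checksums = true;

std::string value;
s = db->Get(read_options, "some-key", &value);
if (s.IsCorruption()) {
  // Detected corruption: retry the read, report the error, or restore
  // from backup instead of silently using bad data.
}
```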

Mitigation Strategy: Write-Ahead Logging (WAL) Management

  • Mitigation Strategy: Write-Ahead Logging (WAL) Management
  • Description:
    1. Verify WAL Is in Use (Always On): LevelDB appends every write to the write-ahead log before applying it to the memtable, and stock LevelDB provides no option to disable the WAL. It is crucial for ensuring data durability in case of crashes or power failures.
    2. Understand WAL Behavior: Familiarize yourself with how LevelDB manages WAL files. LevelDB starts a new log file whenever the memtable is flushed to an SSTable and automatically deletes log files that are no longer needed for recovery (the experimental Options::reuse_logs, off by default, additionally allows an existing log to be appended to on reopen).
    3. Consider WriteOptions::sync = true for Critical Writes: For write operations where immediate durability is paramount, set sync = true in the WriteOptions passed to Put(), Delete(), or Write() (the flag belongs to WriteOptions, not Options). This forces the WAL to be flushed to disk before the write returns, ensuring the data survives a crash, but it significantly reduces write throughput and should be used judiciously, only for critical data. See the sketch after this strategy.
    4. WAL Archival/Purging Strategy (Application Level): LevelDB itself deletes log files once they are obsolete, so unbounded WAL growth is unusual; however, your application might need a strategy for archiving WAL files to support point-in-time recovery, or for monitoring disk usage if logs linger. This is an application-level concern, not something configured within LevelDB options.
  • Threats Mitigated:
    • Data Loss due to System Crashes/Power Failures (High Severity): WAL ensures that committed write operations are durable and can be recovered even if the system crashes or loses power before data is flushed to the main database files.
  • Impact:
    • Data Loss (High Impact): Significantly reduces the risk of data loss due to unexpected system interruptions by providing a recovery mechanism based on the WAL.
  • Currently Implemented: WAL is always active in LevelDB, and the application relies on LevelDB's default WAL behavior. WriteOptions::sync = true is not used for writes.
  • Missing Implementation: No explicit configuration or management of WAL is done in the application code. While relying on defaults is generally acceptable, for enhanced control and potential disaster recovery scenarios, consider:
    • Documenting the reliance on LevelDB's default WAL behavior.
    • Exploring application-level WAL archival strategies if point-in-time recovery is a requirement.
    • Evaluating the need for WriteOptions::sync = true for specific critical write operations based on data durability requirements and performance trade-offs.
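
A minimal sketch of a synchronous write for critical data, assuming db is an open leveldb::DB* as in the earlier sketches and that the keys shown are illustrative; WriteOptions::sync and WriteBatch are the stock LevelDB C++ API:

```cpp
#include "leveldb/db.h"
#include "leveldb/write_batch.h"

// sync = true forces the WAL to be flushed to stable storage before
// the call returns, at a substantial write-throughput cost.
leveldb::WriteOptions sync_write;
sync_write.sync = true;

// A single durable write.
leveldb::Status s = db->Put(sync_write, "critical-key", "critical-value");

// Grouping several critical updates into one WriteBatch amortizes the
// cost of the single disk flush across all of them.
leveldb::WriteBatch batch;
batch.Put("key1", "value1");
batch.Put("key2", "value2");
batch.Delete("stale-key");
s = db->Write(sync_write, &batch);
```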

Mitigation Strategy: Bloom Filters

  • Mitigation Strategy: Bloom Filters
  • Description:
    1. Enable Bloom Filters (Off by Default): LevelDB does not use Bloom filters unless the application asks for them: Options::filter_policy defaults to null, meaning no filtering. Set Options::filter_policy = leveldb::NewBloomFilterPolicy(bits_per_key) at initialization to reduce disk reads for lookups of non-existent keys.
    2. Understand Bloom Filter Functionality: Bloom filters are probabilistic data structures that quickly check if a key might be present in the database. They can have false positives (saying a key might be present when it's not) but no false negatives (never saying a key is not present when it is). This helps avoid unnecessary disk reads for keys that are likely not in the database.
    3. Tune Bloom Filter Bits Per Key (Advanced): Adjust the number of bits per key by passing a different value to leveldb::NewBloomFilterPolicy(). Increasing bits per key reduces the false positive rate but increases the memory and disk space consumed by the filters. Ten bits per key (roughly a 1% false positive rate) is a sensible starting point for most applications; further tuning is typically only needed for very large databases or specific performance optimization scenarios. See the sketch after this strategy.
  • Threats Mitigated:
    • Read Amplification DoS (Low to Medium Severity - Indirect): By reducing unnecessary disk reads, Bloom filters can indirectly help mitigate potential read amplification issues that could contribute to DoS if an attacker attempts to trigger many reads for non-existent keys.
    • Performance Degradation due to Excessive Disk Reads (Medium Severity): Bloom filters improve read performance, especially for lookups of non-existent keys, preventing performance degradation under normal and potentially attack scenarios involving many non-existent key lookups.
  • Impact:
    • Read Amplification DoS (Low Impact): Indirectly reduces the risk of read amplification DoS by optimizing read operations.
    • Performance Degradation (Medium Impact): Improves read performance, making the application more resilient to performance-based attacks or general performance issues related to database reads.
  • Currently Implemented: The application does not set Options::filter_policy, so Bloom filters are not active; LevelDB's default is no filter policy.
  • Missing Implementation: Enable Bloom filters by setting Options::filter_policy = leveldb::NewBloomFilterPolicy(10) in the LevelDB initialization code in DatabaseManager.cpp. Revisit the bits-per-key value only if performance profiling shows that reads remain a bottleneck or if there are specific optimization requirements for very large datasets.
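
A minimal sketch of enabling a Bloom filter at open time, reusing the illustrative database path from the earlier sketches; leveldb::NewBloomFilterPolicy() and Options::filter_policy are the stock LevelDB C++ API:

```cpp
#include "leveldb/db.h"
#include "leveldb/filter_policy.h"
#include "leveldb/options.h"

leveldb::Options options;
options.create_if_missing = true;
// filter_policy defaults to null (no filter). Ten bits per key gives
// roughly a 1% false positive rate for point lookups.
options.filter_policy = leveldb::NewBloomFilterPolicy(10);

leveldb::DB* db = nullptr;
leveldb::Status s = leveldb::DB::Open(options, "/tmp/appdb", &db);
// ... application work ...
delete db;                     // close the database first
delete options.filter_policy;  // the application owns the policy object
```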