Mitigation Strategy: Atomic Operations (using `concurrent-ruby`)

- Description:
  - Identify Simple Shared Variables: Find shared variables that are subject to simple, atomic updates (e.g., incrementing a counter, setting a flag).
  - Use `concurrent-ruby` Atomic Types: Replace direct access to these variables with `concurrent-ruby`'s atomic primitives:
    - `Concurrent::AtomicFixnum`: For integer counters.
    - `Concurrent::AtomicBoolean`: For boolean flags.
    - `Concurrent::AtomicReference`: For holding references to other objects. Use with caution: it guarantees atomic reference updates, not thread-safety of the referenced object itself. This is crucial for atomically swapping out entire objects.
  - Understand Atomic Operations: Be aware of the specific atomic operations provided by each type (e.g., `increment`, `decrement`, `compare_and_set`, `update`), and use the correct operation for your needs. The `compare_and_set` (CAS) operation is particularly important for implementing more complex lock-free algorithms.
  - Avoid Non-Atomic Operations: Do not combine atomic operations with non-atomic operations on the same variable. For example, `if atomic_counter.value > 0 then atomic_counter.decrement end` is not atomic and still needs external synchronization.
- Threats Mitigated:
  - Data Races (Severity: High): Guarantees that simple updates to shared variables are performed atomically, preventing data corruption.
  - Lost Updates (Severity: Medium): Ensures that updates from multiple threads are not lost due to concurrent access.
- Impact:
  - Data Races: Risk significantly reduced for simple variable updates.
  - Lost Updates: Risk significantly reduced.
- Currently Implemented:
  - `Concurrent::AtomicFixnum` is used to track the number of active requests in the `RequestCounter` module.
- Missing Implementation:
  - A boolean flag indicating whether the system is in maintenance mode is currently a regular instance variable and is not updated atomically.
Mitigation Strategy: Thread-Safe Data Structures (using `concurrent-ruby`)

- Description:
  - Identify Shared Collections: Find shared data structures like arrays, hashes, or maps that are accessed and modified by multiple threads.
  - Use `concurrent-ruby` Collections: Replace standard Ruby collections with `concurrent-ruby`'s thread-safe equivalents:
    - `Concurrent::Array`
    - `Concurrent::Hash`
    - `Concurrent::Map` (often the best choice for general-purpose concurrent hash tables)
  - Read Documentation Carefully: Understand the thread-safety guarantees of each collection and its methods. Some operations might not be fully atomic, particularly those involving multiple steps.
  - Avoid Check-Then-Act: Be particularly cautious of "check-then-act" sequences (e.g., checking if a key exists in a `Concurrent::Map` and then inserting a value). These are not atomic and require additional synchronization. `Concurrent::Map` provides methods like `put_if_absent`, `compute_if_absent`, `compute_if_present`, and `merge_pair` that perform these sequences atomically.
- Threats Mitigated:
  - Data Races (Severity: High): Provides thread-safe access to collections, preventing data corruption.
  - Concurrent Modification Errors (Severity: Medium): Eliminates the risk of errors caused by modifying a collection while it's being iterated over by another thread.
- Impact:
  - Data Races: Risk significantly reduced for collection operations.
  - Concurrent Modification Errors: Risk eliminated.
- Currently Implemented:
  - `Concurrent::Map` is used to store cached database query results in the `QueryCache` module.
- Missing Implementation:
  - A list of active user sessions is currently stored in a regular Ruby `Array` and is accessed by multiple threads without proper synchronization.
Mitigation Strategy: Thread Pool Management (using `concurrent-ruby`)

- Description:
  - Avoid Raw Threads: Do not create threads directly using `Thread.new` unless absolutely necessary. Raw threads offer no management or resource control.
  - Use `concurrent-ruby` Thread Pools: Utilize `concurrent-ruby`'s thread pool implementations:
    - `Concurrent::ThreadPoolExecutor`: The most general-purpose and configurable thread pool.
    - `Concurrent::FixedThreadPool`: A pool with a fixed number of threads.
    - `Concurrent::CachedThreadPool`: A pool that creates threads as needed and reuses them, suitable for short-lived tasks.
    - `Concurrent::SingleThreadExecutor`: Executes tasks sequentially in a single background thread.
    - `Concurrent::ImmediateExecutor`: Executes tasks immediately in the calling thread (useful for testing or when concurrency is not desired).
  - Configure Pool Size: Carefully configure the thread pool size based on:
    - The number of available CPU cores.
    - The nature of the tasks (CPU-bound vs. I/O-bound). I/O-bound tasks can often use a larger pool size.
    - Available system memory.
  - Monitor Resource Usage: Monitor the application's resource usage (CPU, memory, threads) to ensure that the thread pool is not over- or under-provisioned, using `concurrent-ruby`'s built-in instrumentation or external monitoring systems.
  - Consider Adaptive Pools: Explore `Concurrent::ThreadPoolExecutor`'s auto-trimming features, which can dynamically adjust the number of threads based on load.
  - Shut Down Pools Gracefully: When shutting down the application, shut down thread pools gracefully using `#shutdown` followed by `#wait_for_termination`. This allows running tasks to complete before the application exits.
  - Use the `post` Method: Submit tasks to the pool with `#post`; the block is queued and executed by a pool thread.
- Threats Mitigated:
  - Resource Exhaustion (Severity: Medium): Prevents the creation of too many threads, which can lead to resource exhaustion (memory, CPU, file descriptors).
  - Thread Starvation (Severity: Low): Helps ensure that tasks are executed in a timely manner by managing the allocation of threads.
- Impact:
  - Resource Exhaustion: Risk significantly reduced.
  - Thread Starvation: Risk moderately reduced.
- Currently Implemented:
  - A `Concurrent::FixedThreadPool` is used for handling background tasks related to email sending.
- Missing Implementation:
  - Several parts of the application still create threads directly using `Thread.new`, without any resource management.
  - The `Concurrent::FixedThreadPool` for email sending is not properly shut down when the application exits.
Mitigation Strategy: Exception Handling with `Future` and `Promise` (using `concurrent-ruby`)

- Description:
  - Use `Future` or `Promise`: Wrap asynchronous operations in `Concurrent::Future` or `Concurrent::Promise` objects. These provide a way to manage the result of an asynchronous computation.
  - Use `rescue`: Use the `#rescue` method (aliased as `#catch` and `#on_error`) on a `Promise` chain to handle exceptions that occur during asynchronous execution, e.g., `promise.rescue { |reason| ... }`. For a plain `Future`, check `#rejected?` and `#reason` after it completes. This is essential for preventing unhandled exceptions from silently terminating threads.
  - Chain Operations: Use methods like `#then`, `#rescue`, and `#flat_map` to chain together asynchronous operations and handle their results (and potential errors) in a controlled manner.
  - Handle Results: Use `#value` to get the result of the `Future` or `Promise`; it blocks until the computation completes (or call `#wait` first to block explicitly). Note that `#value` returns `nil` on rejection, so check `#rejected?` and `#reason` when failure matters.
- Threats Mitigated:
  - Silent Thread Termination (Severity: High): Prevents asynchronous tasks from failing silently due to unhandled exceptions.
  - Inconsistent Application State (Severity: Medium): Allows for graceful handling of errors in asynchronous operations, preventing the application from entering an inconsistent state.
- Impact:
  - Silent Thread Termination: Risk significantly reduced.
  - Inconsistent Application State: Risk moderately reduced.
- Currently Implemented:
  - `Future` objects are used in some parts of the code for fetching data from external APIs, but their `rescue` methods are not consistently used.
- Missing Implementation:
  - Comprehensive exception handling using `#rescue` is missing in several `Future` implementations.
Mitigation Strategy: Using `ThreadPoolExecutor#error_callback` (using `concurrent-ruby`)

- Description:
  - Use `ThreadPoolExecutor`: Ensure you are using `Concurrent::ThreadPoolExecutor` for managing your thread pool.
  - Set `error_callback`: When creating the `ThreadPoolExecutor`, set the `error_callback` option to a lambda or a method that will be invoked whenever a task submitted to the pool raises an unhandled exception:

    ```ruby
    executor = Concurrent::ThreadPoolExecutor.new(
      # ... other options ...
      error_callback: ->(job, reason) {
        Rails.logger.error("Error in thread pool task: #{reason}, job: #{job.inspect}")
        # Potentially notify an error tracking service
      }
    )
    ```

  - Handle the Exception: Within the `error_callback`, you have access to the task (`job`) and the exception (`reason`). Log the error, potentially notify an error tracking service, and decide whether to take any corrective action.
- Threats Mitigated:
  - Silent Task Failures (Severity: High): Prevents tasks submitted to a `ThreadPoolExecutor` from failing silently due to unhandled exceptions.
  - Loss of Diagnostic Information (Severity: Medium): Provides a centralized place to log and handle exceptions from background tasks, improving debugging and monitoring.
- Impact:
  - Silent Task Failures: Risk significantly reduced.
  - Loss of Diagnostic Information: Risk significantly reduced.
- Currently Implemented:
  - Not currently implemented. The existing `FixedThreadPool` does not have an `error_callback`.
- Missing Implementation:
  - The `ThreadPoolExecutor` used for background tasks does not have an `error_callback` configured. This is a critical missing piece for robust error handling.
Mitigation Strategy: Actor Model (using `concurrent-ruby`)

- Description:
  - Identify Concurrency Problems: Determine if the Actor model is a good fit for your problem. It excels in situations with complex interactions between concurrent entities. (Note that `Concurrent::Actor` ships in the `concurrent-ruby-edge` gem.)
  - Define Actors: Define your actors using `Concurrent::Actor::Context`. Each actor encapsulates its own state and behavior.
  - Message Passing: Actors communicate exclusively through message passing. Define the messages that each actor can receive and how it should respond to them. Use `tell` (aliased as `<<`) to send messages asynchronously.
  - Avoid Shared State: Actors should not share mutable state. All communication should happen through messages.
  - Supervision: Consider using supervision (e.g., `Concurrent::Actor::RestartingContext`) to manage the lifecycle of your actors and handle failures.
  - Use the `ask` Method: Use `ask` to send a message and receive a future for the result (`ask!` blocks and returns the result directly).
- Threats Mitigated:
  - Data Races (Severity: High): Eliminates shared mutable state by design, preventing data races.
  - Deadlocks (Severity: High): Reduces the risk of deadlocks by avoiding explicit locking.
  - Complexity (Severity: Medium): Can simplify reasoning about concurrency by providing a higher-level abstraction.
- Impact:
  - Data Races: Risk significantly reduced (near elimination).
  - Deadlocks: Risk significantly reduced.
  - Complexity: Can increase initial complexity but reduce long-term complexity for suitable problems.
- Currently Implemented:
  - Not currently used.
- Missing Implementation:
  - The Actor model could be a good fit for managing user sessions and handling concurrent requests for the same user, replacing the current mutable `Session` object. This would require a significant refactoring.