Objective: Compromise a TensorFlow-based application so that it produces incorrect predictions, leaks sensitive data, or crashes, by exploiting vulnerabilities in the TensorFlow framework or in how the application uses it.
[Attacker's Goal: Compromise TensorFlow Application]
                          |
                 -----------------------------------------------
                 |                                             |
[Sub-Goal 1: Incorrect Predictions]          [Sub-Goal 3: Application Crash (DoS)]
                 |                                             |
                 |                            ---------------------------
                 |                            |                         |
    [***][1.1 Model Poisoning]      [3.1 Resource Exhaustion]   [3.2 Malformed Input]
                 |                            |                         |
          ---------------                     |                 -------------
          |             |                     |                 |           |
        [*A]          [*C]                  [*M]              [*P]        [*Q]
Attack Tree Path: [***][1.1 Model Poisoning]
[***][1.1 Model Poisoning] (Critical Node & High-Risk Path Start)
- Description: The attacker modifies the model itself, either during the training process or after deployment, to cause it to produce incorrect predictions. This is a critical threat because it undermines the fundamental purpose of the model.
- Sub-Vectors:
[*A] Training Data Poisoning
- Description: The attacker injects malicious data into the training dataset. This skews the model's learning process, causing it to make incorrect predictions on legitimate inputs.
- Likelihood: Medium
- Impact: High
- Effort: Medium
- Skill Level: Intermediate
- Detection Difficulty: Medium
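For illustration, here is a minimal sketch of one simple form of training data poisoning, label flipping, as it might appear in a Python training pipeline. The class count, poison fraction, and dataset layout are assumptions made for the example, not details of any specific application.

```python
import numpy as np

def flip_labels(y_train, num_classes, poison_fraction=0.05, seed=0):
    """Illustrative label-flipping poisoning: a small fraction of training
    labels is silently reassigned to random incorrect classes."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n = len(y_train)
    idx = rng.choice(n, size=int(n * poison_fraction), replace=False)
    for i in idx:
        # Pick any class other than the true one.
        wrong = [c for c in range(num_classes) if c != y_train[i]]
        y_poisoned[i] = rng.choice(wrong)
    return y_poisoned

# Hypothetical usage: 5% of labels flipped before model.fit() is ever called.
# y_train = flip_labels(y_train, num_classes=10)
```

Even a low poison fraction can measurably degrade accuracy on the targeted classes, which is why provenance and validation of training data matter.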
[*C] Model File Tampering
- Description: The attacker directly modifies the saved model file (e.g., .pb, .h5) after it has been trained. This requires gaining unauthorized access to the file system where the model is stored.
- Likelihood: Low
- Impact: Very High
- Effort: Low (once access is gained)
- Skill Level: Intermediate
- Detection Difficulty: Easy (with checksums and integrity checks)
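Since detection hinges on integrity checks, a minimal sketch of verifying a saved model's SHA-256 digest before loading it with tf.keras.models.load_model follows. How the trusted digest is recorded at export time and where it is stored are deployment details assumed for the example.

```python
import hashlib
import tensorflow as tf

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file and compute its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified_model(path, expected_digest):
    """Refuse to load a model whose on-disk file no longer matches the
    digest recorded at export time (digest storage is out of scope here)."""
    actual = sha256_of_file(path)
    if actual != expected_digest:
        raise RuntimeError(f"Integrity check failed for {path}: {actual}")
    return tf.keras.models.load_model(path)
```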
Attack Tree Path: [*D] Adversarial Examples
[1.2 Input Manipulation], and specifically [*D] Adversarial Examples
- Note: Although the entire Input Manipulation branch isn't included in the high-risk subtree, the Adversarial Examples node is significant enough to warrant individual mention due to its high likelihood.
[*D] Adversarial Examples
- Description: The attacker crafts specific inputs with tiny, often imperceptible, perturbations. These perturbations are designed to cause the model to misclassify the input with high confidence, even though the input appears normal to a human observer.
- Likelihood: High
- Impact: Medium
- Effort: Low
- Skill Level: Intermediate
- Detection Difficulty: Medium
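For context, a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known way such perturbations are generated with tf.GradientTape. It assumes a Keras classifier that outputs class probabilities over inputs scaled to [0, 1]; the epsilon value is purely illustrative.

```python
import tensorflow as tf

def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """Fast Gradient Sign Method: nudge each input feature in the direction
    that increases the loss, bounded by epsilon, to induce misclassification."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, model(x))
    signed_grad = tf.sign(tape.gradient(loss, x))
    # The perturbed input stays visually close to the original.
    return tf.clip_by_value(x + epsilon * signed_grad, 0.0, 1.0)

# adv_x = fgsm_perturb(model, x_batch, y_batch)  # model and data assumed to exist
```

The low effort reflected in the ratings above comes from the fact that such gradient-based attacks are widely documented and require only query or white-box access to the model.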
Attack Tree Path: [3.1 Resource Exhaustion]
[3.1 Resource Exhaustion] (Critical Node)
- Description: The attacker overwhelms the application with requests or data, causing it to consume excessive resources (CPU, memory, GPU memory) and eventually crash or become unresponsive.
- Sub-Vectors:
[*M] Large Input Tensors
- Description: The attacker submits extremely large input tensors to the model. These large tensors consume a significant amount of memory and processing time, potentially leading to resource exhaustion.
- Likelihood: High
- Impact: Medium
- Effort: Low
- Skill Level: Novice
- Detection Difficulty: Easy
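A minimal sketch of a pre-inference guard that rejects oversized requests before they ever reach the TensorFlow graph. The batch and element limits are arbitrary assumptions and would need tuning for a real deployment.

```python
import numpy as np

MAX_BATCH_SIZE = 64              # assumed per-request batch limit
MAX_TOTAL_ELEMENTS = 1_000_000   # assumed per-request element budget

def enforce_input_budget(batch: np.ndarray) -> np.ndarray:
    """Reject requests that would exceed the memory/compute budget."""
    if batch.ndim < 1 or batch.shape[0] > MAX_BATCH_SIZE:
        raise ValueError(f"Batch dimension exceeds limit of {MAX_BATCH_SIZE}")
    if batch.size > MAX_TOTAL_ELEMENTS:
        raise ValueError(f"Input has {batch.size} elements; limit is {MAX_TOTAL_ELEMENTS}")
    return batch
```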
Attack Tree Path: [3.2 Malformed Input]
[3.2 Malformed Input] (Critical Node)
- Description: The attacker provides input that, while not necessarily large, is structured in a way that triggers errors or unexpected behavior within the TensorFlow library or the application's input handling logic.
- Sub-Vectors:
[*P] Invalid Tensor Shapes
- Description: The attacker provides input tensors with shapes that are incompatible with the model's expected input shape. This can lead to crashes or undefined behavior within TensorFlow.
- Likelihood: High
- Impact: Medium
- Effort: Low
- Skill Level: Novice
- Detection Difficulty: Easy
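A minimal sketch of validating an incoming tensor against the model's declared input shape before calling it, assuming a single-input Keras model; failing fast at the boundary avoids relying on TensorFlow's internal error handling.

```python
import tensorflow as tf

def validate_input_shape(model: tf.keras.Model, x: tf.Tensor) -> None:
    """Fail fast with a clear error instead of letting an incompatible shape
    surface as a crash deep inside a TensorFlow op."""
    expected = model.input_shape[1:]   # per-example shape; None means "any size"
    actual = tuple(x.shape[1:])
    if len(actual) != len(expected) or any(
        want is not None and got != want
        for got, want in zip(actual, expected)
    ):
        raise ValueError(f"Invalid input shape {actual}; expected {tuple(expected)}")
```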
[*Q] Invalid Data Types
- Description: The attacker provides input tensors with data types that are not supported by the model or by specific TensorFlow operations being used. This can also lead to crashes or unexpected results.
- Likelihood: High
- Impact: Medium
- Effort: Low
- Skill Level: Novice
- Detection Difficulty: Easy
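Similarly, a minimal sketch of dtype validation: unexpected types are rejected outright and allowed ones are cast to the dtype the model is assumed to expect (float32 here). The allow-list is an assumption made for illustration.

```python
import tensorflow as tf

ALLOWED_DTYPES = (tf.float32, tf.float64, tf.uint8)  # assumed acceptable on this endpoint

def coerce_input_dtype(x: tf.Tensor) -> tf.Tensor:
    """Reject unsupported dtypes and cast allowed ones to the model's
    assumed training dtype (float32)."""
    if x.dtype not in ALLOWED_DTYPES:
        raise TypeError(f"Unsupported input dtype: {x.dtype!r}")
    return tf.cast(x, tf.float32)
```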