* **Description:** Attackers craft subtle, often imperceptible changes to input images or videos that cause the YOLOv5 model to misclassify objects, fail to detect them, or detect non-existent objects. This is the core vulnerability of the model itself.
* **How YOLOv5 Contributes:** YOLOv5, like all deep learning models, is inherently susceptible to adversarial examples due to its complex, high-dimensional decision boundaries. The model's architecture and training process directly influence its vulnerability.
* **Example:** A small, carefully crafted sticker placed on a stop sign could cause YOLOv5 to misclassify it as a speed limit sign. Or, a digitally altered image of a person could make them "invisible" to a surveillance system.
* **Impact:** Can range from minor misclassifications to complete failure of the system's intended function. In security-critical applications, the impact can be severe.
* **Risk Severity:** **Critical**
* **Mitigation Strategies:**
* **Adversarial Training:** Retrain the YOLOv5 model with a dataset that includes adversarial examples. This is the most effective, but computationally expensive, mitigation. Use the YOLOv5 training scripts, but carefully design the adversarial example generation process (a minimal FGSM-style sketch follows this list).
* **Input Preprocessing/Filtering:** Implement input validation and filtering techniques *before* the image reaches the YOLOv5 model. This might involve detecting unusual pixel patterns, re-encoding the image, or using separate anomaly detection models (see the preprocessing sketch after this list). This is *outside* the YOLOv5 codebase, but still a crucial defense *against* attacks on YOLOv5.
* **Ensemble Methods:** Use multiple YOLOv5 models (or different object detection models) and combine their predictions; an attack that fools one model is less likely to fool all of them. This increases robustness but adds complexity and requires custom implementation (see the fusion sketch after this list).
* **Defensive Distillation:** A technique that trains the model on softened outputs to make it less sensitive to small input perturbations. This requires custom research and implementation; it is not directly supported by YOLOv5, and later attacks have significantly weakened its guarantees.
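As a rough illustration of the adversarial-training point above, the sketch below generates a one-step FGSM adversarial example. It assumes a model already in training mode and a `compute_loss` helper (hypothetical here; in YOLOv5 this role is played by the `ComputeLoss` class in `utils/loss.py`) that reduces predictions and targets to a scalar loss, with pixel values normalized to [0, 1].

```python
import torch

def fgsm_example(model, image, targets, compute_loss, epsilon=8 / 255):
    """One-step FGSM: perturb the image in the direction that
    maximally increases the detection loss."""
    image = image.clone().detach().requires_grad_(True)
    preds = model(image)                 # raw predictions (train mode)
    loss = compute_loss(preds, targets)  # scalar loss (assumed helper)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()      # stay in the valid pixel range
```

Adversarial training then mixes such images (with their original labels) back into the training batches; stronger multi-step attacks such as PGD produce more robust models at higher compute cost.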
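For the input-preprocessing strategy, a minimal "input squeezing" sketch is shown below: JPEG recompression plus a light median filter destroys much of the high-frequency signal that pixel-level perturbations rely on. The quality and filter-size values are illustrative, and this is a weak defense on its own; it should complement, not replace, the other mitigations.

```python
import io
from PIL import Image, ImageFilter

def preprocess_untrusted(image_bytes: bytes, jpeg_quality: int = 75) -> Image.Image:
    """Cheap input squeezing applied before inference."""
    img = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)   # recompress
    buf.seek(0)
    img = Image.open(buf)
    return img.filter(ImageFilter.MedianFilter(size=3))  # smooth residue
```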
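And for the ensemble strategy, one simple fusion approach is to pool detections from several models and run non-maximum suppression over the pooled set, as sketched below. It assumes YOLOv5 hub-style models whose results expose an `(N, 6)` tensor of `[x1, y1, x2, y2, confidence, class]` via `results.xyxy[0]`; the NMS here is class-agnostic for brevity, whereas a production fusion would suppress per class or require agreement between models.

```python
import torch
from torchvision.ops import nms

def ensemble_detect(models, image, iou_thresh=0.5):
    """Pool detections from several models, then suppress overlaps."""
    dets = [m(image).xyxy[0] for m in models]            # per-model (N, 6) tensors
    merged = torch.cat(dets, dim=0)
    keep = nms(merged[:, :4], merged[:, 4], iou_thresh)  # boxes, scores
    return merged[keep]

# models = [torch.hub.load("ultralytics/yolov5", n) for n in ("yolov5s", "yolov5m")]
# fused = ensemble_detect(models, "image.jpg")
```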
* **Description:** Attackers manipulate the training data used to create or fine-tune the YOLOv5 model, injecting malicious examples that cause the model to misbehave in specific ways. This is a supply chain attack *directly* targeting the model's training.
* **How YOLOv5 Contributes:** If the application uses a custom-trained YOLOv5 model, the training process itself becomes a potential attack vector. The vulnerability lies in the *data* used to train the YOLOv5 model.
* **Example:** An attacker could subtly alter images of a specific type of object in the training dataset, causing the trained model to consistently fail to detect that object.
* **Impact:** Can lead to targeted failures of the model, allowing attackers to bypass security measures or cause specific misclassifications.
* **Risk Severity:** **High**
* **Mitigation Strategies:**
* **Strict Data Provenance:** Maintain a meticulously documented and verifiable chain of custody for all training data. This is *outside* the YOLOv5 codebase, but directly impacts the security of the resulting model.
* **Data Sanitization:** Implement rigorous data cleaning and anomaly detection procedures to identify and remove potentially malicious training examples. This is *outside* the YOLOv5 codebase, but crucial for model security (a crude statistical screen is sketched after this list).
* **Use Trusted Pre-trained Weights:** If using pre-trained weights, *only* download them from the official Ultralytics GitHub repository or another extremely trustworthy source. Verify the checksums (a hashing sketch appears under the model-loading mitigations below).
* **Regular Model Auditing:** Periodically test the model's performance on a clean, trusted dataset (e.g., with YOLOv5's `val.py`) to detect any unexpected behavior; a canary-image smoke test is sketched after this list.
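As a starting point for data sanitization, the sketch below flags training images whose global brightness statistics are extreme outliers. This only catches crude tampering; subtle, targeted poisoning generally requires stronger measures such as influence analysis or manual review of image/label pairs. The directory layout and threshold are assumptions.

```python
import numpy as np
from pathlib import Path
from PIL import Image

def flag_outliers(image_dir, z_thresh=3.0):
    """Flag images whose mean brightness is a statistical outlier."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    means = np.array(
        [np.asarray(Image.open(p).convert("L")).mean() for p in paths]
    )
    z = (means - means.mean()) / (means.std() + 1e-9)  # per-image z-scores
    return [p for p, s in zip(paths, z) if abs(s) > z_thresh]
```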
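For model auditing, one cheap complement to full `val.py` runs is a canary test: keep a handful of trusted images with known contents and alert when an expected class stops being detected. The sketch below uses the YOLOv5 hub API; the image paths and expected labels are placeholders.

```python
import torch

def audit_canaries(model, canaries):
    """Return canary images whose expected class was not detected.

    `canaries` maps an image path to a class name that must appear
    in its detections; a class that consistently goes missing is a
    red flag for targeted poisoning."""
    failures = []
    for path, expected in canaries.items():
        detected = set(model(path).pandas().xyxy[0]["name"])
        if expected not in detected:
            failures.append((path, expected))
    return failures

# model = torch.hub.load("ultralytics/yolov5", "yolov5s")
# audit_canaries(model, {"stop_sign.jpg": "stop sign"})
```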
* **Description:** Loading a YOLOv5 model (`.pt` file) that has been tampered with or comes from an untrusted source can lead to arbitrary code execution via the `torch.load()` function.
* **How YOLOv5 Contributes:** The `torch.load()` function, commonly used to load YOLOv5 models, deserializes arbitrary Python objects via `pickle`, and unpickling can execute attacker-controlled code. This makes it the direct point of vulnerability if misused.
* **Example:** An attacker provides a malicious `.pt` file disguised as a YOLOv5 model. When loaded, it executes arbitrary code.
* **Impact:** Complete system compromise.
* **Risk Severity:** **Critical**
* **Mitigation Strategies:**
* **Only Load from Trusted Sources:** *Never* load models from untrusted sources. Only load models from local storage that you control or from the official Ultralytics repository, and prefer the restricted loading mode sketched below.
* **Verify Checksums:** Before loading a model, verify its checksum (e.g., its SHA-256 hash) against a known good value, as in the hashing sketch below.
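Recent PyTorch versions (1.13+) offer a restricted loading mode that refuses to unpickle anything beyond tensors and primitive containers, so a booby-trapped pickle raises an error instead of executing. Note the caveat in the comments: stock YOLOv5 checkpoints pickle the whole model object and will be rejected in this mode, so the robust pattern is to save and load plain `state_dict`s. The file name below is a placeholder.

```python
import torch

# weights_only=True (PyTorch >= 1.13) restricts unpickling to tensors
# and primitive containers; malicious pickles raise instead of running.
# Caveat: full YOLOv5 checkpoints pickle the entire model object and
# will be *rejected* here, so store/load plain state_dicts instead.
state = torch.load("weights.pt", map_location="cpu", weights_only=True)

# model = Model(cfg)            # rebuild the architecture from code you control
# model.load_state_dict(state)  # then load only the tensors
```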
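Checksum verification itself is a few lines of standard-library Python, sketched below; the expected digest is a placeholder that you would pin per release and obtain over a separate trusted channel.

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream-hash a file so large weight files need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

EXPECTED = "..."  # known-good digest, pinned per model release
assert sha256_of("yolov5s.pt") == EXPECTED, "checksum mismatch: do not load"
```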