A Python command injection vulnerability exists in the ...
Critical severity · Unreviewed
Published Nov 14, 2024 to the GitHub Advisory Database · Updated Nov 14, 2024
Description
Published by the National Vulnerability Database: Nov 14, 2024
Published to the GitHub Advisory Database: Nov 14, 2024
Last updated: Nov 14, 2024
A Python command injection vulnerability exists in the SagemakerLLM class's complete() method within ./private_gpt/components/llm/custom/sagemaker.py of the imartinez/privategpt application, versions up to and including 0.3.0. The vulnerability arises due to the use of the eval() function to parse a string received from a remote AWS SageMaker LLM endpoint into a dictionary. This method of parsing is unsafe, as it can execute arbitrary Python code contained within the response. An attacker can exploit this vulnerability by manipulating the response from the AWS SageMaker LLM endpoint to include malicious Python code, leading to potential execution of arbitrary commands on the system hosting the application. The issue is fixed in version 0.6.0.

References