
docs: DOC-277: Rewrite Prompts API keys pages (#7025)
Co-authored-by: caitlinwheeless <[email protected]>
3 people authored Feb 5, 2025
1 parent 689259b commit 0dcc3ca
Showing 6 changed files with 128 additions and 91 deletions.
83 changes: 1 addition & 82 deletions docs/source/guide/prompts_create.md
Original file line number Diff line number Diff line change
@@ -14,90 +14,9 @@ date: 2024-06-11 16:53:16

## Prerequisites

* An API key for your LLM.
* An [API key](prompts_keys) for your LLM.
* A project that meets the [criteria noted below](#Create-a-Prompt).

## Model provider API keys

You can specify one OpenAI API key and/or multiple custom and Azure OpenAI keys per organization. Keys only need to be added once.

Click **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:

![Screenshot of the API keys modal](/images/prompts/model_keys.png)

Once added, you will have the option to select from the base models associated with each API key as you configure your prompts:

![Screenshot of the Base Models drop-down](/images/prompts/base_models.png)

To remove the key, click **API Keys** in the upper right of the Prompts page. You'll have the option to remove the key and add a new one.

### Add OpenAI, Azure OpenAI, or a custom model

{% details <b>Use an OpenAI key</b> %}

You can only have one OpenAI key per organization. For a list of the OpenAI models we support, see [Features, requirements, and constraints](prompts_overview#Features-requirements-and-constraints).

If you don't already have one, you can [create an OpenAI account here](https://platform.openai.com/signup).

You can find your OpenAI API key on the [API key page](https://platform.openai.com/api-keys).

Once added, all supported OpenAI models will appear in the base model options when you configure your prompt.

{% enddetails %}

{% details <b>Use an Azure OpenAI key</b> %}

Each Azure OpenAI key is tied to a specific deployment, and each deployment comprises a single OpenAI model. To use multiple models through Azure, you must create a deployment for each model and then add each key to Label Studio.

For a list of the Azure OpenAI models we support, see [Features, requirements, and constraints](prompts_overview#Features-requirements-and-constraints).

To use Azure OpenAI, you must first create the Azure OpenAI resource and then a model deployment:

1. From the Azure portal, [create an Azure OpenAI resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource).

!!! note
If you are restricting network access to your resource, you will need to add the following IP addresses when configuring network security:

* 3.219.3.197
* 34.237.73.3
* 44.216.17.242


2. From Azure OpenAI Studio, [create a deployment](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). This is a base model endpoint.

When adding the key to Label Studio, you are asked for the following information:

| Field | Description|
| --- | --- |
| **Deployment** | This is the name of the deployment. By default, it is the same as the model name, but you can customize it when creating the deployment. If they differ, you must use the deployment name, not the underlying model name. |
| **Endpoint** | This is the target URI provided by Azure. |
| **API key** | This is the key provided by Azure. |

You can find all this information in the **Details** section of the deployment in Azure OpenAI Studio.

![Screenshot of the Azure deployment details](/images/prompts/azure_deployment.png)

{% enddetails %}

{% details <b>Use a custom LLM</b> %}

You can use your own self-hosted and fine-tuned model as long as it meets the following criteria:

* Your server must provide [JSON mode](https://python.useinstructor.com/concepts/patching/#json-mode) for the LLM.
* The server API must follow [OpenAI format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).

Examples of compatible LLMs include [Ollama](https://ollama.com/) and [sglang](https://github.com/sgl-project/sglang?tab=readme-ov-file#openai-compatible-api).

To add a custom model, enter the following:

* A name for the model.
* The endpoint URL for the model. For example, `https://my.openai.endpoint.com/v1`
* An API key to access the model. (Optional)
* An auth token to access the model. (Optional)

{% enddetails %}


## Create a Prompt

From the Prompts page, click **Create Prompt** in the upper right and then complete the following fields:
3 changes: 1 addition & 2 deletions docs/source/guide/prompts_draft.md
@@ -18,9 +18,8 @@ With your [Prompt created](prompts_create), you can begin drafting your prompt c

1. Select your base model.

The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have other API keys, then you will see one model per deployment that you have added.
The models that appear depend on the [API keys](prompts_keys) that you have configured for your organization.

For a description of all OpenAI models, see [OpenAI's models overview](https://platform.openai.com/docs/models/models-overview).
2. In the **Prompt** field, enter your prompt. Keep in mind the following:
* You must include the text variables. These appear directly above the prompt field. (In the demo below, this is the `review` variable.) Click the text variable name to insert it into the prompt.
* Although not strictly required, you should provide definitions for each class to ensure prediction accuracy and to help [add context](#Add-context).
6 changes: 3 additions & 3 deletions docs/source/guide/prompts_examples.md
@@ -95,7 +95,7 @@ This example demonstrates how to set up Prompts to predict image captions.
```
3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add the instruction you’d like to provide the LLM to caption your images. For example:

@@ -174,7 +174,7 @@ This example demonstrates how to set up Prompts to evaluate if the LLM-generated

3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add the instruction you’d like to provide the LLM to best evaluate the text. For example:

@@ -300,7 +300,7 @@ Let’s expand on the Q&A use case above with an example demonstrating how to us

3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add instructions to create 3 questions:

106 changes: 106 additions & 0 deletions docs/source/guide/prompts_keys.md
@@ -0,0 +1,106 @@
---
title: Model provider API keys
short: API keys
tier: enterprise
type: guide
order: 0
order_enterprise: 229
meta_title: Model provider API keys
meta_description: Add API keys to use with Prompts
section: Prompts
date: 2024-06-11 16:53:16
---

There are two approaches to adding a model provider API key.

* In one scenario, you get one provider connection per organization, and this provides access to a set of whitelisted models. Examples include:

* OpenAI
* Vertex AI
* Gemini

* In the second scenario, you add a separate API key per model. Examples include:

* Azure OpenAI
* Custom

Once an API key is added, anyone in the organization with access to the Prompts feature can select its associated models when executing a prompt.

You can see what API keys you have and add new ones by clicking **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:

![Screenshot of the API keys button](/images/prompts/model_keys.png)

## OpenAI API key

You can only have one OpenAI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).

If you don't already have one, you can [create an OpenAI account here](https://platform.openai.com/signup).

You can find your OpenAI API key on the [API key page](https://platform.openai.com/api-keys).

Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).

## Gemini API key

You can only have one Gemini key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).

For information on getting a Gemini API key, see [Get a Gemini API key](https://ai.google.dev/gemini-api/docs/api-key).

Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).

## Vertex AI JSON credentials

You can only have one Vertex AI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).

Follow these instructions to generate a credentials file in JSON format: [Authenticate to Vertex AI Agent Builder - Client libraries or third-party tools](https://cloud.google.com/generative-ai-app-builder/docs/authentication#client-libs).

The JSON credentials are required. You can also optionally provide the project ID and location associated with your Google Cloud Platform environment.

Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).

## Azure OpenAI key

Each Azure OpenAI key is tied to a specific deployment, and each deployment comprises a single OpenAI model. To use multiple models through Azure, you must create a deployment for each model and then add each key to Label Studio.

For a list of the Azure OpenAI models we support, see [Supported base models](prompts_overview#Supported-base-models).

To use Azure OpenAI, you must first create the Azure OpenAI resource and then a model deployment:

1. From the Azure portal, [create an Azure OpenAI resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource).

!!! note
If you are restricting network access to your resource, you will need to add the following IP addresses when configuring network security:

* 3.219.3.197
* 34.237.73.3
* 44.216.17.242

2. From Azure OpenAI Studio, [create a deployment](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). This is a base model endpoint.

When adding the key to Label Studio, you are asked for the following information:

| Field | Description|
| --- | --- |
| **Deployment** | This is the name of the deployment. By default, it is the same as the model name, but you can customize it when creating the deployment. If they differ, you must use the deployment name, not the underlying model name. |
| **Endpoint** | This is the target URI provided by Azure. |
| **API key** | This is the key provided by Azure. |

You can find all this information in the **Details** section of the deployment in Azure OpenAI Studio.

![Screenshot of the Azure deployment details](/images/prompts/azure_deployment.png)
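
For reference, the three fields map onto Azure's REST endpoint roughly as follows. This is a minimal sketch; the deployment name, endpoint, and API version below are hypothetical placeholders, not values from this documentation:

```python
# Sketch: how the Deployment, Endpoint, and API key fields combine into an
# Azure OpenAI chat-completions request. All values here are placeholders.
deployment = "my-gpt-4o-deployment"                # the Deployment field
endpoint = "https://my-resource.openai.azure.com"  # the Endpoint field
api_key = "<azure-api-key>"                        # the API key field
api_version = "2024-02-01"                         # an assumed API version

url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}
print(url)
```

Note that the URL path uses the deployment name, not the underlying model name, which is why the two must not be confused when they differ.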

## Custom LLM

You can use your own self-hosted and fine-tuned model as long as it meets the following criteria:

* Your server must provide [JSON mode](https://python.useinstructor.com/concepts/patching/#json-mode) for the LLM.
* The server API must follow [OpenAI format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).

Examples of compatible LLMs include [Ollama](https://ollama.com/) and [sglang](https://github.com/sgl-project/sglang?tab=readme-ov-file#openai-compatible-api).

To add a custom model, enter the following:

* A name for the model.
* The endpoint URL for the model. For example, `https://my.openai.endpoint.com/v1`
* An API key to access the model. An API key is tied to a specific account, but once added, access is shared within the organization. (Optional)
* An auth token to access the model API. An auth token provides API access at the server level. (Optional)
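
As a rough compatibility check, a server in OpenAI format should accept a chat-completions payload like the following. This is a sketch only; the endpoint URL and model name are hypothetical placeholders:

```python
import json

# Sketch of the request body an OpenAI-compatible server (e.g. Ollama or
# sglang) must accept, including the JSON-mode response_format field.
# The base URL and model name below are hypothetical placeholders.
base_url = "https://my.openai.endpoint.com/v1"
payload = {
    "model": "my-finetuned-model",
    "messages": [
        {"role": "user", "content": "Classify this review: 'Great product!'"}
    ],
    # JSON mode: the server must be able to constrain output to valid JSON
    "response_format": {"type": "json_object"},
}
body = json.dumps(payload)
# A POST to f"{base_url}/chat/completions" with this body should return an
# OpenAI-format response whose message content is a JSON string.
```

A server that rejects the `response_format` field, or returns responses in a non-OpenAI shape, will not meet the criteria above.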
21 changes: 17 additions & 4 deletions docs/source/guide/prompts_overview.md
@@ -33,7 +33,6 @@ With Prompts, you can:
| **Supported object tags** | `Text` <br>`HyperText` <br>`Image` |
| **Supported control tags** | `Choices` (Text and Image)<br>`Labels` (Text)<br>`TextArea` (Text and Image)<br>`Pairwise` (Text and Image)<br>`Number` (Text and Image)<br>`Rating` (Text and Image) |
| **Class selection** | Multi-selection (the LLM can apply multiple labels per task)|
| **Supported base models** | OpenAI gpt-3.5-turbo-16k* <br>OpenAI gpt-3.5-turbo* <br>OpenAI gpt-4 <br>OpenAI gpt-4-turbo <br>OpenAI gpt-4o <br>OpenAI gpt-4o-mini<br>[Azure OpenAI chat-based models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)<br>[Custom LLM](prompts_create#Add-OpenAI-Azure-OpenAI-or-a-custom-model)<br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
| **Text compatibility** | Task text must be utf-8 compatible |
| **Task size** | Total size of each task can be no more than 1MB (approximately 200-500 pages of text) |
| **Network access** | If you are using a firewall or restricting network access to your OpenAI models, you will need to allow the following IPs: <br>3.219.3.197 <br>34.237.73.3 <br>44.216.17.242 |
@@ -43,6 +42,20 @@

</div>

## Supported base models

<div class="noheader rowheader">

| Provider | Supported models |
| --- | --- |
| **OpenAI** | gpt-3.5-turbo-16k* <br>gpt-3.5-turbo* <br>gpt-4 <br>gpt-4-turbo <br>gpt-4o <br>gpt-4o-mini <br>o3-mini<br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
| **Gemini** | gemini-2.0-flash-exp <br>gemini-1.5-flash <br>gemini-1.5-flash-8b <br>gemini-1.5-pro |
| **Vertex AI** | gemini-2.0-flash-exp <br>gemini-1.5-flash <br>gemini-1.5-pro |
| **Azure OpenAI** | [Azure OpenAI chat-based models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) <br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
| **Custom** | [Custom LLM](prompts_create#Add-OpenAI-Azure-OpenAI-or-a-custom-model) |

</div>

## Use cases

### Auto-labeling with Prompts
Expand All @@ -67,7 +80,7 @@ By utilizing AI to handle the bulk of the annotation work, you can significantly
3. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
* [Model provider keys](prompts_create#Model-provider-API-keys)
* [Model provider keys](prompts_keys)
4. Write a prompt and evaluate it against your ground truth dataset.

* [Draft a prompt](prompts_draft)
@@ -100,7 +113,7 @@ Additionally, this workflow provides a scalable solution for continuously expand
2. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
* [Model provider keys](prompts_create#Model-provider-API-keys)
* [Model provider keys](prompts_keys)
3. Write a prompt and run it against your task samples.
* [Draft a prompt](prompts_draft)

@@ -136,7 +149,7 @@ This feedback loop allows you to iteratively fine-tune your prompts, optimizing
3. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
* [Model provider keys](prompts_create#Model-provider-API-keys)
* [Model provider keys](prompts_keys)
4. Write a prompt and evaluate it against your ground truth dataset.

* [Draft a prompt](prompts_draft)
Binary file modified docs/themes/v2/source/images/prompts/model_keys.png
