Commit fb87a63: Edit indent
1 parent: 4f4909f

2 files changed: +20 −20 lines changed


src/AzureOpenAIProxy.PlaygroundApp/Components/UI/ParameterRangeComponent.razor

+2 −2
@@ -31,9 +31,9 @@
 
 @if (!hasNoError)
 {
-<FluentCard Class="parameter-component-error">
+<FluentCard Class="parameter-component-error">
 @errorText
-</FluentCard>
+</FluentCard>
 }
 </div>

(Indentation-only change: the page capture dropped leading whitespace, so each removed/added pair differs only in indentation.)

src/AzureOpenAIProxy.PlaygroundApp/Components/UI/ParametersTabComponent.razor

+18 −18
@@ -3,27 +3,27 @@
 <div id="@Id" class="parameter-tab">
 @* Past Messages Range *@
 <ParameterRangeComponent Id="range-past-messages"
-LabelText="Past messages included"
-TooltipText="Select the number of past messages to include in each new API request. This helps give the model context for new user queries. Setting this number to 10 will include 5 user queries and 5 system responses."
-Min="1" Max="20" Step="1" @bind-Value=@pastMessagesValue />
+LabelText="Past messages included"
+TooltipText="Select the number of past messages to include in each new API request. This helps give the model context for new user queries. Setting this number to 10 will include 5 user queries and 5 system responses."
+Min="1" Max="20" Step="1" @bind-Value=@pastMessagesValue />
 
 @* Max Response Range *@
 <ParameterRangeComponent Id="range-max-response"
-LabelText="Max response"
-TooltipText="Set a limit on the number of tokens per model response. The API supports a maximum of MaxTokensPlaceholderDoNotTranslate tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly 4 characters for typical English text."
-Min="1" Max="16000" Step="1" @bind-Value=@maxResponseValue />
+LabelText="Max response"
+TooltipText="Set a limit on the number of tokens per model response. The API supports a maximum of MaxTokensPlaceholderDoNotTranslate tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly 4 characters for typical English text."
+Min="1" Max="16000" Step="1" @bind-Value=@maxResponseValue />
 
 @* Temperature Range *@
 <ParameterRangeComponent Id="range-temperature"
-LabelText="Temperature"
-TooltipText="Controls randomness. Lowering the temperature means that the model will produce more repetitive and deterministic responses. Increasing the temperature will result in more unexpected or creative responses. Try adjusting temperature or Top P but not both."
-Min="0" Max="1" Step="0.01" @bind-Value=@temperatureValue />
+LabelText="Temperature"
+TooltipText="Controls randomness. Lowering the temperature means that the model will produce more repetitive and deterministic responses. Increasing the temperature will result in more unexpected or creative responses. Try adjusting temperature or Top P but not both."
+Min="0" Max="1" Step="0.01" @bind-Value=@temperatureValue />
 
 @* Top P Range *@
 <ParameterRangeComponent Id="range-top-p"
-LabelText="Top P"
-TooltipText="Similar to temperature, this controls randomness but uses a different method. Lowering Top P will narrow the model’s token selection to likelier tokens. Increasing Top P will let the model choose from tokens with both high and low likelihood. Try adjusting temperature or Top P but not both."
-Min="0" Max="1" Step="0.01" @bind-Value=@topPValue />
+LabelText="Top P"
+TooltipText="Similar to temperature, this controls randomness but uses a different method. Lowering Top P will narrow the model’s token selection to likelier tokens. Increasing Top P will let the model choose from tokens with both high and low likelihood. Try adjusting temperature or Top P but not both."
+Min="0" Max="1" Step="0.01" @bind-Value=@topPValue />
 
 @* Stop Sequence Multi Select *@
 <ParameterMultiselectComponent Id="select-stop-sequence"
@@ -33,15 +33,15 @@
 
 @* Frequency Penalty Range *@
 <ParameterRangeComponent Id="range-frequency-penalty"
-LabelText="Frequency penalty"
-TooltipText="Reduce the chance of repeating a token proportionally based on how often it has appeared in the text so far. This decreases the likelihood of repeating the exact same text in a response."
-Min="0" Max="2" Step="0.01" @bind-Value=@frequencyPenaltyValue />
+LabelText="Frequency penalty"
+TooltipText="Reduce the chance of repeating a token proportionally based on how often it has appeared in the text so far. This decreases the likelihood of repeating the exact same text in a response."
+Min="0" Max="2" Step="0.01" @bind-Value=@frequencyPenaltyValue />
 
 @* Presence Penalty Range *@
 <ParameterRangeComponent Id="range-presence-penalty"
-LabelText="Presence penalty"
-TooltipText="Reduce the chance of repeating any token that has appeared in the text at all so far. This increases the likelihood of introducing new topics in a response."
-Min="0" Max="2" Step="0.01" @bind-Value=@presencePenaltyValue />
+LabelText="Presence penalty"
+TooltipText="Reduce the chance of repeating any token that has appeared in the text at all so far. This increases the likelihood of introducing new topics in a response."
+Min="0" Max="2" Step="0.01" @bind-Value=@presencePenaltyValue />
 </div>
 
 @code {

(Indentation-only change: the page capture dropped leading whitespace, so each removed/added pair differs only in indentation.)
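For context, the six sliders in the diff above correspond to standard chat-completion sampling parameters. The following is a minimal, hypothetical sketch (not part of this repo) of how the bound values, clamped to the Min/Max ranges declared in the Razor markup, might assemble into a request-options dictionary:

```python
def build_request_options(past_messages=10, max_response=800, temperature=0.7,
                          top_p=0.95, frequency_penalty=0.0,
                          presence_penalty=0.0):
    """Clamp each playground slider value to the Min/Max range declared in
    the Razor markup and return a chat-completions-style options dict.
    Illustrative only; the app's real binding logic lives in the components."""
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    return {
        # past_messages limits how much chat history is resent; it is
        # applied client-side, not sent as an API field.
        "past_messages": clamp(past_messages, 1, 20),
        "max_tokens": clamp(max_response, 1, 16000),
        "temperature": clamp(temperature, 0.0, 1.0),
        "top_p": clamp(top_p, 0.0, 1.0),
        "frequency_penalty": clamp(frequency_penalty, 0.0, 2.0),
        "presence_penalty": clamp(presence_penalty, 0.0, 2.0),
    }

# Out-of-range values are clamped, mirroring the slider constraints.
print(build_request_options(temperature=1.5, max_response=50000))
```

Note that each dictionary key mirrors one `@bind-Value` field (`pastMessagesValue`, `maxResponseValue`, and so on), and the clamp bounds are exactly the `Min`/`Max` attributes from the markup.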

0 commit comments