# Parameters

Request parameters for the Chat Completions API.
This page documents the parameters accepted by the Chat Completions API. OhMyGPT passes these parameters to the underlying provider and applies sensible defaults when values are omitted.
## Core parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | string | — | Required. The model to use (e.g., `gpt-4o`, `claude-3-5-sonnet`). |
| `messages` | array | — | Required. The conversation messages. Each message has a `role` and `content`. |
| `max_tokens` | integer | Model default | Maximum number of tokens to generate. The upper limit is the model's context length minus the prompt length. |
| `stream` | boolean | false | Enable streaming responses via server-sent events (SSE). |
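Putting these together, here is a minimal sketch of a non-streaming request body; the model name and message contents are illustrative:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain SSE in one sentence."}
  ],
  "max_tokens": 256,
  "stream": false
}
```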
## Sampling parameters

These parameters control how the model selects tokens during generation.

| Parameter | Type | Range | Default | Description |
|---|---|---|---|---|
| `temperature` | float | 0.0–2.0 | 1.0 | Controls randomness. Lower values produce more deterministic output. |
| `top_p` | float | 0.0–1.0 | 1.0 | Nucleus sampling: consider only the smallest set of tokens whose cumulative probability reaches `top_p`. |
| `top_k` | integer | ≥ 0 | 0 | Limit selection to the K most likely tokens. 0 means no limit. |
| `min_p` | float | 0.0–1.0 | 0.0 | Minimum token probability, expressed as a fraction of the most likely token's probability. |
| `top_a` | float | 0.0–1.0 | 0.0 | Dynamic nucleus cutoff based on the probability of the most likely token. |
Setting `temperature` to 0 makes output nearly deterministic. For reproducible results, also set a `seed` value.
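As a sketch, a request body tuned for low-variance, reproducible output; the specific values are illustrative, not recommendations:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}
  ],
  "temperature": 0.2,
  "top_p": 0.9,
  "seed": 42
}
```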
## Penalty parameters

These parameters reduce repetition in generated text.

| Parameter | Type | Range | Default | Description |
|---|---|---|---|---|
| `frequency_penalty` | float | -2.0 to 2.0 | 0.0 | Penalize tokens based on their frequency in the text so far. |
| `presence_penalty` | float | -2.0 to 2.0 | 0.0 | Penalize tokens that have appeared at all, regardless of frequency. |
| `repetition_penalty` | float | 0.0 to 2.0 | 1.0 | Multiplicative penalty on repeated tokens. Values above 1.0 discourage repetition. |
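For example, a sketch of a request that discourages repetitive phrasing in a long generation; the values are illustrative:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Write a 500-word story about a lighthouse."}
  ],
  "frequency_penalty": 0.5,
  "presence_penalty": 0.3
}
```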
## Output control

| Parameter | Type | Description |
|---|---|---|
| `stop` | array | Stop generation when any of these strings is encountered. |
| `response_format` | object | Force the output format. Use `{"type": "json_object"}` for JSON mode. |
| `seed` | integer | Seed for deterministic sampling. Not all models guarantee determinism. |
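For instance, a sketch that truncates output at a custom delimiter; the stop strings are illustrative:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "List three colors, then write END."}
  ],
  "stop": ["END"]
}
```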
### JSON mode

To enable JSON mode, set `response_format` and include instructions in your prompt:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "Respond with valid JSON only."},
    {"role": "user", "content": "List 3 colors as a JSON array."}
  ],
  "response_format": {"type": "json_object"}
}
```

## Tool parameters
| Parameter | Type | Description |
|---|---|---|
| `tools` | array | List of tool definitions in OpenAI format. See Tool Calling. |
| `tool_choice` | string or object | Control tool usage: `"auto"`, `"none"`, `"required"`, or a specific tool. |
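A sketch of a request that exposes one tool in the OpenAI function format; the function name and schema are illustrative:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What is the weather in Paris?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string"}
          },
          "required": ["city"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```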
## Log probabilities

| Parameter | Type | Description |
|---|---|---|
| `logprobs` | boolean | Return log probabilities of the output tokens. |
| `top_logprobs` | integer | Number of most likely tokens to return at each position (0–20). Requires `logprobs: true`. |
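For example, a sketch requesting the five most likely alternatives at each output position:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Answer with a single word: yes or no?"}
  ],
  "logprobs": true,
  "top_logprobs": 5
}
```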
## Advanced parameters

| Parameter | Type | Description |
|---|---|---|
| `logit_bias` | object | Map of token IDs to bias values (-100 to 100). Adjusts token probabilities. |
| `user` | string | Unique identifier for the end user. Used for abuse monitoring. |
| `store` | boolean | Store the request and response for later retrieval. See Store API. |
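A sketch combining these parameters; the token ID in `logit_bias` is illustrative, since IDs depend on the model's tokenizer:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Pick a color."}
  ],
  "logit_bias": {"15339": -100},
  "user": "user-1234"
}
```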
## Provider-specific parameters

Some providers accept additional parameters that OhMyGPT passes through:

| Provider | Parameter | Description |
|---|---|---|
| Mistral | `safe_prompt` | Enable content moderation. |
| Anthropic | `metadata` | Request metadata. |
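For example, a sketch passing Mistral's `safe_prompt` flag through an otherwise standard request; the model name is illustrative:

```json
{
  "model": "mistral-large-latest",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "safe_prompt": true
}
```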
Unrecognized parameters are generally ignored, but may cause errors with some providers.