OhMyGPT

Parameters

Request parameters for Chat Completions API.

This page documents the parameters accepted by the Chat Completions API. OhMyGPT passes these parameters to the underlying provider and applies sensible defaults when values are omitted.

Core parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | string | — | Required. The model to use (e.g., `gpt-4o`, `claude-3-5-sonnet`). |
| `messages` | array | — | Required. The conversation messages. Each message has `role` and `content`. |
| `max_tokens` | integer | Model default | Maximum tokens to generate. The upper limit is the context length minus the prompt length. |
| `stream` | boolean | `false` | Enable streaming responses via SSE. |
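A minimal request body using the core parameters can be sketched as follows. The field names follow the OpenAI-compatible format documented above; the model name is a placeholder.

```python
import json

# Build a minimal Chat Completions request body using the core parameters.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 256,  # cap on generated tokens; omit to use the model default
    "stream": False,    # set True to receive server-sent events instead
}

body = json.dumps(payload)
```

Send `body` as the request payload with a `Content-Type: application/json` header.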

Sampling parameters

These parameters control how the model selects tokens during generation.

| Parameter | Type | Range | Default | Description |
| --- | --- | --- | --- | --- |
| `temperature` | float | 0.0–2.0 | 1.0 | Controls randomness. Lower values produce more deterministic output. |
| `top_p` | float | 0.0–1.0 | 1.0 | Nucleus sampling: only consider tokens with cumulative probability ≤ `top_p`. |
| `top_k` | integer | ≥ 0 | 0 | Limit selection to the top K tokens. 0 means no limit. |
| `min_p` | float | 0.0–1.0 | 0.0 | Minimum probability relative to the most likely token. |
| `top_a` | float | 0.0–1.0 | 0.0 | Dynamic top-p based on the highest-probability token. |

Setting temperature to 0 makes output nearly deterministic. For reproducible results, also set a seed value.
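To make the interaction of `temperature`, `top_k`, and `top_p` concrete, here is an illustrative sketch of a typical sampler's filtering pipeline over a toy distribution. This mirrors the common design, not necessarily any provider's exact internals.

```python
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Return sampling probabilities after temperature scaling,
    top-k truncation, and nucleus (top-p) truncation."""
    # Temperature scaling: lower values sharpen the distribution.
    scaled = [l / max(temperature, 1e-8) for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank tokens by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # top_k: keep only the K most likely tokens (0 means no limit).
    if top_k > 0:
        order = order[:top_k]
    # top_p: keep the smallest prefix whose cumulative probability
    # reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

dist = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=0.7, top_p=0.9)
```

With these toy logits, the two most likely tokens together exceed 0.9 probability mass, so only they survive the nucleus cut.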

Penalty parameters

These parameters reduce repetition in generated text.

| Parameter | Type | Range | Default | Description |
| --- | --- | --- | --- | --- |
| `frequency_penalty` | float | -2.0–2.0 | 0.0 | Penalize tokens based on their frequency in the text so far. |
| `presence_penalty` | float | -2.0–2.0 | 0.0 | Penalize tokens that have appeared at all, regardless of frequency. |
| `repetition_penalty` | float | 0.0–2.0 | 1.0 | Multiplicative penalty on repeated tokens. Values > 1.0 discourage repetition. |
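As an illustration, the additive penalties can be sketched the way OpenAI documents them, with the multiplicative repetition penalty following the common CTRL-style formulation. This is a sketch of typical sampler behavior, not a guaranteed match for every provider.

```python
from collections import Counter

def penalize(logits, generated_ids, frequency_penalty=0.0,
             presence_penalty=0.0, repetition_penalty=1.0):
    """Adjust per-token logits based on tokens generated so far."""
    counts = Counter(generated_ids)
    out = list(logits)
    for tok, n in counts.items():
        # frequency_penalty scales with how often the token appeared;
        # presence_penalty is applied once for any token that appeared.
        out[tok] -= n * frequency_penalty
        out[tok] -= presence_penalty
        # CTRL-style multiplicative penalty: values > 1.0 push a repeated
        # token's logit toward "less likely" regardless of its sign.
        if repetition_penalty != 1.0:
            if out[tok] > 0:
                out[tok] /= repetition_penalty
            else:
                out[tok] *= repetition_penalty
    return out

# Token 1 appeared twice, token 2 once; token 0 is untouched.
adjusted = penalize([1.0, 2.0, 3.0], [1, 1, 2],
                    frequency_penalty=0.5, presence_penalty=0.3)
```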

Output control

| Parameter | Type | Description |
| --- | --- | --- |
| `stop` | array | Stop generation when any of these strings is encountered. |
| `response_format` | object | Force output format. Use `{"type": "json_object"}` for JSON mode. |
| `seed` | integer | Seed for deterministic sampling. Not all models guarantee determinism. |
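Stop strings are matched against the generated text, and generation ends at the first match; the matched stop string itself is typically not returned. A client-side sketch of that truncation behavior, assuming this common convention:

```python
def apply_stop(text, stop):
    """Truncate text at the earliest occurrence of any stop string."""
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

result = apply_stop("Answer: 42\nEND\nextra", ["END", "STOP"])
```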

JSON mode

To enable JSON mode, set response_format and include instructions in your prompt:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "Respond with valid JSON only."},
    {"role": "user", "content": "List 3 colors as a JSON array."}
  ],
  "response_format": {"type": "json_object"}
}
```
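With JSON mode enabled, the assistant message content should parse as JSON, but validating it client-side is still recommended: JSON mode constrains the format, not the schema. A sketch using a hypothetical, abbreviated response body:

```python
import json

# Hypothetical response body for illustration; real responses include
# additional fields such as id, created, and usage.
raw = '''{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "[\\"red\\", \\"green\\", \\"blue\\"]"}}
  ]
}'''

response = json.loads(raw)
content = response["choices"][0]["message"]["content"]
colors = json.loads(content)  # parse the model's JSON output
```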

Tool parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `tools` | array | List of tool definitions in OpenAI format. See Tool Calling. |
| `tool_choice` | string or object | Control tool usage: `"auto"`, `"none"`, `"required"`, or a specific tool. |
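A minimal tool definition in the OpenAI format looks like the following. The function name and schema here are hypothetical examples.

```python
# One tool definition in OpenAI format: a function with a JSON Schema
# describing its parameters.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

payload_fragment = {
    "tools": tools,
    # An object form of tool_choice forces a call to one specific tool;
    # use "auto", "none", or "required" for the string forms.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```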

Log probabilities

| Parameter | Type | Description |
| --- | --- | --- |
| `logprobs` | boolean | Return log probabilities of output tokens. |
| `top_logprobs` | integer | Number of most likely tokens to return (0–20). Requires `logprobs: true`. |
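Returned log probabilities are natural logarithms, so exponentiating one recovers the token's probability. For example, with a hypothetical logprob value:

```python
import math

# A hypothetical logprob entry as returned when logprobs is enabled.
token_logprob = -0.105

# exp() converts a natural-log probability back to a probability.
probability = math.exp(token_logprob)
```

A logprob of `-0.105` corresponds to roughly a 90% probability for that token.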

Advanced parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `logit_bias` | object | Map of token IDs to bias values (-100 to 100). Adjusts token probabilities. |
| `user` | string | Unique identifier for the end user. Used for abuse monitoring. |
| `store` | boolean | Store the request and response for later retrieval. See Store API. |
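In the OpenAI format, `logit_bias` keys are token IDs (as strings in JSON) and values are clamped to [-100, 100]; -100 effectively bans a token, while 100 strongly favors it. A sketch of building such a map (the token IDs below are hypothetical; look up real IDs with your model's tokenizer):

```python
def make_logit_bias(bans=(), boosts=()):
    """Build a logit_bias map, clamping values to the allowed range."""
    bias = {}
    for token_id in bans:
        bias[str(token_id)] = -100  # effectively ban this token
    for token_id, value in boosts:
        bias[str(token_id)] = max(-100, min(100, value))
    return bias

# Hypothetical token IDs; 150 is clamped down to the maximum of 100.
bias = make_logit_bias(bans=[50256], boosts=[(1234, 150)])
```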

Provider-specific parameters

Some providers accept additional parameters that OhMyGPT passes through:

| Provider | Parameter | Description |
| --- | --- | --- |
| Mistral | `safe_prompt` | Enable content moderation |
| Anthropic | `metadata` | Request metadata |

Unrecognized parameters are generally ignored, but may cause errors with some providers.
