Models Configuration Introduction
This tutorial explains how to configure various large language models in the SelectTranslate extension, including API keys, endpoint URLs, model selection, performance parameters, and prompt settings. Proper model configuration can improve service stability and enhance translation accuracy, response speed, and overall user experience across different translation scenarios.
1. Configure API Key
An API key is the official access credential provided by the model service provider, used to securely call its AI models and related services.
Common API Key Configuration Issues
- Incorrect API Key Format: The API key may have missing characters, extra spaces, or be truncated. Make sure to copy the full key provided by the service and avoid including spaces, line breaks, or other invalid characters.
- API Key Mismatch with the Service Provider: API keys from different model providers are not interchangeable. For example, an OpenAI API key cannot be used with DeepSeek models.
- API Key Expired or Disabled: Your API key may have been revoked due to expiration, manual deactivation, account issues, usage limits, or security reasons. Visit your provider's console to verify the current status of your API key.
- Endpoint URL Error: Even with a valid API key, connection tests may fail if the endpoint URL is incorrect, missing version paths, or does not match the provider's specific requirements.
- Model Name Mismatch: Ensure the model name is spelled correctly and officially supported by your provider. Typos here are frequently misidentified as API key issues.
- Network or Proxy Issues: Local network instability, proxy configurations, or firewall restrictions may block requests, resulting in authentication failures or request timeouts.
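Several of the issues above (extra spaces, line breaks, truncation) can be caught before any request is sent. The sketch below is illustrative only; the length threshold is an assumption, since key formats vary by provider:

```python
import re

def check_api_key_format(key: str) -> list[str]:
    """Return a list of likely formatting problems with an API key string."""
    problems = []
    if key != key.strip():
        problems.append("leading or trailing whitespace")
    if re.search(r"\s", key.strip()):
        problems.append("internal whitespace or line breaks")
    if len(key.strip()) < 20:  # threshold is a guess; real key lengths vary by provider
        problems.append("key looks truncated (unusually short)")
    return problems
```

For example, a key pasted with a trailing newline would be flagged before the extension ever makes a network call.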
2. Translation Model Selection
When configuring the extension, the model name must match the service provider associated with the API key; otherwise, the model cannot be called successfully.
Supported Providers and Model Mapping
| Provider | Model Name |
|---|---|
| OpenAI | gpt-5, gpt-5-mini, gpt-5-nano |
| Gemini | gemini-2.5-flash, gemini-2.0-flash-lite |
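Because a model name only works with its own provider's API key, the mapping in the table can be expressed as a simple lookup. This is a minimal sketch; the model lists mirror the table above and will change as providers update their offerings:

```python
# Provider-to-model mapping taken from the table above; lists change over time.
SUPPORTED_MODELS = {
    "OpenAI": {"gpt-5", "gpt-5-mini", "gpt-5-nano"},
    "Gemini": {"gemini-2.5-flash", "gemini-2.0-flash-lite"},
}

def model_matches_provider(provider: str, model: str) -> bool:
    """Check that the configured model name belongs to the configured provider."""
    return model in SUPPORTED_MODELS.get(provider, set())
```

A check like this catches the common mistake of pairing, say, an OpenAI key with a Gemini model name before the first failed request.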
3. Custom API URL
The extension uses the official API URL by default, so no adjustments are usually needed. Manual entry of the URL is only required in the following cases:
- Calling the model via a reverse proxy or relay service,
- Using a privately deployed large model service,
- Connecting to third-party API services that do not fully use the official default URL.
Input Guidelines:
- The URL must be a complete HTTPS endpoint, for example:
  https://api.your-company.com/v1/chat/completions
- The endpoint path must match the service provider's technical documentation, and the API must correctly receive and handle requests that follow the specified format.
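The guidelines above (complete HTTPS endpoint, correct path) can be checked with the standard library's URL parser. A minimal sketch; the example path in the message is the one shown above, not a universal requirement:

```python
from urllib.parse import urlparse

def validate_endpoint_url(url: str) -> list[str]:
    """Return a list of problems with a custom API endpoint URL."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("endpoint must use HTTPS")
    if not parsed.netloc:
        problems.append("missing host")
    if not parsed.path or parsed.path == "/":
        problems.append("missing endpoint path (e.g. /v1/chat/completions)")
    return problems
```

An empty list means the URL is at least structurally plausible; whether the path matches your provider's documentation still has to be verified manually.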
4. Model Parameter Tuning
The following parameters are used to adjust translation quality, model stability, and response speed. The default settings cover most use cases; adjust them as needed for specific requirements.
| Parameter | Description | Recommended Settings |
|---|---|---|
| Maximum text length per request | Limits the number of characters submitted in a single translation request. Setting this value too high may slow API responses; keeping it within a reasonable range improves stability and response speed. | 1000–1500 characters recommended.<br>Avoid exceeding 2000 characters to prevent response delays or content loss. |
| Maximum number of paragraphs per request | The number of paragraphs submitted at once in a single translation request. Submitting too many paragraphs increases processing load, leading to slower responses or reduced stability. Setting this value appropriately improves overall translation efficiency and reliability. | Technical or legal texts: 1 (process paragraph by paragraph to ensure terminology accuracy).<br>Novels, dialogues, or blogs: 3–5 (improves fluency). |
| Maximum requests per second | The number of translation requests sent per second. If the request rate is too high, additional requests are queued to avoid hitting API rate limits and to maintain overall system stability. | General use: 25.<br>Strictly rate-limited services (e.g., Gemini): 8–10. |
| Temperature | Controls the randomness and variability of the model's output. Higher values produce more diverse and creative responses; lower values produce more stable output, which is preferable for consistent and accurate translations. | Official recommended temperatures vary by model; start from the provided default and adjust as needed. |
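The first two limits interact: a batch of paragraphs must stay under both the paragraph count and the character budget. A minimal sketch of how such batching could work, with the recommended defaults from the table as assumed parameter values:

```python
def batch_paragraphs(paragraphs, max_chars=1500, max_paragraphs=5):
    """Group paragraphs into request batches that respect both limits."""
    batches, current, size = [], [], 0
    for p in paragraphs:
        # Start a new batch if adding this paragraph would breach either limit.
        if current and (len(current) >= max_paragraphs or size + len(p) > max_chars):
            batches.append(current)
            current, size = [], 0
        current.append(p)
        size += len(p)
    if current:
        batches.append(current)
    return batches
```

With the defaults, two 800-character paragraphs land in separate requests, while shorter ones are grouped until either limit is reached. (A single paragraph longer than `max_chars` still goes through as its own batch; splitting inside a paragraph is out of scope here.)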
5. Custom Prompts (System Prompt / User Prompt)
The extension includes built-in, optimized general translation prompts that cover most common translation scenarios. When specific requirements exist—such as translation style, terminology consistency, domain expertise, or tone preferences—custom prompts can be used to fine-tune the model’s output.
When creating custom translation prompts, it is recommended to follow these guidelines:
- Clear and specific instructions: specify the target language, translation style, domain, and any expressions or word choices to avoid.
- Actionable instructions: use concrete and precise descriptions, avoiding vague or abstract statements, so the model can accurately understand and follow them.
- Keep prompts concise: provide sufficient information while remaining brief, avoiding redundant details that could interfere with the model's understanding.
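The guidelines above can be turned into a small prompt builder. This is purely illustrative; the function name, parameters, and wording are assumptions, not the extension's built-in prompts:

```python
def build_translation_prompt(target_lang, style=None, domain=None, avoid=None):
    """Assemble a system prompt covering language, style, domain, and exclusions."""
    parts = [f"Translate the user's text into {target_lang}."]
    if style:
        parts.append(f"Use a {style} style.")
    if domain:
        parts.append(f"The text is from the {domain} domain; keep terminology consistent.")
    if avoid:
        parts.append("Avoid these expressions: " + ", ".join(avoid) + ".")
    return " ".join(parts)
```

Note how each guideline maps to one short, concrete sentence: the prompt stays concise while still stating the target language, style, domain, and exclusions explicitly.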
6. Disclaimer
🔔 Please be aware of the free tier and any fees charged by the service provider to avoid unexpected costs.

