OpenCode Integration
Connect OpenCode to Xerotier.ai as a custom provider. Use your own models as a drop-in backend for terminal-based AI coding.
Overview
OpenCode is a terminal-based AI coding assistant similar to Claude Code and Aider. It supports custom providers through the `@ai-sdk/openai-compatible` npm adapter, which connects to any OpenAI-compatible API.
Because Xerotier.ai exposes OpenAI-compatible Chat Completions and Responses API endpoints, OpenCode works as a drop-in integration. Point it at your Xerotier endpoint, provide an API key, and start coding with your own models.
No code changes are required. OpenCode uses the standard `@ai-sdk/openai-compatible` package, and Xerotier works out of the box with streaming, tool calling, and reasoning content.
Prerequisites
- An OpenCode installation (npm, brew, or binary)
- A Xerotier.ai account with an active endpoint
- An API key with the `inference` scope
Installing OpenCode
npm install -g opencode
brew install opencode
Creating an API Key
In your Xerotier dashboard, navigate to Settings > API Keys and create a new key with the `inference` scope. Copy the key -- you will need it for the configuration file.
Finding Your Endpoint URL
Your endpoint URL follows the format:
https://api.xerotier.ai/proj_ABC123/ENDPOINT_SLUG/v1
Replace `ENDPOINT_SLUG` with the slug shown on your endpoint's detail page. The project ID (`proj_ABC123`) is visible in your dashboard URL and project settings.
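As a sketch, the URL can be assembled from its two parts (the project ID and endpoint slug below are the placeholder values used throughout this page, not real identifiers):

```python
# Sketch: assembling the Xerotier base URL from its parts.
# "proj_ABC123" and "my-endpoint" are placeholder values, not real IDs.

def xerotier_base_url(project_id: str, endpoint_slug: str) -> str:
    """Return the OpenAI-compatible base URL for an endpoint."""
    return f"https://api.xerotier.ai/{project_id}/{endpoint_slug}/v1"

print(xerotier_base_url("proj_ABC123", "my-endpoint"))
# https://api.xerotier.ai/proj_ABC123/my-endpoint/v1
```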
Configuration
Create or edit the OpenCode configuration file at `~/.config/opencode/opencode.json`. The following example configures Xerotier as a custom provider:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"xerotier": {
"npm": "@ai-sdk/openai-compatible",
"name": "xerotier",
"options": {
"baseURL": "https://api.xerotier.ai/proj_ABC123/my-endpoint/v1",
"headers": {
"Authorization": "Bearer xero_my-project_abc123"
}
},
"models": {
"my-model": {
"name": "deepseek-r1-distill-llama-70b",
"reasoning": true,
"tool_call": true,
"tools": true
}
}
}
},
"tools": {
"*": true
},
"model": "xerotier/my-model"
}
Replace placeholder values. Substitute `my-endpoint` with your endpoint slug, `xero_my-project_abc123` with your actual API key, and `deepseek-r1-distill-llama-70b` with the model name deployed on your endpoint.
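As a quick sanity check before launching OpenCode, a short script can verify that the placeholder values were actually replaced. The checks below are illustrative and not part of OpenCode or Xerotier; the field names follow the example config above.

```python
import json

# Illustrative sanity check for the provider block in opencode.json.
# Field names follow the example config; the checks themselves are
# not part of OpenCode or Xerotier.

def check_provider(config: dict) -> list[str]:
    problems = []
    provider = config.get("provider", {}).get("xerotier", {})
    options = provider.get("options", {})
    if not options.get("baseURL", "").endswith("/v1"):
        problems.append("baseURL must end with /v1")
    auth = options.get("headers", {}).get("Authorization", "")
    if not auth.startswith("Bearer "):
        problems.append("Authorization header must start with 'Bearer '")
    if not provider.get("models"):
        problems.append("at least one model entry is required")
    return problems

config = json.loads("""
{
  "provider": {
    "xerotier": {
      "options": {
        "baseURL": "https://api.xerotier.ai/proj_ABC123/my-endpoint/v1",
        "headers": {"Authorization": "Bearer xero_my-project_abc123"}
      },
      "models": {"my-model": {"name": "deepseek-r1-distill-llama-70b"}}
    }
  }
}
""")
print(check_provider(config))  # an empty list means the checks passed
```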
Responses API Endpoint
OpenCode can also use the Responses API endpoint for multi-turn conversations with server-managed state. The base URL is the same -- the OpenAI SDK automatically selects the correct endpoint based on the method called (`client.responses.create()` vs `client.chat.completions.create()`).
https://api.xerotier.ai/proj_ABC123/ENDPOINT_SLUG/v1/responses
No configuration changes are required in `opencode.json` to use the Responses API. The same `baseURL` serves both Chat Completions and Responses endpoints.
Per-Project Configuration
You can also place an `opencode.json` file in your project root. Project-level configuration overrides the global config at `~/.config/opencode/opencode.json`.
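The precedence rule can be sketched as a shallow merge where project-level keys win. This is an illustration of the override behavior only; OpenCode's actual merge logic may be more fine-grained.

```python
# Sketch of the precedence rule: project-level settings win over
# global ones. A shallow merge is shown for illustration; OpenCode's
# actual merge behavior may be more fine-grained.

global_cfg = {"model": "xerotier/my-model", "tools": {"*": True}}
project_cfg = {"model": "xerotier/fast-model"}   # hypothetical override

effective = {**global_cfg, **project_cfg}        # project keys win
print(effective["model"])  # xerotier/fast-model
```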
Field Reference
The following table describes each field in the provider configuration.
Provider Options
| Field | Type | Description |
|---|---|---|
| `npm` | string | The npm adapter package. Always `@ai-sdk/openai-compatible` for Xerotier. |
| `name` | string | Provider identifier used in the model field prefix (e.g., `xerotier/my-model`). |
| `options.baseURL` | string | Xerotier endpoint URL including project ID and endpoint slug. Must end with `/v1`. |
| `options.headers` | object | HTTP headers sent with every request. Must include `Authorization` with your API key. |
Model Options
| Field | Type | Description |
|---|---|---|
| `name` | string | The actual model name as deployed on your Xerotier endpoint. |
| `reasoning` | boolean | Enable reasoning/thinking content passthrough. Set to `true` for models that support it (e.g., DeepSeek-R1, QwQ). |
| `tool_call` | boolean | Enable tool call response parsing. Required for OpenCode file and shell operations. |
| `tools` | boolean | Enable sending tool definitions in requests. Required for OpenCode to describe its available tools to the model. |
Top-Level Fields
| Field | Type | Description |
|---|---|---|
| `tools` | object | Controls which OpenCode tools are available. `{"*": true}` enables all tools. |
| `model` | string | Default model in `provider/model-alias` format (e.g., `xerotier/my-model`). |
Supported Features
The following features have been validated with the Xerotier API and the
@ai-sdk/openai-compatible adapter.
Streaming (SSE)
OpenCode uses Server-Sent Events (SSE) streaming by default. Xerotier's streaming implementation follows the OpenAI specification:
- Each chunk includes `id`, `object`, `created`, `model`, and `choices` fields
- Content is delivered via `choices[].delta.content`
- `finish_reason` is `null` until the final content chunk
- The `data: [DONE]` sentinel terminates the stream
- The final chunk includes a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`
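The chunk handling above can be sketched against hand-written sample lines (the payloads below are illustrative, not captured API output):

```python
import json

# Sketch: accumulating streamed content from Chat Completions SSE
# lines. The chunks are hand-written samples in the shape described
# above, not captured API output.

sse_lines = [
    'data: {"id":"c1","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hel"},"finish_reason":null}]}',
    'data: {"id":"c1","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo!"},"finish_reason":null}]}',
    'data: {"id":"c1","object":"chat.completion.chunk","choices":[{"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":5,"completion_tokens":2,"total_tokens":7}}',
    "data: [DONE]",
]

text = ""
usage = None
for line in sse_lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":        # sentinel terminates the stream
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")
    usage = chunk.get("usage", usage)

print(text)                    # Hello!
print(usage["total_tokens"])   # 7
```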
Tool Calling
OpenCode relies on tool calling (function calling) for file operations, shell commands, and code editing. Xerotier passes tool calling requests and responses through to the model without modification:
- The `tools` and `tool_choice` request fields are sent to the model
- Tool call responses include `choices[].delta.tool_calls` with `id`, `type`, `function.name`, and `function.arguments`
- Streaming tool calls accumulate `arguments` across chunks
- `finish_reason: "tool_calls"` is set when the model decides to call tools
- Multi-turn tool interactions (the `tool` message role) are passed through correctly
Model support required. Tool calling must be supported by the model deployed on your endpoint. Not all models support function calling. Check your model's documentation for tool calling compatibility.
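The argument accumulation described above can be sketched as follows; the delta frames are hand-written samples in the streaming shape OpenAI-compatible APIs use, not captured output:

```python
import json

# Sketch: merging streamed tool-call fragments. Argument text arrives
# split across chunks and is concatenated per tool-call index, as
# described above. The sample deltas are hand-written.

deltas = [
    [{"index": 0, "id": "call_1", "type": "function",
      "function": {"name": "read_file", "arguments": ""}}],
    [{"index": 0, "function": {"arguments": '{"path": '}}],
    [{"index": 0, "function": {"arguments": '"main.py"}'}}],
]

calls: dict[int, dict] = {}
for frame in deltas:
    for part in frame:
        call = calls.setdefault(part["index"],
                                {"id": None, "name": None, "arguments": ""})
        if part.get("id"):
            call["id"] = part["id"]
        fn = part.get("function", {})
        if fn.get("name"):
            call["name"] = fn["name"]
        call["arguments"] += fn.get("arguments", "")

args = json.loads(calls[0]["arguments"])
print(calls[0]["name"], args)   # read_file {'path': 'main.py'}
```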
Reasoning Content
Some models (e.g., DeepSeek-R1, QwQ) emit reasoning or "thinking" content alongside their responses. When `"reasoning": true` is set in the model configuration, OpenCode displays reasoning content in a collapsible section.
- Xerotier relays `reasoning_content` fields in the response as-is
- Reasoning content is model-dependent -- not all models produce it
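Separating the two streams on the client side amounts to reading two different delta fields. The sample deltas below are hand-written for illustration:

```python
# Sketch: splitting reasoning_content from regular content in streamed
# deltas. The sample deltas are hand-written in the shape described
# above, not captured API output.

deltas = [
    {"reasoning_content": "The user greets me. "},
    {"reasoning_content": "A short reply fits."},
    {"content": "Hello "},
    {"content": "there!"},
]

reasoning = "".join(d.get("reasoning_content", "") for d in deltas)
answer = "".join(d.get("content", "") for d in deltas)

print(reasoning)  # The user greets me. A short reply fits.
print(answer)     # Hello there!
```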
Finish Reason Mapping
| Finish Reason | Description |
|---|---|
| `stop` | Natural end of generation. The model completed its response. |
| `length` | The `max_tokens` limit was reached. |
| `tool_calls` | The model wants to invoke one or more tools. |
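A client loop might branch on these values roughly as follows; the handler is a sketch, not OpenCode's actual logic:

```python
# Sketch: branching on finish_reason, per the table above. The handler
# and its return strings are illustrative, not OpenCode internals.

def handle_finish(reason: str) -> str:
    if reason == "stop":
        return "done"
    if reason == "length":
        return "truncated: raise max_tokens or continue the generation"
    if reason == "tool_calls":
        return "execute the requested tools, then send a tool message"
    return f"unexpected finish_reason: {reason}"

print(handle_finish("length"))
```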
Limitations
max_tokens Auto-Clamping
OpenCode may request a `max_tokens` value that exceeds your model's capacity. Xerotier handles this gracefully by automatically clamping the value down to the model's maximum. The `X-Xerotier-Max-Tokens-Clamped` response header indicates when clamping occurred. No action is required on your part.
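The server-side behavior described above amounts to the following; the function is an illustration of the clamping rule, with the header name taken from the text:

```python
# Sketch of the clamping rule described above: the server lowers
# max_tokens to the model's limit and flags it via a response header.
# The header name comes from the docs; the function is illustrative.

def clamp_max_tokens(requested: int, model_max: int) -> tuple[int, dict]:
    headers = {}
    effective = min(requested, model_max)
    if effective < requested:
        headers["X-Xerotier-Max-Tokens-Clamped"] = "true"
    return effective, headers

tokens, headers = clamp_max_tokens(200_000, 32_768)
print(tokens, headers)  # 32768 {'X-Xerotier-Max-Tokens-Clamped': 'true'}
```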
Unsupported OpenAI Extensions
- `response_format` (structured outputs / JSON mode) -- support depends on the model deployed on your endpoint
- `seed` -- passed through, but determinism depends on model support
Reasoning Content Availability
Reasoning content (`reasoning_content` in the response delta) is only available for models that produce it (e.g., DeepSeek-R1, QwQ). Setting `"reasoning": true` in the OpenCode config has no effect on models that do not emit reasoning tokens.
Troubleshooting
401 Unauthorized
The API key is missing, invalid, or does not have the `inference` scope.
- Verify the `Authorization` header value in `options.headers`
- Confirm the key format is `Bearer xero_projectslug_...`
- Check that the key has the `inference` scope in your dashboard
404 Not Found
The endpoint URL is incorrect or the endpoint does not exist.
- Verify the `baseURL` includes your project ID and endpoint slug
- Confirm the URL ends with `/v1`
- Check that the endpoint is active in your dashboard
Model Not Found
The `name` field in your model config does not match the deployed model.
- The `name` field must exactly match the model name shown on your endpoint detail page
- Model names are case-sensitive
Connection Timeout
Requests may time out if no workers are available.
- Check your endpoint status in the dashboard
- Verify that at least one agent is healthy and connected
- For XIM nodes, confirm the agent process is running and registered
Tool Calls Not Working
If OpenCode reports that tool calling is unavailable:
- Confirm `"tool_call": true` and `"tools": true` are set in the model config
- Verify your model supports function calling (not all models do)
- Check your endpoint status in the dashboard for any errors
Reasoning Content Not Displayed
If reasoning content does not appear in OpenCode:
- Confirm `"reasoning": true` is set in the model config
- Verify you are using a model that produces reasoning tokens (e.g., DeepSeek-R1, QwQ)
- Check that the model deployed on your endpoint supports reasoning output
Verifying Connectivity
Use curl to test your endpoint independently of OpenCode:
curl https://api.xerotier.ai/proj_ABC123/my-endpoint/v1/chat/completions \
-H "Authorization: Bearer xero_my-project_abc123" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-r1-distill-llama-70b",
"messages": [{"role": "user", "content": "Hello!"}],
"max_tokens": 50
}'
If this returns a valid response, the issue is in your OpenCode configuration. If it returns an error, resolve the API issue first.