# Model pricing
Compare active models and their credit consumption per 1,000 tokens for input and output usage.
## Credit-based usage
This app uses credits for usage: instead of a monthly subscription, you buy one-time credit packages and spend them as needed. There are no recurring payments; credits are deducted based on actual usage.
| Model | Region | Input / 1K tokens | Output / 1K tokens |
|---|---|---|---|
| DeepSeek V3.2 | US | 2 credits | 2 credits |
| GLM 5 | US | 4 credits | 13 credits |
| GPT-OSS | EU | 1 credit | 3 credits |
| INTELLECT 3 | EU | 1 credit | 5 credits |
| Kimi K2.5 | US | 2 credits | 10 credits |
| MiniMax M2.5 | US | 2 credits | 5 credits |
| Nemotron 3 Super | US | 2 credits | 4 credits |
| Qwen 2.5 VL 72B Instruct | EU | 1 credit | 3 credits |
| Qwen 3 | EU | 1 credit | 3 credits |
| Qwen 3.5 | US | 3 credits | 15 credits |
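As a worked example, the per-1,000-token rates in the table above can be applied directly to a request's token counts. The helper below is an illustrative sketch (the function name and any rounding behavior are assumptions; the app's actual rounding policy is not specified here), using two rates copied from the table:

```python
# Per-1,000-token rates taken from the table above (illustrative subset).
RATES = {
    "Kimi K2.5": {"input": 2, "output": 10},  # US
    "Qwen 3": {"input": 1, "output": 3},      # EU
}

def estimate_credits(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate credits as (tokens / 1000) * rate, summed for both directions.

    Hypothetical helper: actual billing may round differently.
    """
    rate = RATES[model]
    return (input_tokens / 1000) * rate["input"] + (output_tokens / 1000) * rate["output"]

# 2,500 input tokens and 800 output tokens on Kimi K2.5:
# 2.5 * 2 + 0.8 * 10 = 13.0 credits
print(estimate_credits("Kimi K2.5", 2500, 800))
```

Note that output tokens are often the larger cost driver: on Kimi K2.5 a token of output costs five times as much as a token of input.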
## Tool use pricing

Some tools have fixed credit charges that are separate from model token pricing.

### Web search

- Without JavaScript rendering
- With JavaScript rendering
## More about credit usage

The sections below give more detail on how credit pricing and tokens work.
### How credits are calculated
Credit usage is based on the currently active model pricing. Values shown here are the billed credits for 1,000 tokens in each direction.
ToolBott is an agentic system, which means one prompt can involve more than one model step behind the scenes. The app may ask the model to plan, decide whether tools are needed, read tool results, and then generate the final answer.
In simple terms: even if you send one short message, the total token usage can be larger than just your message and the final reply. Extra model calls, tool calls, and the tool results that are sent back to the model can all add to the input and output tokens.
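The accumulation described above can be sketched as follows. The step names and token counts are made up purely for illustration; the point is that every agent step, including tool results fed back to the model, adds to the billed totals:

```python
# Illustrative agent run: one short user message, four model steps.
# Each step's "input" includes context re-sent to the model (system prompt,
# conversation history, tool results), so input tokens grow step by step.
steps = [
    {"name": "plan",         "input": 900,  "output": 150},
    {"name": "tool call",    "input": 1100, "output": 60},
    {"name": "read result",  "input": 2600, "output": 40},
    {"name": "final answer", "input": 2700, "output": 450},
]

total_in = sum(s["input"] for s in steps)
total_out = sum(s["output"] for s in steps)
print(total_in, total_out)  # totals far exceed the visible message + reply
```

Here the billed input total is several times the size of the user's original message, even though the user only sent one prompt.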
### What is a token?
A token is a small piece of text. It is not always a full word. Short words may be one token, longer words may be split into multiple tokens, and spaces and punctuation also count.
As a simple rule of thumb, 1,000 tokens is roughly 700 to 800 words of English text.
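The rule of thumb above can be turned into a rough estimator. This is only an approximation (the 0.75 words-per-token ratio is assumed from the 700-800 words figure; real tokenizers vary by model and language):

```python
def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate from word count.

    Assumes ~0.75 English words per token, i.e. 1,000 tokens for ~750 words.
    Real tokenizers will differ, especially for code or non-English text.
    """
    words = len(text.split())
    return round(words / words_per_token)

print(estimate_tokens("a " * 750))  # 750 words -> roughly 1000 tokens
```

For budgeting purposes this is usually close enough; for exact counts you would need the specific model's tokenizer.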
Input tokens are not only the text you type. They can also include hidden instructions sent by the app, earlier conversation messages, tool calls, and tool results that are passed to the model.
This is especially important in an agentic workflow. If ToolBott searches the web, fetches a page, or uses another tool before answering, the returned content may also be sent to the model for analysis, and that content consumes tokens too.