Token calculation is an approximation based on the average number of characters per token for each model. The formula used is:

```
Tokens = Math.ceil(text.length / model.tokenizationFactor)
```
| Model | Characters per Token |
| --- | --- |
| GPT-4o | 3.3 |
| GPT-4 | 3.5 |
| GPT-3.5 Turbo | 4 |
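The approximation can be sketched as a small lookup-and-divide helper. The factors mirror the table above; the function and map names are illustrative, not part of any specific API:

```javascript
// Characters-per-token factors from the table above.
const TOKENIZATION_FACTORS = {
  "gpt-4o": 3.3,
  "gpt-4": 3.5,
  "gpt-3.5-turbo": 4,
};

// Estimate the token count for `text` under the given model's factor.
function estimateTokens(text, model) {
  const factor = TOKENIZATION_FACTORS[model];
  if (factor === undefined) {
    throw new Error(`Unknown model: ${model}`);
  }
  return Math.ceil(text.length / factor);
}

// A 100-character string under GPT-4o: ceil(100 / 3.3) = 31 tokens.
console.log(estimateTokens("a".repeat(100), "gpt-4o"));
```

Because the result is rounded up, the estimate errs toward slightly overcounting rather than undercounting.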
Note that this remains an approximation: actual token counts may vary, especially for non-English text or specialized content.