OpenAI’s text models have a context length; for example, Curie has a context length of 2049 tokens.
The API provides max_tokens and stop parameters to control the length of the generated sequence, so generation stops either when a stop token is produced or when max_tokens is reached.
The issue is that when generating text, I don’t know how many tokens my prompt contains, so I cannot set max_tokens = 2049 - number_tokens_in_prompt.
This prevents me from generating text dynamically for prompts of widely varying lengths. What I need is for generation to continue until the stop token.
My questions are:
- How can I count the number of tokens in a prompt using the Python API, so that I can set the max_tokens parameter accordingly?
- Is there a way to set max_tokens to its maximum allowed value so that I won’t need to count the prompt tokens?
>Solution:
As stated in the official OpenAI article:
> To further explore tokenization, you can use our interactive Tokenizer tool, which allows you to calculate the number of tokens and see how text is broken into tokens. Alternatively, if you’d like to tokenize text programmatically, use Tiktoken as a fast BPE tokenizer specifically used for OpenAI models. Other such libraries you can explore as well include the transformers package for Python or the gpt-3-encoder package for NodeJS.
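
For example, here is a minimal sketch of counting prompt tokens with tiktoken and deriving max_tokens from the remaining context window. It assumes r50k_base is the encoding used by GPT-3 models such as Curie; check tiktoken's model-to-encoding mapping for the model you actually use:

```python
import tiktoken

# Assumption: r50k_base is the encoding for GPT-3 models such as Curie;
# verify against tiktoken's model-to-encoding mapping for your model.
encoding = tiktoken.get_encoding("r50k_base")

CONTEXT_LENGTH = 2049  # Curie's context window (prompt + completion combined)

prompt = "Once upon a time"
num_prompt_tokens = len(encoding.encode(prompt))

# Let the completion use whatever part of the window the prompt did not.
max_tokens = CONTEXT_LENGTH - num_prompt_tokens
print(f"prompt tokens: {num_prompt_tokens}, max_tokens: {max_tokens}")
```

With max_tokens computed this way, the completion can use the full remainder of the context window, while the stop parameter still ends generation early whenever the stop token appears.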