Ask ChatGPT

This action gets an answer from ChatGPT for your prompt.

Example input

Getting API Key

If you see the warning message shown below, it means that you haven't added your OpenAI key.

This is how you can add it.

Step 1: Log in to your ChatGPT / OpenAI account at

Step 2: Open

Step 3: Create an API key or copy an existing one.

Step 4: Open your settings (

Step 5: Add your key here:

Customize Behavior on Error

By default, if OpenAI returns an error, your TaskBot will stop its run (or break out of the current loop, if the error happens inside a loop).

You can customize this behavior.

Note: Some errors - missing prompt, wrong model id, and invalid API key - will terminate the TaskBot (or loop) run even if you selected the continue option.
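The rule above can be sketched in a few lines of Python. The error-code names here are illustrative placeholders, not TaskBot's actual identifiers:

```python
# Hypothetical error codes for the always-fatal cases described above;
# TaskBot's real identifiers may differ.
FATAL_CODES = {"missing_prompt", "invalid_model", "invalid_api_key"}

def should_stop(error_code, continue_on_error):
    """Fatal configuration errors always stop the run; any other error
    stops the run only when 'continue on error' is not selected."""
    if error_code in FATAL_CODES:
        return True
    return not continue_on_error
```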

How to customize behavior on error

Example set-up

Imagine you asked ChatGPT to create a very long blog article, so long that ChatGPT hits its capacity limit and returns a truncated response (you can learn more about truncated responses below). Truncated responses are categorized as errors, so by default the TaskBot would stop its run. However, you would still like to save the response - and if it's truncated, save it as a draft.

Step 1: Change default setting for behavior on error

Click on the settings gear in Ask ChatGPT building block.

Then select the option to continue run and record error code.

Step 2: Add conditions

Now you can add a condition that checks whether the variable "OpenAI error" contains "length" ("length" is the error code for truncated responses). If it does, your TaskBot can proceed with saving the response as a draft. You can also add a second condition that checks whether the variable does not contain "length", in which case the TaskBot continues down a different path.
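The two-branch condition above amounts to the following logic, sketched in Python with hypothetical variable names (TaskBot itself is configured visually, not in code):

```python
def route_response(response_text, error_code):
    """Decide what to do with a response based on the recorded error code.

    "length" is the error code for truncated responses; the function and
    variable names here are illustrative, not TaskBot syntax.
    """
    if error_code and "length" in error_code:
        # Truncated response: keep it, but mark it as a draft.
        return ("draft", response_text)
    # Not truncated: proceed down the normal path.
    return ("publish", response_text)
```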

The error status (if any), error code (if provided), and error message (if provided) will be saved to a variable or column, like so: "Status: 400, code: context_length_exceeded, message: This model's maximum context length is 4097 tokens. However, you requested 200017 tokens (17 in the messages, 200000 in the completion). Please reduce the length of the messages or completion.". For truncated responses, the error code will be recorded as "length".
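If you later need the three parts separately, the recorded string can be split back apart. This sketch assumes the "Status: ..., code: ..., message: ..." layout shown above:

```python
def parse_recorded_error(recorded):
    """Split a recorded error string of the form
    'Status: <status>, code: <code>, message: <message>'
    into its three parts. Assumes the layout shown in this article."""
    status_part, rest = recorded.split(", code: ", 1)
    code, message = rest.split(", message: ", 1)
    status = status_part.replace("Status: ", "").strip()
    return {"status": status, "code": code.strip(), "message": message.strip()}
```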

Here is a list of errors that OpenAI can return:

Why can responses get truncated

Every ChatGPT model has a capacity limit. This limit is measured not in characters but in tokens (one token corresponds to roughly 4 characters of English text), so every model has a token limit.

For ChatGPT-3.5, it's 4,096 tokens, which corresponds to approx. 16,000 characters of English text.

For ChatGPT-4, it's 8,192 tokens, which is about 32,000 characters.

If your prompt asks ChatGPT to generate an answer longer than its capacity allows, the response can get truncated. For example, the prompt "Write an article about marketing of more than 50,000 characters" is likely to result in a truncation error. This can also happen when you use the advanced prompt option "Max token length" and enter a number that is too low.

Truncation can also happen when your prompt is too long, because ChatGPT's token limit is shared between the prompt and the answer. For example, the prompt "Here is an article about marketing: [...pasting an article of more than 50,000 characters...]. Generate a similar one." will result in a truncation error. To avoid this particular problem, you can shorten the dynamic part of your prompt; an example set-up is shown in the section below.
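A rough feasibility check follows from the shared limit, using the common approximation of about 4 English characters per token (real tokenization varies by text and model):

```python
def fits_in_context(prompt_chars, expected_answer_chars, token_limit):
    """Rough check of whether a request fits in the model's context.

    The token limit is shared between the prompt and the answer, so both
    are added together. Uses the ~4 characters per token rule of thumb
    for English text; actual token counts depend on the tokenizer.
    """
    estimated_tokens = (prompt_chars + expected_answer_chars) / 4
    return estimated_tokens <= token_limit

# A 50,000-character pasted article alone would need about 12,500 tokens,
# far beyond a 4,096-token context.
```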

Token Cost and Saving Strategies

Here is the official price per token page:

Cost example:

  • Entering a prompt of 200 characters and expecting an answer of 1000 characters will cost you approx. $0.88 for GPT-4 and approx. $0.03 for GPT-3

In general, consider using GPT-3.5 over GPT-4, as the former is significantly cheaper.
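The arithmetic behind such cost estimates is simple. The per-1,000-token prices in the example call below are hypothetical placeholders; replace them with the current values from OpenAI's pricing page:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_1k, completion_price_per_1k):
    """Cost = tokens consumed times the per-token price.

    Prompt (input) and completion (output) tokens are usually priced
    differently, so they are billed separately and then summed.
    """
    return (prompt_tokens / 1000 * prompt_price_per_1k
            + completion_tokens / 1000 * completion_price_per_1k)

# Illustration with made-up prices of $0.03 / $0.06 per 1K tokens:
# a 50-token prompt plus a 250-token answer costs
# 50/1000 * 0.03 + 250/1000 * 0.06 = 0.0165 dollars.
```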

Check token consumption in your TaskBot run reports

After your TaskBot finishes its run, the report logs will record how many tokens were consumed for every request made. Simply open your TaskBot run reports and scroll to the log results of the Ask ChatGPT building block.

This is how the amount of spent tokens is displayed in the log message:

Shorten dynamic prompt

You can use the Shorten Content Length action in the Format Data building block to shorten dynamic content in your prompt. Let's take a look at an example.

Example set-up

Imagine that you built a TaskBot that collects social media posts and then asks ChatGPT to generate auto-comments to those posts.

Consider this prompt:

You can see that the length of the content in the column "Scraped Post Content" can vary greatly. It's possible that someone shared a very long post of over 1,000 characters, which would consume an unnecessarily large amount of your tokens.

A token-saving strategy can be to shorten the post content.

This is how you can do it.

Step 1: After collecting the post, use the Format Data building block to shorten its content to 150 characters (or whatever number you consider adequate).

Step 2: To prevent ChatGPT from assuming that we are asking it to complete an incomplete post, let's add a clarification to the prompt, like so:
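In plain code, the two steps amount to truncating the dynamic part before inserting it into the prompt and noting the truncation. The 150-character limit and the prompt wording below are illustrative, mirroring this article's example:

```python
def build_comment_prompt(post_content, max_chars=150):
    """Truncate the scraped post to max_chars (Step 1) and state in the
    prompt that the post may be cut off, so ChatGPT does not try to
    complete the truncated text (Step 2). Wording is illustrative."""
    shortened = post_content[:max_chars]
    return (
        "Write a short, friendly comment for this social media post. "
        "Note: the post may be cut off mid-sentence; do not try to "
        "complete it.\n\nPost: " + shortened
    )
```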

Last updated