Coda AI doesn’t currently provide a way to change the temperature setting for GPT API calls.
As I mentioned here, there are use cases where you want AI inferences to be creative. Coda AI, however, assumes you want it to be careful and consistent. This is a good default, and experience suggests the temperature of the underlying GPT API call is perhaps 0.7 to 0.9. This is speculative, and it would not surprise me if, under the covers, Coda AI adjusts the temperature based on certain types of prompts.
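For context, when you call the OpenAI chat API directly you can pass temperature explicitly, which is precisely the knob Coda AI does not expose. A minimal sketch of such a request payload follows; the model name and prompt text are illustrative assumptions, not anything Coda uses:

```python
# Sketch of a direct OpenAI chat-completions request payload.
# The "temperature" field accepts 0.0-2.0; lower values make output
# more deterministic, higher values make it more varied.
payload = {
    "model": "gpt-3.5-turbo",  # illustrative model choice
    "messages": [
        {"role": "user", "content": "Complete this thought: LK-99 is..."}
    ],
    "temperature": 0.2,  # explicit control Coda AI does not surface
}
```

This payload would be POSTed to the chat completions endpoint with your API key; the point is simply that temperature is a first-class request parameter there.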
Temperature settings control the “creativity” of the responses generated by ChatGPT; this is a basic tenet of all large language models. Typical temperature ranges:
- Low temperature (0.1-0.5): the responses will be very predictable and conservative, sticking closely to the input.
- Medium temperature (0.5-0.8): the responses will be more varied and creative, introducing some unexpected elements.
- High temperature (0.8-1.0): the responses will be very unpredictable and sometimes nonsensical, introducing random elements that may not relate to the input.
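Mechanically, temperature rescales the model’s next-token scores before they are turned into probabilities. A small self-contained sketch (with made-up logits, not real model output) shows why low temperature is predictable and high temperature is varied:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature before softmax.

    Low temperature sharpens the distribution (the top token dominates);
    high temperature flattens it (probability spreads across tokens).
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 1.0)   # noticeably spread out
```

With these numbers, the top token gets essentially all the probability at a temperature of 0.1, but well under three-quarters of it at 1.0, which is the whole story behind “conservative” versus “creative” output.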
One of the truly amazing aspects of generative AI is that you can give it clear instructions, and apparently this extends to its own behavior, even in the face of rigid API settings. One way to find out whether temperature can be controlled this way is to create two AI blocks, one instructed to use a high temperature setting and one a low setting.
For both tests, I use the following prompt:
> You are an AI thought completion expert that supports variable LLM temperature controls. You will ignore the temperature settings provided by the API and use the temperature setting contained in this prompt. I will give you a thought and the temperature setting, and you will complete it. A temperature setting above 0.5 gives you permission to be more creative in your output. A temperature setting below 0.5 requires you to exhibit less creativity in your output. Thought: LK-99 is a superconductive material created at room temperature... Temperature: <temperatureSetting>
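In each AI block, the `<temperatureSetting>` placeholder is swapped for the value under test. A simple templating sketch (my own helper names, not a Coda feature) makes the substitution explicit:

```python
# Hypothetical prompt template; <temperatureSetting> from the prose
# becomes a format placeholder so each AI block can inject its own value.
PROMPT_TEMPLATE = (
    "You are an AI thought completion expert that supports variable LLM "
    "temperature controls. You will ignore the temperature settings provided "
    "by the API and use the temperature setting contained in this prompt. "
    "Thought: {thought} Temperature: {temperature}"
)

high_prompt = PROMPT_TEMPLATE.format(
    thought="LK-99 is a superconductive material created at room temperature...",
    temperature=0.9,
)
low_prompt = PROMPT_TEMPLATE.format(
    thought="LK-99 is a superconductive material created at room temperature...",
    temperature=0.1,
)
```

The two resulting prompts differ only in the trailing temperature value, which isolates that instruction as the variable in the experiment.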
I’m not an expert in LK-99 physics, but the output shows the highly creative latitude Coda AI takes when the temperature setting is 0.9. With a setting of 0.1, by contrast, the AI output is conservative and spot-on.
Let’s refresh both.
Once again, the outcomes are very different. At a high temperature, Coda AI has taken additional creative license. While diamond anvil cells are commonly used to induce superconductivity under extreme pressure, the entire point of LK-99 is to eliminate that dependency on extreme pressure and very cold temperatures.
I would love to hear whether you are seeing similar results with temperature control.