Why Coda AI Is Repetitive and How to Avoid It

The quickest of AI tips…

The second biggest complaint about AI (after being too creative) is not being creative enough. I’ve seen cases where successive inferences are nearly identical, repeating the same pattern. Here’s a quick tip that applies to every aspect of Coda AI. I’ll demonstrate the approach in a Coda AI Block.

This is a complaint I see often:

The output is a little too repetitive …

To be clear, this is not AI’s fault. Coda AI doesn’t [yet] give users control of the temperature parameter. You can compensate for this missing control by instructing Coda AI to be more creative and by ensuring it has historical context. That context can come from a table of previous inference results.

Avoiding repetition in Coda AI is surprisingly easy. You simply inform the AI of what it generated previously so that it doesn’t repeat itself. This is made possible by a circular reference to a named AI Block.

Using this elegant approach, we simply make the output of the previous inference an input to the next one.
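
Here’s a minimal Python sketch of the feedback-loop idea, standing in for the CFL circular reference (the function name and prompt wording are hypothetical illustrations, not Coda syntax):

```python
def build_prompt(base_prompt: str, previous_output: str) -> str:
    """Compose the next prompt so the model sees what it already produced."""
    return (
        f"{base_prompt}\n\n"
        f"Previously generated output:\n{previous_output}\n\n"
        "Produce a new response that does not repeat the output above."
    )

# previous_output plays the role of the named AI Block's last result.
print(build_prompt(
    "Write a one-line motivational quote about teamwork.",
    "Alone we can do so little; together we can do so much.",
))
```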

IMPORTANT: Some key terms in this prompt are circled. You must maintain parity between the labels in the prompt and the data values you provide.


That is indeed a smart solution; it shows the need to keep track of what we did.
We agree that access to ‘temperature’ might help as well, and Shishir himself was open to the idea, so I hope we will have access to it soon.
An alternative is to use a prompt table like the one I apply here and link back to all the previous answers. I updated this template and it has become easier to use; however, we need this natively in Coda for all our AI entries (today we have three, but Shishir in the video talked about a fourth as well).
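
A rough sketch of that prompt-table idea in Python (the list and names below are hypothetical stand-ins for rows in a Coda prompt table):

```python
# Keep every prior answer in a "table" (here, a simple list) and link the
# whole history back into each new prompt.
history: list[str] = []

def prompt_with_history(base_prompt: str) -> str:
    previous = "\n".join(f"- {answer}" for answer in history)
    return (
        f"{base_prompt}\n\n"
        f"All previous answers:\n{previous or '- (none yet)'}\n\n"
        "Avoid repeating any of the answers above."
    )

history.append("Alone we can do so little; together we can do so much.")
print(prompt_with_history("Write a one-line motivational quote about teamwork."))
```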


Yep - that’s what I intimated as a more comprehensive way to sustain context. However, “all previous answers” doesn’t scale; it will eventually overflow the prompt window, so you have to apply this approach with constraints. Coda certainly has all the functionality to capture, sustain, and leverage contextual history for prompts.

Newly relevant…

As mentioned by @Bill_French, there will be a scalability issue with this approach. It’s actually a hard limit, since all text-based AI models have a token limit of some amount, and it’s not feasible to, say, pass an entire table into your prompt as context.
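
One simple constraint, sketched in Python: include only the most recent answers that fit a rough character budget, a crude proxy for tokens (the function name and budget value here are arbitrary illustrations, not an actual Coda or model limit):

```python
def bounded_history(history: list[str], budget_chars: int = 2000) -> list[str]:
    kept: list[str] = []
    used = 0
    for answer in reversed(history):   # walk newest-first
        if used + len(answer) > budget_chars:
            break                      # stop before overflowing the budget
        kept.append(answer)
        used += len(answer)
    return list(reversed(kept))        # restore chronological order

# Usage: only the newest answers that fit the budget survive.
print(bounded_history(["a" * 1500, "b" * 800, "c" * 600]))
```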

Personally, I think vector databases on the backend could be a viable option. If you look at things like inworld.ai or AutoGPT, they store the AI’s “memory” in these vector databases, which lets the system quickly look up information as needed. It’s essentially a way to find similar data very quickly without having to scan an entire table.
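
A toy Python sketch of that vector-memory idea: embed each stored “memory,” then retrieve only the most similar entries instead of passing a whole table. Here, embed() is a hypothetical stand-in that fakes an embedding model with seeded random vectors:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Fake 16-dimensional embedding; a real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

def top_k(query: str, memories: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    # Rank every memory by cosine similarity to the query and keep the top k.
    return sorted(memories, key=lambda m: cosine(embed(m)), reverse=True)[:k]

print(top_k("quotes about teamwork", ["teamwork quote", "grocery list", "meeting notes"]))
```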


Indeed. I have maintained (since December 2022) that Coda AI will never be able to make the leap beyond the prompt window unless it fully embraces text embeddings, the foundational element of generative AI. I also believe that when (not if) such use of vectors occurs, Coda must expose the underlying capacity to perform similarity inferences in CFL. This is a critical step in the maturity of Coda AI’s capabilities, and I fully trust that @DavidK and team know this and are likely working toward it.

To get a good sense of what it means to quickly “look up” or filter on similarities, these articles provide a good dose of the underlying mechanics.
