Coda AI: Input is too large error

I love Coda and have been really liking the Coda AI feature!

Many LLM APIs have an input token limit that blocks the user from entering too many tokens.

This limits the context the user can provide and sometimes it’s not as simple as just shortening the input.

If possible, it would be nice if Coda AI worked around this token limit in a way that kept the user experience smooth and still reasonably accomplished what the user wanted to do.

Thanks for considering!

Unfortunately, you are asking Coda to solve something that even OpenAI cannot solve. Coda, by virtue of its tight coupling with OpenAI, is limited to the features of their APIs, and token ceilings exist that cannot be changed.

This is one reason I have built all my AI applications to float on arbitrary LLMs. For example, my research with Solve for (x) uses PaLM 2, Google’s LLM, which has no token ceiling, or inferencing costs for that matter.

As to prompt size, it’s never wise to depend on large learner shots; inferences tend to work better when the questions are posed with focused information. If you have large prompts, consider breaking the process into multiple inference shots to build the outcomes you need, as sketched below. This is possible in Coda right now with the AI tools they provide.
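To make that concrete, here is a minimal sketch of the split-and-recombine idea in Python. The `call_llm` helper is hypothetical, a stand-in for whichever provider’s API you use, and the character-based chunk size is only a rough proxy for tokens:

```python
# Hypothetical stand-in for whichever completion API you use;
# not a real Coda or OpenAI function.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your provider's API")


def summarize_large_text(text: str, chunk_size: int = 8000) -> str:
    # Split the source text into pieces small enough to clear the
    # model's token ceiling (chunk_size counts characters here,
    # which is only a rough proxy for tokens).
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

    # First pass: one focused inference per chunk.
    partials = [call_llm(f"Summarize this passage:\n\n{chunk}") for chunk in chunks]

    # Second pass: recombine the partial outputs into one result.
    combined = "\n\n".join(partials)
    return call_llm(f"Merge these partial summaries into one summary:\n\n{combined}")
```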

Chain-of-Thought Inferencing

Imagine a table with three inferencing steps, each dependent on the output of the previous one. As soon as the first inference is complete, fire off the second one using the output of the first, and so on. This is ostensibly the process that LangChain and AutoGPT provide, but with far simpler machinery. It is a simple approach that is within reach of every Coda user today - you just need to build the chain and the dependencies using Coda formulas and AI features.
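Here is a minimal sketch of that three-step chain in Python. Again, `call_llm` is a hypothetical helper rather than a real Coda or OpenAI function; in Coda itself, each step would be an AI column whose prompt references the previous column:

```python
# Hypothetical stand-in for whichever completion API you use;
# not a real Coda or OpenAI function.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your provider's API")


def three_step_chain(topic: str) -> str:
    # Step 1: first AI column - generate an outline.
    outline = call_llm(f"Write a brief outline about: {topic}")

    # Step 2: second AI column, whose prompt references step 1's output.
    draft = call_llm(f"Expand this outline into a draft:\n\n{outline}")

    # Step 3: third AI column, dependent on step 2's output.
    return call_llm(f"Edit this draft for clarity and return the final text:\n\n{draft}")
```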
