Coda AI: "Input is too large" error

Unfortunately, you are asking Coda to solve something that even OpenAI cannot solve. Because Coda is tightly coupled with OpenAI, it is limited to what OpenAI's APIs offer, and those APIs have token ceilings that cannot be changed.

This is one reason I have built all my AI applications to float on arbitrary LLMs. For example, my research with Solve for (x) uses PaLM 2, Google's LLM, which currently has no token ceiling or, for that matter, inferencing costs.

As to prompt size, it's rarely wise to depend on large few-shot prompts; inference tends to work better when questions are posed with focused information. If you have a large prompt, consider breaking the process into multiple inference shots that build up the outcome you need, as sketched below. This is possible in Coda right now with the AI tools it provides.
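Here's a minimal sketch of the decomposition idea in Python. It assumes a hypothetical `call_llm` helper standing in for whatever completion API you use; it is not a Coda or OpenAI function, and the chunking-by-characters approach is deliberately naive.

```python
# Sketch: split one oversized prompt into several focused inference
# calls. `call_llm` is a hypothetical stand-in for your LLM API.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM and return its completion."""
    raise NotImplementedError

def summarize_in_parts(document: str, chunk_size: int = 2000) -> str:
    # Break the large input into focused chunks (naive character split).
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    # One focused inference per chunk instead of one oversized prompt.
    partial_summaries = [
        call_llm(f"Summarize the following passage:\n\n{chunk}")
        for chunk in chunks
    ]
    # A final pass combines the focused outputs into the outcome you need.
    return call_llm("Combine these partial summaries into one summary:\n\n"
                    + "\n\n".join(partial_summaries))
```

Each individual call stays well under the token ceiling, and the final combining pass only sees the short intermediate outputs, not the full source text.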

Chain-of-Thought Inferencing

Imagine a table with three inferencing steps, each one dependent on the output of the previous one. As soon as the first inference completes, fire off the second one using the output of the first, and so on. This is essentially the process that LangChain and AutoGPT provide, but with far simpler machinery. It is a simple approach within reach of every Coda user today: you just build the chain and its dependencies using Coda formulas and AI features, roughly like the sketch below.
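As a rough illustration, here is the same three-step chain expressed in Python. In Coda you would model each step as an AI column whose prompt references the previous column; the hypothetical `call_llm` helper below stands in for that, and the specific prompts are illustrative only.

```python
# Sketch: a three-step inference chain, mirroring a Coda table where
# each AI column's prompt references the previous column's output.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM and return its completion."""
    raise NotImplementedError

def three_step_chain(topic: str) -> str:
    # Step 1: the first inference produces the raw material.
    facts = call_llm(f"List the key facts about {topic}.")
    # Step 2: fires as soon as step 1 completes, consuming its output.
    outline = call_llm(f"Organize these facts into an outline:\n\n{facts}")
    # Step 3: depends on step 2 and produces the final outcome.
    return call_llm(f"Write a short report following this outline:\n\n{outline}")
```

Because each step only carries forward the previous step's (short) output rather than the entire history, the chain sidesteps the token ceiling that a single monolithic prompt would hit.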
