I’m a Notion user but I use Coda primarily for AI workflows as it’s 100x more capable than Notion in this area.
Unfortunately, with many of my submissions I’m getting an ‘OpenAI took too long to respond’ error due to the length of the responses I’m seeking. From what I’ve seen, this is a standard response for API calls that take more than X seconds to complete. Given that we’re seeing LLMs with context windows of more than 16k tokens (~12,000 words), Coda needs to make an exception to this limit so users can truly make the most of the tools you’re providing them.
Coda has a huge opportunity to steal customers from apps like Notion and ClickUp, and I see empowering users to build complex AI workflows as one way to do it.
I assumed this was the case. Unless you’re ready to use the Pack’s source code to build a more custom approach to your AI requirements, you don’t really have any options.
Chaining inference operations. Instead of a single, potentially lengthy OpenAI API call, split the work into several shorter calls, each with less chance of overrunning the runtime limit.
Embedding. Let fast, cheap vector similarity do much of the heavy lifting in milliseconds instead of relying solely on chat or text completions for everything.
Model choice. OpenAI was first and it is good. But there are many models that can work better, some with vastly larger context windows, far better performance, and near-zero cost.
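The chaining approach above can be sketched roughly like this. The `complete` callable is a hypothetical stand-in for whatever LLM call your Pack actually makes (e.g. an OpenAI chat completion); the section names and the 500-character context tail are illustrative choices, not anything Coda or OpenAI prescribes.

```python
from typing import Callable, List


def chain_completions(
    sections: List[str],
    complete: Callable[[str], str],
) -> str:
    """Run one short completion per section and join the outputs,
    instead of one long call that may hit the platform timeout."""
    outputs = []
    context = ""  # carry a short tail of the last output so later calls stay coherent
    for section in sections:
        prompt = f"{context}\n\nWrite the next part: {section}".strip()
        part = complete(prompt)
        outputs.append(part)
        # keep only a short tail as context to hold prompt size (and cost) down
        context = part[-500:]
    return "\n\n".join(outputs)


if __name__ == "__main__":
    # Stub in place of a real API call, so the sketch runs offline.
    stub = lambda prompt: f"[draft for: {prompt.splitlines()[-1]}]"
    print(chain_completions(["Intro", "Body", "Conclusion"], stub))
```

Each call stays small, so no single request approaches the timeout; the trade-off, as noted below, is more calls and therefore more cost.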
Same issue. Native AI is not usable for most of my use cases. I need the best possible LLM and I am willing to use my own key. Now I need to split requests into 4 steps, and it’s still only a partial solution, as sometimes even 2,000 tokens result in a timeout. It also costs four times more.
Any chance of a fix?