I’m a Notion user but I use Coda primarily for AI workflows as it’s 100x more capable than Notion in this area.
Unfortunately, with many of my submissions I’m getting an ‘OpenAI took too long to respond’ error due to the length of the responses I’m seeking. From what I’ve seen, this is the standard response for API calls that take more than X seconds to complete. Given that we’re seeing LLMs with context windows of more than 16k tokens (~12,000 words), Coda needs to make an exception to this limit for users to truly make the most of the tools you’re providing them.
Coda has a huge opportunity to steal customers from apps like Notion and ClickUp, and I see empowering users to build complex AI workflows as one way to do it.
In my view, the fix is not a longer timeout period, but simply support for streaming responses.
Coda’s AI implementation is relatively closed at the moment, so there’s no way to shape the underlying parameters or use different LLMs as I mention in this article.
The only remedy (today) is to build your own implementation in a Pack or use the OpenAI Pack.
I assumed this was the case. Unless you’re ready to use the source of the Pack to build a more custom approach to address your AI requirements, you don’t really have any options.
I overcome these roadblocks in a few ways, and I’m fortunate to have some experience building Packs and JavaScript applications. These are the guideposts that give me the development agility to create positive AI outcomes.
Chaining inference operations. Instead of a single, potentially lengthy OpenAI API call, break the work into several smaller calls, each with far less chance of overrunning run-time limits (see the first sketch after this list).
Embedding. Let the faster, cheaper inferencing benefits of vectors do a lot of the heavy lifting in milliseconds instead of relying solely on chat or text completions for everything (see the second sketch after this list).
Model choice. OpenAI was first and it is good, but there are many models that can work better, some with vastly larger context windows, far better performance, and near-zero cost.
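To illustrate the chaining idea, here’s a minimal sketch, assuming a Node 18+ helper running outside of Coda with an `OPENAI_API_KEY` environment variable. The `complete`/`chainedReport` helpers, the outline-then-sections prompts, and the model name are just illustrative, not a prescription:

```ts
// Minimal sketch of "chaining" instead of one long call: ask for an outline first,
// then expand each section with its own short request so no single call runs long
// enough to hit a timeout. Assumes Node 18+ (global fetch) and OPENAI_API_KEY.

async function complete(prompt: string, maxTokens: number): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // any chat model works; this is just an example
      messages: [{ role: "user", content: prompt }],
      max_tokens: maxTokens, // cap each step so it returns quickly
    }),
  });
  return (await res.json()).choices[0].message.content;
}

async function chainedReport(topic: string): Promise<string> {
  // Step 1: a fast, short call for the structure.
  const outline = await complete(
    `List 4 section headings for a short report on "${topic}", one per line.`, 150);

  // Steps 2..n: one small call per section instead of one giant completion.
  const parts: string[] = [];
  for (const heading of outline.split("\n").filter((l) => l.trim())) {
    parts.push(await complete(
      `Write ~150 words for the section "${heading}" of a report on "${topic}".`, 400));
  }
  return parts.join("\n\n");
}
```

Each step returns in a few seconds, so no single call risks the timeout, while the combined output can still run to thousands of words.

And for the embedding point, a small sketch of letting vectors do the heavy lifting: the /v1/embeddings endpoint plus plain cosine similarity, so only the most relevant snippets ever reach a slower, pricier chat completion. Again, the model name and helpers are just examples:

```ts
// Sketch: score stored snippets against a query with embeddings + cosine similarity.
// Assumes Node 18+ and an OPENAI_API_KEY environment variable.

async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  return (await res.json()).data[0].embedding;
}

// Cosine similarity: higher means more semantically similar.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```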
I think it’s 97.5%, so let’s not sensationalize! I 100% agree that doing AI and doing it well require implementation rigor that’s difficult to bake into a no-code world.
Same issue. Native AI is not usable for most of my use cases. I need the best possible LLM, and I’m willing to use my own key. Right now I have to split requests into 4 steps, and it’s still only a partial solution, as sometimes even 2,000 tokens result in a timeout. It also costs 4 times more.
Any chance of a fix?
Has anybody found a solution? Unfortunately, sometimes even 512-token prompts time out. :(
Is there any way to send a request to a service like make.com and then push the OpenAI reply into the table?
I mean: how do you implement a “streaming response” within a Coda formula (with the fetcher)?
(Correct me if I’m wrong, but the Coda AI Pack doesn’t use streaming responses.)
Edit: the 2 main questions I face for the code are:
How do you receive/handle the stream with the fetcher (after setting the request JSON to use it, I mean)?
How do you update the Coda UI with that stream?
(For example, if I populate a table column with a formula, how do I continuously update the column with the ongoing OpenAI response?)
Correct, and that’s why I have lobbied them to consider it, along with other ideas such as real-time data streaming (sockets).
The only way to integrate with a service that is either stupidly sluggish or so intensive that stream processing is required is to build it entirely outside of Coda and use the Coda API to inject the results.
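To make that concrete, here’s a rough sketch of that pattern (not anyone’s actual implementation): a small Node/TypeScript script that consumes OpenAI’s streamed response outside of Coda and then writes the finished text into a table through the Coda REST API (POST on the rows endpoint). The doc ID, table ID, and the “Result” column are placeholders for your own doc; the API keys come from environment variables.

```ts
// Rough sketch of the "build it outside Coda" approach:
// 1. stream the completion from OpenAI so nothing waits on one long-blocking call,
// 2. push the finished text into a Coda table through the Coda REST API.
// Assumes Node 18+ (global fetch), OPENAI_API_KEY and CODA_API_TOKEN env vars.
// DOC_ID, TABLE_ID, and the "Result" column are placeholders for your own doc.

const DOC_ID = "YOUR_DOC_ID";
const TABLE_ID = "YOUR_TABLE_ID";

async function streamCompletion(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: prompt }],
      stream: true, // server-sent events instead of one long-blocking response
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let text = "";
  let buffer = "";

  // The stream arrives as SSE lines: "data: {json}" ... ending with "data: [DONE]".
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      const payload = line.replace(/^data: /, "").trim();
      if (!payload || payload === "[DONE]") continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) text += delta; // partial text could also be pushed to Coda here
    }
  }
  return text;
}

// Insert the result as a new row via the Coda API.
async function pushToCoda(answer: string): Promise<void> {
  await fetch(`https://coda.io/apis/v1/docs/${DOC_ID}/tables/${TABLE_ID}/rows`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.CODA_API_TOKEN}`,
    },
    body: JSON.stringify({
      rows: [{ cells: [{ column: "Result", value: answer }] }],
    }),
  });
}

streamCompletion("Summarize the last board meeting in ~800 words.")
  .then(pushToCoda)
  .catch(console.error);
```

If you wanted the “continuously update the column” behavior asked about above, you could update an existing row from inside the streaming loop instead of inserting once at the end (the Coda API also has endpoints for updating rows), at the cost of many more API calls.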
Hmm… So I understand we are still stuck here if we want to find a solution by developing our own OpenAI Pack (at least I have mine now).
Btw, I tried the Perplexity API instead of OpenAI (to compare response speed), and the models they offer are quite a bit faster than gpt-4 (but they also don’t respond as well to complex prompts).
I feel that products offering access to multiple LLMs will be very interesting in the future…
Perplexity is also able to be gamed. I have seen many cases where you can publish content and watch the LLM’s answers update within a few weeks.
First, you have to think about the limitations of a Pack. It can only run in certain contexts inside Coda, so it’s a poor pathway at the outset.
Coda AI developers need to expand the interface capabilities and become LLM-agnostic. And the CFL team needs to expose text embeddings.