OpenAI took too long to respond... why not wait a bit more?

Hey guys,

I’ve been using the OpenAI Pack and building some great docs with it. Combining Coda with GPT is really powerful.
However, the integration suffers from a problem that surely isn’t unique to me. Depending on the length of the prompt or the response, the system returns the fateful message:

[Screenshot, 2023-04-26: the error message returned by Coda]

We know this happens because OpenAI’s API genuinely takes longer to respond; that’s the nature of generative AI. It’s part of the game.
However, Coda treats this API like any other and interprets the delay as an error.
In practice, this makes it impossible to use the GPT-4 model, and even to generate longer texts with GPT-3.5 Turbo. It’s really frustrating not to get the API’s response to my prompt; I would gladly wait much longer for it!
Wouldn’t it be possible to make an exception for this particular integration and increase the allowed response time for the OpenAI API?
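For context, here is a minimal sketch of what an OpenAI call from a Pack roughly looks like (the formula name and parameters are illustrative, and I’m assuming the Chat Completions endpoint). The key point is that the time limit is enforced by Coda’s fetcher infrastructure, so nothing inside the Pack code can raise it:

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Packs must declare the domains they talk to.
pack.addNetworkDomain("api.openai.com");

// Each user supplies their own OpenAI API key as a bearer token.
pack.setUserAuthentication({
  type: coda.AuthenticationType.HeaderBearerToken,
});

pack.addFormula({
  name: "AskGPT",
  description: "Send a prompt to the Chat Completions API.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "prompt",
      description: "The prompt to send.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async ([prompt], context) => {
    // Coda's fetcher imposes a hard per-request time limit. A slow
    // GPT-4 completion that exceeds it fails right here, no matter
    // what the Pack author does.
    const response = await context.fetcher.fetch({
      method: "POST",
      url: "https://api.openai.com/v1/chat/completions",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: prompt }],
      }),
    });
    return response.body.choices[0].message.content;
  },
});
```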
The world is moving toward intensive use of AI in almost every context. I believe it would even be in Coda’s interest for doc makers to use the integration with their own OpenAI accounts, instead of using the native solution (hello, Alpha testers!) and consuming Coda tokens.
I hope the fantastic Coda team can come up with a solution to this problem soon!

4 Likes

In addition: in recent days, prompts that used to generate responses without issue are now taking longer and getting interrupted by the error.
I don’t know whether something changed on the OpenAI side or in the Coda integration.

A doc that was functional and that we use with our customers has stopped working.

Looking forward to a response from the technical team.

2 Likes

Have you had any luck resolving this, mate?

Couldn’t agree more.

At my company we have moved entirely to PaLM 2 and its various models over the past few weeks. We have essentially jettisoned OpenAI for any future development and we’re working toward full adoption of Google’s models or open-source models going forward.

I have successfully integrated embeddings, chat completions, and text completions using Gecko (the tiny model) and Bison (the moderately large LLM) with really good results. It’s in beta, so there are still some issues, but nothing we can’t manage pretty well. Using these inferencing APIs is very easy in Packs.
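As an illustration, a text completion against Bison from a Pack formula looks roughly like this (the endpoint and model name are from the public PaLM API beta; treat this as a sketch, not production code):

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

pack.addNetworkDomain("generativelanguage.googleapis.com");

// The PaLM beta authenticates with an API key passed as a query parameter.
pack.setUserAuthentication({
  type: coda.AuthenticationType.QueryParamToken,
  paramName: "key",
});

pack.addFormula({
  name: "PalmComplete",
  description: "Text completion using the text-bison-001 model.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "prompt",
      description: "The prompt to complete.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async ([prompt], context) => {
    const response = await context.fetcher.fetch({
      method: "POST",
      url: "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: { text: prompt } }),
    });
    // The beta API returns a list of candidates; take the first.
    return response.body.candidates?.[0]?.output ?? "";
  },
});
```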

Performance across the board is 3x that of OpenAI, and it’s consistent. We don’t see periods of sluggishness like we did on OpenAI’s models.

The cost? There isn’t any, so we’re taking advantage of that and performing more inferencing, which lets us do things that were financially impractical with OpenAI. Elon was right: AI costs will soon be less than the cost of accounting for them. It wants to be free, and it seems this has happened a lot sooner than anyone imagined.

Is PaLM 2 perfect? Hardly. But no LLM is. In my view, we know that we need AGI to be competitive, productive, and leaders in our business segment. We also realize that we need to be “platformative” concerning LLM and API implementation choices.

I would gladly build an AI Pack, but there’s no incentive to do so. Coda will likely produce one. Competing with the vendor in an aftermarket is risky. :wink:

4 Likes

I’ve had the same issue; now I know why. I’d love a fix for this. I just spent the last two hours building out a super useful workflow that simply won’t work because of this :confused:

The AI capabilities are the only reason I use Coda. I use Notion for just about everything else, but it can’t do what Coda can do with AI.

2 Likes

+1 for this topic.

Having similar issues. Previously I would only get this error when I attempted to use GPT-4, but now it’s happening with 3.5 Turbo too… and it’s intermittent.

BTW, is GPT-4 going to be an option?

1 Like

Unfortunately, if a single inference can’t complete within the allotted API call interval, you will see timeouts. I believe most inferences created with Coda’s AI are single requests, so you’re really at the mercy of the LLM.

I grew tired of sluggish AI systems and switched to Google’s PaLM 2. It seems to respond about 3x faster, and I’ve seen only a few occasions when it was busy enough to result in a failed request.

Since it also requires no paid account, I tend to break my AI processes into multiple stages. This speeds up each request while also giving me the latitude to perform long-running automations. I have a few agents that will run for 10 minutes and not time out. [more on that topic here]
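To make the staged approach concrete, here’s a rough sketch of the pattern (the formula names and the outline-then-sections split are purely illustrative, not how my agents are actually written): each call does one bounded piece of work, and a button column or automation chains the calls, so the total job can run far longer than any single request is allowed to.

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

pack.addNetworkDomain("generativelanguage.googleapis.com");
pack.setUserAuthentication({
  type: coda.AuthenticationType.QueryParamToken,
  paramName: "key",
});

// One bounded inference per call; each stays well inside the timeout.
async function complete(context: coda.ExecutionContext, text: string) {
  const response = await context.fetcher.fetch({
    method: "POST",
    url: "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: { text } }),
  });
  return response.body.candidates?.[0]?.output ?? "";
}

// Stage 1: produce a short outline (fast, small response).
pack.addFormula({
  name: "OutlineTopic",
  description: "Generate a numbered outline for a topic.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "topic",
      description: "Topic to outline.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async ([topic], context) =>
    complete(context, `Write a short numbered outline for: ${topic}`),
});

// Stage 2: expand a single outline item. An automation (or a button
// column) calls this once per row, so the overall process can run for
// many minutes without any individual request timing out.
pack.addFormula({
  name: "ExpandSection",
  description: "Write one section of the outline.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "section",
      description: "The outline item to expand.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async ([section], context) =>
    complete(context, `Write a detailed paragraph for: ${section}`),
});
```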

While OpenAI is the leader in LLM capabilities, I have learned that many of the open-source LLMs are powerful enough for many use cases. Ideally, Coda will make its AI features LLM-agnostic, allowing us to choose from the dozens (soon to be hundreds) of inferencing options available.

4 Likes

Just checked out ‘solve for x’ — looks brilliant! Can’t wait to take it for a spin.

1 Like

I guess I better get on with making it accessible.

2 Likes

This is [partly] what Solve for (X) became.