Coda AI Models :(

Love Coda AI in so many ways, and I have one important piece of feedback:

The native Coda AI function is SOOO much easier to use overall than the OpenAI pack.

BUT I’m using the OpenAI pack way more because of the models, which is kind of frustrating because it’s so much more difficult to use.

The Coda AI model seems to be some older completion model like davinci, or a model that has been fine-tuned for Coda. I totally get this decision, especially from a cost perspective on Coda’s side; right now it’s still all free.

But the model is so bad compared to the models I actually want to use (GPT-3.5 16k and soon GPT-4) that I just don’t use it for most things.

It would be AMAZING if I could just plug my API key into the native Coda AI, use the model I want, and pay for it myself. I realize this may not be possible with how it’s built, but just FYI.

This usability gap will grow even more extreme when the GPT-4 API comes out in a week. And it just makes me sad to still have to use the pack all the time, when I know how awesome the native AI is.

For example, I explicitly told the native AI NOT to use “protein” or ANY numbers in its response. Its response:
“A medium-sized serving of chicken breast with a protein content that is comparable to a 5 oz steak.”

No matter what I say, it just can’t do it, which makes it unusable for me. And there are so many of these examples. GPT-3.5 via the pack runs absolute laps around the Coda model and gets the job done so much better.


Interesting feedback.
I created my own Google AI pack, and the payment runs via my own keys in the Google console.
It works, but since it is not native, it is not as useful.
As @Bill_French mentioned elsewhere, it would be great if we could select the AI of our choice, and I added that I’d like to see options like the ones below included, which we could play with per model and per usage (a rough sketch of wiring them into a Pack formula follows the snippet).

parameters: {
  temperature: 0.2,
  maxOutputTokens: 256,
  topP: 0.8,
  topK: 40
}
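
To make it concrete, here is a minimal sketch of how per-call parameters like these could be passed through a Pack formula using the Packs SDK fetcher. The PaLM endpoint, model name, and response fields shown are illustrative assumptions on my part, not a definitive implementation:

import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// The Generative Language API host; the API key is appended as ?key=... by the auth config.
pack.addNetworkDomain("generativelanguage.googleapis.com");
pack.setUserAuthentication({
  type: coda.AuthenticationType.QueryParamToken,
  paramName: "key",
});

pack.addFormula({
  name: "GenerateText",
  description: "Generate text with a Google model, with tunable sampling parameters.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "prompt",
      description: "The prompt to send to the model.",
    }),
    coda.makeParameter({
      type: coda.ParameterType.Number,
      name: "temperature",
      description: "Sampling temperature (0-1). Defaults to 0.2.",
      optional: true,
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([prompt, temperature], context) {
    // Per-call generation settings, matching the parameters listed above.
    const body = {
      prompt: {text: prompt},
      temperature: temperature ?? 0.2,
      maxOutputTokens: 256,
      topP: 0.8,
      topK: 40,
    };
    const response = await context.fetcher.fetch({
      method: "POST",
      // Assumed PaLM text endpoint; adjust the model name to whatever you have access to.
      url: "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText",
      headers: {"Content-Type": "application/json"},
      body: JSON.stringify(body),
    });
    return response.body.candidates?.[0]?.output ?? "";
  },
});

The other parameters could be exposed the same way, as additional optional formula parameters, which is exactly the kind of per-model, per-usage control I’d like to see in the native Coda AI.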

It would be nice to read some Coda feedback on this issue.

Cheers, Christiaan


LLMs do not respond well to instructions about what not to do. You have to explicitly describe how you do want them to behave. Share your prompt.
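
For example, instead of “do NOT use the word protein or any numbers,” describe the output you do want. A rough sketch against the standard OpenAI chat completions endpoint (the wording is just an illustration, not what Coda AI does internally):

// Positive framing: describe the desired output instead of listing what to avoid.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    temperature: 0.2,
    messages: [
      {
        role: "system",
        // Instead of "do not say 'protein' or use numbers", state the desired style.
        content:
          "Describe portion sizes using only everyday object comparisons, " +
          "such as 'about the size of a deck of cards', written entirely in " +
          "plain, qualitative language.",
      },
      {role: "user", content: "Suggest a serving of chicken breast."},
    ],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);

Whether the model fully respects the constraint still varies, but positive phrasing gets you much further than a list of “don’ts.”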

Precisely why I said this a few days ago.

GPT-3.5 and especially GPT-4 respond very well for me when I instruct them on what not to do, which is why I’m using the packs instead of Coda AI.

DALL·E… not so much, lol.

Great write-up on AI, thanks for sharing!

I think Coda AI is GPT-3.5.

I thought it was too.

But for whatever reason it gives vastly different results, so it might be GPT-3, or just tuned in a particular way. Coda AI seems to do a better job of staying within sentence lengths and a few other things, but then there are outputs that are just very bad. Maybe the temperature is set a bit lower than normal too. Not sure, but it doesn’t seem so great for a lot of things.

And by “trained” you mean the few-shot prompt has some custom magic that Coda generates for every inference, right?

Agreed. They need to be more transparent about these settings, because they may shape how we throw prompts at it.
