Coda AI new credit limit system, buy extra credits?

Hi @Nikil_Ragav does it work if you give the AI prompt more instructions, such as not requiring an exact match? Here’s a quick video of using a prompt to pick a value from a relation table. Our support team (support@coda.io) is also happy to take a look!

Hi Bill, to share my view: we want to encourage both types of makers. Our goal is to enable makers to use Coda AI for any scenario that adds value (while letting you decide what that value looks like), whether that’s small but tedious tasks like data cleanup or exploring new pathways as you mentioned. Some of the product updates (like the one Walter just shared) should make it easier to iterate on prompts applied across your data, so you can test new ideas or verify the output is what you want without using any credits.

That’s wonderful news! When will this policy be launched?

The new preview window is available now, and the updates to the credit packages will be coming in the next few weeks. If you don’t see the new preview window in your workspace, you should soon, or just refresh the page.

4 Likes

That’s great. I saw it in columns, but not in blocks. I suspect all testing in blocks will still consume AI credits. True?

Has anyone - Codans or otherwise - determined the comparative cost of an inference performed with Coda AI vs a Pack integrated with GPT-3.5?

I struggle to go anywhere near Coda AI [now] for critical inferencing requirements at scale because I cannot predict its reliability (will it suddenly be disabled?) or its cost. While Pack-based inferencing is less ideal and in some cases not possible, it is both reliable and predictable in cost.

3 Likes

So, this thing with Coda AI. I noticed that my allotment of 1,000 credits renewed today, so the option to chat via a page appeared again. I tried to combine two summaries (as a test), and just like that, all the credits were used up. Honestly, this is a joke. Change the pricing model or I’ll go back to Notion for the benefits of integrated generative AI. It is completely unclear why so many credits are consumed at once. Even if I upgrade again now, I will get 3,000 credits. Is that enough for six summaries? A completely useless approach. How do you actually calculate this? I politely ask for a qualified answer.

5 Likes

I can only second this request. The 9,000 credits we have across three Doc Makers are usually used up within the first week. Buying additional credits by adding new Doc Makers is absurd, since it also leads to follow-up costs for the Packs. After the beta, we had to rebuild all documents built with Coda AI to use the OpenAI Pack instead. Please urgently offer credit packs for purchase. Currently this is really the stupidest strategy Coda could pursue (not to mention that the credits aren’t enough for anything).

2 Likes

As inconvenient and troubling as this economics episode has been, I am fascinated by the challenge that all software companies must now confront. This is a problem to be admired because there are no obvious answers.

Today, there’s a severe shortage of GPUs. If you owned NVIDIA stock before Nov 20, 2022, you know this well. In 2028, there will likely be a glut of GPUs, but until then, GPU time will be at a premium. Currently, an H100 goes for about $2.50/hr. By 2030, it is projected to be about $0.025 per hour. So, even if Coda had cash stacked up like firewood out back, they couldn’t offer us unlimited inferencing.
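Those two price points imply a steep annual decline. Here is a minimal sketch of that arithmetic, assuming (my assumption, not the post’s) a smooth exponential drop over roughly six years:

```python
# Implied annual price decline for H100 GPU time, using the figures above:
# ~$2.50/hr today, projected ~$0.025/hr by 2030.
# Assumption: the decline is a smooth exponential over ~6 years.
current = 2.50   # $/hr now
future = 0.025   # $/hr projected by 2030
years = 6

# Constant yearly multiplier that takes the price from `current` to `future`.
annual_factor = (future / current) ** (1 / years)
decline_pct = (1 - annual_factor) * 100

print(f"~{decline_pct:.0f}% price drop per year")  # → ~54% price drop per year
```

A 100x drop in six years works out to prices roughly halving every year, which is why unlimited inferencing isn’t on the table yet but may be by decade’s end.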
Given these constraints, I can say with some certainty that I know what isn’t the answer. :wink:

Forcing customers to worry about limits based on an arbitrary conversion rate is as likely to trigger non-consumption as it is to push them to find another way. Look no further - it’s already happening.

Generative AI, as delightful as it may be, could lose adoption momentum as quickly as it gained it.

Big-tech firms will be able to bridge the GPU shortfall; for smaller firms, this will be a big challenge. Like Coda, all the no-code platforms will need to find a balance - an algorithm that their CFOs can live with and that won’t force customers to become GPU consumption police.

Both credits and tokens are a poor measure of GPU consumption. Imagine an ointment for sore knees with an eyedropper applicator, priced at $0.0000238 for a light dropper squeeze but $0.000182 for a firm squeeze. How would consumers react to this?

No consumer could easily do the math well enough to feel safe about purchasing this product. Every drop of the ointment would need to be pure magic, or the pharmacist would need to find another approach. Ergo, for generative AI to be sold with a similar pricing model, it would have to be flawless.

Coda’s AI is neither unique nor disruptive; it is now officially the incumbent expectation. Generative AI has quickly advanced from quaint to basic hygiene. You must have it simply to remain competitive.

I did as well. I had no choice. Coda’s reaction time did not match the pace of generative AI adoption.

In the strata of inference energy consumption, all no-code platforms face yet another complexity in the pricing model - development activities. Unlike users, developers are the curve ball of pricing because, as @Christiaan_Huizer makes clear, it can take multiple attempts to create a working prompt, blowing through many kilowatt-hours before a viable generative AI approach is established, if one is at all.

Companies that have multiple personas of generative AI users must factor in the cost of seeing good use cases emerge without taxing those developers who will ultimately trigger greater generative AI adoption for the consuming personas.

This leads me to one obvious conclusion, which admittedly appears selfishly biased - Makers should incur no costs for the development and testing of generative AI systems. Taxing this activity in any way will constrain and choke innovation. To be fair, Coda has recently made it possible to test inferences in a cell without incurring credit charges. However, this safeguard against potential inferencing waste does not exist in AI Blocks or Chat, places where a lot of experimentation also occurs.

In my view, and having given this very little thought, the answer is obvious.

  • The retailer (Coda) must segment generative AI consumers and apply pricing according to three strata - good, better, and best.
  • The purchase classes must be fluid; if you drift from the good class into the better class, your cost is not impacted until such time as this pattern is consistent over three billing periods.
  • The class your utilization falls into must be evident in the product - a pulsating color code should provide a real-time indication of generative AI consumption intensity.
5 Likes

Well done @Walter_Rhee !
This is a great initiative and not just helpful but cost saving :clap:

Can I suggest that you incorporate this information into your customer support workflow? I’ve been in touch with customer support about my frustration with the credit system, but got no signal that this was on your radar or that a solution was in the works. Then it occurred to me to search the community to see who else is frustrated. Maybe you could include a link to this page in your replies to customers who are aggravated by this situation, or just provide the full update on your game plan?

For others who are trying to hunt down the AI that’s consuming their credits, I described the least-bad process I’ve found in this thread:

One thing I found while doing this is that there are docs that are consuming credits even though I have not touched them in months. I’m concerned this means credits get consumed as background processes in inactive docs, so I am very motivated to empty all the AI out of my docs. As someone in that other thread pointed out, the beta period encouraged me to run TONS of AI experiments that are now incurring very real costs.

Going forward my plan is to use AI as a one-off enhancement, then turn off and remove immediately. I won’t leave active AI columns or functions anywhere.

1 Like

Thanks for the feedback! If you’ve turned the “auto” toggle on for AI columns, they can recompute when their inputs change. For example, this could happen with an AI column in a Pack table. We’re working on improved AI credit analytics so you can better see where credits are being spent. We’d also love your feedback on whether “auto” should be off or on by default. Appreciate your thoughtfulness!

1 Like

definitely vote for off by default. Thanks!

1 Like

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.