Coda AI new credit limit system, buy extra credits?

Hi @Nate_Gerber2, we will share some updates soon on what we’ve been working on behind the scenes, but in the meantime we want to do everything we can to help. Can you contact our support team at support@coda.io? This will get a ticket logged so we can dig in further. Thank you!

1 Like

Hi everyone,

Thank you again for all the candid feedback on AI credits that you’ve shared with us through the community and other channels. Just to reiterate: we’ve read and discussed every comment. We’ve been working diligently on some updates behind the scenes, but I thought it might be helpful to offer some early transparency.

Taking a step back, our goal was for AI to feel included for the vast majority of makers, while setting some limits to manage costs in extreme scenarios. We wanted to design a system that scales and can cater to teams with power users, while still giving other team members the flexibility to try out AI.

That said, we understand the credit system has created some challenges, particularly for soloists and small teams that use AI a lot, since they don’t have additional teammates to add as Doc Makers. We also heard your feedback about wanting better visibility into where credits are going, so they can be aligned with the use cases that provide the highest value. Given this, we’ve fast-tracked some updates, which we’re hard at work building:

  • We’re going to introduce AI credit packages so any team, regardless of size, can get the credits they need. For those who’d rather not think about credits at all, we will include an option to purchase “unlimited AI” for your workspace, billed as a monthly charge per Doc Maker.
  • We’re adding self-serve dashboards to help you understand where your credits are going. This has been on our roadmap, but it’s now been fast-tracked. If you have any usage questions in the meantime, our support team is happy to dig into your account.
  • We’re creating a richer preview window for AI column prompts. Currently, if you add AI to a table column, you can see a small preview, which does not deduct any credits. We’ve been working on a larger and more detailed interface, so you can feel confident in the output before using credits to fill every row.
  • We’re making updates throughout the product to improve credit efficiency. We’re changing some of the default settings and doing further research on how we can help direct credit usage toward the places where makers get commensurate value.

We have additional product updates in the works that will help makers get even more value from Coda AI. Thank you again to everyone who has been engaging with us; we’ll share more news with you all as soon as we can.

Thanks,
David
Coda AI Product Lead

15 Likes

Re: Coda AI, I was hoping to be able to use it for the following kind of use case. It currently doesn’t work for this at all, and it is clear that it would cost a boatload of credits even if it did work.

Use case:
I have two tables that have similar columns and similar row values, but they are not the same.
(Specifically, my use case is comparing high school math standards from different states. The text is very similar but not identical, the IDs are different, and the ordering is different.)
I want to use an AI tool to build a relation from one table to the other and link the standards that are similar.

This is something a vector-embedding database should be able to do, and an LLM should be able to do as well (even if you have to chunk up the context window).
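
To make the idea concrete, here’s a rough sketch of what I’m imagining, done with generic sentence embeddings outside of Coda. The table contents, model choice, and the 0.75 similarity cut-off are just illustrative placeholders (not anything Coda provides), and you’d still have to write the matches back into a relation column yourself.

```python
# Rough sketch: link similar standards across two tables using sentence
# embeddings. All data and the 0.75 threshold are illustrative placeholders.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
import numpy as np

state_a = [
    {"id": "A.REI.4", "text": "Solve quadratic equations in one variable."},
    {"id": "F.IF.7",  "text": "Graph functions and show key features of the graph."},
]
state_b = [
    {"id": "MA.912.AR.3.8", "text": "Solve one-variable quadratic equations."},
    {"id": "MA.912.F.1.1",  "text": "Graph a function and identify its key features."},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb_a = model.encode([r["text"] for r in state_a], normalize_embeddings=True)
emb_b = model.encode([r["text"] for r in state_b], normalize_embeddings=True)

# With unit-normalized vectors, cosine similarity is just a dot product.
similarity = emb_a @ emb_b.T

for i, row_a in enumerate(state_a):
    j = int(np.argmax(similarity[i]))
    if similarity[i, j] >= 0.75:  # arbitrary cut-off; tune for your data
        print(f"{row_a['id']}  <->  {state_b[j]['id']}  (score {similarity[i, j]:.2f})")
```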

1 Like

Thanks for the update @DavidK .

I don’t get the sense from this message or any previous messages that Codans understand this.

Maybe you can clarify the nuances surrounding generative AI and how Coda views the subtle persona differences between Makers who are power users vs Makers who also seek unknown and uncharted pathways to innovate with Coda AI.

2 Likes

Oh, no, I understand it. I was considering cancelling my plan until I saw this. In teams with many people, there are larger numbers of credits available, and it sounds like those teams have “power users” who consume the majority of the credits. With a small team there is no such possibility, and adding new Doc Makers just to get a few more credits becomes incredibly expensive.

I tend to use it for tasks related to datasets requiring text-based analyses, rather than a finished product, if that helps.

Hi everyone,

I’m an Engineer on the Coda AI team. As David mentioned, we’ve been working on a richer preview window for AI columns, making it easier to view AI-generated results based on your current prompt.

These previews do not consume any credits, so feel free to iterate on the prompt as much as you’d like and to view the corresponding results for each row. When you’re happy with the output, you can select Fill column to apply it to all rows.

Our team will continue to share broader updates on Coda AI functionality and the credit packages. Let us know if you have any questions about the new preview window.

3 Likes

Hi @Nikil_Ragav, does it work if you give the AI prompt more instructions, such as not requiring an exact match? Here’s a quick video of using a prompt to pick a value from a relation table. Our support team (support@coda.io) is also happy to take a look!

Hi Bill, to share my view, we want to encourage both types of makers. Our goal is to help makers use Coda AI for any scenario that adds value (but to let you decide what that value looks like), whether that’s small but tedious tasks like data cleanup or exploring new pathways as you mentioned. Some of the product updates (like the preview window Walter just shared) should make it easier to iterate on prompts that can be applied across your data, so you can test new ideas or make sure the output is what you want without using any credits.

That’s wonderful news! When will this policy be launched?

The new preview window is available now, and the updates to the credit packages will be coming in the next few weeks. If you don’t see the new preview window in your workspace yet, you should soon; try refreshing the page.

4 Likes

That’s great. I saw it in columns, but not in blocks. I suspect all testing in blocks will still consume AI credits. True?

Has anyone - Codans or otherwise - determined the comparative cost of an inference performed with Coda AI vs a Pack integrated with GPT-3.5?

I struggle to go anywhere near Coda AI [now] for critical inferencing requirements at scale because I cannot predict the reliability (will it suddenly be disabled?) or the cost. While pack-based inferencing is less ideal and in some cases not possible, it is both reliable and predictable as to cost.
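
To illustrate what I mean by predictable: with a Pack wired to GPT-3.5, the cost of a fill is simple arithmetic on token counts that you can do before spending anything. The per-1K-token rates and row counts below are placeholders, not actual published pricing; substitute whatever your provider currently charges.

```python
# Back-of-envelope cost estimate for a Pack-based GPT-3.5 call.
# Rates are assumed placeholders; substitute the provider's published pricing.
INPUT_PER_1K = 0.0015    # USD per 1K prompt tokens (assumed)
OUTPUT_PER_1K = 0.002    # USD per 1K completion tokens (assumed)

def call_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens / 1000) * INPUT_PER_1K + (completion_tokens / 1000) * OUTPUT_PER_1K

# e.g. filling 500 rows, each using ~400 prompt and ~150 completion tokens
per_row = call_cost(400, 150)
print(f"per row: ${per_row:.5f}, 500 rows: ${per_row * 500:.2f}")
```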

3 Likes

So this thing with Coda AI. I noticed that my allotment of 1,000 credits renewed today, so the option to chat via a page appeared again. I tried to combine two summaries (as a test), and just like that, all the credits were used up. Honestly, this is a joke. Change the pricing model or I’ll go back to Notion when it comes to the benefits of integrated generative AI. In your case it is completely unclear why so many credits get burned at once. Even if I upgrade again now, I will get 3,000 credits. Is that enough for six summaries? Completely useless approach. How do you actually calculate that? I politely ask for a qualified answer.

5 Likes

I can only support the request. The 9,000 credits we have with three Doc Makers are usually used up within the first week. The approach of buying additional credits by adding new Doc Makers is absurd, as it also leads to follow-on costs for Packs. After the beta we had to rebuild all the documents built with Coda AI and use the OpenAI Pack instead. Please urgently offer credit packs for purchase. Currently this is really the stupidest strategy Coda could use (not to mention that the credits are not enough for anything).

2 Likes

As inconvenient and troubling as this economics episode has been, I am fascinated by the challenge that all software companies must now deal with. This is a problem to be admired because there are no obvious answers.

Today, there’s a severe shortage of GPUs. If you owned NVIDIA stock before Nov 20, 2022, you know this well. In 2028, there will likely be a glut of GPUs, but until then, GPU time will be at a premium. Currently, an H100 goes for about $2.50/hr. By 2030, it is projected to be about $0.025 per hour. So, even if Coda had cash stacked up like firewood out back, they couldn’t offer us unlimited inferencing.
Given these constraints, I can say with some certainty that I know what isn’t the answer. :wink:

Forcing customers to worry about limits based on an arbitrary conversion rate is as likely to trigger non-consumption as it is to force them to find another way. Look no further - it’s already happening.

Generative AI, as delightful as it may be, could lose adoption momentum as quickly as it gained it.

Big-tech firms will be able to bridge the GPU shortfall. For smaller firms, this will be a big challenge. Like Coda, all the no-code platforms will need to find a balance - an algorithm that their CFOs can live with and that won’t force customers to become GPU consumption police.

Both credits and tokens are a poor measure of GPU consumption. Imagine an ointment for sore knees with an eyedropper applicator, where a light dropper squeeze costs $0.0000238 and a firm squeeze costs $0.000182. How would consumers react to this?

No consumer could easily do the math well enough to feel safe about purchasing this product. Every drop of the ointment would need to be pure magic, or the pharmacist would need to find another approach. Ergo, for generative AI to be sold with a similar pricing model, it would have to be flawless.

Coda’s AI is neither unique nor disruptive. It is now officially the incumbent expectation. Generative AI has quickly advanced from quaint to basic hygiene. You must have it to remain competitive.

I did as well. I had no choice. Coda’s reaction time did not match the critical adoption of generative AI.

Beyond raw inference energy consumption, all no-code platforms have yet another complexity in the pricing model - development activities. Unlike users, developers are the curve ball of pricing because, as @Christiaan_Huizer makes clear, it can take multiple attempts to create a working prompt, burning through many kilowatt-hours before a viable generative AI approach is established, if at all.

Companies with multiple personas of generative AI users must factor in the cost of letting good use cases emerge without taxing the developers who will ultimately drive greater generative AI adoption for the consuming personas.

This leads me to one obvious conclusion that appears to be selfishly biased - Makers should incur no costs associated with the development and testing of generative AI systems. Taxing this activity in any way will result in a constrained and choking policy on innovation. To be fair, Coda has recently made it possible to test inferences in a cell without incurring token charges. However, this defense against potential inferencing waste is nonexistent in AI Blocks and Chat, places where lots of experimentation also occurs.

In my view, and having given this very little thought, the answer is obvious.

  • The retailer (Coda) must segment generative AI consumers and apply pricing according to three strata - good, better, and best.
  • The purchase classes must be fluid; if you drift from the good class into the better class, your cost is not impacted until the pattern has held consistently over three billing periods (a rough sketch of this rule follows the list).
  • The class your utilization falls into must be evident in the product - a pulsating color code should provide a real-time indication of generative AI consumption intensity.
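
To make the second bullet concrete, here’s a toy sketch of the fluid-class rule. The tier names come from above, but the credit ceilings and the choice of exactly three periods are my own illustrative assumptions, not anything Coda has announced.

```python
# Toy sketch of the "fluid class" idea: billing only moves to a new tier
# after usage has landed in that tier for three consecutive billing periods.
# Tier ceilings (credits per period) are invented for illustration.
TIERS = [("good", 3_000), ("better", 10_000), ("best", float("inf"))]

def tier_for(credits_used: int) -> str:
    for name, ceiling in TIERS:
        if credits_used <= ceiling:
            return name
    return TIERS[-1][0]

def billing_tier(usage_history: list[int], current_tier: str) -> str:
    """Tier to bill, given per-period credit usage (newest last)."""
    recent = [tier_for(u) for u in usage_history[-3:]]
    if len(recent) == 3 and len(set(recent)) == 1 and recent[0] != current_tier:
        return recent[0]   # the drift held for three periods, so move tiers
    return current_tier    # otherwise billing is unaffected by the drift

print(billing_tier([2_500, 4_800, 5_200], "good"))  # "good": only two periods in "better"
print(billing_tier([4_800, 5_200, 6_100], "good"))  # "better": consistent for three periods
```
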
5 Likes

Well done @Walter_Rhee!
This is a great initiative and not just helpful but cost-saving :clap:

Can I suggest that you incorporate this information into your customer support management? I’ve been in touch with customer support about my frustration with the credit system, but did not get any signal that this was on your radar or that a solution was in the works. Then it occurred to me to search the community to see who else is frustrated. Maybe you could include a link to this page in your reply to customers who are aggravated with this situation, or just provide the full update on your game plan?

For others who are trying to hunt down the AI that’s consuming their credits, I described the least-bad process I’ve found in this thread:

One thing I found while doing this is that there are docs consuming credits even though I have not touched them in months. I’m concerned this means credits get consumed by background processes in inactive docs, so I am very motivated to empty all the AI out of my docs. As someone in that other thread pointed out, the beta period encouraged me to run TONS of AI experiments that are now incurring very real costs.

Going forward my plan is to use AI as a one-off enhancement, then turn off and remove immediately. I won’t leave active AI columns or functions anywhere.

1 Like

Thanks for the feedback! If you’ve configured the “auto” toggle to be on for an AI column, the column can recompute whenever its inputs change. For example, this could happen with an AI column in a Pack table. We’re working on improved AI credit analytics to make it easier to see where credits are getting spent. We would also love your feedback on whether “auto” should be off or on by default. Appreciate your thoughtfulness!

1 Like

definitely vote for off by default. Thanks!

1 Like

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.