I am using the Coda API to store answers from an external form for many users, and I am having trouble with the rate limit. Even after implementing retries with exponentially backed-off delays, I am still getting Too Many Requests (429) errors. I was confused why this was happening with such a small load, so I decided to test the rate limit myself. For some reason, my GET requests (reading rows) are limited to 200/minute, but my POST requests (inserting rows) are limited to only 10/minute! Why is this happening?
Note that I am using the coda-js wrapper library. This should not affect the rate limit though.
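For reference, here is a minimal sketch of the kind of retry wrapper I mean. This is simplified for illustration and not part of the coda-js API; `doRequest` stands in for any call that throws an error carrying an HTTP `status`:

```javascript
// Retry a request with exponential backoff when we hit a 429.
// `doRequest` is any async function that throws an error with a
// numeric `status` property on failure (names here are illustrative).
async function withBackoff(doRequest, maxRetries = 5, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err) {
      // Only retry rate-limit errors, and give up after maxRetries.
      if (err.status !== 429 || attempt >= maxRetries) throw err;
      // Exponential backoff with jitter: ~0.5s, ~1s, ~2s, ~4s, ...
      const delay = baseDelayMs * 2 ** attempt * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Even with this in place, the inserts still trip the limit once traffic goes above a handful of requests per minute.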
Hi @Grace_Dessert - Welcome to the Coda community, although I wish it were under better circumstances. 10 requests per minute does seem quite low; I’ll have to dig into this with the engineers to see whether it’s intentional. Which API endpoint and/or library function are you using when you hit this limit? And what level of traffic will you need to support once your solution is live?
P.S. - I did some testing of my own, and I do see 429’s starting after the first 10 insert rows requests, but the rate limit error clears up quickly. I’m seeing about 10 requests per 10 seconds, not 10 requests per minute. Let me know if you see the same.
Me again! I chatted with an engineer, and currently we limit most write requests via the API to 10 requests per 6 seconds. The short window means that your traffic can’t be too spiky, but it averages out to 100 requests per minute, which is probably enough for most use cases. You can also add multiple rows in a single request, so you could look into batching the row changes if that limit becomes an issue.
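To illustrate the batching idea, here is a rough sketch that folds several form submissions into one insert-rows payload. The column names and the shape of `submissions` are made up for the example; the payload shape follows the rows endpoint’s `{ rows: [{ cells: [...] }] }` structure:

```javascript
// Build a single insert-rows payload from many form submissions.
// "Name" and "Answer" are illustrative column names, not a real schema.
function buildRowsPayload(submissions) {
  return {
    rows: submissions.map((s) => ({
      cells: [
        { column: "Name", value: s.name },
        { column: "Answer", value: s.answer },
      ],
    })),
  };
}
```

A batch of 50 submissions then consumes one request against the write limit instead of 50.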
We’re also working on documenting these limits in the API documentation, to make this clearer going forward.
Define “request” in the context of what actually happens in an HTTPS POST that includes a JSON data payload. Are all of these data conveyances considered one request event?
This seems low to me compared to the other APIs I have interacted with. I’m most familiar with Airtable’s API, which allows 5 requests per second, but I have worked with others, and most have higher rate limits.
When working with an API, I like a rate limit generous enough that I can make a series of API calls synchronously without worrying about it: I fire each call as soon as the previous response arrives, and the response latency alone keeps me under the limit.
Ten requests per 6 seconds is also an awkward number to remember or reason about, especially for people who have to roll their own rate-limiting strategy. It is not the same as 100 requests per minute, even if it averages out to the same thing.
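For anyone who does roll their own, a sliding-window limiter for “10 per 6 seconds” might look roughly like this. This is a sketch, not production code; the clock is injectable (`now`) so the logic can be exercised without real waiting:

```javascript
// Sliding-window rate limiter for "maxRequests per windowMs".
// Defaults match the stated Coda write limit of 10 per 6 seconds.
class SlidingWindowLimiter {
  constructor(maxRequests = 10, windowMs = 6000, now = () => Date.now()) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.now = now;
    this.timestamps = []; // send times still inside the window
  }

  // Returns 0 if a request may be sent now, otherwise the number of
  // milliseconds to wait before the oldest send ages out of the window.
  msUntilAllowed() {
    const t = this.now();
    this.timestamps = this.timestamps.filter((ts) => t - ts < this.windowMs);
    if (this.timestamps.length < this.maxRequests) return 0;
    return this.timestamps[0] + this.windowMs - t;
  }

  // Call after each request is actually sent.
  recordSend() {
    this.timestamps.push(this.now());
  }
}
```

Notice this is stricter than a naive “100 per minute” counter: the eleventh request in any 6-second window has to wait, even if the per-minute total is well under 100.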
This is super helpful, thank you so much for the speedy response and information!! We may batch the submissions in the future but I think our load is small enough and our retry system is strong enough to keep this working for now.
Thanks for the candid feedback @Kuovonne, and I’ll pass this along to the engineering team. I totally agree that the best rate limits are the ones you don’t need to worry about.
For context, this rate limit is set where it is to help protect some of our backend systems. Our collaborative document model makes API writes somewhat resource expensive, so we place lower limits on those requests. I imagine there are investments we can make in our infrastructure to reduce that cost, and I hope we will get there in the future.
Thanks for the explanation! It is much easier to be understanding of limits when we know what the reasons are.
To close the loop, we’ve now updated our API documentation to include the current rate limits.
Hopefully this will help you and other developers better understand and plan for them.