Handling Large Tables and Cross-Docs Performance in Coda

I’m considering using Coda for a project with very large datasets. I have a couple of questions:

  1. Is it possible to use a Cross-Doc with more than 10,000 rows without severe performance issues?
  2. Can a single table handle around 400,000 rows efficiently, or is this likely to cause the doc to become very slow or unusable?

If you have experience with large datasets in Coda, I’d love to hear your recommendations or best practices for handling this kind of scale.

Thanks in advance!

Coda can handle roughly 70-100k rows in a single table, as long as you don't start creating views of that table. It also depends on which columns you have and what kind of data you store in the cells.

I have experimented with a table of 320k rows, but it was an almost bare table with only 4 columns (all text columns) and it still worked. You have to find the limits yourself, at your own risk.
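If you want to probe those limits yourself, one way is to script bulk inserts through the Coda REST API and watch where the doc starts to degrade. Here is a minimal Python sketch, assuming the `requests` library; the doc ID, table ID, and the `Name` column are placeholders you would swap for your own:

```python
import requests

API_TOKEN = "your-coda-api-token"   # from your Coda account settings
DOC_ID = "abc123"                   # placeholder doc ID
TABLE_ID = "grid-xyz"               # placeholder table ID
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

URL = f"https://coda.io/apis/v1/docs/{DOC_ID}/tables/{TABLE_ID}/rows"

# Insert synthetic rows in batches and stop once the API pushes back.
BATCH_SIZE = 500
for batch in range(100):  # 100 batches x 500 rows = 50k test rows
    rows = [
        {"cells": [{"column": "Name", "value": f"row-{batch}-{i}"}]}
        for i in range(BATCH_SIZE)
    ]
    resp = requests.post(URL, headers=HEADERS, json={"rows": rows})
    if resp.status_code == 429:
        # Rate limited; a real script should back off and retry here.
        print("Rate limited, slowing down")
        break
    resp.raise_for_status()
    print(f"Inserted batch {batch + 1}")
```

Watching when inserts slow down (and how sluggish the doc feels afterwards) gives you a rough, empirical sense of where your particular schema hits the wall.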

Beyond that, I have docs in the range of 600 MB; with a good internet connection and just a few tabs open, they load in a decent timeframe, around 30-40 seconds.

At this point I do not recommend Coda for large datasets (many rows plus lots of columns), as it is not made for this kind of thing. Even larger tools like Airtable have limits on their solutions (500k rows, I think).
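If the bulk of the data has to live elsewhere, you can still keep a slim Coda doc and pull rows out (or push summaries in) through the API rather than Cross-Doc, which sidesteps the sync row cap mentioned in the question. A rough sketch of paginating all rows out of a table, again in Python with `requests` and placeholder IDs:

```python
import requests

API_TOKEN = "your-coda-api-token"
DOC_ID = "abc123"        # placeholder doc ID
TABLE_ID = "grid-xyz"    # placeholder table ID
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

url = f"https://coda.io/apis/v1/docs/{DOC_ID}/tables/{TABLE_ID}/rows"
params = {"limit": 200}  # the API pages results; follow nextPageToken

all_rows = []
while True:
    resp = requests.get(url, headers=HEADERS, params=params)
    resp.raise_for_status()
    payload = resp.json()
    all_rows.extend(payload["items"])
    token = payload.get("nextPageToken")
    if not token:
        break  # no more pages
    params["pageToken"] = token

print(f"Fetched {len(all_rows)} rows")
```

From there you can load the rows into a proper database or spreadsheet and keep only aggregates in Coda, which keeps the doc responsive.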

Maybe later down the road Coda will be able to ingest and process more data.
