Realistic number of rows in a table

What number of table rows should I expect Coda to handle? Under 50? 500? Under 10K? I’m just looking for a ballpark. We moved some things from Excel into Google Docs, where you are much more limited in the number of cells. I’m trying to decide what I should keep in Excel / Access versus putting in Coda.

Thanks for your advice.

4 Likes

I have a similar question/concern. With how powerful Coda is, I wonder what is going to happen once my tables get beyond even 1,000 rows, with 50+ functions running against it on the same page.

Lloyd

I have done a good bit of trial and error here.

This is an example screenshot of the sort of complexity in my database:

In all there are about 18 fields, about half of which are hidden and driven by underlying formulas.

My back-of-the-napkin conclusions are as follows:

0 to 1,000 records: No issue

1,000 to 5,000 records: Takes progressively longer to load the document but has mostly decent response time once it is loaded

Over 5,000 records: Not feasible

I faced a dilemma: I do have data tables substantially over 5,000 records, but I also really like the Coda format for organizing the document. My solution was to use MySQL for the data and then embed the resulting report into Coda using a neat MySQL front-end:

https://www.seektable.com

I can share details on the MySQL process if you are interested.
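If it helps to picture the general shape of that workflow, here is a minimal Python sketch of the “aggregate in the database, present the small result” pattern. Everything here is hypothetical: the `sales` table, its columns, and the helper names are invented, and the standard library’s `sqlite3` stands in for a MySQL connection (with a MySQL driver the cursor calls are essentially the same).

```python
import csv
import io
import sqlite3

def build_report(conn):
    """Aggregate a large raw table down to a small report table.

    The schema here (sales: region, amount) is made up for illustration.
    """
    cur = conn.execute(
        "SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue "
        "FROM sales GROUP BY region ORDER BY region"
    )
    return cur.fetchall()

def report_to_csv(rows):
    """Render the aggregated rows as CSV text, ready to export or embed."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["region", "orders", "revenue"])
    writer.writerows(rows)
    return buf.getvalue()

# Demo with an in-memory database standing in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 10.0), ("east", 5.0), ("west", 7.5)],
)
rows = build_report(conn)
print(report_to_csv(rows))
```

The point is that the large raw table never leaves the database; only the handful of aggregated report rows gets exported for presentation in Coda or SeekTable.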

Bottom line: Coda has no equal with regard to document presentation, but it offers only very basic database capabilities. The new embedding and document-sharing capabilities, however, make it easy to get a great outcome by using different software packages for the jobs they are each best at.

8 Likes

Thank you Richard! This is very helpful

I wonder if a Google Sheets pivot table could be embedded like that. It reads tables with horrible lookups, because it’s not a relational database, but it does produce reports. SeekTable is $25 per month unless self-hosted.

Yes, you can embed a Google Sheets report or pivot table in Coda. Google Drive also works really well for embedding a PDF viewer in Coda.

Google Sheets starts to slow down in my case around the 50,000-record range, although I think in theory it can handle up to a million records. MySQL works well in my case with about 30 million records.

SeekTable is $25 per month for advanced features such as per-row custom CSS on a report or exporting to PDF.

Here is an example of a Coda doc containing a complex SeekTable report embedded via iframe and exported as a PDF, along with an example of a Google Sheet embedded in the same Coda doc:

http://coda.reference.solutions

Hi Kim, I’m the founder of SeekTable. Regarding:

Seektable is $25pcm unless self hosted.

SeekTable.com has fully functional free accounts; you don’t need a paid subscription to use a SQL database as a data source and publish your reports to the web (and embed them with an iframe). Feel free to contact me if you have any questions about SeekTable.

2 Likes

You won’t be able to do anything with the data, but I suspect comments drastically reduce the database burden. Maybe try using those for all interactions?

Also, so far I feel the slowdown much more with formulas that roll up complex calculations than with the sheer number of rows, though I’m only at around 500 rows.

Agreed. I have a formula that sums 7 numbers from a table of fewer than 2,000 rows, and it takes a full minute to complete and show the colour-coded result. Sadly this is completely unusable, which is really frustrating because otherwise the app is ready to roll out to 50+ people.

Maybe it’s something I’m doing wrong, but right now I have no way to proceed other than to wait and hope for a performance patch somewhere down the line. :frowning:

/edit - the pages appear in good time (due to filtering); it’s literally the entering of data and the subsequent calculation that’s too slow. If the column currently shows 0 and I enter 16, when I press enter it still shows 0 for a whole minute, then the 16 appears and the colour changes. I need that to happen instantly.

Simply put, Coda does not scale. At present it basically only works for small projects.

Perhaps this will be something for the Coda team to address when they get to paid accounts, i.e. different fees for different performance levels. It seems unlikely that free accounts will scale to large projects, so scaling to large projects probably won’t be available until Coda is out of beta.

1 Like

How about making a formula field or a single-cell table, entering the data there, and “pushing” the text so that the calculation happens at a single end point?
A necessary hack…

Or did you try filtering tables into a view first, to minimize the calculation overhead?

Thanks for the tips. I’ve not tried any optimisations as yet - I’m still very new and don’t really know what’s what yet.

What I do know is that I’ve got closer to solving this problem (and replacing a horrible spreadsheet!) than any other tool I’ve tried so far, so if you detect any frustration it’s because I’m so close. :slight_smile:

1 Like

Hey everyone, thanks for the question. At the moment, Coda should be able to handle most docs in the range of 5-10K rows but things do start to slow down beyond that. We are actively working on improving the performance of docs to take it far beyond that number.

In our experience, it’s more often the case that some expensive formula or schema that can be optimized is slowing things down. For those of you who are seeing any performance issues on docs, I’d be happy to help look at your doc and figure out how to make things run faster. @Lloyd_Montgomery & @Nick_Milner, I should probably be able to help you optimize those docs since you’re still in the 1K-2K rows range.

It’s super valuable for us to see what specific issues are affecting user docs the most so we can prioritize our efforts accordingly. If you have any performance issues at all, please feel free to send a message on Intercom and mention me directly. I’d be happy to take a look at your docs or screen recordings or even jump on a video call.

Hi @Angad, I discovered that if I replaced all instances of AND(expr1, expr2) with expr1 && expr2, the time taken to enter data dropped from around 41 seconds to around a second and a half. No idea why: even if the latter form took advantage of lazy evaluation, I still don’t see why performance would be affected that much. Anyway, I sent an example of my project to support for them to play with.
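For what it’s worth, here is a toy Python illustration of the lazy-evaluation hypothesis: Python’s `and` short-circuits just as `&&` does in many languages, while passing both expressions as arguments to a function forces both to be evaluated up front. This is only a sketch of the idea, not a claim about Coda’s internals; the function names and data are invented.

```python
def expensive(x):
    """Stand-in for a costly expression; counts how often it runs."""
    expensive.calls += 1
    return x > 0

expensive.calls = 0

def AND(a, b):
    """Function-style AND: both arguments are evaluated before the call."""
    return a and b

data = list(range(-500, 500))

# Eager form: expensive() runs for every row, even when the first
# condition has already failed.
eager = [x for x in data if AND(x % 2 == 0, expensive(x))]
eager_calls = expensive.calls

expensive.calls = 0
# Short-circuit form: expensive() only runs when x % 2 == 0 is True.
lazy = [x for x in data if x % 2 == 0 and expensive(x)]
lazy_calls = expensive.calls

assert eager == lazy             # same result...
assert lazy_calls < eager_calls  # ...with half the evaluations here
```

On 1,000 rows the eager form evaluates the costly expression 1,000 times versus 500 for the short-circuiting form, so the gap grows with both row count and expression cost.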

Thanks for the reply!

1 Like

Hey Nick, that’s a great catch. It looks like that is true within the context of filter formulas: Filter(AND(X, Y)) is worse than Filter(X and Y). The reason is that we have an optimizer that looks at filter formulas and speeds up repetitive calculations, but that optimizer doesn’t kick in if you use the AND() formula. In your case, making this change helped a lot because the expressions in the filter formula were used multiple times. Thanks for catching this; I’ll add it to a list of tips for fixing slow docs.
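To make the idea concrete, here is a hypothetical Python sketch of why a predicate the engine can “see into” is easier to optimize than an opaque function call: when the structure is visible, a row-independent sub-expression can be hoisted out of the loop and evaluated once. None of this is Coda’s actual implementation; the names and data are invented purely to illustrate the principle.

```python
calls = {"n": 0}

def target_expr():
    """Stand-in for a repeated sub-expression (e.g. a lookup) in a filter."""
    calls["n"] += 1
    return "open"

rows = [{"status": s} for s in ("open", "closed") * 5000]

def filter_opaque(rows, pred):
    """The engine sees only an opaque callable (like AND(...)): it must
    re-run the whole expression, including target_expr(), for every row."""
    return [r for r in rows if pred(r)]

def filter_structured(rows, column, target_fn):
    """The engine can see the expression's shape (like `X = Y`): the
    row-independent part is hoisted and evaluated a single time."""
    target = target_fn()  # evaluated once, reused for all rows
    return [r for r in rows if r[column] == target]

a = filter_opaque(rows, lambda r: r["status"] == target_expr())
opaque_calls = calls["n"]

calls["n"] = 0
b = filter_structured(rows, "status", target_expr)
structured_calls = calls["n"]

assert a == b  # identical results, vastly different work done
```

Over 10,000 rows the opaque form evaluates the sub-expression 10,000 times, while the structured form evaluates it once, which matches the intuition that an optimizer helps most when the same expression is reused across many rows.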

5 Likes