What is the number of table rows I should expect Coda to handle? Under 50? 500? Under 10K? I’m just looking for a ballpark number. We moved some things from Excel into Google Docs, where you are much more limited in the number of cells. I’m trying to think about what I should keep in Excel/Access vs. what I should put in Coda.
I have a similar question/concern. Given how powerful Coda is, I wonder what is going to happen once my tables get beyond even 1,000 rows, with 50+ functions running against them on the same page.
In all there are about 18 fields, about half of which are hidden and driven by underlying formulas.
My back-of-the-napkin conclusion is as follows:
- 0 to 1,000 records: no issues
- 1,000 to 5,000 records: the document takes progressively longer to load, but response time is mostly decent once it is loaded
- Over 5,000 records: not feasible
I faced a dilemma: I have data tables substantially over 5,000 records, but I also really like the Coda format for organizing the document. My solution was to keep the data in MySQL and then embed the resulting report into Coda using a neat MySQL front-end:
I can share details on the MySQL process if you are interested.
Bottom line: Coda has no equal with regard to document presentation, but it has only very basic database capabilities. The new embedding and document-sharing capabilities, however, make it easy to get a great outcome by using different software packages for the jobs they are each best at.
I wonder if a Google pivot table could be embedded like that. Sheets reads tables with horrible lookups because it’s not an RDB, but it does do reports. SeekTable is $25 per month unless self-hosted.
Yes, you can embed a Google Sheets report or pivot table in Coda. Google Drive also works really well for embedding a PDF viewer in Coda.
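As a minimal sketch, embedding boils down to pasting a published, link-shareable URL, or wrapping it in Coda’s Embed() formula. The URL below is a hypothetical placeholder, not a real sheet:

```
Embed("https://docs.google.com/spreadsheets/d/e/EXAMPLE-ID/pubhtml")
```

The same approach should work for any source that offers a public or published URL, such as a SeekTable report.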
Google Sheets starts to slow down, in my case, somewhere in the 50,000-record range, although I think in theory it can handle up to a million records. MySQL works well in my case with about 30 million records.
SeekTable is $25 per month for advanced features such as custom CSS per row on a report or exporting to a PDF.
Here is an example of a Coda doc with a complex SeekTable report embedded as an iframe and exported as a PDF, plus an example of a Google Sheet embedded in the same Coda doc:
SeekTable.com has fully functional free accounts; you don’t need a paid subscription to use a SQL database as a data source and publish your reports to the web (and embed them with an IFRAME). Feel free to contact me if you have any questions about SeekTable.
Agreed. I have a formula that sums 7 numbers from a table of fewer than 2,000 rows, and it takes a full minute to complete and show the colour-coded result. Sadly, this is completely unusable, which is really frustrating because otherwise the app is ready to roll out to 50+ people.
Maybe it’s something I’m doing wrong, but right now I have no way to proceed other than to wait and hope for a performance patch somewhere down the line.
/edit - the pages appear in good time (due to filtering) - it’s literally the entering of data and the subsequent calculation that’s too slow. If the column currently shows 0 and I enter 16, when I press Enter it still shows zero for a whole minute; then the 16 appears and the colour changes. I need that to happen instantly.
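For context, the slow formula is along these lines (the table and column names here are hypothetical stand-ins, not my actual doc):

```
Scores.Filter(Player = thisRow.Player).Points.Sum()
```

i.e. a filter matching a handful of rows followed by a Sum(), with conditional formatting driving the colour.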
Simply put, Coda does not scale. At present it basically only works for small projects.
Perhaps this will be something for the Coda team to address when they get to paid accounts, i.e. different fees for different performance levels. It seems unlikely that free accounts will scale to large projects, so it also seems unlikely that scaling to large projects will be available until Coda is out of beta.
Thanks for the tips. I’ve not tried any optimisations as yet - I’m still very new and don’t really know what’s what yet.
What I do know is that I’ve got closer to solving this problem (and replacing a horrible spreadsheet!) than any other tool I’ve tried so far, so if you detect any frustration it’s because I’m so close.
Hey everyone, thanks for the question. At the moment, Coda should be able to handle most docs in the range of 5-10K rows but things do start to slow down beyond that. We are actively working on improving the performance of docs to take it far beyond that number.
In our experience, it’s more often the case that things are being slowed down by some expensive formula or schema that can be optimized. For those of you who are seeing any performance issues on docs, I’d be happy to help look at your doc and figure out how to make things run faster. @Lloyd_Montgomery & @Nick_Milner, I should probably be able to help you optimize those docs since you’re still in the 1K-2K row range.
It’s super valuable for us to see what specific issues are affecting user docs the most so we can prioritize our efforts accordingly. If you have any performance issues at all, please feel free to send a message on Intercom and mention me directly. I’d be happy to take a look at your docs or screen recordings or even jump on a video call.
Hi @Angad, I discovered that if I replaced all instances of AND(expr1, expr2) with expr1 && expr2, the time taken to enter data dropped from around 41 seconds to around a second and a half. No idea why; even if the latter form took advantage of lazy evaluation, I still don’t see why performance would be affected that much. Anyway, I sent an example of my project to support for them to play with.
Hey Nick, that’s a great catch. It looks like that is true within the context of Filter formulas: Filter(AND(X, Y)) is worse than Filter(X and Y). The reason is that we have an optimizer that looks at filter formulas and speeds up repetitive calculations, but that optimizer doesn’t kick in if you use the AND() formula. In your case, making this change helped a lot because the expressions in the filter formula were used multiple times. Thanks for catching this; I’ll add it to a list of tips for how to fix slow docs.
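To make the two forms concrete, here is a minimal sketch against a hypothetical Tasks table (the table and column names are invented for the example):

```
Slower:  Tasks.Filter(AND(Status = "Open", Owner = thisRow.Owner))
Faster:  Tasks.Filter(Status = "Open" and Owner = thisRow.Owner)
```

Both forms are logically equivalent (and the && operator behaves like the and keyword here), but only the operator form lets the filter optimizer reuse the repeated sub-expressions.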
A client is asking me how many tables × rows a Coda doc can safely support, assuming we’re following the best performance practices explained in that article.
We’re not doing many aggregations, but we are doing lots of lookups based on “ID” columns (mirroring our MySQL DB).
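For illustration, those lookups are along these lines (the table and column names below are hypothetical, not our real schema):

```
Orders.Filter(CustomerID = thisRow.ID).First()
```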
This topic just got bumped for me, so here’s some sort of an answer.
The Coda API and Cross-doc stop working when the doc’s JSON goes over the 125 MB limit. You can see the size in your browser’s developer tools: it will be the largest file that loads. The transferred file is usually much smaller because it’s compressed; 125 MB is the limit on the uncompressed JSON.
I got this info from Coda support. That said, the doc will still load in the UI even if it’s over that limit.
Hi @Angad, what determines an “expensive formula”?
- Number operations vs. text operations?
- Lookups vs. counts?
- How many “.” chained references in the formula?
- How many nested methods per formula?
Also, what is considered an “expensive schema”?
- My first assumption is too many columns, but database tables are usually set up to handle lots of columns easily.
- Columns that refer to each other within the same table?
- Columns that refer to columns in other tables?
The maker community is filled with optimizers. If you could moderate a thread collecting all the best doc-optimization hacks, I know I (and perhaps other community members) would appreciate it.
If I’m just too new and haven’t found this holy grail of tips, please link me.