I’ve read several topics about how much data a doc can hold before it stops being practical. Are we talking about the data currently in view, or does the limit count every row across the whole document?
I will tell you my concern:
I am building a management application on top of a Coda doc. I first considered spreading it across different docs, but back then that seemed impractical… everything is linked.
So I have a huge fear about scalability.
I am tracking processes, activities, execution logs (system records such as results and KPIs), and other business-related data such as relevant organizations (customers, partners…) and so on. Beyond that, I have modelled yet another dimension that ties each process and its results to its own management system, since our service operates our customers’ own systems.
So, as you can imagine, with documents and execution records alone, tables can grow FAST.
At first I modelled every piece of information related to processes in several tables, and I had to hack them together into a joint table so I could easily relate them to the corresponding process. Then I thought, “why not model one big, sparse table containing every single document and record?”
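Just to make that pattern concrete (the table and column names here are made up for illustration): the joint table has a lookup column pointing back to the process, and each process row pulls in its related records with a filter formula, something like:

```
[All Records].Filter(Process = thisRow)
```

That keeps the relation simple, but every one of those related rows still lives in the doc.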
Enter my huge fear: one way or another, some tables will soon hold lots and lots of records. Some of them I can break into smaller tables; others I can’t (such as communications).
I could set up some really complicated way to archive them, and somehow hack Coda into showing them in a more or less “composed” document.
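The mechanics I have in mind are roughly this (just a sketch; [Execution Logs Archive] and the column names are hypothetical, and for this to actually shrink the main doc the archive table would probably have to live somewhere else, e.g. a separate doc): a button or automation on each old row that copies it into an archive table and then deletes the original.

```
RunActions(
  AddRow([Execution Logs Archive],
    [Execution Logs Archive].Process, thisRow.Process,
    [Execution Logs Archive].Result, thisRow.Result,
    [Execution Logs Archive].Date, thisRow.Date),
  DeleteRows(thisRow)
)
```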
But… I have read that the scalability limits relate to the number of rows, and I am wondering: all rows in the document, or just the ones loaded in the current view?
Because the problem shrinks significantly if the loaded rows are what hits the practical limit first, since I can easily hide the source tables and force filtered views on everyone.
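By “force filters” I mean something like this (again a sketch, with made-up column names and an arbitrary 90-day window): every view of the big table carries a filter so that only recent, relevant rows are ever displayed:

```
AND(Status = "Active", Date >= Today() - 90)
```

If only those filtered rows count toward the practical limit, I can live with the big table; if every row counts regardless, I need to rethink the whole design.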