Hej there,
just wanted to share and gather some input on enormous tables.
What started as an experiment on “when Coda would break” might become a tool we’ll use in our workflow.
I embarked on the journey to calculate solar PV yields for our construction projects. To do that, I had to download gigantic CSV tables of solar radiation levels from a website.
For every hour of a year
For 18 different directions into the sky
For 5 different tilt angles
That’s 8,760 hours × 18 directions × 5 tilt angles = 788,400 rows…
I’ve already imported over 140k rows, with five numeric columns and a text column that I’m converting into a date-time using a formula. In addition, there are formula columns for the time, hour, date, weekday and week number, and two select columns for angle and tilt.
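(For reference, a minimal sketch of how that pre-processing could be done outside Coda before import, e.g. with pandas — the column and file names here are placeholders, not the actual CSV headers:)

```python
import pandas as pd

# Sketch: derive the same helper columns outside Coda before importing.
# "time", "radiation.csv" etc. are placeholders, not the real CSV headers.
df = pd.read_csv("radiation.csv")

df["datetime"] = pd.to_datetime(df["time"])           # text column -> date-time
df["date"] = df["datetime"].dt.date
df["hour"] = df["datetime"].dt.hour
df["weekday"] = df["datetime"].dt.day_name()
df["week_number"] = df["datetime"].dt.isocalendar().week

df.to_csv("radiation_prepared.csv", index=False)
```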
I absolutely did not expect Coda to handle that. But besides some reloading of that doc, I am very surprised at how well it works. Even conditional formatting works well.
I want to continue that experiment until I have a tool or something breaks.
Does anyone else have tips or experiences to share with tables that go beyond the recommended 20k row count?
What’s your current doc size? (Use the Doc Map.) When it comes to big tables, you may find that if you try to create a view of that big table, the doc crashes and gets stuck in a reload loop.
You should be able to see the size there. I’m asking because I have some docs with a lot of rows (300k) but very few columns, and the column types are mostly text, which is very light when it comes to total MB.
I would do this in Excel… Coda works, then works slowly, until it suddenly doesn’t…
You may end up with an unrecoverable crashed Doc.
I understand the Coda team has been working on improving performance and handling larger databases (Excel has handled slightly over 1 million rows for more than 17 years now…).
If everything works, I want it to be an interactive doc in which you can adjust roof angle, PV type, battery storage, primary heating source, … and which will calculate the respective ROI. I feel like coding that in Excel would be a nightmare.
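(Very roughly, the kind of calculation I have in mind looks like the toy sketch below — every number and parameter name there is made up, just to show the shape of it:)

```python
# Toy payback calculation, purely illustrative; real inputs would come from the doc.
annual_yield_kwh = 9500        # from the radiation table for a given angle/tilt (assumption)
self_consumption = 0.6         # share of yield used directly (assumption)
electricity_price = 0.30       # EUR per kWh bought (assumption)
feed_in_tariff = 0.08          # EUR per kWh exported (assumption)
system_cost = 18000            # EUR installed (assumption)

annual_savings = annual_yield_kwh * (
    self_consumption * electricity_price
    + (1 - self_consumption) * feed_in_tariff
)
payback_years = system_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")
```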
It’s currently a separate doc - if it becomes unrecoverable, I really don’t mind, since I don’t rely on it.
I’ll also check whether interpolating the data is accurate enough so that I don’t have to import so many different radiation angles.
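(A quick sketch of what I mean by interpolating, assuming linear interpolation between the tilt angles I did import is good enough — the values are purely illustrative:)

```python
import numpy as np

# Radiation known only at a few tilt angles; estimate intermediate tilts linearly.
known_tilts = np.array([0, 20, 35, 50, 90])               # degrees, illustrative
radiation_at_tilts = np.array([310, 420, 455, 440, 280])  # W/m^2 for one hour, illustrative

estimate_30deg = np.interp(30, known_tilts, radiation_at_tilts)
print(estimate_30deg)  # linear estimate between the 20° and 35° values
```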
I see. @Konrad_Mertner, another idea would be to push all the heavy-lifting calculations to a proper DB system. You could check if you can get away with Supabase’s PostgreSQL free tier (and connect it to Coda to sync a results table, or get it straight into Excel). LLMs have gotten pretty good at writing SQL for you, so you could basically push all the lookups and formulas you are doing in Coda into a set of queries in Supabase.
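(As a rough illustration of the idea — this assumes a hypothetical `radiation` table with `angle`, `tilt` and `value` columns in Supabase’s Postgres, queried over its standard connection string; none of these names come from the actual data:)

```python
import psycopg2

# Sketch: aggregate in Postgres instead of in Coda formulas.
# Table/column names ("radiation", "angle", "tilt", "value") are hypothetical.
conn = psycopg2.connect(
    "postgresql://postgres:your-password@db.your-project.supabase.co:5432/postgres"
)

query = """
    SELECT angle, tilt, SUM(value) AS yearly_radiation
    FROM radiation
    GROUP BY angle, tilt
    ORDER BY yearly_radiation DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for angle, tilt, yearly in cur.fetchall():
        print(angle, tilt, yearly)
```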
I wonder what’s the most efficient way to use formulas and data entries in Coda.
Regarding formulas:
I like to break down complex operations into smaller formulas and reference them later in a bigger formula. I’m wondering whether that has any performance implications. Is it more efficient to inline all the smaller formulas within the bigger formula, or does it make no difference?
Regarding data entries:
Is it more efficient to store data in a column or in a row? In my example above, it’s a question of either storing all the different radiation values for every angle inside one row that represents an hour, or storing every radiation value for every hour in a separate row that has the angle attached as an additional value. In other words: is it more efficient to store the data in one dimension or in two?
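(To make the two layouts concrete, here’s a tiny sketch of what I mean — the column names and values are just placeholders:)

```python
import pandas as pd

# "Wide" layout: one row per hour, one column per angle.
wide = pd.DataFrame({
    "hour": ["2023-01-01 00:00", "2023-01-01 01:00"],
    "angle_0": [0.0, 0.0],
    "angle_20": [0.0, 0.0],
    "angle_40": [0.0, 0.0],
})

# "Long" layout: one row per hour *and* angle, with the angle as its own column.
long_format = wide.melt(id_vars="hour", var_name="angle", value_name="radiation")
print(long_format)
```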