As an SAP ERP consultant, my answer is, it depends…
I am not certain exactly what you mean by ERP here.
When you mention that you want to segment your table into smaller tables, what would be the basis for your segmentation?
When you say you have 20 views of the big table, what are those views? Simply filters to show 5,000 records at a time? Or does each view have its own set of columns, filtered on a characteristic that ties those columns together?
In S/4HANA, for example, SAP has collapsed about 13 different tables making up financial documents into a single table, which they refer to as the Universal Journal. Whether the transaction is a goods movement, an asset-related entry, or a sales entry, they all land in the same table, including all the characteristics needed.
In some cases there are related tables, e.g. for goods movements in the Materials Management part of the system. There is a lot happening in logistics before a financial transaction is raised. That information (shipping details, terms of trade, warehouse management, and so on) is not needed in the Universal Journal, and is stored in the Materials Management database.
Opinions differ, but in general I am in favour of fewer tables. I would do the initial design around data storage requirements, and then review for performance problems. Reviewing performance should also be an ongoing exercise: in year 6 you might have very different performance characteristics than in years 1-4.
Another important criterion is the granularity of the data that you require. Typically, the further you go back into history, the less detail you need. You can then archive that data, and there again you have different options:
You could simply move all the data to a history table, or you could summarise the data into a history table (a rough sketch of both options follows below).
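To make the difference concrete, here is a minimal sketch of the two options outside of Coda, using Python and pandas as a stand-in for the doc's big table. The column names (`created`, `category`, `amount`) and the 365-day cutoff are just assumptions for illustration, not part of the original question.

```python
# Sketch only: a pandas DataFrame stands in for the big table. Column names
# ("created", "category", "amount") and the 365-day cutoff are assumptions.
import pandas as pd

def move_to_history(df: pd.DataFrame, days: int = 365):
    """Option 1: move rows older than the cutoff, unchanged, into a history table."""
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=days)
    history = df[df["created"] < cutoff]      # full detail, just relocated
    current = df[df["created"] >= cutoff]
    return current, history

def summarise_to_history(df: pd.DataFrame, days: int = 365):
    """Option 2: keep only a monthly summary of the old rows in the history table."""
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=days)
    old = df[df["created"] < cutoff]
    history = (
        old.assign(month=old["created"].dt.to_period("M").astype(str))
           .groupby(["month", "category"], as_index=False)["amount"]
           .sum()                             # one row per month/category
    )
    current = df[df["created"] >= cutoff]
    return current, history
```

Option 1 keeps every row searchable forever at the cost of a growing history table; option 2 keeps the history table small but gives up row-level detail, so the choice circles back to the granularity point above.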
I am using history tables in another doc via time-based automations for rows created more than 365 days ago, among other things.
One of the tricky parts is that a lot of the data needs to stay searchable years into the future. And we want to search just once, not across multiple docs.
I think I'll go with one big table and many small filtered views, each on a different page, and test different schema configurations as the doc scales.
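For what it's worth, the "one big table, many filtered views" idea can be sketched in plain Python too. This is not how Coda implements views internally, just a conceptual illustration that each view is only a named filter over the same shared rows, so the data lives in one place; the row fields used here (`status`, `owner`) are made up.

```python
# Conceptual sketch, not Coda internals: every "view" is just a named filter
# over the one shared table, so rows are stored once and each page shows a slice.
from typing import Callable

Row = dict
ViewFilter = Callable[[Row], bool]

big_table: list[Row] = [
    {"id": 1, "status": "open", "owner": "alice"},
    {"id": 2, "status": "done", "owner": "bob"},
    {"id": 3, "status": "open", "owner": "bob"},
]

views: dict[str, ViewFilter] = {
    "Open items": lambda r: r["status"] == "open",
    "Bob's items": lambda r: r["owner"] == "bob",
}

for name, keep in views.items():
    # Each page renders only the rows its filter keeps; nothing is duplicated.
    print(name, [r["id"] for r in big_table if keep(r)])
```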
Even still, I’m curious where Coda’s limits are these days, and how to extend those limits to the max.
Performance can get complicated, but Coda continues to make massive improvements.
On the page below I provided a table that looks up from a 100,000+ row table, and the response was amazingly fast.
For another client we did a quick check, working with and filtering a table of 13,000 rows. Response on the filter was reasonable, and improved a lot once filtered views were implemented.
Enjoy building your magnum opus; I am convinced that Coda will be able to support you in all except the largest of set-ups.