They’re needed for little private APIs as well. We run many web services that are special-purpose micro-APIs built on serverless architectures and listening for events. Big or tiny, event-driven webhooks move data seamlessly while using very few computing resources.
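To make that pattern concrete, here's a minimal sketch of one of those micro-APIs. I'm using Flask as a stand-in for a serverless handler; the endpoint path, payload shape, and `process_event()` are placeholders of my own, not anything Coda-specific.

```python
# A minimal sketch of an event-driven webhook receiver, using Flask as a
# stand-in for a serverless handler. Path and payload shape are hypothetical.

from flask import Flask, request, jsonify

app = Flask(__name__)

def process_event(payload: dict) -> None:
    # Placeholder for real work: enqueue, transform, or forward the event.
    print(f"received event: {payload.get('type', 'unknown')}")

@app.route("/hooks/survey", methods=["POST"])
def survey_hook():
    process_event(request.get_json(force=True))
    # Webhook senders generally only need a fast 2xx acknowledgment.
    return jsonify({"ok": True}), 200

if __name__ == "__main__":
    app.run(port=8080)
```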
What is “sizable”?
In my data science work, a million records is moderate and generally considered outside the realm of browser-based apps, but I handle data at that scale often with the help of Streamlit, and then embed the Streamlit apps into Coda. With that, document users get access to big-data analytics plus a framework for building narratives, exploring the analytics, and so on. There are many tools that support this kind of embedded blending to give Coda users access to “sizable” data sets.
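For a sense of what the Streamlit side of that looks like, here's a minimal sketch. The file name and column names are hypothetical placeholders; the point is that the million rows stay on the Streamlit side and only aggregates ever reach the browser.

```python
# Minimal Streamlit sketch: load a large data set outside Coda and surface
# only aggregates. File and column names are hypothetical placeholders.

import pandas as pd
import streamlit as st

@st.cache_data  # cache so the million-row load happens once, not per rerun
def load_data() -> pd.DataFrame:
    return pd.read_parquet("transactions.parquet")  # ~1M rows, for example

df = load_data()
st.title("Transaction Analytics")

region = st.selectbox("Region", sorted(df["region"].unique()))
subset = df[df["region"] == region]

# Summaries, not raw rows - the browser never renders the full set.
st.metric("Rows in region", f"{len(subset):,}")
st.bar_chart(subset.groupby("month")["amount"].sum())
```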
I also have one client that uses Google Apps Script to cache forward customer survey data directly into a raw Coda table, from which they perform various analyses and reports by slicing off views. I advise them to be cautious about biting off large collections.
That particular solution lets users formulate a query by selecting filtering criteria in a table and setting a refresh flag. A cron process checks for those filter settings in the Coda document and refreshes the data via the API. I’ve seen them populate 10,000+ rows in a raw data table, but this takes time. Typically they make a request before leaving the office, and the data is ready in the morning. The cron process adds 100 records every five minutes until the request is complete. This is the pacing aspect I was referring to earlier.
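A paced loader along those lines might look like the sketch below. The Coda REST endpoint for inserting rows is real, but the doc ID, table ID, column names, and `fetch_next_batch()` are assumptions of mine for illustration; each cron tick pushes one 100-row batch and then exits.

```python
# Sketch of one cron tick: push a 100-row batch into a raw Coda table via
# the REST API. Doc/table IDs, column names, and fetch_next_batch() are
# hypothetical placeholders.

import os
import requests

CODA_TOKEN = os.environ["CODA_API_TOKEN"]
DOC_ID = "your-doc-id"        # placeholder
TABLE_ID = "your-table-id"    # placeholder
BATCH_SIZE = 100              # pacing: 100 rows per five-minute tick

def fetch_next_batch(limit: int) -> list[dict]:
    """Placeholder: pull the next unprocessed slice from the survey source."""
    raise NotImplementedError

def push_batch(records: list[dict]) -> None:
    rows = [
        {"cells": [{"column": "Response", "value": r["response"]},
                   {"column": "Score", "value": r["score"]}]}
        for r in records
    ]
    resp = requests.post(
        f"https://coda.io/apis/v1/docs/{DOC_ID}/tables/{TABLE_ID}/rows",
        headers={"Authorization": f"Bearer {CODA_TOKEN}"},
        json={"rows": rows},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    batch = fetch_next_batch(BATCH_SIZE)
    if batch:
        push_batch(batch)
```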
Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.
Coda can handle some data volume, but don’t expect it to be browsable in a performant fashion; you need to keep larger record sets out of the user’s view and pull summaries and other metrics into smaller sets with views.
Major sacrifices? Absolutely.
I think this is where you need to ask - for what purpose?
In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.
Again - just my view - it is not [presently] a safe harbor for large historical collections of data. And to be a little harsh (perhaps), a quick glance at the feature set should give us a clear picture of how Coda can be used advantageously. Large data set support is not what I see when I read about Coda’s features - just sayin’.
I recommend careful experimentation and a general strategy that paginates seamlessly to mimic the idea that the Coda app has unlimited data scale. This concept is difficult given Coda’s lack of native HTTP POST/GET capability, but there are ways to overcome that shortcoming.
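One way into the pagination idea: the Coda API itself pages row reads with a `pageToken`, so an external service can walk a big table in slices and hand the user only the slice they asked for. The wrapper below is a sketch under that assumption; the doc and table IDs are placeholders.

```python
# Sketch of seamless pagination against the Coda API: walk a table one page
# at a time using limit/pageToken. Doc and table IDs are placeholders.

import os
import requests

CODA_TOKEN = os.environ["CODA_API_TOKEN"]
BASE = "https://coda.io/apis/v1"

def iter_rows(doc_id: str, table_id: str, page_size: int = 100):
    """Yield rows lazily so callers see 'unlimited' data one page at a time."""
    params = {"limit": page_size}
    while True:
        resp = requests.get(
            f"{BASE}/docs/{doc_id}/tables/{table_id}/rows",
            headers={"Authorization": f"Bearer {CODA_TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        yield from payload.get("items", [])
        token = payload.get("nextPageToken")
        if not token:
            break  # no more pages
        params = {"pageToken": token}
```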
Someday it will be easier. Today, not so much.