Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

Yes - this is important. I want to see this as well. But I get it - it’s also important for Coda to be careful about telegraphing their ideas and plans.

I believe this should happen in the context of a Maker/Partner program where some level of confidentiality surrounds the narratives. Until such a program has been formalized I can understand why they might be a little reluctant to expose strategic plans.


I’m curious about this example. Please explain why the API is involved, what its purpose is in a simple task manager, and what “drops off” means.

This is a great discussion. I love @Federico_Stefanato’s point about “looking at the potential and believing in it” - this is a nice philosophical reflection of what a great job they’ve done at making something that so clearly demonstrates its potential. Like being able to understand an artist’s style and recognize an uncontextualized painting as one of theirs.

Re @Bill_French’s ask for an example … I mean anything that connects their doc outside of Coda! Any feature extension or connection, anything that isn’t through a native pack. These outcomes will vary from catastrophic to mundane, with the problem mostly arising from the expectation not being met. Here’s a rather mundane example. A shipping company uses Coda to manage their orders/to-dos. They create an automation so that when an order is completed, it sends a Bill of Lading to the warehouse printer via the API using Google Cloud Print. This works for 8 months; then one day the truck arrives to pick up the $100,000 order but the BOL isn’t printed. The truck can’t pick up the order without it, the driver doesn’t have time to wait while the team researches why the printer isn’t working like it always has, and he has to leave to make his other pickups. Now the company doesn’t get its order picked up and suffers a big financial penalty for not meeting its contractual obligation.

Now what do they do? Their ‘Maker’ didn’t know this was coming, and how do they get ‘back on’ the API? They need to go in and start cleaving away at their existing, valuable data? It’s an uncomfortable fail paradigm … the fact that currently you don’t even know about the impending disaster until it strikes, and it’s totally foreseeable, is the main problem here.

There are unlimited scenarios where makers will extend this incredible functionality via IoT and API and all other three-letter sandwiches to create magic … and dropping off the API will be problematic.

I love your metaphor, but I think you’re understating my example. I’m talking about using a simple task manager, adding one row at a time. I can foresee people wanting to take the Cambridge Analytica database and put it into Coda for manipulation and analysis. That would be hundreds of millions of rows. This is where I think your metaphors become more realistic. But to sell the product as a do-it-all thing, even for little ‘docs’ like managing your project to-dos … 4 actions a day for 8 months to break the thing? I think we’re still in the shallow end of the pool in terms of where this thing should break.

That said, they aren’t doing this to be vindictive or punitive or for some bad reason. It’s very difficult doing what they’ve pulled off and they need limits … I understand that. The opportunity I see for Coda now is to better manage the expectation and provide some warnings so the car doesn’t break down in the middle of a busy highway.


:zap: Unlimited scale!!! :zap:

I definitely agree it is a long road for a product like Coda to become an enterprise-scale system that can smoothly support customers with hundreds or even thousands of employees.

But we’re talking about financial spreadsheets or APIs that become unresponsive or unusable in the couple-of-thousand-entries range. Numbers that can be hit by single users in simple use cases, and that are guaranteed to be hit by a small-team project that continues for a typical amount of time.

Just as an example, we recently created a product that has an iOS component, with a view that a typical user would probably have 10-20 entries in, but we filled it with 10,000 entries in our test to make sure nothing goes bust and that performance scales reasonably, and it worked just fine. Of course 1,000 entries, or even 100, might have been acceptable in that particular case, but we take performance seriously.

In my opinion Coda should aim to comfortably support, in terms of performance and reliability, within an order of magnitude of the general use case. It’s common for small teams to have thousands of tasks accumulate in a year or two, so the tool should make sure it can support at least 10x that (~100,000). As mentioned in this thread it’s better not to be 3 years into a project and suddenly your tool that contains all workflow + knowledge stops working or slows down to the point of being nearly unusable and needs to be migrated.

I assume the devs at Coda agree, and it’s hard to give any commitments about when optimizations would make it into a release, but it would be nice to know what we should expect as long term goals and priorities, since PM and CRM etc are templates and example use cases that the product provides.


I want to clarify that I’m just getting started with Coda, and although I’ve built out a CRM and PM structure, I only have a handful of entries in each of the various tables. I haven’t run into any performance issues myself, and am just interpreting what I’ve read here and in the other mentioned thread.

So I may be misunderstanding the performance situation, for example perhaps it is limited to certain, isolated use cases that most small teams would not encounter. But it is something I need to understand if we move all our processes and knowledge to this lovely tool. :slight_smile:

Okay, so these are hypothetical calamities that you are warning us and Coda about. Let’s examine these a bit…

  • No system is impervious to the failures you are describing. Even systems that are 100% custom made and even domain-specific systems that are created specifically to print shipping labels can, have, and will fail.

  • Any shipment process that is gated by a system crafted as an add-on to a document-centric app and cast in a mission-critical role will likely be carefully vetted, stringently tested, and probably scrutinized by many people. I would be skeptical if Coda was the winning technical choice for this specific example. As such, asserting that Coda would get a black eye for such a failure is probably a stretch. Does C++ get a bad rap when a poorly-skilled engineer writes a bad printer driver?

  • Every minute of every day there are dozens of users who pose as skilled technicians and through masterful convolutery create incredible sh*t-shows that people, the likes of you and me, must untangle. This is neither a new thing nor anything that will likely ever stop.

I still don’t have a clear understanding of this phrase. Describe the nature of a process that causes the API to “drop off”.

I certainly agree with all comments here that performance in a given expectation envelope is important and there’s no question the team has work to do.

Yes @Bill_French, you are correct in your points. I used a printer in my example because you could have just said ‘what if it runs out of toner!’ I wanted it to be mundane and simple … AND I can see doctors and hospitals building ‘mundane’ workflows that cascade into higher-stakes scenarios. I mean, I can see unlimited potential!! Which is why I care enough to write in this forum.

My point is that once it breaks, now what?? There’s no good solution. I’ve had to rebuild my ‘doc to rule them all’ 4 times from scratch, and now it lives as 6 different docs. It was a TREMENDOUS opportunity to do this, and it’s made me the Coda wizard I am now, but it’s not a good model for the business as a whole. As you said, most people are not technical innovators who geek out on this stuff. @Ed_Liveikis laid out the best argument for why it needs to support ~10x the general use case for safety, much like how buildings are overdesigned for their weight limits.

As far as being thrown off the API, perhaps you haven’t had the honor yet? Typically I get a Zapier notification. In peak distress I created this post when I got hit by the truck and didn’t know what to do … I had so much invested in my doc and then BOOM, everything broke. All the people I cajoled into using Coda were pissed … business processes failed … it was a total disaster. As I said earlier, my suggestion at this time isn’t for them to artificially raise the limit beyond what is technically possible at the moment. I have confidence in Moore’s Law and that they will be able to increase these limits. It’s how Coda handles the failure (and how, at the time of my post, they were publicly claiming to handle hundreds of thousands of rows but then kicked me to the curb at 3,000 rows) that I think could be optimized now. AND I have confidence they are working on it and it’s a priority … my purpose here isn’t so much to chide Coda as to provide guidance to other early pioneering innovators, so that they might not hit their heads on the doorframes I’ve already passed through.


Until such time that Coda is able to allocate more resources into scaling performance, I would appreciate more user facing diagnostic tools like the “Debug calculations” tool, which I use all the time.

What gets measured, gets managed.

Expose as much diagnostic data as possible for robust users, so that we can proactively manage approaching performance issues long before our Coda projects unexpectedly blow up—costing who knows how much downtime.

Empowering us to partner with you on managing performance limitations will leverage your limited resources.


Awesome conversation, btw.


Well said @Ander! Totally agree.

2¢ quite valuable.
Thank you.



You trusted a codeless, near-free black-box platform to provide a mission-critical integration. I’m sorry it went sideways, but this is precisely why solution designers need to be skeptical of the many gears and machines they choose to separate processes from data.

At the heart of this problem is a failure to recognize that Coda (like Airtable) provides no event handlers. Because of this, Zapier and Integromat are forced to poll for data changes, just as you would be forced to had you created a custom integration with the Coda API. This is why it failed. And while it’s fair to lay [some] blame at Coda’s feet, we all must accept at least some of the responsibility. This limitation is not related to Coda’s performance or their API per se; it’s caused because we all conspire to oversaturate the Coda infrastructure with millions of unnecessary requests every hour, perhaps every minute.

We’re high on glue - the Zapier and Integromat glue. We think it’s wonderful and yet, it is ultimately harmful without webhooks and events that drive them.

Avoiding this (for us) is not easy, and adhesives the likes of Zapier only exacerbate the problem by offering vast integration options that are free or near-free and easy to set up. This encourages less-knowledgeable users to elevate the polling insanity because they know not what’s happening under the covers.

If the Internet depended on polling architectures, we would have shut it down in 2005. But we got smart and we realized that events are critical to interoperability success patterns. Why Coda chose to ship before events and webhooks were possible is not something I can easily rationalize.

My client solutions do not suffer from this calamity because (a) I asked deep questions about the architecture before getting clients on to it, and (b) for any processes that require some degree of event handling, I either optimize the crap out of the polling detection scheme and set clear latency expectations, or I use Slack events to mimic actual events. I also avoid polling at all costs and push manually-driven event triggers on to actual users until such time as events and webhooks may become available.
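The “optimize the crap out of the polling detection scheme” idea can be sketched roughly like this. It is a minimal illustration only; the row shape and field names are hypothetical, not the actual Coda API response format. The point is to compare a previous snapshot to the current one and hand only the delta downstream, so each polling cycle does a fraction of the work:

```python
import hashlib
import json

def fingerprint(row):
    """Stable hash of a row's cell values (key-order independent)."""
    payload = json.dumps(row["values"], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def detect_changes(previous, current):
    """Compare two snapshots {row_id: row} and return the ids of
    added, changed, and deleted rows - the only rows worth pushing
    downstream, instead of reprocessing the whole table each poll."""
    prev_fp = {rid: fingerprint(r) for rid, r in previous.items()}
    curr_fp = {rid: fingerprint(r) for rid, r in current.items()}
    added = [rid for rid in curr_fp if rid not in prev_fp]
    deleted = [rid for rid in prev_fp if rid not in curr_fp]
    changed = [rid for rid in curr_fp
               if rid in prev_fp and curr_fp[rid] != prev_fp[rid]]
    return added, changed, deleted
```

Fetching the current snapshot still costs a read, but everything after that touches only the delta, which is what keeps latency expectations realistic without webhooks.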

Where does that leave us?

Well, we can’t blame Zapier - they want events and webhooks as much as we do. We can certainly blame the glue-factories for referring to these integrations with Coda as “webhooks”. At best it’s a deeply harmful mischaracterization of their service.

Coda can change the API performance game with two key architectural enhancements that work the way almost every SaaS solution presently achieves optimized interoperability. I don’t think they have a choice. In the meantime, there be thin ice - best to skate carefully my friend.

[UPDATE]: BTW, a couple of key things are circling this drain.

  1. Someone asked about the API accessing data via views during the Maker webinar, and @shishir mentioned that he thought it was already possible, though not likely in the SDKs. On this point alone, it would allow us to discover changed records easily and avoid massive table-level polling to find data patterns. I will be spending a fair bit of time on this in the coming weeks because this is a significant advantage in the absence of webhooks and events.

  2. The glue-factories do not pace or filter their API calls. In many of my integrations, I pace the change-detection and processing tasks by spreading a small number of API requests over many hours. This allows me to perform API tasks across many thousands of records without triggering alarms. But it does require some carefully scripted machinery to segment the activity across the record set.
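The pacing in point 2 might look something like this sketch (batch size and interval are placeholder values, not recommendations): split the record set into small batches and hand each one to a scheduler at a staggered offset, rather than firing all requests at once.

```python
import itertools

def paced_batches(row_ids, batch_size, interval_seconds):
    """Yield (start_offset_seconds, batch) pairs so a scheduler can
    spread API work over time instead of hammering the service in
    one burst. Works for any iterable of record ids."""
    it = iter(row_ids)
    for i in itertools.count():
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield i * interval_seconds, batch
```

For example, 10 record ids with a batch size of 4 and a 300-second interval produce three batches scheduled at offsets 0, 300, and 600 seconds.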


@Bill_French great post. Webhooks or things like gRPC are definitely needed for change notifications in big public-facing APIs.

I’m trying to read between the lines here - are you implying that you are successfully using Coda on sizable datasets with your clients without any major performance issues or major sacrifices? I’m considering working on code to automatically fill up my PM and CRM projects with a 2-3 year simulated data-load, but would rather avoid the effort if y’all can confirm things are working decently enough for you :slight_smile:


They’re needed for little private APIs as well. We have many web services that are special-purpose micro-APIs based on serverless architectures that listen for events. Big or tiny - webhooks (driven by events) drive data seamlessly using very few computing resources.

What is “sizable”?

In my data science work, a million records is moderate and considered outside the realm of browser-based apps, but I do this often with the help of Streamlit. And then I embed Streamlit apps into Coda. With that, document users have access to big data analytics and a framework for building narratives, exploring analytics, etc. There are many tools that support embedded blending to give Coda users access to “sizable” data sets.

I also have one client that uses Google Apps Script to cache forward customer survey data directly into a raw Coda table, from which they perform various analyses and reports by slicing off views. I advise them to be cautious about biting off large collections.

That particular solution allows users to formulate a query by selecting filtering criteria in a table and setting a refresh flag. A cron process checks for those filter settings in the Coda document and, via the API, refreshes the data. I’ve seen them populate 10,000+ rows in a raw data table, but this takes time. Typically they make a request before leaving the office and the data is ready in the morning. The cron process adds 100 records every five minutes until complete. This is the pacing aspect I was referring to earlier.
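A quick sanity check on that cadence, using the 10,000-row figure from the example above (everything else follows arithmetically), shows why an overnight window is about right:

```python
# Back-of-the-envelope check of the overnight refresh described above:
# 100 rows every 5 minutes, for a 10,000-row request.
rows_total = 10_000
rows_per_batch = 100
minutes_per_batch = 5

batches = rows_total // rows_per_batch       # number of paced batches
total_minutes = batches * minutes_per_batch  # end-to-end wall time
total_hours = total_minutes / 60             # roughly one work night
```

That works out to 100 batches over 500 minutes, a bit over 8 hours, which matches “request before leaving the office, ready in the morning.”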

Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.

Coda can handle some data volume, but don’t expect it to be browsable in a performant fashion; you need to keep larger record sets out of view of the user and pull summaries and other metrics into smaller sets with views.

Major sacrifices? Absolutely.

I think this is where you need to ask - for what purpose?

In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.

Again - just my view – it is not [presently] a safe harbor for large historical collections of data. And to be a little harsh (perhaps), a quick glance at the feature set should give us a clear picture of how Coda can be used advantageously. Large data set support is not what I see when I read about Coda’s features - just sayin’.

I recommend careful experimentation and a general strategy that paginates seamlessly to mimic the idea that the Coda app has unlimited data scale. This concept is difficult given the lack of HTTP POST/GET capability, but there are ways to overcome these shortcomings.

Someday it will be easier. Today, not so much.


Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.

Well, it’s good they’re happy with it; I’m not sure that’s something I’m willing to deal with on the PM side for our small team. The tool should be reducing friction.

I think this is where you need to ask - for what purpose?

Well as mentioned at least PM, CRM for a software firm. On a 2-year project with 5-10 people, it’s typical to have in the range of several thousand project management items created in the form of goals, tasks and bugs. These are viewed and edited on a daily basis by each team member and the PM in particular will spend a significant portion of the day working in the tool. Data structure and views wouldn’t change much, but rows would.

Most of the time only incomplete work needs to be reviewed, which is usually in the hundreds-of-items range. Perhaps the filtering of views will make this workflow (mostly) performant. I am hiding all the actual data-source tables in their own folder.

Our purpose is to avoid jumping from tool to tool, with all the cost that brings, for things that are supported in Coda and already converging in the existing tools. You can use Asana to do CRM, but awkwardly and insufficiently, just as you can do task management in Excel, or in a CRM, etc. These tools are adding more and more features to be used in different ways, but still within rigid structures that were initially intended for a certain use case.

Where we would normally use (the actual tools are only examples) Jira (PM), Zoho (CRM), Confluence (docs, wiki), Typora (markdown notes, quick designs), GDocs/Excel (custom reports, collaborative documentation, etc.), we can do these in Coda with a custom streamlined workflow that has only what we need in it. Ambitious, yes, but after playing with Coda, and Notion to a lesser extent, this is clearly achievable with such a tool.

In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.

Again - just my view – it is not [presently] a safe harbor for large historical collections of data.

Thank you for the input. That makes sense. It seems I’ll need to keep evaluating, and probably use the API to fill up our dataset with a max load and see how the app performs.


Indeed. These are all good reasons to remake and compress the toolset. This is my incentive for the most part as well - to help clients become more productive by doing less, reducing complexity, increasing usability, and enabling convenience.

Which brings us to the question of archiving. :wink:

We tend to assume databases should grow, and only grow. Are we just being lazy? Or, are there key business requirements for instant access to everything? Perhaps it’s time to think about data-at-rest and where it should reside when at rest.

In my view, we tend to need these historic items for two key objectives:

  1. Search
  2. Analytics

Why not stand up an instance of ElasticSearch and push the past off the table and into something that can make the data more useful?
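A first step toward that archiving pattern could be sketched like this, under the assumption of a `Completed` date column (a hypothetical field name): partition rows by a cutoff date, keep the live set in the doc, and bulk-index the rest into the search/analytics store.

```python
from datetime import date

def split_for_archive(rows, cutoff):
    """Partition rows into (keep, archive) by completion date:
    rows completed before `cutoff` move to the external store
    (e.g. Elasticsearch); open or recent rows stay in the live doc.
    Field names here are hypothetical, not a real Coda schema."""
    keep, archive = [], []
    for row in rows:
        done = row.get("Completed")
        if done is not None and date.fromisoformat(done) < cutoff:
            archive.append(row)
        else:
            keep.append(row)
    return keep, archive
```

The `archive` list maps naturally onto a bulk-index call, while `keep` is what remains browsable day to day, which keeps the live table small without losing the history to search and analytics.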


Why not stand up an instance of ElasticSearch and push the past off the table and into something that can make the data more useful?

It’s an interesting proposal, especially given what you mentioned earlier, that webapps in general struggle to represent and deal with large amounts of data.

At the same time it is something I am trying to avoid - exporting data and dealing with it somewhere else because the main tool is unable to.

It’s very rare that all items, including archived ones, or even only archived ones, need to be inspected one by one, individually. I don’t have any uses for this in my current workflows.

They are still usually needed for administrative or team-wide analytics … e.g. what’s our overall rate of opening and closing work items over time, since the beginning of time? The delta between these lines is informative for project trajectory and milestone tracking, and also serves as a historical guide for how future projects may track, or be informed, to create a better process.

I’d be happy to have a place where I define the fields/columns for tables (a schema-definition place) without the app needing to show the actual data, in lieu of the “Data” folder I currently have with the table sources. Then all tables in the app could be views, probably all with some sort of filters/limits on the data that the frontend model and UI need to deal with. If the API ends up being as powerful and speedy as it should be, users can make their own custom app views with the UIs of their choice if the web app is too slow.

Now, excluding all other Coda users, the dataset complexity I’m referring to is a trivial load for something like a Postgres DB, and many teams would be happy to host their own Coda servers. If Coda doesn’t solve this problem, someone else eventually will. :mantelpiece_clock: :slight_smile:

Coda has a nice set of feature requests on their recent voting page, so it’s a tough call what to prioritize first, but you can’t go wrong with good performance and some bugfixing.


Eager to get back in here, as so much interesting stuff is being said! A real education for me…

First, @Johg_Ananda and @Bill_French - you both have answered the question of this post - I’m going to say scalability is a reason a team would NOT use Coda! I am not using the API yet, but I do want to. I have CRM-level clients that I wanted to bring into Coda with the API. I was previously bringing them into Salesforce, which, if all goes well, I’ll be dropping when the license ends. I was not aware of the API limits, nor of the poor way this seems to be handled by otherwise stellar Coda Support. But a 3k limit in a doc - that is a serious limit if you plan to set up Coda as your total CRM and task/project management solution!

I think this goes hand-in-hand with what @Paul_Danyliuk is pointing out re: issues with automations over here:

So, my example of the Y Combinator batch company using Coda would come under serious question if Coda is going to be this limited with scaling.

Very eye-opening, and in fact you’ve got me wondering about my own plans with Coda! I also agree that Cross-Doc seems like a way to solve this, by spreading content into other docs, but I have issues here:

  • Cross-Doc now weak, only one-way. But I do trust the team to solve this.
  • As I’ve been discussing, mainly here of late:

I have issues with my schema in figuring out how to link stuff to other stuff in Coda. I think it would be solved by reciprocal linking, as I’ve also talked about in the community, but I don’t seem to have any insight back as to whether Coda itself is interested in introducing this soon. The method of a table linking to essentially just one other table via lookups is very limiting. I have needs where tables must link to many tables, such as Meeting Notes and other decision-type tables I’m counting on to help my team in its roadmapping and goal planning. So I currently have a bunch of stuff in the same tables that really shouldn’t be there, in order to limit my table count to get around this. Having to figure out how to link several topic areas via Cross-Doc seems even more challenging than within one doc.

And @Johg_Ananda, agreed 100% that Coda is starting to promote itself as a team solution; that is part of why I started this post in the first place!

The solutions you guys put forth are really outstanding, and count me in as one participant in this convo who’d really like to find out more about how Coda’s Product Team thinks about all this. I hope they weigh in soon, either here or somewhere that will give us all some reassurance that investing in Coda now as a Team Solution will not run into scalability roadblocks in short order! My example of an engineering team moving 50 engineers off Jira into Coda, with today’s existing Coda architecture, seems foolish, though, in light of what you guys presented here.

And @Federico_Stefanato and @Ander, great to hear from you both - you’ve both helped me tremendously in the community before - Federico, I owe you a response on your last answer, in fact! This dialogue has been an immense education for me. I respect all 4 of you as serious Makers. I am wavering about investing my team’s future solution in Coda, and you guys are presenting big-time details of how Coda really works. Invaluable, and thanks again!


One and only real answer: poor performance.

Compared to other cloud products like Excel, Airtable, etc., Coda is very slow and flaky. In 2020 you’re used to robust products that can take the stress of lots of data. Unfortunately, Coda isn’t one of them.

Performance and Coda’s opaque attitude around it is sadly the primary reason.

I invested quite a bit of time creating a setup inside Coda that provided proper project management. I also did my due diligence on stress testing the setup in terms of row count, complexity, and automation.

Sadly, Coda started becoming sluggish for relatively small workloads (~4,000 rows) on seemingly simple table setups. On top of that, the decrease in response time wasn’t even limited to pages containing the heavy table.

Over the past few months, subjectively, I’d have to say that the performance has sadly decreased even further, while we haven’t been doing any major table work - mostly simple pages.

The secondary reason is that in a lot of respects it just didn’t feel “finished”, and I’ve reported numerous UX & UI bugs in the past year. The QA process I went through also didn’t give me the impression that these (IMHO) basic bugs would get solved anytime soon :frowning:.

E.g.: the trivial CSS bug where an image view on cards doesn’t collapse if no image is set. (One could argue that this is preference, although I think it’s not the 95% case.) Another example is the UX around Categories for Cards, which is disabled in Card view but can be set if you change it in a different view and change the view back.


Hey @GJ_Roelofs, very cool to have you chime in here. I was in fact thinking earlier of including one of your quotes in this post, because I think it is one of the best summaries I’ve ever seen of the appeal of Coda. Here it is:

So I’m glad to see you appear on your own!

It seems there is a big task ahead of Coda to lick these performance issues. From my experience, though, these can be less challenging than the fundamental design and architecture of a team management app that can potentially solve the problem I feel has become huge: the proliferation of apps to manage a business. Hundreds of apps out there are trying to solve this; I think none is as close as Coda.

I think most agree that Notion is not that app.

Airtable is very thin on team-friendly elements. For example, there is no “Rich Text” field type, let alone one you can pop out and write into like a doc. I consider Airtable a modern database, but Coda offers so much more to create a complete solution.

Zenkit is way too inflexible.

Trying to mold the likes of Jira, Wrike, Asana, Clickup, etc. to meet varying needs of teams is extremely painful.

And I would not allow “low code” solutions like Quickbase or Appian into this comparison.

In other words, is there anything else out there that is leading the way towards solving this problem better than Coda? In my extensive research, the answer today is “no.”

@Bill_French, I guess this is what I am “drunk” on with Coda!

Every time I give up with Coda, and there have been many, I try another solution, as I am desperate to solve this problem. And when I get to another solution, I invariably find myself saying “oh wait, I could accomplish this thing I can’t do here with Coda!” and come right back here.


Hi @ABp, and thanks for referring to my posts. It helps to have extra eyes on shared topics of interest.

The problem for me is that there is Coda the product, and Coda the company.
I can only evaluate based on the information that is given to me.
All the information given so far by the company seems to point to the idea that it is either not in their interest, or technically not possible, to use Coda as a one-stop replacement for project management tools. (edited, added: project management)

The focus seems to be single (small) team projects of short duration. (edit: short in terms of the length for which we can review the content produced before performance reasons require that content to be curated.)

My use case, where I can have multiple projects running with multiple teams - each with their own knowledge base - and the ability to provide ad hoc filters & views on this for different perspectives (individual, team, company, etc.), is simply out of scope for Coda.

It’s not feasible to have it run in a single document in terms of both performance and UX.

And the solution that Coda the company seems to be heading for is to break everything up in multiple documents. And sadly this for me is a showstopper:

  • Search doesn’t work cross-doc.
  • Referencing doesn’t work cross-doc.
  • Cross-doc table UX isn’t the same as in-doc table UX (slower, error prone, etc).
  • Tables are required to be disjoined instead of singular - losing the capability to do global filtering/reporting/etc.

