Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

I want to clarify that I’m just getting started with Coda, and although I’ve built out a CRM and PM structure, I only have a handful of entries in each of the various tables. I haven’t run into any performance issues myself; I’m just interpreting what I’ve read here and in the other thread mentioned.

So I may be misunderstanding the performance situation; perhaps it is limited to certain isolated use cases that most small teams would never encounter. But it is something I need to understand before we move all our processes and knowledge to this lovely tool. :slight_smile:

Okay, so these are hypothetical calamities that you are warning us and Coda about. Let’s examine these a bit…

  • No system is impervious to the failures you are describing. Even systems that are 100% custom-made, and even domain-specific systems created specifically to print shipping labels, can fail, have failed, and will fail.

  • Any shipment process that is gated by a system crafted as an add-on to a document-centric app and cast in a mission-critical role will likely be carefully vetted, stringently tested, and probably scrutinized by many people. I would be skeptical if Coda were the winning technical choice for this specific example. As such, asserting that Coda would get a black eye for such a failure is probably a stretch. Does C++ get a bad rap when a poorly-skilled engineer writes a bad printer driver?

  • Every minute of every day there are dozens of users who pose as skilled technicians and, through masterful convolutery, create incredible sh*t-shows that people - the likes of you and me - must untangle. This is neither a new thing nor anything that is ever likely to stop.

I still don’t have a clear understanding of this phrase. Describe the nature of a process that causes the API to “drop off”.

I certainly agree with all comments here that performance in a given expectation envelope is important and there’s no question the team has work to do.

Yes @Bill_French, you are correct in your points. I used a printer in my example because you could have just said ‘what if it runs out of toner!’ I wanted it to be mundane and simple … AND I can see doctors and hospitals building ‘mundane’ workflows that cascade into higher-stakes scenarios. I mean, I can see unlimited potential!! Which is why I care enough to write in this forum.

My point is that once it breaks, now what?? There’s no good solution. I’ve had to rebuild my ‘doc to rule them all’ 4 times from scratch, and now it lives as 6 different docs. It was a TREMENDOUS learning opportunity to do this, and it’s made me the Coda wizard I am now, but it’s not a good model for the business as a whole. As you said, most people are not technical innovators who geek out on this stuff. @Ed_Liveikis laid out the best argument for why it needs to support ~10x the general use case for safety, much like how buildings are overdesigned beyond their weight limits.

As far as being thrown off the API, perhaps you haven’t had the honor yet? Typically I get a Zapier notification. In peak distress I created this post when I got hit by the truck and didn’t know what to do … I had so much invested in my doc, and then BOOM, everything broke. All the people I cajoled into using Coda were pissed … business processes failed … it was a total disaster. As I said earlier, my suggestion at this time isn’t for them to artificially raise the limit beyond what is technically possible at the moment. I have confidence in Moore’s Law and that they will be able to increase these limits. It’s how Coda handles the failure that I now think could be optimized (at the time of my post they were publicly claiming to handle hundreds of thousands of rows, but then kicked me to the curb at 3,000 rows). AND I have confidence they are working on it and that it’s a priority … my purpose here isn’t so much to chide Coda as to provide guidance to other early pioneering innovators, so that they might not hit their heads on the doorframes I’ve already passed through.

1 Like

Until such time as Coda is able to allocate more resources to scaling performance, I would appreciate more user-facing diagnostic tools like the “Debug calculations” tool, which I use all the time.

What gets measured, gets managed.

Expose as much diagnostic data as possible to power users, so that we can proactively manage approaching performance issues long before our Coda projects unexpectedly blow up, costing who knows how much downtime.

Empowering us to partner with you on managing performance limitations will leverage your limited resources.

$.02.

Awesome conversation, btw.

6 Likes

Well said @Ander! Totally agree.

2¢ quite valuable.
Thank you.

2 Likes

@Johg_Ananda,

You trusted a codeless, near-free, black-box platform to provide a mission-critical integration. I’m sorry it went sideways, but this is precisely why solution designers need to be skeptical of the many gears and machines they choose to place between their processes and their data.

At the heart of this problem is a failure to recognize that Coda (like Airtable) provides no event handlers. Because of this, Zapier and Integromat are forced to poll for data changes, just as you would be forced to had you created a custom integration with the Coda API. This is why it failed. And while it’s fair to lay [some] blame at Coda’s feet, we all must accept at least some of the responsibility. This limitation is not related to Coda’s performance or their API per se - it’s caused because we all conspire to oversaturate the Coda infrastructure with millions of unnecessary requests every hour - perhaps every minute.

We’re high on glue - the Zapier and Integromat glue. We think it’s wonderful and yet, it is ultimately harmful without webhooks and events that drive them.

Avoiding this (for us) is not easy, and adhesives the likes of Zapier only exacerbate the problem by offering vast integration options that are free or near-free and easy to set up. This encourages less-knowledgeable users to escalate the polling insanity, because they know not what’s happening under the covers.

If the Internet depended on polling architectures, we would have shut it down in 2005. But we got smart and we realized that events are critical to interoperability success patterns. Why Coda chose to ship before events and webhooks were possible is not something I can easily rationalize.
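To make the polling tax concrete, here is a minimal sketch (Python against the public Coda REST API; the token, doc ID, table ID, and 5-minute interval are placeholders, not anyone’s production code) of the pattern every glue tool is forced into when no events exist:

```python
# A minimal sketch of the polling pattern - an illustration, NOT a recommendation.
# API_TOKEN, DOC_ID, and TABLE_ID are placeholders for your own values.
import time
import requests

API_TOKEN = "your-coda-api-token"   # placeholder
DOC_ID = "abc123"                   # placeholder
TABLE_ID = "grid-xyz"               # placeholder
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

last_seen = {}  # row id -> updatedAt timestamp we last observed

while True:
    # Every cycle re-reads the table just to discover what (if anything) changed.
    resp = requests.get(
        f"https://coda.io/apis/v1/docs/{DOC_ID}/tables/{TABLE_ID}/rows",
        headers=HEADERS,
    )
    resp.raise_for_status()
    for row in resp.json()["items"]:
        if last_seen.get(row["id"]) != row["updatedAt"]:
            last_seen[row["id"]] = row["updatedAt"]
            print("changed:", row["id"])  # hand off to the downstream process
    time.sleep(300)  # poll every 5 minutes; most cycles find nothing new
```

Multiply this loop by every Zap and scenario ever pointed at Coda and you arrive at the “millions of unnecessary requests” problem.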

My client solutions do not suffer from this calamity because (a) I asked deep questions about the architecture before getting clients on to it, and (b) for any processes that require some degree of event handling, I either optimize the crap out of the polling detection scheme and set clear latency expectations, or I use Slack events to mimic actual events. I also avoid polling at all costs and push manually-driven event triggers on to actual users until such time as events and webhooks may become available.

Where does that leave us?

Well, we can’t blame Zapier - they want events and webhooks as much as we do. We can certainly blame the glue-factories for referring to these integrations with Coda as “webhooks”. At best it’s a deeply harmful mischaracterization of their service.

Coda can change the API performance game with two key architectural enhancements that work the way almost every SaaS solution presently achieves optimized interoperability. I don’t think they have a choice. In the meantime, there be thin ice - best to skate carefully my friend.

[UPDATE]: BTW, a couple of key things are circling this drain.

  1. Someone asked about the API accessing data via views during the Maker webinar, and @shishir mentioned that he thought it was already possible, though likely not in the SDKs. This alone would allow us to discover changed records easily and avoid massive table-level polling to find data patterns. I will be spending a fair bit of time on this in the coming weeks, because it is a significant advantage in the absence of webhooks and events.

  2. The glue-factories do not pace or filter their API calls. In many of my integrations, I pace the change-detection and processing tasks by making a small number of API requests over many hours. This allows me to perform API tasks across many thousands of records without triggering alarms. But it does require some carefully scripted machinery to segment the activity across the record set; a rough sketch follows below.
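Here is roughly what I mean by pacing, assuming the Coda REST API’s pageToken-style pagination; the IDs, batch size, and interval below are illustrative only:

```python
# A rough sketch of paced API work: walk a large table in small pages,
# sleeping between pages so the work spreads over hours instead of bursting.
import time
import requests

API_TOKEN = "your-coda-api-token"  # placeholder
ROWS_URL = "https://coda.io/apis/v1/docs/abc123/tables/grid-xyz/rows"  # placeholder IDs
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def paced_scan(batch_size=100, pause_seconds=300):
    """Fetch batch_size rows at a time, pausing between pages."""
    params = {"limit": batch_size}
    while True:
        resp = requests.get(ROWS_URL, headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        for row in payload["items"]:
            process(row)  # your change-detection or transform step
        token = payload.get("nextPageToken")
        if not token:
            break  # reached the end of the record set
        params = {"limit": batch_size, "pageToken": token}
        time.sleep(pause_seconds)  # the pacing: ~100 rows every 5 minutes

def process(row):
    pass  # placeholder for the real work
```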

4 Likes

@Bill_French great post. Webhooks, or things like gRPC, are definitely needed for change notifications in big public-facing APIs.

I’m trying to read between the lines here - are you implying that you are successfully using Coda on sizable datasets with your clients, without any major performance issues or major sacrifices? I’m considering writing code to automatically fill my PM and CRM projects with a 2-3-year simulated data load, but I’d rather avoid the effort if y’all can confirm things are working decently enough for you :slight_smile:

1 Like

They’re needed for little private APIs as well. We have many web services that are special-purpose micro-APIs based on serverless architectures that listen for events. Big or tiny, webhooks (driven by events) move data seamlessly using very few computing resources.
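For contrast, here is roughly all an event-driven consumer needs - a hypothetical webhook receiver sketched with Flask (Coda does not emit webhooks today, so the route and payload shape are invented for illustration; any serverless handler works the same way):

```python
# A minimal sketch of the event-driven alternative: a tiny webhook receiver.
from flask import Flask, request

app = Flask(__name__)

@app.route("/hooks/row-changed", methods=["POST"])
def row_changed():
    event = request.get_json(force=True)
    # The sender tells us exactly what changed; no polling, no wasted calls.
    print("row changed:", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```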

What is “sizable”?

In my data science work, a million records is moderate and considered outside the realm of browser-based apps, but I handle this often with the help of Streamlit, and then I embed Streamlit apps into Coda. With that, document users have access to big-data analytics and a framework for building narratives, exploring analytics, etc. There are many tools that support this kind of embedded blending to give Coda users access to “sizable” data sets.
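As a sketch of that blending pattern: a Streamlit app of a few lines can serve a million-row data set and then be embedded in a Coda page by URL. The file and column names below are invented for the example:

```python
# A minimal Streamlit sketch: the big data set lives here, not in Coda.
import pandas as pd
import streamlit as st

st.title("Big-data analytics, embedded in a Coda page")

@st.cache_data  # re-read only when the file changes (older versions used st.cache)
def load_data():
    return pd.read_parquet("transactions.parquet")  # illustrative ~1M-row file

df = load_data()
segment = st.selectbox("Segment", sorted(df["segment"].unique()))
st.line_chart(df[df["segment"] == segment].groupby("month")["amount"].sum())
```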

I also have one client that uses Google Apps Script to cache customer survey data forward directly into a raw Coda table, from which they perform various analyses and reports by slicing off views. I advise them to be cautious about biting off large collections.

That particular solution allows users to formulate a query by selecting filtering criteria in a table and setting a refresh flag. A cron process checks for those filter settings in the Coda document and, via the API, refreshes the data. I’ve seen them populate 10,000+ rows in a raw data table, but this takes time. Typically they make a request before leaving the office, and the data is ready in the morning. The cron process adds 100 records every five minutes until complete. This is the pacing aspect I was referring to earlier.
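A sketch of what that drip-feed looks like, assuming the Coda REST API’s row-insert endpoint; the URL, column name, and timing are illustrative, not this client’s actual code:

```python
# A sketch of paced inserts: ~100 rows every five minutes until complete.
import time
import requests

API_TOKEN = "your-coda-api-token"  # placeholder
ROWS_URL = "https://coda.io/apis/v1/docs/abc123/tables/grid-raw/rows"  # placeholder IDs
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def drip_feed(records, batch_size=100, pause_seconds=300):
    """Insert records in small batches, pausing between batches."""
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        payload = {
            "rows": [
                {"cells": [{"column": "Response", "value": r}]}  # illustrative column
                for r in batch
            ]
        }
        requests.post(ROWS_URL, headers=HEADERS, json=payload).raise_for_status()
        time.sleep(pause_seconds)  # overnight-friendly pacing, as described above
```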

Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.

Coda can handle some data volume, but don’t expect it to be browsable in a performant fashion; you need to keep larger record sets out of view of the user and pull summaries and other metrics into smaller sets with views.

Major sacrifices? Absolutely.

I think this is where you need to ask - for what purpose?

In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.

Again - just my view – it is not [presently] a safe harbor for large historical collections of data. And to be a little harsh (perhaps), a quick glance at the feature set should give us a clear picture of how Coda can be used advantageously. Large data set support is not what I see when I read about Coda’s features - just sayin’.

I recommend careful experimentation and a general strategy that paginates seamlessly to mimic the idea that the Coda app has unlimited data scale. This concept is difficult given the lack of HTTP POST/GET capability, but there are ways to overcome these shortcomings.
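One hedged sketch of that pagination idea: keep the full data set outside Coda and let an external process swap a single “window” table to whatever page the user requests. The endpoint shapes follow the public Coda REST API, but the IDs and column are invented:

```python
# The "window" pattern: the full data set lives outside Coda; an external
# process swaps one page at a time into a small Coda table.
import requests

API_TOKEN = "your-coda-api-token"  # placeholder
ROWS_URL = "https://coda.io/apis/v1/docs/abc123/tables/grid-window/rows"  # placeholder
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def show_page(full_dataset, page, page_size=200):
    """Replace the Coda 'window' table with one page of the big data set."""
    # 1. Clear the window (first page of row ids only; loop for bigger windows).
    current = requests.get(ROWS_URL, headers=HEADERS).json()["items"]
    if current:
        requests.delete(
            ROWS_URL, headers=HEADERS,
            json={"rowIds": [r["id"] for r in current]},
        ).raise_for_status()
    # 2. Insert just the requested page.
    chunk = full_dataset[page * page_size:(page + 1) * page_size]
    payload = {"rows": [{"cells": [{"column": "Record", "value": item}]}
                        for item in chunk]}
    requests.post(ROWS_URL, headers=HEADERS, json=payload).raise_for_status()
```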

Someday it will be easier. Today, not so much.

1 Like

Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.

Well, it’s good they’re happy with it - but I’m not sure this is something I’m willing to deal with on the PM side for our small team; the tool should be reducing friction.

I think this is where you need to ask - for what purpose?

Well, as mentioned: at least PM and CRM for a software firm. On a 2-year project with 5-10 people, it’s typical to have several thousand project-management items created in the form of goals, tasks, and bugs. These are viewed and edited on a daily basis by each team member, and the PM in particular will spend a significant portion of the day working in the tool. The data structure and views wouldn’t change much, but rows would.

Most of the time only incomplete work needs to be reviewed, which is usually in the range of hundreds of items. Perhaps the filtering of views will make this workflow (mostly) performant. I am hiding all the actual data-source tables in their own folder.

Our purpose is to avoid jumping from tool to tool, with all the cost that brings, for things that are supported in Coda and already converging in the existing tools. You can use Asana to do CRM, but awkwardly and insufficiently, just as you can do task management in Excel or in a CRM, etc. These tools are adding more and more features to be used in different ways, but still within rigid structures that were initially intended for a certain use case.

Where we would normally use (the actual tools are only examples) Jira (PM), Zoho (CRM), Confluence (docs, wiki), Typora (markdown notes, quick designs), and GDocs/Excel (custom reports, collaborative documentation, etc.), we can do all of these in Coda with a custom, streamlined workflow that has only what we need in it. Ambitious, yes, but after playing with Coda, and with Notion to a lesser extent, this is clearly achievable with such a tool.

In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.

Again - just my view – it is not [presently] a safe harbor for large historical collections of data.

Thank you for the input. That makes sense. It seems I’ll need to keep evaluating, and probably use the API to fill up our dataset with a max load and see how the app performs.

2 Likes

Indeed. These are all good reasons to remake and compress the toolset. This is my incentive for the most part as well - to help clients become more productive by doing less, reducing complexity, increasing usability, and enabling convenience.

Which brings us to the question of archiving. :wink:

We tend to assume databases should grow, and only grow. Are we just being lazy? Or, are there key business requirements for instant access to everything? Perhaps it’s time to think about data-at-rest and where it should reside when at rest.

In my view, we tend to need these historic items for two key objectives:

  1. Search
  2. Analytics

Why not stand up an instance of ElasticSearch and push the past off the table and into something that can make the data more useful?
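As a sketch of that archive flow (using recent versions of the official Elasticsearch Python client; the index name and document shape are invented for illustration):

```python
# Push closed/historic Coda rows into an archive index; they can then be
# deleted from the doc without losing search or analytics.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # your own instance

def archive_rows(rows):
    """Index row dicts (as returned by the Coda API) into 'coda-archive'."""
    for row in rows:
        es.index(index="coda-archive", id=row["id"],
                 document={"name": row["name"], "values": row["values"],
                           "updatedAt": row["updatedAt"]})

# Later: full-text search across everything ever archived.
hits = es.search(index="coda-archive", query={"match": {"name": "invoice"}})
```

Once rows live in the index, the doc keeps only live data while search and analytics run against the archive.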

3 Likes

Why not stand up an instance of ElasticSearch and push the past off the table and into something that can make the data more useful?

It’s an interesting proposal, especially given what you mentioned earlier: that webapps in general struggle to represent and deal with large amounts of data.

At the same time it is something I am trying to avoid - exporting data and dealing with it somewhere else because the main tool is unable to.

It’s very rare that all items, including archived ones, or even only archived ones, need to be inspected one by one, individually. I don’t have any uses for this in my current workflows.

They are still usually needed for administrative or team-wide analytics, e.g., what’s our overall rate of opening and closing work items over time, since the beginning of time? The delta between these lines is informative for project trajectory and milestone tracking, and it also serves as a historical guide for how future projects may track or be informed to create a better process.

I’d be happy to have a place where I define the fields/columns for tables (a schema-definition place) without the app needing to show the actual data, in lieu of the “Data” folder I currently have with the table sources. Then all tables in the app could be views, probably all with some sort of filters/limits on the data that the frontend model and UI need to deal with. If the API ends up being as powerful and speedy as it should be, users can make their own custom app views with their UIs of choice if the web app is too slow.

Now, excluding all other Coda users, the data set complexity I’m referring to is a trivial load for something like a Postgres DB, and many teams would be happy to host their own Coda servers. If Coda doesn’t solve this problem, someone else eventually will. :mantelpiece_clock: :slight_smile:

Coda has a nice set of feature requests on their recent voting page, so it’s a tough call what to prioritize first, but you can’t go wrong with good performance and some bugfixing.

1 Like

Eager to get back in here, as so much interesting is being said! A real education for me…

First, @Johg_Ananda and @Bill_French - you have both answered the question of this post: I’m going to say scalability is a reason a team would NOT use Coda! I am not using the API yet, but I do want to. I have CRM-level clients that I wanted to bring into Coda with the API. I was previously bringing them into Salesforce, which, if all goes well, I’ll be dropping when the license ends. I was not aware of the API limits, nor of the poor way this seems to be handled by otherwise stellar Coda Support. But a 3k-row limit in a doc - that is a serious limit if you plan to set up Coda as your total CRM and task/project-management solution!

I think this goes hand-in-hand with what @Paul_Danyliuk is pointing out re: issues with automations over here:

So, my example of the Y Combinator batch company using Coda would come under serious question if Coda is going to be this limited in scaling.

Very eye-opening, and in fact you’ve got me wondering about my own plans with Coda! I also agree that Cross-Doc seems like a way to solve this, by spreading content into other docs, but I have issues here:

  • Cross-Doc is currently weak, only one-way. But I do trust the team to solve this.
  • As I’ve been discussing, mainly here of late:

I have issues with my schema in figuring out how to link stuff to other stuff in Coda. I think it would be solved by reciprocal linking, as I’ve also talked about in the community, but I don’t seem to have any insight back as to whether Coda itself is interested in introducing this soon. The method of a table linking to essentially just one other table via lookups is very limiting. I have needs where tables must link to many tables, such as Meeting Notes and other decision-type tables I’m counting on to help my team in its roadmapping and goal planning. So, to limit my table count and get around this, I currently have a bunch of stuff in the same tables that really shouldn’t be there. Having to figure out how to link several topic areas via Cross-Doc seems even more challenging than doing so within one doc.

And @Johg_Ananda, agreed 100% that Coda is starting to promote itself as a team solution; that is part of why I started this post in the first place!

The solutions you guys put forth are really outstanding, and count me in as one participant in this convo who’d really like to find out more about how Coda’s Product Team thinks about all this. I hope they weigh in soon, either here or somewhere else that will give us all some reassurance that investing in Coda now as a team solution will not run into scalability roadblocks in short order! My example of an engineering team moving 50 engineers off Jira into Coda, with today’s existing Coda architecture, seems foolish, though, in light of what you guys presented here.

And @Federico_Stefanato and @Ander, great to hear from you both - you’ve both helped me tremendously in the community before - Federico, I owe you a response on your last answer, in fact! The education this dialogue provides me is immense. I respect all four of you as serious Makers. I am wavering about investing my team’s future solution in Coda, and you guys are presenting big-time details of how Coda really works. Invaluable, and thanks again!

4 Likes

One and only real answer: poor performance.

Compared to other cloud products like Excel, Airtable, etc., Coda is very slow and flaky. In 2020 you’re used to robust products that can take the stress of lots of data. Unfortunately, Coda isn’t one of them.

Performance, and Coda’s opaque attitude around it, is sadly the primary reason.

I invested quite a bit of time creating a setup inside Coda that provided proper project management. I also did my due diligence on stress-testing the setup in terms of row count, complexity, and automation.

Sadly, Coda started becoming sluggish at relatively small workloads (~4,000 rows) on seemingly simple table setups. On top of that, this decrease in response time wasn’t even limited to pages containing the heavy table.

Subjectively, over the past few months I’d have to say that the performance has sadly decreased even further, even though we haven’t been doing any major table work - mostly simple pages.

The secondary reason is that in a lot of respects it just didn’t feel “finished”, and I’ve reported numerous UX and UI bugs in the past year. The QA process I went through also didn’t give me the impression that these (IMHO) basic bugs would get solved anytime soon :frowning:.

E.g.: the trivial CSS bug in which an image view on cards doesn’t collapse if no image is set. (One could argue that this is preference, although I think it’s not the 95% case.) Another example is the UX around Categories for Cards, which is disabled in Card view but can be set if you change to a different view, set it there, and change back.

3 Likes

Hey @GJ_Roelofs, very cool to have you chime in here. I was in fact thinking earlier of including one of your quotes in this post; I think it is one of the best summaries I’ve ever seen of the appeal of Coda. Here it is:

So I’m glad to see you appear on your own!

It seems there is a big task ahead of Coda to lick these performance issues. From my experience, though, these can be less challenging than the fundamental design and architecture of a team-management app that can potentially solve the problem I feel has become huge: the proliferation of apps to manage a business. Hundreds of apps out there are trying to solve this; I think none is as close as Coda.

I think most agree that Notion is not that app.

Airtable is very thin on team-friendly elements. For example, there is no “Rich Text” field type, let alone one you can pop out and write into like a doc. I consider Airtable a modern database, but Coda offers so much more to create a complete solution.

Zenkit is way too inflexible.

Trying to mold the likes of Jira, Wrike, Asana, Clickup, etc. to meet varying needs of teams is extremely painful.

And I would not allow “low code” solutions like Quickbase or Appian into this comparison.

In other words, is there anything else out there that is leading the way towards solving this problem better than Coda? In my extensive research, the answer today is “no.”

@Bill_French, I guess this is what I am “drunk” on with Coda!

Every time I give up on Coda, and there have been many such times, I try another solution, as I am desperate to solve this problem. And when I get to another solution, I invariably find myself saying “oh wait, I could accomplish this thing I can’t do here with Coda!” and come right back.

2 Likes

Hi @ABp, and thanks for referring to my posts. It helps to have extra eyes on shared topics of interest.

The problem for me is that there is Coda the product, and Coda the company.
I can only evaluate based on the information that is given to me.
All the information given so far by the company seems to point to the idea that it is either not in their interest, or technically not possible, to use Coda as a one-stop replacement for project-management tools. (edited, added: project management)

The focus seems to be single, (small-)team projects of short duration. (edit: short in terms of the length of time over which we can review produced content before performance reasons require that content to be curated.)

My use case - where I can have multiple projects running, with multiple teams, each with their own knowledge base, and the ability to provide ad hoc filters & views on this for different perspectives (individual, team, company, etc.) - is simply out of scope for Coda.

It’s not feasible to have it run in a single document in terms of both performance and UX.

And the solution that Coda the company seems to be heading for is to break everything up into multiple documents. Sadly, this for me is a showstopper:

  • Search doesn’t work cross-doc.
  • Referencing doesn’t work cross-doc.
  • Cross-doc table UX isn’t the same as in-doc table UX (slower, more error-prone, etc.).
  • Tables are required to be disjoint instead of singular - losing the capability to do global filtering/reporting/etc.

:frowning:

1 Like

This is not entirely true. Rich-text support is in beta, and I’ve been using it for about 90 days - flawless and quite useful. Although it is not truly “rich” - rather, it’s an implementation of Markdown under the covers.

And thankfully, your comments show sober periods followed by more drinking. :wink:

2 Likes

I’d like to understand this in more detail. It seems like they have the same permission structure as Google Drive, or am I missing something?

This is a very broad assertion that is deeply ambiguous. A one-stop replacement for other tools is so vast, and the requirements so varied, that I would fear engaging a team to attempt it, or even allowing one to try.

In contrast, I enjoyed the tone and tenor of your comments - they are astute observations that pin down many of the roadblocks we all face from time to time in Coda. And to be clear, not all of us are trying to use Coda as a “one-stop replacement for other tools”, so while we bump into many of your pain points, we do so for different reasons. Limited search functionality is a good example; you don’t have to be building anything to appreciate unified search.

I don’t want to speak for Codans, but I don’t think they are superimposing their assumptions about use-case duration. If you have that feeling, it’s easy to find examples of long-running documents that ebb and flow with data month after month.

As to the “single (small) team projects” observation, you may be right, but I doubt this is an intentional design limitation or chosen strategy. Rather, I would guess it’s more an inescapable aspect of the relative adolescence of the product. In the quest to compress time, steps, and tools - Codans need [ironically] time and space to explore the best ways to get you to your product vision.

1 Like

To avoid any ambiguity: most, if not all, of my comments can be read within the context of my use case, as outlined in the same post. The “one-stop replacement for other tools” was shorthand for the set defined by tools that deal with project and knowledge management.

True again; to me, this statement was also made within my context and use case, which I should have been more specific about.

The “short” in “single (small) team projects with short duration” mostly referred to the fact that any document that produces content over time will be required to delete content just for the sake of performance.

I.e.: either your document requires no memory and the timespan doesn’t matter, or its span and longevity are defined by the rate at which content is produced - hence “short”.

I find this last issue particularly large, as the role that Coda fills at face value for some unaware newcomers (document management, knowledge base) is a time-bomb of sorts, where the user at some point either needs to give up the crucial UX that led to the choice of Coda as a platform, or accept the loss of data integrity.

The above is contingent on my understanding of where Coda is heading being correct.
However, given the recent posts on API & row-content limits (i.e., if a realistic number of ~50k rows in a simple table were feasible, then we wouldn’t be getting 1k limits on certain API calls - a number I have not personally run up against, but one referenced in a newly created topic on API limits), it looks like the performance issue is rather fundamental and not easily solvable.


All of the above comes with the understanding that I want to make very clear where certain “hidden” limitations of Coda lie, AND the fact that I still currently use it in the role of a knowledge base.