Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

Indeed. These are all good reasons to remake and compress the toolset. This is my incentive for the most part as well - to help clients become more productive by doing less, reducing complexity, increasing usability, and enabling convenience.

Which brings us to the question of archiving. :wink:

We tend to assume databases should grow, and only grow. Are we just being lazy? Or, are there key business requirements for instant access to everything? Perhaps it’s time to think about data-at-rest and where it should reside when at rest.

In my view, we tend to need these historic items for two key objectives:

  1. Search
  2. Analytics

Why not stand up an instance of ElasticSearch and push the past off the table and into something that can make the data more useful?
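
As a sketch of that idea: archived rows pulled out through the Coda API could be bulk-indexed into Elasticsearch. The row shape and index name below are assumptions for illustration; only the NDJSON `_bulk` format is Elasticsearch's own convention:

```python
import json

def rows_to_bulk_payload(rows, index="coda-archive"):
    """Build an Elasticsearch _bulk request body (NDJSON) from
    exported rows. Row shape and index name are hypothetical."""
    lines = []
    for row in rows:
        # Action line, then the document itself - one JSON object per line
        lines.append(json.dumps({"index": {"_index": index, "_id": row["id"]}}))
        lines.append(json.dumps(row["values"]))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

archived = [
    {"id": "i-1", "values": {"task": "Ship v1", "closed": "2019-03-01"}},
    {"id": "i-2", "values": {"task": "Fix login", "closed": "2019-04-12"}},
]
payload = rows_to_bulk_payload(archived)
# POST this to <es-host>/_bulk with Content-Type: application/x-ndjson
```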

It’s an interesting proposal, especially given what you mentioned earlier, that webapps in general struggle to represent and deal with large amounts of data.

At the same time, it is something I am trying to avoid - exporting data and dealing with it somewhere else because the main tool is unable to handle it.

It’s very rare that all items, including archived ones, or even only archived ones, need to be inspected one by one, individually. I don’t have any uses for this in my current workflows.

They are still usually needed for administrative or team-wide analytics - e.g., what’s our overall rate of opening and closing of work items over time, since the beginning of time? The delta between these lines is informative for project trajectory and milestone tracking, and also serves as a historical guide for how future projects may track or be informed to create a better process.
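
As a minimal sketch of that opened-vs-closed analysis (the work items and months below are made up), the two lines and their delta are just cumulative sums:

```python
from collections import Counter
from itertools import accumulate

# Hypothetical work items: (month opened, month closed or None if still open)
items = [("2019-01", "2019-02"), ("2019-01", "2019-04"),
         ("2019-02", None), ("2019-03", "2019-03"), ("2019-04", None)]
months = ["2019-01", "2019-02", "2019-03", "2019-04"]

opened = Counter(o for o, _ in items)
closed = Counter(c for _, c in items if c)

cum_opened = list(accumulate(opened.get(m, 0) for m in months))
cum_closed = list(accumulate(closed.get(m, 0) for m in months))
# The delta between the two lines: open work still on the table each month
backlog = [o - c for o, c in zip(cum_opened, cum_closed)]
```

A growing `backlog` suggests a project is taking on work faster than it closes it; a flat or shrinking one suggests the opposite.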

I’d be happy to have a place where I define the fields/columns for tables (schema definition place) - without the app needing to show the actual data, in lieu of the “Data” folder I currently have with the table sources. And then all tables in the app can be views, probably all with some sort of filter/limits on data that the frontend model and UI needs to deal with. If the API ends up being powerful and speedy as it should be, users can make their own custom app views with their UI’s of choice if the web-app is too slow.

Now, excluding all other Coda users, the data set complexity I’m referring to is a trivial load for something like a Postgres DB, and many teams would be happy to host their own Coda servers. If Coda doesn’t solve this problem, someone else eventually will. :mantelpiece_clock: :slight_smile:

Coda has a nice set of feature requests on their recent voting page, so it’s a tough call what to prioritize first, but you can’t go wrong with good performance and some bugfixing.

1 Like

Eager to get back in here, as so much interesting is being said! A real education for me…

First, @Johg_Ananda and @Bill_French - you both have answered the question of this post - I’m going to say Scalability is a reason a team would NOT use Coda! I am not using the API yet, but I do want to. I have CRM-level clients that I wanted to bring into Coda with the API. I was previously bringing them into Salesforce, which, if all goes well, I’ll be dropping when the license ends. I was not aware of the API limits, nor of the poor way this seems to be handled by otherwise stellar Coda Support. But a 3k limit in a doc - that is a serious limit if you plan to set up Coda as your total CRM and Task/Project Management solution!

I think this goes hand-in-hand with what @Paul_Danyliuk is pointing out re: issues with automations over here:

So, my example of the Y-combinator Batch Company using Coda would come under serious question if Coda is going to be this limited with scaling.

Very eye-opening, and in fact you’ve got me wondering about my own plans with Coda! I also agree that Cross-Doc seems like a way to solve this, by spreading content into other docs, but I have issues here:

  • Cross-Doc is weak right now - only one-way. But I do trust the team to solve this.
  • As I’ve been discussing, mainly here of late:

I have issues with my schema in figuring out how to link things to other things in Coda. I think it would be solved by reciprocal linking, as I’ve also talked about in the community, but I don’t seem to have any insight back as to whether Coda itself is interested in introducing this soon. The method of a table linking to essentially just one other table via lookups is very limiting. I have needs where tables must link to many tables, such as Meeting Notes and other decision-type tables I’m counting on to help my team in its roadmapping and goal planning. So I currently have a bunch of things in the same tables that really shouldn’t be there, in order to limit my table count and get around this. Having to figure out how to link several topic areas via Cross-Doc seems even more challenging than doing it within one doc.

And @Johg_Ananda agreed 100% that Coda is starting to promote itself as a team solution, that is part of why I started this post in the first place!

The solutions you guys put forth are really outstanding, and count me in as one participant in this convo who’d really like to find out more about how Coda’s Product Team thinks about all this. I hope they weigh in soon, either here or somewhere that will give us all some reassurance that investing in Coda now as a Team Solution will not run into Scalability roadblocks in short order! My example of an engineering team moving 50 engineers off Jira into Coda, with today’s existing Coda architecture, seems foolish, though, in light of what you guys presented here.

And @Federico_Stefanato and @Ander, great to hear from you both - you’ve both helped me tremendously before in the community - Federico, I owe you a response on your last answer, in fact! All the more, this dialogue’s education for me is immense. I respect all 4 of you as serious Makers. I am wavering about investing my team’s future solution in Coda, and you guys are presenting big-time details of how Coda really works. Invaluable, and thanks again!

3 Likes

One and only real answer: poor performance.

Compared to other cloud products like Excel, Airtable, etc., Coda is very slow and flaky. In 2020 you’re used to robust products that can take the stress of lots of data. Unfortunately, Coda isn’t one of them.

Performance and Coda’s opaque attitude around it is sadly the primary reason.

I invested quite a bit of time creating a setup inside Coda that provided proper project management. I also did my due diligence in stress testing the setup in terms of row count, complexity, and automation.

Sadly, Coda started becoming sluggish at relatively small workloads (~4000 rows) on seemingly simple table setups. On top of that, this decrease in response time wasn’t even limited to pages containing the heavy table.

Over the past few months, subjectively, I’d have to say that the performance has sadly decreased even further, while we haven’t been doing any major table work - mostly simple pages.

The secondary reason is that, in a lot of aspects, it just didn’t feel “finished”, and I’ve reported numerous UX & UI bugs in the past year. The QA process I went through also didn’t give me the impression that these (IMHO) basic bugs would get solved anytime soon :frowning:.

E.g.: the trivial CSS bug which doesn’t collapse an image view on cards if it is not set. (One could argue that this is preference, although I think it’s not the 95% case.) Another example is the UX around Categories for Cards, which is disabled in Card view, but can be set if you change it in a different view and change the view back.

3 Likes

Hey @GJ_Roelofs, very cool to have you chime in here. I was in fact thinking earlier of including one of your quotes in this post - I think it’s one of the best summaries I’ve ever seen of the appeal of Coda. Here it is:

So I’m glad to see you appear on your own!

It seems there is a big task ahead of Coda to lick these performance issues. From my experience, though, these can be less challenging than the fundamental design and architecture of a team management app that can potentially solve the problem I feel has become huge - the proliferation of apps to manage a business. Hundreds of apps out there are trying to solve this; I think none is as close as Coda.

I think most agree that Notion is not that app.

Airtable is very thin on team-friendly elements. For example, there is no “Rich Text” field type, let alone one you can pop out and write into like a doc. I consider Airtable a modern database, but Coda offers so much more to create a complete solution.

Zenkit is way too inflexible.

Trying to mold the likes of Jira, Wrike, Asana, Clickup, etc. to meet varying needs of teams is extremely painful.

And I would not allow “low code” solutions like Quickbase or Appian into this comparison.

In other words, is there anything else out there that is leading the way towards solving this problem better than Coda? In my extensive research, the answer today is “no.”

@Bill_French, I guess this is what I am “drunk” on with Coda!

Every time I give up on Coda, and there have been many such times, I try another solution, as I am desperate to solve this problem. And when I get to another solution, I invariably find myself saying “oh wait, I could accomplish this thing I can’t do here with Coda!” and come right back.

1 Like

Hi @ABp, and thanks for referring to my posts. It helps to have extra eyes on shared topics of interest.

The problem for me is that there is Coda the product, and Coda the company.
I can only evaluate based on the information that is given to me.
All information given so far from the company seems to point to the idea that it is either not in their interest, or technically not possible, to use Coda as a one-stop replacement for project management tools. (edited, added: project management)

The focus seems to be single (small) team projects with short duration. (edit: short in terms of the length in which we can review content produced before performance reasons require that content to be curated.)

My usecase, where I can have multiple projects running, with multiple teams - each with their own knowledgebase - and the ability to provide adhoc filters & views on this for different perspectives (individual, team, company, etc) is simply out of scope for Coda.

It’s not feasible to have it run in a single document in terms of both performance and UX.

And the solution that Coda the company seems to be heading for is to break everything up in multiple documents. And sadly this for me is a showstopper:

  • Search doesn’t work cross-doc.
  • Referencing doesn’t work cross-doc.
  • Cross-doc table UX isn’t the same as in-doc table UX (slower, error prone, etc).
  • Tables are required to be disjoint instead of singular - losing the capability to do global filtering/reporting/etc.

:frowning:

This is not entirely true. The rich-text support is in beta and I’ve been using it for about 90 days - flawless and quite useful. Although, it is not truly “rich” - rather, it’s an implementation of Markdown under the covers.

And thankfully, your comments show sober periods followed by more drinking. :wink:

1 Like

I’d like to understand this in more detail. Seems like they have the same permission structure as Google Drive or am I missing something?

This is a very broad assertion that is deeply ambiguous. A one-stop-replacement of other tools is so vast and the requirements are so varied that I would fear engaging a team to attempt this or even allow it to.

In contrast, I enjoyed the tone and tenor of your comments - they are astute observations that pin down many of the roadblocks that we all face from time-to-time in Coda. And to be clear, not all of us are trying to use Coda as a “one-stop-replacement of other tools”, so while we bump into many of your pain-points, we do so for different reasons. Limited search functionality is a good example; you don’t have to be building anything to appreciate unified search.

I don’t want to speak for Codans, but I don’t think they are superimposing their assumptions about use-case duration. If you have that feeling, it’s easy to find examples of long-running documents that ebb and flow with data month-after-month.

As to the “single (small) team projects” observation, you may be right, but I doubt this is an intentional design limitation or chosen strategy. Rather, I would guess it’s more an inescapable aspect of the relative adolescence of the product. In the quest to compress time, steps, and tools - Codans need [ironically] time and space to explore the best ways to get you to your product vision.

To avoid any ambiguity, most, if not all, of my comments can be read within the context of my usecase - as outlined in the same post. The one-stop-replacement of other tools was a shorthand for the set as defined by those tools that deal with project and knowledge management.

True again, and to me, this statement was again made within my context and usecase - which I should have been more specific about.

“Short” in “single (small) team projects with short duration” mostly referred to the fact that any document that produces content over time will be required to delete content just for the sake of performance.

I.e.: either your document requires no memory and the timespan doesn’t matter, or the span and longevity is defined by the rate at which content is produced - hence “short”.

I particularly find this last issue quite large, as the role that Coda fills at face value for some unaware newcomers (document management, knowledge base) is a time-bomb of sorts, where the user at some point either needs to give up crucial UX that led to the choice of Coda as a platform, or accept that they need to lose data integrity.

The above is contingent on my understanding of where Coda is heading being correct.
However, given the recent posts on API & row content limits, it looks like the performance issue is rather fundamental and not easily solvable. (I.e., if a realistic number of ~50k rows on a simple table were feasible, then we wouldn’t be getting 1k limits on certain API calls - a number I have not personally run up against, but which was referenced in a newly created topic on API limits.)

All of the above comes with the understanding that I want to make very clear where certain “hidden” limitations of Coda lie, AND the fact that I still currently use it in the role of knowledge base.

Let’s revisit this assertion because I cannot recall anytime this has happened with Coda and I have some client tables that exceed far more than 1,000 records. What is the original source of this claim?

I’ve also updated my initial comment to better outline that I personally have not run up against any hard, measurable limit. (Mostly soft throttling that I have not been able to pin down to specific hard limits, but which seems to be on the same order of magnitude (<1k rows) for automated operations.)

@GJ_Roelofs:

it looks like the performance issue is rather fundamental and not easily solvable.

This is what I want to know. Are the performance characteristics we are seeing the result of a race to provide an initial feature set on a solid architecture while putting performance on the back burner? Or does the architecture have fundamental problems that put performance targets for the use cases @GJ_Roelofs and I are talking about out of reach in the short to medium term?

The Year of the Maker Coda team doc has performance as the #2 voted issue. I would love to see a 1 (or 2!) month sprint by Coda focusing strictly on performance and bugfixes. I think if the reason is the former, then we will all be super happy and Coda’s adoption rate will see a nice bump.

2 Likes

In my opinion, this is a flawed claim at best. While I certainly believe @Johg_Ananda experienced this error, I also believe it was entirely avoidable.

Recall my reaction to this [alleged] limitation here. I’ll summarize by saying that (a) Zapier is not kind to APIs, and (b) Zapier must look at every record to determine when absolutely nothing has changed.

I don’t think it’s fair to hang an assessment on a conclusion about Coda’s API that is fundamentally based on the results of a likely bad actor (i.e., Zapier). Codeless integration tools are generally heavy-handed and unaware of discrete knowledge about the data in the integrated system. And let’s be rational about this - the Coda API is Beta 1.

If you want to create sound API-based solutions and you are building a commercial-grade application, the last thing you should be doing is using a glue factory to wire it all together. Adhesive tools like this are fine for prototyping a solution or using in limited and personal projects, but it is very risky to bank on them.

Making it Better

I hate to bitch about something and not offer alternative approaches, so here’s how I program interfaces with the Coda API knowing it is Beta 1 and performance is probably at issue -

  1. Use queries - retrieving records with filters is a must; you cannot optimize without them.

  2. Use visibleOnly - find ways to hide records that aren’t relevant; it has a positive impact on performance.

  3. Pace your processes - chunkify the process over many hours to lessen the calls.

  4. Use views - instead of reading from tables, read from deeply optimized views. This was mentioned in the Maker webinar but I’m not sure this is released yet - will test later this week but the idea is simple - instead of a table, integrate via a view that vastly limits the activity.

  5. Use PUT - sustain discrete record IDs in the middle-tier so you can instantly update any item in Coda without performing repeated GETs just to find that which you want to update.
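
To make tips 1, 2, and 5 concrete, here is a minimal sketch of how I build the requests. The endpoint paths and parameter names follow Coda’s published API docs, but treat the details (and all IDs) as placeholder assumptions:

```python
# Sketch only: builds URLs and parameters; send them with any HTTP client.
BASE = "https://coda.io/apis/v1"

def list_rows_request(doc_id, table_id, column, value, limit=100):
    """Tips 1 and 2: filter server-side and skip hidden rows."""
    url = f"{BASE}/docs/{doc_id}/tables/{table_id}/rows"
    params = {
        "query": f'"{column}":"{value}"',  # server-side filter
        "visibleOnly": "true",             # ignore hidden rows
        "limit": str(limit),               # page size
    }
    return url, params

def update_row_request(doc_id, table_id, row_id, cells):
    """Tip 5: PUT a known row ID directly - no GET round-trip to find it."""
    url = f"{BASE}/docs/{doc_id}/tables/{table_id}/rows/{row_id}"
    body = {"row": {"cells": [{"column": k, "value": v}
                              for k, v in cells.items()]}}
    return url, body
```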

3 Likes

@GJ_Roelofs, really great commentary in here.

So I’m forced to ask - what else have you considered besides Coda? One alternative I was looking at is a new tool, Fibery. They have a great approach, loaded with potential, but very fledgling.

I am essentially trying to replace most of the tools that I listed at the top of this thread, I believe based on your posts that I’ve read that you have a similar goal. Most of these tools are based on some sort of db or another, and if I can do stuff like product plan, as well as manage tasks, in one place, I certainly want to accomplish that!

You raise great points - I too dread Cross-Doc. In fact, to get back to the subject of reciprocal linking, I am already handcuffed in trying to relate all the tables I have, due to the fact that lookups can only handle a one-table-to-one-table relationship. Unless some expert developer-level formulas are brought into the mix? I’m willing to deal with some data sitting in one table when it should really be in separate tables, though. But Cross-Doc is much less something I’d like to deal with…

My hope in reading all this is that, indeed, Coda will address Scalability over time. I have the sense, although as you state it’s not clearly published, that they are interested in serving as a Team Hub solution, and not just limiting themselves to “short” projects, as you aptly put it. This is interesting in the context of the Template Repository. I think there are a bunch of great ideas in there, but they are all isolated, and shouldn’t be. Meeting Notes, but no tasks connected. OKRs, but no customers or projects closely connected. In fact, there is no real mention of even connecting these docs in the first place. Those types of Coda docs are exactly what I’m not looking for, as they put your team in isolation, having to rely on some other app to track the rest of your data.

And @Bill_French, touché. As usual, I agree with most of what you say. Interesting to hear about the Airtable rich text feature. Can you @mention any entity in a Base like with Coda?

I could make a list comparing Coda to Airtable, but for me, there was no contest as to which is superior for team use. Would you choose Airtable for what I, and I think @GJ_Roelofs, are looking for? And I believe Airtable still has, as a top request, the ability to link bases’ data. As much as I fear Cross-Doc, I do have confidence in the Coda team on this front. They are the first to admit this is an initial cut, and they did ship it in a matter of months after 1.0. I think Airtable hasn’t addressed its own lack of “cross base” in years.

Finally, I think a big point of hope is your assertion of the “adolescence” of Coda. That is probably what gives me the greatest confidence in moving forward. I believe that if enough highly skilled Makers continue to flock to Coda, like most of you posting here, they will for sure prioritize this issue of Scalability. In a sense, my whole point with this thread is about the potential of Coda. I’m sure many on the Coda team have read a lot of this commentary, and I believe deep down they will take you Power Makers seriously, and we will continue to see Scalability addressed at the forefront of the roadmap!

Wow, did this turn into a big topic!

I’ve read and re-read this pretty carefully and wanted to address the concerns that have been brought up so far and I also want to thank everyone here for putting so much time into writing these posts and offering this feedback.

There is a lot of great stuff in here about how Coda allows everyone to build much more than they were able to with other systems, and with much less effort. There is also a very big array of other software that Coda has taken the place of, ranging from fully featured project management systems to smaller meeting notes apps.

The big questions popped up around scalability, doc size, and how to manage various amounts of data. One of the questions is how big of a team or company Coda can be used for. Well, Coda is about 70 employees and we actually run on Coda. We use it every day in all sorts of capacities. Other companies that use Coda, to extents that might surprise you, are Uber, Spotify, The New York Times, Square, Zapier, and more. These are companies with large engineering, sales, and support teams, and they use Coda for a very wide range of things as well. So why do we see posts here saying a few people is too much, while companies with thousands of people happen to be running Coda just fine?

It’s been said here already that it’s pretty rare to have a system so flexible that it can be made into the various solutions we see posted in this community. With that kind of extreme flexibility comes an even more extreme number of permutations in the ways these various features can be put together. If you consider every feature available, and all configurations of formulas, it’s an incalculable number. If you throw in the various permutations of data that can be used in these docs, it actually is infinite.

This unthinkably gigantic number of permutations, from formulas to various types and sizes of data, makes estimating how big a doc can become very difficult. There is more to consider than just data size. This isn’t having a flash drive with 500 MB of space and hoping to fill it up to 499 MB while accurately estimating each picture being between 4.6 MB and 5.4 MB. When you have operations to run, you need a cache of memory to hold data while those operations are running. With the nearly wide-open formula language that Coda offers, there are some calculations that can take quite a bit of memory while they run. So that same flash drive can’t hold 499 MB of data AND do calculations at the same time.

There are docs with 10,000 rows that run just fine and docs with 1,000 rows that are somewhat slow. Due to the very wide range of data we’ve seen in rows, it’s tough to ballpark when this happens, but if I had to put a range on it for where Coda is at the moment of writing this post: from 0 to several thousand rows, you should be totally fine; from several thousand to 10,000, you should be efficient with your formulas; and from 10,000 to 20,000, you should be very efficient with data and formulas for a single doc.

As far as the API goes, the limit is more with doc size than with row number. There is a very small percentage of Coda docs running the API that this affects. So small that I can count them on one hand.

As was mentioned before, every system can be overloaded. So when you choose a system, you need to design within that system’s constraints, and if you want bonus points, play to that system’s strengths. Spend some time figuring out what data is necessary and what can be discarded. This will help to streamline your processes as well, so it can be a win-win. Then spend some time figuring out how to most effectively display that information so it can be digested quickly and easily.

My Personal Experience with Larger Datasets

At a previous job, I was asked to help out with a data project that ran well over 100,000 rows of data and that would grow by more than that in less than a year. There were enough columns to make the dataset not loadable in any online spreadsheet software (Sheets or Excel). A local install of Excel was required. The ongoing project had taken roughly 10 hours each month to sort and compile the new data entries and even then, it was just a grid of numbers and pivot tables that would hopefully make sense at some point.

After looking over the data and spending far too many hours trying to reverse engineer and clean up a massive spreadsheet, I started to see where some of the problems were. Then I turned to Coda to sort out the solution.

There were simple ways to break down the data into smaller chunks, run what I needed to in Coda, then compile the results into a cumulative results doc. Data split monthly into separate docs, then compiled results from each into a results doc. With this setup, the time the project took each month went from 10 hours down to 15 minutes. That’s 40x faster than the previous large database setup. Not only that, it found previous errors in the system and improved accuracy moving forward. Charts and stats were far more readable which also allowed them to be far more useful.
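
A minimal sketch of that split-then-compile approach (the records and field names below are hypothetical):

```python
from collections import defaultdict

def split_by_month(records, date_key="date"):
    """Group records into per-month buckets - one bucket per monthly doc."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[date_key][:7]].append(rec)  # "YYYY-MM" prefix
    return dict(buckets)

records = [
    {"date": "2019-01-15", "amount": 10},
    {"date": "2019-01-20", "amount": 5},
    {"date": "2019-02-03", "amount": 7},
]
monthly = split_by_month(records)
# Each bucket gets summarized on its own, then rolled up into a results doc
summary = {m: sum(r["amount"] for r in rows) for m, rows in monthly.items()}
```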

I didn’t insist on absolutely all data being in one doc AND expecting it to be readily available and usable in a web app. I broke it down into realistic groups and engineered the solution around the strengths of the system I chose to use.

I’m not saying that Coda’s answer is to split docs up and that’s that. I’m saying that where Coda is right now, 100,000 rows of data is not likely to run well. But we are working on performance daily as well as growing the product. If you’re building a house, you can’t paint the walls and clean up the kitchen before they are even built. And while you’re building, you can’t continue to the next task until you clean up after the first one. Coda is several teams working together building, cleaning, fixing, and repeating on a regular basis over various areas of the product, always aiming for continuous improvement and innovation. Thanks to computers, steps aren’t as literal as this metaphor, so where there is overlap with these steps, we try to take full advantage of it to be as efficient as possible.

We’ll keep pressing forward to better the product and improve performance and we hope that you’ll keep testing, using, and pushing Coda in new and creative ways. I’m betting there is going to be a heck of a lot of good stuff in the years to come.

8 Likes

I concur. This is exactly how I segment range of scale and the responsibility that goes along with knowing the boundaries of the underlying solution. Scalability is not a one-sided requirements debate; it involves careful planning and a comprehensive embrace of internal and external limitations.

1 Like

LOL. No.

(more fluff because this platform doesn’t allow me to be concise)

@BenLee, the issue is that the constraints aren’t clearly defined. The Coda marketing pages say it’s unlimited:

And while I agree with the spirit of your post, you failed to address the elephant in the room, which is that a small team of four breaks the API in a few months. This is happening in practice - this thread is the evidence of it - small teams and individuals are hitting the limit and getting disconnected from the API with no forewarning or indicator (instead we are told it’s unlimited):

It’s hard for us to square the circle that thousands of people from Uber, Spotify, The New York Times, Square, Zapier, and more are NOT having any of the issues that we small independents are?

1 Like