Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

Wow, did this turn into a big topic!

I’ve read and re-read this thread pretty carefully and want to address the concerns that have been brought up so far. I also want to thank everyone here for putting so much time into writing these posts and offering this feedback.

There is a lot of great stuff in here about how Coda lets everyone build much more than they could with other systems, and with much less effort. There is also a very wide array of other software that Coda has replaced, from full-featured project management systems down to smaller meeting-notes apps.

The big questions popped up around scalability, doc size, and how to manage various amounts of data. One of those questions is how big a team or company Coda can support. Well, Coda itself is about 70 employees, and we actually run the company on Coda; we use it every day in all sorts of capacities. Other companies that use Coda, to extents that might surprise you, include Uber, Spotify, The New York Times, Square, Zapier, and more. These are companies with large engineering, sales, and support teams, and they use Coda for a very wide range of things as well. So why do we see posts here saying a few people is too many, while companies with thousands of people are running Coda just fine?

It’s been said here already that it’s pretty rare to have a system so flexible that it can be turned into the variety of solutions we see posted in this community. With that kind of extreme flexibility comes an even more extreme number of permutations in how these features can be put together. If you consider every available feature and every configuration of formulas, the number of combinations is effectively incalculable. Throw in the various kinds of data that can live in these docs and it is, for all practical purposes, infinite.

This unthinkably large number of permutations, from formulas to the types and sizes of data, makes it very difficult to estimate how big a doc can get. There is more to consider than raw data size. This isn’t like having a flash drive with 500 MB of space and hoping to fill it to 499 MB by estimating each picture at between 4.6 MB and 5.4 MB. When a doc has operations to run, it needs a cache of memory to hold the data while those operations execute. With the nearly wide-open formula language that Coda offers, some calculations can take quite a bit of memory while they run. So that same flash drive can’t hold 499 MB of data AND do calculations at the same time.

There are docs with 10,000 rows that run just fine and docs with 1,000 rows that are somewhat slow. Given the very wide range of data we see in rows, it’s tough to ballpark exactly when slowdowns happen, but if I had to put rough ranges on it for where Coda is at the time of writing: up to several thousand rows, you should be totally fine; from several thousand to 10,000 rows, you should be efficient with your formulas; and from 10,000 to 20,000 rows in a single doc, you should be very efficient with both data and formulas.

As far as the API goes, the limit has more to do with doc size than with row count. The percentage of Coda docs using the API that this affects is very small; small enough that I can count them on one hand.
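If you do work with larger tables over the API, one way to stay well within those limits is to page through rows in small chunks rather than pulling everything in one request. Here’s a minimal sketch in Python of that idea, using the public Coda REST API; the token, doc ID, and table ID are placeholders, and the exact endpoint and response fields should be checked against the current API docs.

```python
import requests

# Placeholder values -- substitute your own API token, doc ID, and table ID.
API_TOKEN = "your-api-token"
DOC_ID = "your-doc-id"
TABLE_ID = "your-table-id"

BASE_URL = "https://coda.io/apis/v1"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}


def fetch_all_rows(doc_id, table_id, page_size=100):
    """Page through a table in small chunks instead of requesting everything at once."""
    rows = []
    url = f"{BASE_URL}/docs/{doc_id}/tables/{table_id}/rows"
    params = {"limit": page_size}
    while True:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload.get("items", []))
        next_token = payload.get("nextPageToken")
        if not next_token:
            break
        # Ask for the next page using the token returned by the previous response.
        params = {"limit": page_size, "pageToken": next_token}
    return rows


if __name__ == "__main__":
    all_rows = fetch_all_rows(DOC_ID, TABLE_ID)
    print(f"Fetched {len(all_rows)} rows")
```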

As was mentioned before, every system can be overloaded. So when you choose a system, you need to design within that system’s constraints, and, if you want bonus points, play to that system’s strengths. Spend some time figuring out what data is necessary and what can be discarded; this will help streamline your processes as well, so it can be a win-win. Then spend some time figuring out how to display that information most effectively so it can be digested quickly and easily.

My Personal Experience with Larger Datasets

At a previous job, I was asked to help with a data project that ran well over 100,000 rows and would grow by more than that within a year. There were enough columns that the dataset wouldn’t load in any online spreadsheet software (Sheets or Excel on the web); a local install of Excel was required. The ongoing project had taken roughly 10 hours each month to sort and compile the new data entries, and even then it was just a grid of numbers and pivot tables that would hopefully make sense at some point.

After looking over the data and spending far too many hours trying to reverse engineer and clean up a massive spreadsheet, I started to see where some of the problems were. Then I turned to Coda to sort out the solution.

There were simple ways to break the data into smaller chunks, run what I needed in Coda, and then compile the results into a cumulative results doc: the data was split monthly into separate docs, and the results from each were compiled into a single results doc. With this setup, the time the project took each month went from 10 hours down to 15 minutes, 40x faster than the previous large-database setup. Not only that, it surfaced errors in the old system and improved accuracy going forward. Charts and stats were far more readable, which also made them far more useful.
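To make that pattern concrete, here’s a rough, hypothetical sketch in Python with pandas (not Coda itself) of the same idea: split a large dataset by month, summarize each chunk on its own, and keep only the compiled summaries in one small results table. The file and column names are made up for illustration.

```python
import pandas as pd

# Hypothetical input: one large export with a date column and an amount column.
df = pd.read_csv("all_entries.csv", parse_dates=["entry_date"])

monthly_summaries = []
for month, chunk in df.groupby(df["entry_date"].dt.to_period("M")):
    # Each month's chunk is small enough to work with on its own
    # (in the Coda version, each chunk lived in its own doc).
    chunk.to_csv(f"entries_{month}.csv", index=False)

    # Keep only the compiled results, not the raw rows.
    monthly_summaries.append({
        "month": str(month),
        "row_count": len(chunk),
        "total_amount": chunk["amount"].sum(),
    })

# The cumulative "results doc": one small table of per-month summaries.
results = pd.DataFrame(monthly_summaries)
results.to_csv("monthly_results.csv", index=False)
print(results)
```

The point isn’t the specific tool; it’s that the heavy raw data stays in small, month-sized pieces while the doc you actually look at every day only holds the compiled results.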

I didn’t insist on absolutely all the data living in one doc AND expect it to be readily available and usable in a web app. I broke it into realistic groups and engineered the solution around the strengths of the system I chose to use.

I’m not saying that Coda’s answer is simply to split docs up and that’s that. I’m saying that, where Coda is right now, 100,000 rows of data is not likely to run well. But we are working on performance daily as well as growing the product. If you’re building a house, you can’t paint the walls and clean up the kitchen before they’re even built, and while you’re building, you can’t move on to the next task until you clean up after the first one. Coda is several teams working together, building, cleaning, fixing, and repeating on a regular basis across various areas of the product, always aiming for continuous improvement and innovation. Thanks to computers, the steps aren’t as literal as in this metaphor, so where they overlap we try to take full advantage of it and be as efficient as possible.

We’ll keep pressing forward to improve the product and its performance, and we hope that you’ll keep testing, using, and pushing Coda in new and creative ways. I’m betting there is going to be a heck of a lot of good stuff in the years to come.
