Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

@Edward_Liveikis as much as I love Coda, unfortunately I don’t have good answers here. I have been at the tip of the spear, hitting the performance limits over the last couple of years. As awesome as Codans are, I think they have been substandard on this topic. They have ignored my post: New API limit - 1,000 rows? Alternative services or workaround

And the user experience of getting thrown off the API is HORRIBLE. Right now there is no way to see the size or health of your doc. Instead they usually wait until Friday night at 7pm, after everyone’s done working, and then throw you off. So when you come in Monday, everything has been broken all weekend and you have a huge mess to clean up. No warning that you are approaching a limit, and no indicator that you’ve broken it … just a bunch of angry colleagues and broken apps that aren’t working.

Worse, their documentation around this is unclear and misleading. This article: https://help.coda.io/en/articles/3370370

is their official policy. This article used to say that each Coda doc could store ‘all of the text of Encyclopedia Britannica’, which, as I pointed out to them, is literally gigabytes of data - millions of rows. Only after my ‘calling them out’ on this did they change it to the 3,000-row figure, which in my experience is overstated: one of my docs is falling off the API with fewer rows than that.

The upside is that sometimes the obstacle is the path. Having this performance limitation has really made me think about performance and schema - how to set things up to scale - which, after much contemplation over the last year, has been a good thing. As @Bill_French mentioned, scalability is an issue at every level of scale, and as Makers we want to always be thinking about how to build efficiently.

I think Coda gets a D in this area right now. To improve their GPA I suggest:

  • A health indicator showing each Maker the size of their doc
  • A notification in Coda and via email as the doc approaches / hits the limit (it’s crazy to find out from other web services crashing)
  • An increased limit for paying Makers
  • An ability to pay / somehow increase the limit
  • An indication / roadmap from Coda about how and when this limit will increase
5 Likes

Hey Guys, very glad to get all this commentary added in here, a lot of eye-opening stuff for me and hopefully of value to some of the readers, too!

First @Johg_Ananda and @Bill_French - pioneering Makers my respect for whom keeps growing - your advanced commentary on the scaling issue is very intriguing. I have been spending most of my effort tweaking Coda to my team’s needs, not thinking about these questions. But I agree 100% with everything you both wrote: these things need to be addressed if Coda is to realize its potential as a true all-in-one app. As you can see, I would really love to see Coda evolve and replace, for example, Jira in big engineering teams. It sounds like these Security and Scaling priorities would need to be pushed to accomplish this.

And @John_Scrugham, that is an amazing graphic, thank you for posting! I would love to build it in Coda once Mindmapping comes out :wink: I think by now you guys are legitimately built to replace or integrate a lot of this stuff - in some cases with a superior user experience. But your image illustrates my point exactly: how many founders or CEOs of tech/software companies are eager to deal with all that stuff? They have businesses to run! Managing the proliferation of tools we need to run this type of business these days is an amazing challenge.

To my point from your video about building Marketing Automation in Coda: let’s say I have already set up that terrific CRM you and Maria demoed, and I now want to think about scheduling email to my Companies, regular blog posts, etc. Who would want to go sign up for HubSpot, learn that tool, pay for it, and then try to integrate with the data that lives in Coda - and then, most likely, deal with non-synced data, a real headache?

I will try to pull off a TL;DR of my own:

  • Coda is in position to solve this problem. I’ve spent the last two years studying all these tools and nothing checks off as many boxes as Coda. Whether an intended goal of the founders or not, Coda has become a top solution for this market need.
  • Coda probably needs to “close the deal” with some more tailoring of the app to increase team-friendliness - and, no question, address the Security and Scalability issues described here.
  • To the Coda Product Team: It is my sincere hope that all this goes to the top of your priority list!

Thanks again everyone for this compelling discussion!

And possibly because there aren’t any good answers when we make great assumptions and extrapolations that overlay our vision of how Coda should change the world.

Maybe We Should Step Back a Bit…

I think y’all are getting a little drunk on Coda. :wink: But I get it - using it is often euphoric and even magical at times.

It allows us to do things never before possible, and all in a practical manner. Indeed, in many respects it creates accessibility to the inaccessible. It allows us to see information in previously unseeable formats and contexts. And it provides affordable data and content luxuries the likes of which can be very costly to build.

About 3 miles east of Breckenridge Resort is a little cabin - an unassuming 2200 sq ft at about 11,000ft MSL. It provides affordable and safe access to the largely inaccessible deep forests and mountains during the middle of winter. It’s situated in a place that is not likely to kill you with avalanches and it has some nice creature comforts in the middle of pretty much nowhere.

Like Coda, this cabin was built with design choices that make the experience quite pleasurable. But it is not where you should drag a beer cooler on a sled.

Is it possible we’re overstepping the spirit of Coda and the way that it should change our world?

2 Likes

@Bill_French I get the spirit of your post, but I vehemently disagree. Coda markets this product as a to-do manager for teams and demos it as such. Let us take the example of a small team of 4. Each person works 300 days a year, and each day they perform just 4 tasks. This is not a ‘let’s step back from the precipice of our grandiose ideas’ example; this is the minimal pitch to someone for them to use the product. After one short year, the team has generated 4 * 4 * 300 = 4,800 tasks. If their doc were limited to JUST this function (and we know how docs ‘feature creep’ with all the magic of Coda), then in about 8 months the doc drops off the API with no warning or indication and does a hard FAIL on everything? They had no warning, no context, nothing up front about this upcoming crash, and now they … what do they do? Reimagine their whole schema and make it more efficient? It’s kind of a disaster waiting to happen. I’m able to reimagine and CrossDoc and Zapier and come up with schemes to mitigate this, and for me it’s worth doing. But for the average user this is a time bomb waiting to blow up in their face. At least that’s how I see it, as opposed to your magical remote cabin paradise :heart:
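If it helps to make the arithmetic concrete, here’s a quick back-of-the-envelope sketch (Python; the ~3,000-row figure is the one from the help article discussed earlier, and the real limit may differ or change):

```python
def months_until_limit(people: int, tasks_per_person_per_day: int,
                       working_days_per_year: int = 300,
                       row_limit: int = 3000) -> float:
    """Months of normal use before a task table crosses the row limit."""
    rows_per_year = people * tasks_per_person_per_day * working_days_per_year
    return row_limit / rows_per_year * 12

# A 4-person team logging 4 tasks each per working day (4 * 4 * 300 = 4,800 rows/year):
print(round(months_until_limit(4, 4), 1))  # → 7.5
```

So even the most modest pitchable use case crosses the reported limit well inside a year.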

1 Like

Indeed, there are some limitations that Makers should be aware of and which “average users” may never fully understand. You are simply making my point - Makers should be very careful about putting average users in harm’s way. I believe this is a reflection of the importance Coda places on Makers.

If I may extend the cabin metaphor - you can hire a guide to take you from this cabin to even deeper reaches of the mountains where far greater risks exist. Makers are mountain guides; their objective is to keep average users from injuring themselves, because dead guests often don’t pay their bill.

Coda guides share a similar responsibility; for our best interest we should be cautious about placing average users in precarious situations. And certainly, Coda shares that obligation as well and there’s no indication they take this lightly.

That’s not the same as saying Coda has unlimited scale. I have read everything written about Coda and by Coda and at no time did I get the sense that it had unlimited scale or would be performant at any given threshold of activity. If you’re heavy on the gas and brakes, your mileage may vary. In fact, you might blow the engine or burn the brakes. :wink: As a developer of solutions I recognize there are limitations in every product, every platform, and more so when you camp out on a recent release from beta.

Coda is a tool for making unique document-centric solutions. No one can debate otherwise. However, Coda is not a development platform. You might think it is and you might want to use it that way, but therein lies my point. No one should feel free to superimpose their own definition of development possibilities or scale or performance.

I’m simply saying - we need to be reasonable about expectations.

4 Likes

It is a pleasure to read such enthusiastic and expert dissertations.

I rarely get involved in these discussions - mainly because I’m lazy, but also because I lack my native-language fluency here (so, please forgive my quirky sentences).
However, several members of this - still little - community possess lots of interesting qualities (technical and intellectual), and this time I’m fighting my laziness.
Plus, Coda itself represents one of the reasons for my own enthusiasm as well.

That said, I tend to agree with @Bill_French.
I think that many of us have already experienced several “ultimate” revolutions over the last 25-30 years in technology (and not only there).
There is constant hype for the latest killer app/pattern/suite/solution/platform/language/architecture/approach… feeding the deep emotional need of innovators: novelty.

Fact is, this is what we are: innovators.
It’s neither good nor bad, per se. It’s a trait like any other.
We tend to compress the actual adoption time by looking at the potential and believing in it long before it has happened (if it ever happens, eventually).

Fact is - also - that we don’t even know what Coda’s executives’ vision is.
I gave up asking for this (Coda Release Notes, Public Roadmap) as a way to set my expectations.
@shishir’s webinar was extremely interesting, but we still get feedback like “It’s on our radar”, “it might not be too late”, … without an organic vision, a path, or even a statement.

So far, Coda is an extremely flexible multi-purpose tool that copes with a lot of situations.
It’s incredibly well designed (from a technical perspective) and really sexy to use.
Being multi-purpose, it attracts a lot of interest because it covers a wide range of use cases, but - by definition - it lacks some specific deep solutions (well explained throughout this post).

I think that integration is nowadays the key to moving forward at the right speed, and Coda is, in my opinion, one of the most representative tools in that respect.
If it keeps integrating/connecting with other tools and solutions - current or future - I think I’ll keep my foot on it; time will speak for us, so let’s see what happens.

I used Airtable extensively for a couple of years. Now I’m letting it go because it’s still stuck in its promises and nobody knows where it is going (and there is no shared roadmap).
Hopefully this is a different story: let’s do our part. :wink:

4 Likes

Yep - we’re slightly drunk. :wink:

This is a reasonable view.

Unfortunate indeed, and unlikely to be the case with Coda, because we can see the potential through existing design choices. While the roadmap and vision may be pixelated, the underpinning architecture is transparent enough for us to be somewhat comfortable.

2 Likes

Yep. This is why I keep the hopes high :+1:t2:

1 Like

Yes - this is important. I want to see this as well. But I get it - it’s also important for Coda to be careful about telegraphing their ideas and plans.

I believe this should happen in the context of a Maker/Partner program where some level of confidentiality surrounds the narratives. Until such a program has been formalized I can understand why they might be a little reluctant to expose strategic plans.

3 Likes

I’m curious about this example. Please explain why the API is involved and its purpose in a simple task manager and what the nature of “drops off” means.

This is a great discussion. I love @Federico_Stefanato’s point about “looking at the potential and believing in it” - a nice philosophical reflection of the great job they’ve done at making something that so clearly demonstrates its potential. It’s like being able to understand an artist’s style and recognize an uncontextualized painting as one of theirs.

Re @Bill_French’s ask for an example … I mean anything that connects their doc outside of Coda! Any feature extension or connection - anything that isn’t through a native Pack. These outcomes will vary from catastrophic to mundane, the problem mostly arising from an expectation not being met. Here’s a rather mundane example. A shipping company uses Coda to manage their orders/to-dos. They create an automation so that when an order is completed, it sends a Bill of Lading to the warehouse printer via the API using Google Cloud Print. This works for 8 months; then one day the truck arrives to pick up the $100,000 order but the BOL isn’t printed. The truck can’t pick up the order without it, the team doesn’t have time to research why the printer isn’t working like it always has, and the driver has to leave to make his other pickups. Now the company doesn’t get their order picked up, and they suffer a big financial penalty for not meeting their contractual obligation.

Now what do they do? Their ‘Maker’ didn’t know this was coming, and how do they get ‘back on’ the API? They need to go in and start cleaving away at their existing, valuable data? It’s an uncomfortable failure paradigm … the fact that currently you don’t even know about the impending disaster until it strikes - and it’s totally foreseeable - is the main problem here.

There are unlimited scenarios where Makers will extend this incredible functionality via IoT and APIs and all the other three-letter sandwiches to create magic … and dropping off the API will be problematic.

I love your metaphor, but I think you’re understating my example. I’m talking about a simple task manager adding one row at a time. I can foresee people wanting to take the Cambridge Analytica database and put it into Coda for manipulation and analysis - that would be hundreds of millions of rows. That is where I think your metaphors become realistic. But to sell the product as a do-it-all thing, even with little ‘docs’ like managing your project to-dos … 4 actions a day for 8 months to break the thing? I think we’re still in the shallow end of the pool in terms of where this thing should break.

That said, they aren’t doing this to be vindictive or punitive or for some bad reason. It’s very difficult doing what they’ve pulled off, and they need limits … I understand that. The opportunity I see for Coda now is to better manage expectations and provide some warnings so the car doesn’t break down in the middle of a busy highway.

1 Like

:zap: Unlimited scale!!! :zap:
:slight_smile:

I definitely agree it is a long road for a product like Coda to become an enterprise-scale system that can smoothly support customers with thousands or even hundreds of employees.

But we’re talking about financial spreadsheets or APIs that become unresponsive or unusable in the couple-of-thousand-entries range - numbers that can be hit by single users in simple use cases, and are guaranteed to be hit by a small-team project that continues for a typical amount of time.

Just as an example: we recently created a product with an iOS component, including a view that a typical user would probably have 10-20 entries in, but we filled it with 10,000 entries in our tests to make sure nothing goes bust and that performance scales reasonably - and it worked just fine. Of course 1,000 entries, or even 100, might have been acceptable in that particular case, but we take performance seriously.

In my opinion, Coda should aim to comfortably support - in terms of performance and reliability - within an order of magnitude of the general use case. It’s common for small teams to accumulate thousands of tasks in a year or two, so the tool should support at least 10x that (~100,000 rows). As mentioned in this thread, it’s better not to be 3 years into a project when the tool that contains all your workflow and knowledge suddenly stops working, or slows to the point of being nearly unusable and needing migration.

I assume the devs at Coda agree, and it’s hard to give any commitments about when optimizations would make it into a release, but it would be nice to know what we should expect as long term goals and priorities, since PM and CRM etc are templates and example use cases that the product provides.

4 Likes

I just wanted to clarify that I’m just getting started with Coda and although I’ve built out a CRM and PM structure, I only have a handful of entries in each of the various tables. I haven’t run into any performance issues myself, and am just interpreting what I’ve read here and the other mentioned thread.

So I may be misunderstanding the performance situation, for example perhaps it is limited to certain, isolated use cases that most small teams would not encounter. But it is something I need to understand if we move all our processes and knowledge to this lovely tool. :slight_smile:

Okay, so these are hypothetical calamities that you are warning us and Coda about. Let’s examine these a bit…

  • No system is impervious to the failures you are describing. Even systems that are 100% custom made and even domain-specific systems that are created specifically to print shipping labels can, have, and will fail.

  • Any shipment process that is gated by a system crafted as an add-on to a document-centric app and cast in a mission-critical role will likely be carefully vetted, stringently tested, and probably scrutinized by many people. I would be skeptical if Coda were the winning technical choice for this specific example. As such, asserting that Coda would get a black eye for such a failure is probably a stretch. Does C++ get a bad rap when a poorly-skilled engineer writes a bad printer driver?

  • Every minute of every day there are dozens of users who pose as skilled technicians and, through masterful convolutions, create incredible sh*t-shows that people - the likes of you and I - must untangle. This is neither a new thing nor anything that will likely ever stop.

I still don’t have a clear understanding of this phrase. Describe the nature of a process that causes the API to “drop off”.

I certainly agree with all comments here that performance in a given expectation envelope is important and there’s no question the team has work to do.

Yes @Bill_French, you are correct in your points. I used a printer in my example because you could have just said ‘what if it runs out of toner!’ I wanted it to be mundane and simple … AND I can see doctors and hospitals building ‘mundane’ workflows that cascade into higher-stakes scenarios. I mean, I can see unlimited potential!! Which is why I care enough to write in this forum.

My point is: once it breaks, now what?? There’s no good solution. I’ve had to rebuild my ‘doc to rule them all’ 4 times from scratch, and now it lives as 6 different docs. It was a TREMENDOUS opportunity to do this, and it’s made me the Coda wizard I am now, but it’s not a good model for the business as a whole. As you said, most people are not technical innovators who geek out on this stuff. @Ed_Liveikis laid out the best argument for why it needs to support ~10x the general use case for safety - much like how buildings are overdesigned for their weight limits.

As far as being thrown off the API, perhaps you haven’t had the honor yet? Typically I get a Zapier notification. In peak distress I created this post when I got hit by the truck and didn’t know what to do … I had so much invested in my doc, and then BOOM - everything breaks. All the people I cajoled into using Coda were pissed … business processes failed … it was a total disaster. As I said earlier, my suggestion at this time isn’t for them to artificially raise the limit beyond what is technically possible at the moment. I have confidence in Moore’s Law and that they will be able to increase these limits. It’s how Coda handles the failure (and how, at the time of my post, they were publicly claiming to handle hundreds of thousands of rows but then kicked me to the curb at 3,000 rows) that I think could be optimized now. AND I have confidence they are working on it and it’s a priority … my purpose here isn’t so much to chide Coda as to provide guidance to other early pioneering innovators, so that they might not hit their heads on the doorframes I’ve already passed through.

Until such time that Coda is able to allocate more resources into scaling performance, I would appreciate more user facing diagnostic tools like the “Debug calculations” tool, which I use all the time.

What gets measured, gets managed.

Expose as much diagnostic data as possible for power users, so that we can proactively manage approaching performance issues long before our Coda projects unexpectedly blow up - costing who knows how much downtime.

Empowering us to partner with you on managing performance limitations will leverage your limited resources.

$.02.

Awesome conversation, btw.

6 Likes

Well said @Ander! Totally agree.

2¢ quite valuable.
Thank you.

2 Likes

@Johg_Ananda,

You trusted a codeless, near-free, black-box platform to provide a mission-critical integration. I’m sorry it went sideways, but this is precisely why solution designers need to be skeptical of the many gears and machines they choose when separating processes from data.

At the heart of this problem is a failure to recognize that Coda (like Airtable) provides no event handlers. Because of this, Zapier and Integromat are forced to poll for data changes, just as you would be forced to if you created a custom integration with the Coda API. This is why it failed. And while it’s fair to lay [some] blame at Coda’s feet, we all must accept at least some of the responsibility. This limitation is not related to Coda’s performance or their API per se - it’s caused because we all conspire to oversaturate the Coda infrastructure with millions of unnecessary requests every hour - perhaps every minute.
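To make the polling problem concrete, here’s a minimal, hypothetical sketch of what any of these integrations must do absent webhooks. `fetch_rows` is a stand-in for whatever API call returns a table’s rows - not a real Coda SDK function:

```python
import time

def detect_changes(fetch_rows, on_change, cycles=1, interval_seconds=0):
    """Naive change detection: re-read the whole table each cycle and
    diff it against the last snapshot, calling on_change for new/edited rows."""
    last_seen = {}  # row id -> last known cell values
    for _ in range(cycles):
        for row in fetch_rows():                     # one full table read per cycle
            if last_seen.get(row["id"]) != row["values"]:
                on_change(row)
                last_seen[row["id"]] = dict(row["values"])
        if interval_seconds:
            time.sleep(interval_seconds)             # every poller repeats this forever
```

With webhooks, the service would push one request per actual change; with polling, every integration re-reads every table on every cycle whether anything changed or not - which is exactly the oversaturation described above.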

We’re high on glue - the Zapier and Integromat glue. We think it’s wonderful and yet, it is ultimately harmful without webhooks and events that drive them.

Avoiding this (for us) is not easy, and adhesives the likes of Zapier only exacerbate the problem by offering vast integration options that are free or near-free and easy to set up. This encourages less-knowledgeable users to amplify the polling insanity, because they know not what’s happening under the covers.

If the Internet depended on polling architectures, we would have shut it down in 2005. But we got smart and we realized that events are critical to interoperability success patterns. Why Coda chose to ship before events and webhooks were possible is not something I can easily rationalize.

My client solutions do not suffer from this calamity because (a) I asked deep questions about the architecture before getting clients on to it, and (b) for any processes that require some degree of event handling, I either optimize the crap out of the polling detection scheme and set clear latency expectations, or I use Slack events to mimic actual events. I also avoid polling at all costs and push manually-driven event triggers on to actual users until such time as events and webhooks may become available.

Where does that leave us?

Well, we can’t blame Zapier - they want events and webhooks as much as we do. We can certainly blame the glue-factories for referring to these integrations with Coda as “webhooks”. At best it’s a deeply harmful mischaracterization of their service.

Coda can change the API performance game with two key architectural enhancements that work the way almost every SaaS solution presently achieves optimized interoperability. I don’t think they have a choice. In the meantime, there be thin ice - best to skate carefully my friend.

[UPDATE]: BTW, a couple of key things are circling this drain.

  1. Someone asked about the API accessing data via views during the Maker webinar, and @shishir mentioned that he thought it was already possible, though likely not in the SDKs. This alone would allow us to discover changed records easily and avoid massive table-level polling to find data patterns. I will be spending a fair bit of time on this in the coming weeks, because absent webhooks and events this is a significant advantage.

  2. The glue-factories do not pace or filter their API calls. In many of my integrations, I pace the change-detection and processing tasks by spreading a small number of API requests over many hours. This allows me to perform API tasks across many thousands of records without triggering alarms. But it does require some carefully scripted machinery to segment the activity across the record set.

4 Likes

@Bill_French great post. Webhooks, or things like gRPC, are definitely needed for change notifications in big public-facing APIs.

I’m trying to read between the lines here - are you implying that you are successfully using Coda on sizable datasets with your clients without any major performance issues or major sacrifices? I’m considering working on code to automatically fill up my PM and CRM projects with a 2-3 year simulated data-load, but would rather avoid the effort if y’all can confirm things are working decently enough for you :slight_smile:

1 Like

They’re needed for little private APIs as well. We have many web services that are special-purpose micro-APIs based on serverless architectures that listen for events. Big or tiny - webhooks (driven by events) drive data seamlessly using very few computing resources.

What is “sizable”?

In my data science work, a million records is moderate and considered outside the realm of browser-based apps, but I do this often with the help of Streamlit. And then I embed Streamlit apps into Coda. With that, document users have access to big data analytics and a framework for building narratives, exploring analytics, etc. There are many tools that support embedded blending to give Coda users access to “sizable” data sets.

I also have one client that uses Google Apps Script to cache customer survey data forward directly into a raw Coda table, from which they perform various analyses and reports by slicing off views. I advise them to be cautious about biting off large collections.

That particular solution allows users to formulate a query by selecting filtering criteria in a table and setting a refresh flag. A cron process checks for those filter settings in the Coda document and, via the API, refreshes the data. I’ve seen them populate 10,000+ rows in a raw data table, but this takes time. Typically they make a request before leaving the office and the data is ready in the morning. The cron process adds 100 records every five minutes until complete. This is the pacing aspect I was referring to earlier.
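For anyone who wants to copy that pacing pattern, here’s a rough sketch (Python; `upsert_rows` is a placeholder for the actual API write call - the names and defaults are my assumptions, mirroring the “100 records every five minutes” cadence described above):

```python
import time

def paced_load(rows, upsert_rows, batch_size=100, delay_seconds=300):
    """Write rows in batches of batch_size, sleeping delay_seconds between
    batches so a large load never bursts against the API's rate limits."""
    for start in range(0, len(rows), batch_size):
        upsert_rows(rows[start:start + batch_size])
        if start + batch_size < len(rows):          # no sleep after the last batch
            time.sleep(delay_seconds)

# 10,000 rows at the defaults is 100 batches spread over roughly 8 hours:
# kick it off at night and the data is ready in the morning.
```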

Another client has about 1,000 rows of transaction data per month, and they spread 18 months across 18 documents. This is ugly, but they’re thrilled with it.

Coda can handle some data volume, but don’t expect it to be browsable in a performant fashion; you need to keep larger record sets out of view of the user and pull summaries and other metrics into smaller sets with views.

Major sacrifices? Absolutely.

I think this is where you need to ask - for what purpose?

In my view, Coda (as a data store) is ideal for day-to-day operational information management. It is well-positioned to wrap context, summations, narratives, and some decent charts around aggregations for useful reporting and administrative tasks.

Again - just my view – it is not [presently] a safe harbor for large historical collections of data. And to be a little harsh (perhaps), a quick glance at the feature set should give us a clear picture of how Coda can be used advantageously. Large data set support is not what I see when I read about Coda’s features - just sayin’.

I recommend careful experimentation and a general strategy that paginates seamlessly to mimic the idea that the Coda app has unlimited data scale. This concept is difficult given the lack of HTTP POST/GET capability, but there are ways to overcome these shortcomings.

Someday it will be easier. Today, not so much.

1 Like