Why would you NOT try to manage all a Tech Start-Up's stuff in Coda?

hey @ABp the problem seems to be scalability. Coda falls short when you get big. At ~2,000 combined rows across a doc, they (the Coda gods) throw you off the API, which, for what you’re describing, breaks your use case.

The solution is to spread your functions across many Coda docs and have them Cross-Doc and play well with each other. It’s not as convenient as having one doc to rule them all (which was my original approach about a year ago), but it has other advantages, primarily speed and performance.

Other considerations at this point are security and sharing. The granularity of permissions just isn’t there yet, so for sensitive information you may be better served by something else.

TLDR: Scale and Security

5 Likes

No.

Sorry for being so direct, but it’s in your best interest. Marketing Automation is not my wheelhouse. I think it requires a clear skill set with a fair amount of marketing knowledge to be able to do that well.

I can help you in matters of complex integration, AI, machine-learning, mimicking scale through external database services, sentiment analysis, analytics (real-time and otherwise), and findability (i.e., search, indexation, etc).

I think this is really good advice. I tend to think complex Coda processes are best implemented as distributed services, each with autonomous intelligence and no single process at the mercy of any other. Ideally, the demarcation of each service is within the bounds of a specific business action (not activity) or a collection of closely related actions.

I think this architectural design choice makes it possible to create very complex and potentially sizeable Coda apps without serious scalability concerns. Document-centric apps are simply incapable of being monolithic in the traditional sense of the term. (FYI - I say this only because I have no evidence that Coda apps scale massively and the evidence I do have seems to suggest we must tread carefully)

Another consideration -

Imagine a unified (hybrid-monolithic) Coda app that works with very discrete data sets that are actually managed and stored elsewhere (like Firestore). As operational focus shifts from entity to entity, the data concerning the entity of interest is swapped in and out of context, making it possible to create an operational ceiling that’s virtually limitless. Many of my data science apps are designed this way because the data sets are so enormous; we need to get really good at virtualising the data if we hope to create scalable and very complex business apps with Coda.
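To make the swap-in/swap-out pattern concrete, here’s a minimal Python sketch of the idea - not anyone’s production code. The `fetch_client_rows` function is a hypothetical stand-in for a real Firestore query, and the cache size is arbitrary:

```python
from collections import OrderedDict

class ClientContext:
    """Keep only the N most recently used client datasets in memory,
    fetching on demand from an external store (e.g., Firestore)."""

    def __init__(self, fetch, max_clients=3):
        self._fetch = fetch          # callable: client_id -> list of rows
        self._cache = OrderedDict()  # client_id -> rows, in LRU order
        self._max = max_clients

    def rows(self, client_id):
        if client_id in self._cache:
            self._cache.move_to_end(client_id)   # mark as recently used
        else:
            if len(self._cache) >= self._max:
                self._cache.popitem(last=False)  # evict least recently used
            self._cache[client_id] = self._fetch(client_id)
        return self._cache[client_id]

# Hypothetical fetcher standing in for an external datastore query.
def fetch_client_rows(client_id):
    return [{"client": client_id, "txn": i} for i in range(3)]

ctx = ClientContext(fetch_client_rows, max_clients=2)
ctx.rows("acme")     # fetched from the store
ctx.rows("globex")   # fetched
ctx.rows("acme")     # served from cache
ctx.rows("initech")  # evicts "globex"
```

The point is only that the app’s working set stays small and constant no matter how large the backing store grows - which is what makes the operational ceiling feel limitless.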

And when we swap data like this we tend to leverage datastores designed to be fast and able to synchronize across clients and servers effortlessly. This is why Firebase itself exists. And of course there are other ways to move and seamlessly sync data - sockets-based and long-lived HTTP connections, and real-time networks like PubNub, Ably, and MQTT. (This is precisely why I asked @shishir about this in the Maker webinar.)

Coda can be everything to everyone but not without distributing the processes and data. Just sayin’…

3 Likes

@Bill_French would you be so kind (maybe in another post) as to detail how you’re moving data in and out of Coda? Perhaps with Firestore or something else? I have a client for whom I think Coda would be awesome, but they have 100,000 customers and need to be able to query them, and I haven’t been able to think of a way to make it work - but it seems like you’ve grok’d and constructed a framework to do so? :pray:

2 Likes

You bet. It’ll be a few days - working toward a deadline at the moment.

2 Likes

Hey @ABp - thanks for the shout out!

Your tech stack of potential software that Coda can replace is actually really similar to a visual I put together for the ideal stack for a B2B digital service / product company back in 2017 (before I joined Coda). Cool to see the similarities.

I find it funny that, at the time, Coda was sitting there not connected to anything. I had just gotten access to the Beta and wasn’t sure where it fit in yet :slight_smile:

3 Likes

Just chiming in here - I believe this is a huge opportunity for Coda: to place all workflow + knowledge in one tool.

Communications (Email, Slack, etc) and code/asset management (Github, AWS, Dropbox etc) can remain external and should have great interop with Coda.

There are missing features today in the areas of permissions, file attachments, external embedding, cross-doc syncing, markdown support, etc., but most of these don’t concern me much since they can be addressed in the future, and there are a lot of benefits compared to the separate-tool approach.

The scalability concerns are alarming though. My current layout puts each Project (Product) into its own doc, which could obviously grow to large numbers of rows. One of the main reasons I am looking for alternatives to tools like Jira is the long delays between nearly all user actions. I think PM, CRM, and other workflow tools need to aim for text-editor-like response times, to get out of the way of the real work. I can’t count the number of times I’ve spaced out while trying to do things in Jira.

Asana does this well, and I’ve used tools in the past, like Hansoft, that were instant; it really allowed me to enter a mental state of flow even doing PM work.

It’s possible there is some low hanging fruit for optimization, but the long delay times mentioned in this related thread, across various parts of the app, give me the sense fundamental architectural improvements are needed. This requires people that have a strong performance background and on an app this size, a big time investment.

Can any Codans comment on the goals here? My suggestion is that all mundane and common user actions - adding, removing, and editing rows, editing doc content - should take a few tenths of a second at most on a table with 1,000 rows and 15 or so fields, or other similar complexity. Opening large docs should take under a second.

If Coda can do this, then the tool stops interrupting your train of thought when you jump to Coda to update a task, enter something that needs to be done, or get some info. This is super important.

@Johg_Ananda What’s this API limit? Does it apply even to Makers? A Maker can’t use the API on a doc that has more than 2,000 combined rows? Would you recommend that a single Product containing things like PM (backlog, dashboard, sprint-like stuff), Policy, Procedures, Visual Design, Technical Design, and Meeting Notes be broken into a separate doc for each of those areas (as an example)? Even then, just the PM doc would eventually surpass 2,000 rows, even on a smaller project.

I decided to answer your question inline to this thread because it’s more related to the supposition that we might be able to create vast data scale under a Coda document. And despite your interpretation of my earlier comment where I said…

… I am not doing this in a Coda app because it is [presently] not possible (more on that later). But I am doing it with very weak clients such as Google Sheets.

As you know, spreadsheets are ideal user interfaces for many things, including FinTech apps. One of my clients has 800,000 clients, each of which has an average of 2,200 transactions. That’s a big number when you factor in the complete data set across all clients.

Workers using this dataset need a spreadsheet to perform the analyses and other research required for day-to-day operations. Furthermore, they need to perform certain data science activities such as running machine learning models against individual client data and sometimes comparing client metrics to the entire data set.

These are steep challenges given the depth and breadth of the data and the relatively limited operating ceilings for spreadsheets.

Something’s Gotta Give

And that something is simply a caching approach that allows users to (i) easily select and (ii) near-instantly begin to work on a client’s entire dataset. This is achieved with two key features -

  1. The ability to find a client’s data quickly and painlessly;
  2. The ability to extract and instantiate the client’s data in a collection of spreadsheet tabs quickly.

#1 Search

As you can imagine, a picklist of 800,000 client names or IDs is out of the question, especially in a spreadsheet app. Search - and a very forgiving search - is the only way to create effortless access to specific client data sets. I do this by maintaining a deeply tokenized search index for the fields that contain likely search terms. The index architecture is essentially that of Lucene (the open-source engine underlying Elasticsearch).
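To show the shape of the idea, here’s a toy Python sketch of a tokenized inverted index. This is only an illustration, not the actual implementation described above - a real Lucene-style analyzer would add stemming, stop words, and edge n-grams to get the “forgiving” matching:

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(records, fields=("name", "email")):
    """Map token -> set of record ids, over the fields likely to be searched."""
    index = defaultdict(set)
    for rec in records:
        for field in fields:
            for token in tokenize(rec.get(field, "")):
                index[token].add(rec["id"])
    return index

def search(index, query):
    """AND together all query tokens; returns matching record ids."""
    tokens = tokenize(query)
    if not tokens:
        return set()
    result = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        result &= index.get(token, set())
    return result

# Hypothetical client records for illustration.
clients = [
    {"id": 1, "name": "Acme Holdings", "email": "ops@acme.com"},
    {"id": 2, "name": "Acme West", "email": "info@acmewest.com"},
    {"id": 3, "name": "Globex", "email": "hello@globex.com"},
]
idx = build_index(clients)
search(idx, "acme west")  # -> {2}
```

Because lookups are set intersections over a pre-built index, the cost of a query stays tiny even when the record count climbs into the hundreds of thousands.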

Using this approach in Coda is not a big leap, but it does require the ability to integrate a search UI into the Coda framework. A search UI is no different than a web form, and we all know this is not really possible in the current release. Another requirement for a seamless search experience into large data sets is the need for the form to interact with the search index over HTTP. Again, this is not presently possible in Coda, and it’s why I specifically asked about it during the Maker webinar.

#2 Extract and Instantiate

The integrated scripting engine (Google Apps Script), which is provided as a containerized application server behind every Google doc, is ideal for processing requests over HTTP to services such as Firebase and Firestore. The Google Apps Script model also supports inline HTML panels that run in the sidebar of all Google documents.

This makes it possible to utilize dynamic HTML apps in the context of Google server-side containers for automation. Quite literally, an HTML app integrated into a spreadsheet instance has the ability to use the real-time aspects of Firebase. By setting the client ID context in JavaScript in the app, Firebase can instantly synchronize the spreadsheet with thousands of transactions. It’s able to do this because (i) a sockets connection is zippy fast, and (ii) Google Apps Script handles arrays quite efficiently.

Updating Big Data

Once again, record locking and updates back into the Firebase environment are fully supported because it’s a real-time architecture; changes occur in real time, which ostensibly eliminates collisions and other concurrent-editing issues. I don’t want to trivialize this, but for most applications, a real-time architecture comes with big advantages that allow me to be lazy. :wink:

Data Science Operations

To use 800,000 client records in any sort of analytical process is not easy. You need ways to extract and run processes that stem from various aggregations that are maintained as the data changes. But there are some tools that make this practical.

I use Streamlit as the rendering layer for all Python-based data science apps. It includes a very advanced data-caching approach that I do not fully understand, but I certainly enjoy its advantages. It allows me to create a client web app that can load a million items in a few seconds and let users begin slicing and dicing the data.

Firebase, Firestore, Google Docs, Google Apps Script, Streamlit - I stand on the shoulders of giants.

7 Likes

@Edward_Liveikis as much as I love Coda, unfortunately I don’t have good answers here. I have been at the tip of the spear hitting the performance limits over the last couple of years. As awesome as Codans are, I think they have been substandard on this topic. They have ignored my post: New API limit - 1,000 rows? Alternative services or workaround

and the user experience of getting thrown off the API is HORRIBLE. Right now there is no way to see the size/health of your doc. Instead they usually wait until Friday night at 7pm, after everyone’s done working, and then throw you off. So when you come in Monday, everything over the weekend has broken and you have a huge mess to clean up. No warning that you are approaching a limit, and no indicator that you’ve broken it … just a bunch of angry colleagues and broken apps that aren’t working.

Worse, their documentation around this is unclear and misleading. This article: Are there any size limitations on docs accessible via the API? | Coda Help Center

is their official policy. This article used to say that each Coda doc could store ‘all of the text of Encyclopedia Britannica’, which I pointed out to them is literally gigabytes of data, or millions of rows. Only after my ‘calling them out’ on this did they change it to the 3,000-row figure, which in my experience is overstated: we are falling off on one of my docs with fewer rows than that.

The upside is that sometimes the obstacle is the path. This performance limitation has really made me think about performance, schema, and how to set things up for performance, which, after much contemplation over the last year, has been a good thing. As @Bill_French mentioned, scalability is an issue at every level of scale, and as Makers we should always be thinking about how to build efficiently.

I think Coda gets a D in this area right now. To improve their GPA I suggest:

  • A health indicator showing each Maker the size of their doc
  • A notification in Coda and via email as the doc approaches / hits the limit (it’s crazy to learn about the limit from other web services crashing)
  • An increased limit for paying Makers
  • An ability to pay / somehow increase the limit
  • An indication / roadmap from Coda about how and when this is going to increase
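In the meantime, a Maker could approximate that first health indicator by polling the API. The sketch below rests on assumptions: it assumes the table objects returned by `GET /docs/{docId}/tables` carry a `rowCount` field, and it uses the ~2,000-row figure discussed in this thread as the default limit - adjust to whatever the actual threshold turns out to be:

```python
import json
import urllib.request

CODA_API = "https://coda.io/apis/v1"

def total_rows(tables_payload, limit=2000):
    """Sum rowCount across a doc's tables and report headroom.

    `tables_payload` is the parsed JSON from GET /docs/{docId}/tables;
    we assume each table item carries a `rowCount` field."""
    total = sum(t.get("rowCount", 0) for t in tables_payload.get("items", []))
    return {"total": total, "limit": limit, "over": total > limit}

def doc_health(doc_id, token, limit=2000):
    """Fetch a doc's tables from the Coda API and compute combined rows."""
    req = urllib.request.Request(
        f"{CODA_API}/docs/{doc_id}/tables",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return total_rows(json.load(resp), limit)

# Offline example with a canned payload (no API call made):
payload = {"items": [{"name": "Tasks", "rowCount": 1800},
                     {"name": "Projects", "rowCount": 350}]}
total_rows(payload)  # -> {'total': 2150, 'limit': 2000, 'over': True}
```

Run something like `doc_health` on a schedule and alert yourself before Friday at 7pm does it for you.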
9 Likes

Hey guys, very glad to see all this commentary added in here - a lot of eye-opening stuff for me, and hopefully of value to some of the readers, too!

First, @Johg_Ananda and @Bill_French - pioneering Makers my respect for whom keeps growing - your advanced commentary about the scaling issue is very intriguing. I have been spending most of my efforts tweaking Coda to my team’s needs, not thinking about these questions. But I agree 100% with everything you both wrote as things that need to be addressed if Coda is to realize its potential as a true all-in-one app. As you can see, I would really love to see Coda evolve and replace, for example, Jira in big engineering teams. It sounds like these security and scaling priorities would need to be pushed to accomplish this.

And @John_Scrugham, that is an amazing graphic - thank you for posting! I would love to build it in Coda once mind-mapping comes out :wink: I think by now you guys are legitimately built to replace or integrate a lot of this stuff, and in fact with a superior user experience in some cases. But your image illustrates my point exactly: how many founders or CEOs of tech/software companies are eager to deal with all that stuff? They have businesses to run! Managing the proliferation of tools we need to run this type of business these days is an amazing challenge.

To my point from your video about building Marketing Automation in Coda: let’s say I have already set up that terrific CRM you and Maria demoed, and I now want to think about scheduling email to my Companies, regular blog posts, etc. Who would want to go sign up for HubSpot, learn that tool, pay for it, and then try to integrate with the data that lives in Coda - and then, most likely, deal with non-synced data, a real headache?

I will try to pull off a TL;DR of my own:

  • Coda is in a position to solve this problem. I’ve spent the last two years studying all these tools, and nothing checks off as many boxes as Coda. Whether or not it was an intended goal of the founders, Coda has become a top solution for this market need.
  • Coda probably needs to “close the deal” with some more tailoring of the app to increase team friendliness. And, no question, address the security and scalability issues described here.
  • To the Coda Product Team: It is my sincere hope that all this goes to the top of your priority list!

Thanks again everyone for this compelling discussion!

And possibly because there aren’t any good answers when we make great assumptions and extrapolations that overlay our vision of how Coda should change the world.

Maybe We Should Step Back a Bit…

I think y’all are getting a little drunk on Coda. :wink: But I get it - using it is often euphoric and even magical at times.

It allows us to do things never before possible, and all in a practical manner. Indeed, in many respects it creates accessibility to the inaccessible. It allows us to see information in previously unseeable formats and contexts. And it provides affordable data and content luxuries the likes of which can be very costly to build.

About 3 miles east of Breckenridge Resort is a little cabin - an unassuming 2200 sq ft at about 11,000ft MSL. It provides affordable and safe access to the largely inaccessible deep forests and mountains during the middle of winter. It’s situated in a place that is not likely to kill you with avalanches and it has some nice creature comforts in the middle of pretty much nowhere.

Like Coda, this cabin was built with design choices that make the experience quite pleasurable. But it is not where you should drag a beer cooler on a sled.

Is it possible we’re overstepping the spirit of Coda and the way that it should change our world?

2 Likes

@Bill_French I get the spirit of your post, but I vehemently disagree. Coda markets this product as a to-do manager for teams and demos it as such. Let’s take the example of a small team of 4. Each person works 300 days a year, and each day they perform just 4 tasks. This is not a ‘let’s step back from the precipice of our grandiose ideas’ example; this is the minimal pitch to someone for them to use the product. After one short year, the team has generated 4 * 4 * 300 = 4,800 tasks. If their doc was limited to JUST this function (and we know how docs ‘feature creep’ with all the magic of Coda), in 8 months the doc drops off the API with no warning or indication and does a hard FAIL on everything? They had no warning, no context, nothing up front that warns them of this upcoming crash, and now they … what do they do? Reimagine their whole schema and make it more efficient? It’s kind of a disaster waiting to happen. I’m able to reimagine and Cross-Doc and Zapier and come up with schemes to mitigate this, and for me it’s worth doing. But for the average user this is a timebomb waiting to blow up in their face. At least that’s how I see it, as opposed to your magical remote cabin paradise :heart:
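The back-of-envelope math above, as a quick sketch (the ~2,000-row figure is from earlier in this thread; 3,000 is the help-center figure, which lines up with the “8 months” estimate):

```python
def months_to_limit(people, tasks_per_day, work_days_per_year, row_limit):
    """How many months until a team's task table hits the row limit."""
    rows_per_month = people * tasks_per_day * work_days_per_year / 12
    return row_limit / rows_per_month

# 4 people x 4 tasks/day x 300 days = 4,800 rows/year, i.e., 400 rows/month
months_to_limit(4, 4, 300, 2000)  # -> 5.0 months at the ~2,000-row figure
months_to_limit(4, 4, 300, 3000)  # -> 7.5 months at the 3,000-row figure
```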

2 Likes

Indeed, there are some limitations that Makers should be aware of and which “average users” may never fully understand. You are simply making my point - Makers should be very careful about putting average users in harm’s way. I believe this is a reflection of the importance Coda places on Makers.

If I may extend the cabin metaphor - you can hire a guide to take you from this cabin to even deeper reaches of the mountains where far greater risks exist. Makers are mountain guides; their objective is to keep average users from injuring themselves, because dead guests often don’t pay their bill.

Coda guides share a similar responsibility; in our own best interest, we should be cautious about placing average users in precarious situations. And certainly, Coda shares that obligation as well, and there’s no indication they take this lightly.

That’s not the same as saying Coda has unlimited scale. I have read everything written about Coda and by Coda, and at no time did I get the sense that it had unlimited scale or would remain performant past any given threshold of activity. If you’re heavy on the gas and brakes, your mileage may vary. In fact, you might blow the engine or burn out the brakes. :wink: As a developer of solutions, I recognize there are limitations in every product and every platform - more so when you camp out on a recent release from beta.

Coda is a tool for making unique document-centric solutions. No one can debate otherwise. However, Coda is not a development platform. You might think it is, and you might want to use it that way, but therein lies my point. No one should feel free to superimpose their own definition of development possibilities or scale or performance.

I’m simply saying - we need to be reasonable about expectations.

4 Likes

It is a pleasure to read such enthusiastic and expert dissertations.

I rarely let myself get involved in these discussions; mainly because I’m lazy, but also because English is not my native language (so, please forgive the quirky sentences).
However, several members of this still-small community possess lots of interesting qualities (technical and intellectual), and this time I am fighting my laziness.
Plus, Coda itself is one of the reasons for my enthusiasm as well.

That said, I tend to agree with @Bill_French.
I think many of us have already experienced several “ultimate” revolutions over the last 25-30 years in technology (and not only there).
There is constant hype around the latest killer app/pattern/suite/solution/platform/language/architecture/approach… feeding the deep emotional need of innovators: novelty.

The fact is, this is what we are: innovators.
It’s not good or bad, per se. It’s a trait like any other.
We tend to shorten the perceived adoption time by looking at the potential and believing in it long before it has happened (if it ever happens, eventually).

The fact is - also - that we don’t even know what Coda’s executives’ vision is.
I gave up asking for this (Coda Release Notes, Public Roadmap) in order to set my expectations.
@shishir’s webinar was extremely interesting, but we still get feedback like “It’s on our radar” or “it might not be too late”… without an organic vision, a path, or even a statement.

So far, Coda is a multi-purpose tool, extremely flexible and able to cope with a lot of situations.
It’s incredibly well designed (from a technical perspective) and really sexy to use.
Being multi-purpose, it attracts a lot of interest because it covers a wide range of use cases, but - by definition - it lacks some deep, specific solutions (well explained throughout this post).

I think integration is nowadays the key to moving forward at the right speed, and Coda is, in my opinion, one of the most representative tools in that respect.
If it keeps integrating/connecting with other tools and solutions - current or future - I think I’ll keep my foot in it; time will speak for us, and let’s see what happens.

I used Airtable extensively for a couple of years. Now I’m letting it go because it’s still stuck in its promises and nobody knows where it’s going (and there is no shared roadmap).
Hopefully this is a different story: let’s do our part. :wink:

4 Likes

Yep - we’re slightly drunk. :wink:

This is a reasonable view.

Unfortunate indeed, and unlikely to be the case with Coda, because we can see the potential through existing design choices. While the roadmap and vision may be pixelated, the underpinning architecture is transparent enough for us to be somewhat comfortable.

2 Likes

Yep. This is why I keep the hopes high :+1:t2:

1 Like

Yes - this is important. I want to see this as well. But I get it - it’s also important for Coda to be careful about telegraphing their ideas and plans.

I believe this should happen in the context of a Maker/Partner program where some level of confidentiality surrounds the narratives. Until such a program has been formalized I can understand why they might be a little reluctant to expose strategic plans.

3 Likes

I’m curious about this example. Please explain why the API is involved, what its purpose is in a simple task manager, and what “drops off” means in practice.

This is a great discussion. I love @Federico_Stefanato’s point about “looking at the potential and believing in it” - it’s a nice philosophical reflection of what a great job they’ve done at making something that so clearly demonstrates its potential. Like being able to understand an artist’s style and recognize an uncontextualized painting as one of theirs.

Re @Bill_French’s ask for an example … I mean anything that connects their doc outside of Coda! Any feature extension or connection or anything that isn’t through a native Pack. The outcomes will vary from catastrophic to mundane, the problem mostly arising from the expectation not being met. Here’s a rather mundane example: a shipping company uses Coda to manage their orders/to-dos. They create an automation so that when an order is completed, it sends a Bill of Lading to the warehouse printer via the API using Google Cloud Print. This works for 8 months; then one day the truck arrives to pick up the $100,000 order, but the BOL isn’t printed. The truck can’t pick up the order without it, there’s no time for the team to research why their printer isn’t working like it always has, and the driver has to leave to make his other pickups. Now the company doesn’t get their order picked up and suffers a big financial penalty for not meeting their contractual obligation.

Now what do they do? Their ‘Maker’ didn’t know this was coming, and how do they get ‘back on’ the API? They need to go in and start cleaving away at their existing, valuable data? It’s an uncomfortable failure paradigm … the fact that you currently don’t even know about the impending disaster until it strikes - and it’s totally foreseeable - is the main problem here.

There are unlimited scenarios where Makers will extend this incredible functionality via IoT and APIs and all the other three-letter sandwiches to create magic … and dropping off the API will be problematic.

I love your metaphor, but I think you’re understating my example. I’m talking about using a simple task manager, adding one row at a time. I can foresee people wanting to take the Cambridge Analytica database and put it into Coda for manipulation and analysis. That would be hundreds of millions of rows. That’s where I think your metaphors become more realistic. But to sell the product as a do-it-all thing, even for little ‘docs’ like managing your project to-dos … 4 tasks a day per person for 8 months to break the thing? I think we’re still in the shallow end of the pool in terms of where this thing should break.

That said, they aren’t doing this to be vindictive or punitive or for some bad reason. It’s very difficult doing what they’ve pulled off, and they need limits … I understand that. The opportunity I see for Coda now is to better manage expectations and provide some warnings, so the car doesn’t break down in the middle of a busy highway.

1 Like

:zap: Unlimited scale!!! :zap:
:slight_smile:

I definitely agree it is a long road for a product like Coda to become an enterprise-scale system that can smoothly support customers with thousands or even hundreds of employees.

But we’re talking about financial spreadsheets or APIs that become unresponsive or unusable in the couple-of-thousand-entries range - numbers that can be hit by single users in simple use cases, and that are guaranteed to be hit by a small team project that continues for a typical amount of time.

Just as an example, we recently created a product with an iOS component, including a view in which a typical user would probably have 10-20 entries. We filled it with 10,000 entries in our tests to make sure nothing went bust and that performance scaled reasonably - and it worked just fine. Of course 1,000 entries, or even 100, might have been acceptable in that particular case, but we take performance seriously.

In my opinion, Coda should aim to comfortably support, in terms of performance and reliability, within an order of magnitude of the general use case. It’s common for small teams to accumulate thousands of tasks in a year or two, so the tool should make sure it can support at least 10x that (~100,000). As mentioned in this thread, it’s better not to be 3 years into a project when suddenly the tool that contains all your workflow + knowledge stops working, or slows down to the point of being nearly unusable and needing to be migrated.

I assume the devs at Coda agree, and it’s hard to make commitments about when optimizations will land in a release, but it would be nice to know what we should expect as long-term goals and priorities, since PM and CRM etc. are templates and example use cases that the product itself provides.

6 Likes

I just wanted to clarify that I’m just getting started with Coda, and although I’ve built out a CRM and PM structure, I only have a handful of entries in each of the various tables. I haven’t run into any performance issues myself; I’m just interpreting what I’ve read here and in the other mentioned thread.

So I may be misunderstanding the performance situation - for example, perhaps it’s limited to certain isolated use cases that most small teams would not encounter. But it’s something I need to understand if we’re to move all our processes and knowledge to this lovely tool. :slight_smile: