Some Performance Announcements

In our case, we’d love a pagination feature in tables as well.

We have a table with an icebox where we only need to see a few items in one view. Pagination will allow us to avoid overwhelming the user with too many entries.

The same happens with our backlog.

In my opinion, using controls for this is too cumbersome.

So I’m a clear +1 to adding pagination controls to tables : )

2 Likes

The main use-case for pagination is what it always has been… user experience. Nobody wants to see a list of 400 items, especially if this table is supposed to just be one element in a longer-form page/report.

I really struggle to see why you guys don’t just implement this. It’s almost impossible to find a grid/table system out in the wild that doesn’t have native paging, yet somehow you keep trying to justify not having it?

Hey @Richard_Browbek1 and @Raul_San_N.H

Just to make the distinction, the topic of pagination should have nothing to do with performance anymore, since we’re working on making big tables scroll faster regardless of pagination.

Totally understand that there are valid scenarios for pagination that are not possible yet. There are a few workarounds like using filtered views, canvas controls or the detail layout with bottom navigation, which is paginated with one item at a time. I completely agree that it will be useful to be able to paginate through a small number of rows at a time. It’s not a question of justifying not doing pagination but just a matter of prioritizing it against other improvements on our list.

Really appreciate hearing the feedback and scenarios and will definitely pass them on to the team to prioritize appropriately. However, would suggest we move the discussion about pagination to a different thread like this one since it is unrelated to performance.

Angad

1 Like

Edit: Oops, just saw this was already in the examples :upside_down_face:

Another possible tip, regarding CountIf.

I have this 500+ row table with time-tracking info: Date/Task/Start/End, then some calcs for Duration etc.

I had previously given up on this doc as it lagged like hell. Some of the views make heavy use of grouping, so I thought it could be that, but I was never able to fix it. Now with the performance tool I went back to check what was going on.

There was basically just one column taking tens of seconds to calculate. It counted the number of times a task was executed in a day:

thisTable.CountIf(Task=thisRow.Task AND Date=thisRow.Date)

I rewrote the formula as

thisTable.Filter(Task=thisRow.Task AND Date=thisRow.Date).Count()

Now it takes ~100ms.

2 Likes

Hey @Dalmo_Mendonca, sorry about the slow response here.
Yup, CountIf() is one of the formulas we haven’t optimized yet.
Glad you were able to rewrite it and make it much faster!

Angad

Hey Codans (@Jason_Tamulonis, @Angad et al),

Can you please tell us more about how updates are propagated through lookup-linked tables, including tables that reference other rows from the same table?

I want to know the performance implications of one approach versus another for an ever-growing data set.

What triggers a full table recalculation? And what doesn’t? How does Coda resolve which rows and columns to recalculate, or does it recalculate everything? Is there internally a dependency graph, a digestion pass, or something else? And what about tables linked through manually set references versus a formula?

As a software engineer, I’m interested in as many technical details as possible. And when clients ask me about possible slowdowns, I’d like to have some answers.

2 Likes

I have cloned a doc which works seamlessly. I populated the clone with new data. It is a base table with several lookups, and it keeps showing “calculating…” again and again.

Very good point, @Paul_Danyliuk.
I think that having an idea of the (hopefully improving) breaking point is a rough metric we need to take into consideration.
Optimisation hints based on the under-the-hood implementation could help the community be more aware of this.

@Angad I believe at one point I asked the support team why the load times and performance varied so much per device in comparison to other mobile apps, but don’t remember what they said. When I upgraded my phone, performance jumped incredibly. Unfortunately though, most users on my team don’t have that same advantage. It seems iOS devices have more of an issue with performance versus Android. Your thoughts on this?

Also, love the debugging tool. When @Mallikar (I believe) tipped me off to that, my life was changed :smiley: That’s also when I realized the performance differences between filters and lookups.

Thank you for your continual dedication to making Coda the best it can be!

Hi @Paul_Danyliuk,

Thanks for the question. I should first point out we are constantly improving our optimizations and thus things are always changing, so what holds today might not hold tomorrow. :slight_smile:

At a high level, Coda does understand the dependencies between each piece of data in your document. The references in a formula and the rows selected by a lookup help build up our understanding of dependencies within the document.

When static data changes, we look through those dependencies to figure out which formulas need to be recalculated, and those recalculations can trigger further recalculations in anything that depends on them.
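
To make that idea a bit more concrete, here is a rough Python sketch of dependency-driven recalculation. It is purely illustrative and not Coda’s actual engine; the names (RecalcEngine, set_static, define) are made up for this example. The point it shows is that an edit only recalculates the formulas downstream of the changed value.

from collections import defaultdict, deque

class RecalcEngine:
    """Toy dependency-driven recalculation, loosely in the spirit described above."""

    def __init__(self):
        self.values = {}                      # node name -> current value
        self.formulas = {}                    # node name -> (function, input names)
        self.dependents = defaultdict(set)    # input name -> formulas that read it

    def set_static(self, name, value):
        # An edit to static data: store it, then recalculate only its dependents.
        self.values[name] = value
        self._propagate(name)

    def define(self, name, inputs, fn):
        # Register a formula and record the dependency edges it creates.
        self.formulas[name] = (fn, inputs)
        for dep in inputs:
            self.dependents[dep].add(name)
        self.values[name] = fn(*(self.values[d] for d in inputs))

    def _propagate(self, changed):
        # Walk the dependency graph breadth-first; stop along branches whose
        # recalculated value did not actually change.
        queue = deque(self.dependents[changed])
        while queue:
            node = queue.popleft()
            fn, inputs = self.formulas[node]
            new_value = fn(*(self.values[d] for d in inputs))
            if new_value != self.values.get(node):
                self.values[node] = new_value
                queue.extend(self.dependents[node])

engine = RecalcEngine()
engine.set_static("Price", 10)
engine.set_static("Qty", 3)
engine.define("Subtotal", ["Price", "Qty"], lambda p, q: p * q)
engine.define("Total", ["Subtotal"], lambda s: round(s * 1.2, 2))
engine.set_static("Qty", 4)        # only Subtotal and Total are recalculated
print(engine.values["Total"])      # 48.0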

Now there is a fair bit of complexity that goes into deciding whether we can recalculate a single cell in a column versus the entire column. I could probably write a longer blog post on this, but I’ll try to outline an example where this comes up. Imagine a scenario where you have Products which have a Name, a lookup column against a Category table, and a Cost. In the Categories table you want to sum the cost of all products in that category. You could do this a number of ways, but let’s consider two options:

A) TotalCost = Products.Filter(Category=thisRow).Cost.Sum()

B) Items = Products.Filter(Category=thisRow)
   TotalCost = Items.Cost.Sum()

Now let us consider what happens when we update the Cost of one product in both scenarios.

In option A, we know the TotalCost column depends on the Category and Cost columns in the Products table. For every category we need to extract the Cost of every product associated with it and sum them. The problem is that without remembering which products belong to a given category, we have to recompute the filter for every row in order to determine which category is associated with the product whose Cost changed. This is far from ideal.

You can probably see now why option B performs better. When the Cost is updated, we know that only the rows where the given product is present in Items need to be updated. There is one giant caveat here: sufficiently complicated setups can throw off these optimizations and fall back to performance similar to option A.

We do have some work planned on our backlog to automatically detect and optimize option A into option B, but until that happens option B should always perform better.
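
If it helps to picture the difference, here is a small Python sketch of the two shapes. It is again purely illustrative (a list of dicts standing in for the Products table, not how Coda is implemented): option A re-runs the filter across the whole table on every change, while option B keeps the filtered Items list per category, so a single Cost edit only re-sums the category that actually contains that product.

products = [
    {"Name": "Widget", "Category": "Hardware", "Cost": 5},
    {"Name": "Gadget", "Category": "Hardware", "Cost": 7},
    {"Name": "Ebook",  "Category": "Digital",  "Cost": 3},
]
categories = ["Hardware", "Digital"]

# Option A: nothing remembers which products belong to which category,
# so every recalculation re-runs the filter for every category row.
def totals_option_a():
    return {cat: sum(p["Cost"] for p in products if p["Category"] == cat)
            for cat in categories}

# Option B: keep the filter result ("Items") per category, so a Cost edit
# only needs to re-sum the category that contains the edited product.
items = {cat: [p for p in products if p["Category"] == cat] for cat in categories}
totals = {cat: sum(p["Cost"] for p in items[cat]) for cat in categories}

def update_cost(product, new_cost):
    product["Cost"] = new_cost
    cat = product["Category"]
    totals[cat] = sum(p["Cost"] for p in items[cat])   # touches one category only

update_cost(products[0], 9)     # edit a single Cost value
print(totals_option_a())        # {'Hardware': 16, 'Digital': 3}, full re-filter of everything
print(totals)                   # {'Hardware': 16, 'Digital': 3}, only Hardware was re-summed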

10 Likes