In Debug Calculations, what does the "[integer]X" value refer to?

In the Debug Calculations panel, there is a time value (how long something takes to load) and then an ‘[integer]X’ value, such as 0X, 1X, 10X, 30X. Is this referring to how many times this object is loaded? How can we use that information to make our docs more performant?


As far as I know, it shows the number of recalculations of that particular property that took place while debugging. This helps detect parts of the doc where one change triggers cyclic recalculations (e.g. a change in a User name triggers recalculation of Projects and Tasks, causing a change in Projects that triggers yet another recalculation in Tasks).

Hope Codans can confirm this or add more details to it.

As far as I know, this is the number of times that calculation has run. I’ve had people tell me that a doc just slows down sometimes, so I’ve had the analyzer open for 15 minutes and various calculations will happen multiple times in that window.

This helps with what @Paul_Danyliuk mentioned as well as being able to better judge how long an individual calculation takes if it happens to run multiple times.

Thank you @Paul_Danyliuk, I think you are correct. However, it seems appropriate on this forum to get a definitive answer to such an important question about how to use a Coda tool designed to make docs more performant. Having the answer on the forum seems essential to me.

@BenLee can you get someone to answer this authoritatively? Even better would be some use cases on how to use it to optimize our docs. Thank you!

I hereby decree that “N” shall equal the number of times a calculation has run!

Haha, but seriously, add Now() to a doc and then run the calculations analyzer and you’ll see that it ticks up by one every time Now() runs.

As far as how to use it: sometimes you’ll see a longer time for a calculation than you might expect, and this lets you see that it might be because it has simply run many times. So some things might not be as calculation-intensive once you take that into consideration. It also lets you verify the time it took to run the calculation once.
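For instance, a minimal experiment (this exact formula is just an illustration, not from Coda’s docs): put a formula that depends on Now() on the canvas and leave the analyzer open.

```
Format("Last refreshed at {1}", Now())
```

Each time Now() refreshes, the N count for this formula ticks up by one while the per-run time stays small, which is the signature of a cheap calculation that simply runs often.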


Thanks @BenLee for the humor. My point was that when someone says ‘as far as I know’, that isn’t coming from a place of confidence, whereas you have access to the Coda development team.

This seems to make sense, but I have never seen that number get much above a hundred, let alone into the thousands, yet I have many tables with that many rows that get recalculated, so it doesn’t mesh with my experience. So, despite your beautiful decree, I don’t think that is the full answer. Thanks for helping me understand.

The number is not for recalculations of each individual row — the number is for each “recalculation pass”.

Picture this:

  • A row is added in the Tasks table that triggers Tasks.Filter(Project = thisRow) recalculation on a column in the Projects table. Regardless of the number of rows in the Tasks table, this will show up as 1x. The time, of course, will depend on the sizes of both tables.

  • Now, let’s say that column is not just Tasks.Filter(Project = thisRow) but a more complicated expression that depends on multiple tables and/or canvas formulas, which are also indirectly linked to the Tasks table through a complicated dependency graph:

In a scenario like this, adding a new task:

  • Directly affects View of Tasks, triggering the recalculation of this filter
  • Triggers recalculation of Users.Last Added Task
  • The latter triggers recalculation of the canvas formula ULILastCreatedTask
  • And finally, that formula triggers recalculation of the filter once again

It is very likely (though not guaranteed) that some of the recalculations will be deferred, so the View of Tasks will actually be recalculated twice: first on the change coming from Projects.Tasks, and then on the change coming from the ULILastCreatedTask canvas formula. That’s when you’ll see 2x in your debugger.
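To make the chain above concrete, here is a sketch of what such formulas might look like (these exact formulas are hypothetical, only illustrating the direction of the dependencies):

```
Users.Last Added Task (column formula):
    Tasks.Filter(CreatedBy = thisRow).Last()

ULILastCreatedTask (canvas formula):
    Users.[Last Added Task].Last()

View of Tasks (filter formula):
    thisRow.Project = ULILastCreatedTask.Project
```

Adding a Task row touches every node in this chain, so the view’s filter recalculates once per frame of incoming changes rather than once overall.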

Another scenario where you can see this is if you have a RunActions() that modifies the same row multiple times. You might have written such code for convenience (e.g. your former Flow button which I debugged): you modify certain columns at the beginning of the formula, then some other columns at the end. But those are two separate edits, and RunActions() forces them onto two separate frames. If each of the edits triggers recalculations of other parts of the doc, then you’ll see 2x not just on the immediate row but also on a) everything that depends on it directly, and b) everything that’s affected by the changes in a).

So, to answer your question about how to apply this — use this information to:

  • Find places where changes don’t arrive in a single frame and a formula gets recalculated multiple times, first with partial changes and then with the final state. Solution: simplify the dependency graph; make it so that changes flow in only one direction and ideally don’t branch and merge.

  • Find actions that change the state on the same row multiple times within one RunActions() and rewrite the code so that all changes are applied at once in a single ModifyRows.
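As a sketch of that second point (column names here are hypothetical), a button formula like this forces two frames and therefore two recalculation passes on everything downstream:

```
RunActions(
  thisRow.ModifyRows(Status, "In Progress"),
  thisRow.ModifyRows(StartedOn, Now())
)
```

whereas a single ModifyRows applies both edits in one frame:

```
thisRow.ModifyRows(
  Status, "In Progress",
  StartedOn, Now()
)
```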

Of course, not all 2x-3x-5x calculations are worth the effort if they run in mere milliseconds anyway. They can trigger excessive recalculations in heavier formulas, though, so it may still be worth checking on them.

I’m not a Codan though, so I guess my answer is not reputable enough either. I’ll try to get an engineer’s attention to this thread, though I have a feeling it’s something that only a few engineers would know with 100% certainty.


Great answer, @Paul_Danyliuk!

I’d say this topic can be considered answered.

Thank you @Paul_Danyliuk for once again bringing your brilliance to the discussion. This makes lots of sense as you’ve explained it.

And I agree: it seems like one of the advantages of this community is to support power users so we can continue using the product (we can’t use it if it’s too slow!) and get definitive answers on technical issues from the people who know, as opposed to us lay-users speculating.

@BenLee Why not get an engineer to answer this authoritatively since Paul says his answer is a “guess”. Why would a guess be ‘considered answered’?

We are spending huge investments of time and money to make these docs work and it would be more respectful of that investment to get us a proper answer than a “guess”.


Mostly because our engineers are working on the product. I can appreciate that you don’t want to hear from me on this, but you might be stuck with my answer and have to trust that I’ve talked to an engineer about it. They jump in when they can, but we’re incredibly busy and this question is answered. @Paul_Danyliuk gave a great answer to it in fact.

@BenLee I would love to hear the answer to the question, regardless of whom it comes from. I love that @Paul_Danyliuk is such a genius who can figure these things out, and that he spends so much of his valuable time contributing to this community at his own expense. What I don’t love is that Coda brags constantly about raising hundreds of millions of dollars, and then tells one of the most prolific power users on the platform (who represents many other users that have, or will have, the same question) that he is ‘stuck’ with a substandard answer, and that Coda is just too ‘incredibly busy’ to respond to someone who obviously isn’t as important as all the other ‘incredible’ things you are working on!

Another example of just blowing off your user base: ignoring foundational technical questions on PAID features for a year: CrossDoc: Create a new Coda API token - #2 by Jeroen_Broerse

It’s hard not to take your attitude as an insult to me and the other users of Coda, especially when these are real technical questions that cannot be answered by any other documentation and are related to really investing in Coda as a platform. A question about the performance tool is about growing Coda and making it work and be a joy for users. When that is at the bottom of your priority list, it helps me understand better where and in which platforms I should be investing.

cc: @shishir


I’ve obviously messed up a lot here. I made a poor attempt at some light-hearted humor and I stated my first answer in an ambiguous way. I’ll work on my communication skills, and I’m definitely sorry you didn’t feel your question was taken seriously or answered sufficiently.

In all honesty, that number is a count of calculation operations that have run for that item. The table breaks down into columns, which are the smallest single item measured, not rows.

If you have the calculations analyzer open and you edit a column formula, you will see that column recalculate and you will see N=1 because that calculation will have run once. This means the whole column was calculated. Whether you have 100 rows or 10,000 rows, the column calculates once.

There isn’t any trick to it and there isn’t anything hidden that we’re not telling you. I leave some ambiguity in my answers because this community is incredibly creative and seems to always find new ways to use simple features. It feels arrogant of me to answer anything too absolutely when there are so many talented and creative people offering up solutions that have never crossed my mind here. So I do tend to speak in a manner that leaves things open-ended.

As for this feature, it does exactly what was stated in this thread. As for how it can be used, I’m sure there are ways I’ve never considered, and that part is up to you and anyone else who has ideas to discuss here.

If you feel anything is still unanswered, please let us know the specifics you still have questions about and I’ll do my best to get you a sufficient reply.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.