@Jean_Pierre_Traets, @Christiaan_Huizer, @Jono_Bouwmeester, sharing settings on the Source Doc are now updated. Let me know if you still have any problems.
@Christiaan_Huizer re your questions:
- Since I’m unpacking the data with `_Merge()`, I’m not overly concerned about it changing. All I’m grabbing is the underlying JSON data structure, which will always be useful. If the formula were ever to break, it would just cause my archive automation to fail, and I’d have to go change it. No data would be lost, though, and the retrieval function (`ParseJSON()`) is definitely supported, so I’ll always be able to get my data back.
- Re the overhead of the `ParseJSON()` formulas: I’m not sure how much slower they are. Presumably this is similar to what Coda is doing under the hood when it looks up the contents of a row in a Cross Doc table anyway. Either way, it’s a tradeoff in exchange for far less maintenance on my part as columns are added to and removed from the source doc.
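For anyone following along, the pattern looks roughly like this. Treat it as a sketch: the `Archive` table and `Data` column names are hypothetical, `_Merge()` is an undocumented/hidden formula, and the exact `ParseJSON()` path syntax may vary:

```
// In the archive automation: pack the entire Cross Doc row into one JSON cell.
// Because _Merge() serializes the whole row object, adding or removing columns
// in the source doc requires no changes here.
Archive.AddRow(Archive.Data, thisRow._Merge())

// Later, pull an individual field back out of the archived JSON:
Archive.Data.ParseJSON("$.Status")
```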
What would be really great is if Coda supported a way to copy the row object that Cross Doc returns into a new table as an object. Then you could do this without `_Merge()` and could reference fields with just `Row.Field` as usual.
Or if Coda could cross doc more than 10k rows, then you could do this without archiving to a separate table.
Or if Coda could run quickly with more than 100k rows and complex lookup formulas, then you would never have to cross doc.
All of this is worth pointing out just to show that the root of the problem is performance; everything above is a workaround. And of course, Coda is working hard on performance, but these sorts of workarounds may always be necessary at some level of doc size and complexity.