How do you test your docs?

Say you have a complex set of related docs that’s running your business. How do you ensure that someone making a change in some formula over here does not break some workflow over there?
I’m wondering whether there’s anything like integration tests over docs, that ensure that everything is as expected on an ongoing basis.

This is a great question. We do only manual testing. The result is that only one person (me) makes edits to all of the functionality, and that it is very risky to open up document process editing to other members of the team.

I would love to hear if anyone has had success instrumenting their docs with a UI integration testing solution.

indeed a good question. coda does not have dev-ops capabilities like version control, unit tests, deployment processes and environments (dev, test, pre-prod & live production), and it doesn't facilitate these concepts easily (as you will see below).

for the less complex situations, where the internal logic of documents is not too demanding and the dependency across docs is not great, we find it's acceptable to use a LIGHT change management system:

  • every doc has a change-log page with the history of major changes over time
  • take a copy of the live document as a backup before making changes
  • if you are about to make a significant change to an important table, make a copy of the table to a hidden page first (it's easier to restore that way)
  • tell users that changes are being made so they can stay away, then tell them when you are finished
  • do not depend on the document history mechanism to recover from your mistakes. it works, but it's hard to find the right snapshot among hundreds of changes
  • we always have test rows in the tables that we use to fully test the changed workflows. we don't let people generate ad-hoc test rows at random; they must use the ‘official’ test data sets, adding new cases if required.
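
the 'official test rows' rule above can be sketched in plain Python. this is only an illustration of the idea, not anything Coda provides — the names (`is_test_row`, `OFFICIAL_TEST_IDS`, the `test_id` field) are all hypothetical:

```python
# Hypothetical model of the "official test rows" discipline.
# A row only counts as test data if it carries an approved test id,
# so live views/reports can reliably filter it out.

OFFICIAL_TEST_IDS = {"TEST-001", "TEST-002", "TEST-003"}  # the approved data set

def is_test_row(row: dict) -> bool:
    """A row is test data only if it uses an approved test id."""
    return row.get("test_id") in OFFICIAL_TEST_IDS

def live_rows(rows: list[dict]) -> list[dict]:
    """Production views should exclude all recognised test rows."""
    return [r for r in rows if not is_test_row(r)]

rows = [
    {"test_id": "TEST-001", "amount": 10},
    {"test_id": None, "amount": 250},      # real transaction
    {"test_id": "TEST-999", "amount": 5},  # ad-hoc test row: NOT filtered out!
]
assert [r["amount"] for r in live_rows(rows)] == [250, 5]
```

note how the ad-hoc `TEST-999` row leaks straight into the live view — which is exactly why unofficial test rows are banned.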

but when the internal logic of documents is complex, and there are tightly-coupled dependencies across many documents, we use a more heavy-handed change management system:

  • all pages are locked - so users can't accidentally delete tables (it's happened), add columns, edit formulas, etc. we manage document ownership carefully so users can't unlock pages on a whim.
  • all transactional data is cross-doc'ed from a master database document. this isolates data storage from workflow logic. it's a layer of difficulty for sure, given how cross-doc works, but it enables the next rule…
  • ALL changes are made to a COPY of the workflow documents, leaving the LIVE version safe and unchanging, available to users. some changes can take days to complete.
  • the changes are logged in the Change Log Page in each document.
  • changes are all TESTED using the test rows in the master data tables.
  • when the changes are tested, we put the new doc into production by (a) renaming it and the old live doc and (b) changing the link in our HOME DOC to point to the correct document. users access the LIVE docs via the links in the HOME doc - don't bookmark direct document links for this, as they would point to the OLD document, not the NEW changed LIVE doc.
  • changes to the master database are slightly more complex. we need to keep the original document links unchanged for the cross-docs, so we have a special change-management process for this case.
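
the copy-edit-promote cycle above is essentially a blue-green deployment with the HOME doc as the switch. a minimal sketch, modeling the HOME doc's links as a dict (the doc ids and workflow name are made up for illustration):

```python
# Hedged sketch of promoting a tested COPY to LIVE via the HOME doc link.
# Users always navigate through HOME, so swapping one link flips everyone
# to the new doc while the old doc stays intact as a fallback.

home_links = {"Orders Workflow": "doc-live-v1"}  # illustrative ids

def promote(home: dict, workflow: str, new_doc_id: str) -> str:
    """Point the HOME link at the tested copy; return the old id for archiving."""
    old = home[workflow]
    home[workflow] = new_doc_id   # users following HOME now reach the new doc
    return old                    # rename/keep the old doc, don't delete it yet

old_id = promote(home_links, "Orders Workflow", "doc-live-v2")
assert home_links["Orders Workflow"] == "doc-live-v2"
assert old_id == "doc-live-v1"
```

the key design point: a bookmarked direct link is a hard-coded pointer to one version, while the HOME link is the single mutable pointer you control.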

for changes to the master database document, which all other docs cross-doc to when reading and updating the most valuable TRANSACTIONAL data tables…

  • we have a single page that contains ALL the main tables of the system, as collapsed raw tables (you will see why later)
  • we have a separate VIEW of each table in its own page, which we use to make changes (changes to a view are reflected in the raw table). all changes are logged (explained) as check-list items on the same page; when a change is tested, the box is ticked.
  • the CROSS-DOC views for each table are also stored in separate pages, one page per sub-doc, with an EXPORT view (data going out to the sub-doc) and an IMPORT view (changes coming back)… modifications to these are logged there as above
  • to SAFELY make a change to the master tables we do the following
  1. make a backup copy of the one-big-page that contains all the raw tables (that's why we make it a single page) - DUPLICATE the data, don't do a LINKED copy!
  2. make your changes on the VIEW pages and document them (this changes the raw tables in the big page)
  3. do sufficient tests to ensure the changes are correct
  4. leave the backup page in place in case you need to roll back over the coming days (if you can - sometimes the duplicated row-count gets too big and we must depend on the backup copy of the document instead)
  5. if there are older backups, we delete those
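
the DUPLICATE-not-LINKED rule in step 1 is the whole ball game, and it maps directly onto deep vs shallow copies. a toy model, with tables as lists of row dicts (table and column names are invented):

```python
import copy

# Sketch of steps 1-4: snapshot the big page of raw tables before editing.
# A DUPLICATED (deep) copy is independent; a LINKED (shallow) copy would
# silently mirror your edits and be useless for roll-back.

def take_backup(raw_tables: dict) -> dict:
    """Step 1: deep-copy every table so the backup cannot change underneath us."""
    return copy.deepcopy(raw_tables)

raw = {"Orders": [{"id": 1, "qty": 2}]}   # illustrative data
backup = take_backup(raw)

# Step 2: make the change via the views (modeled here as a direct mutation)
raw["Orders"][0]["priority"] = "high"

# Steps 3-4: test, and keep the untouched backup around for roll-back
assert "priority" not in backup["Orders"][0]
assert raw["Orders"][0]["priority"] == "high"
```

had `take_backup` returned `dict(raw_tables)` (a linked/shallow copy), the backup would show the new column too and the roll-back path would be gone.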

occasionally we do need to roll back changes made to the main tables - and it's not easy, because users will have made lots of changes to the data on the live system.

  • we usually need to kick off all users and create a recovery button that has the actions in it
  • those actions will update rows on the backup tables with changes made in the live system and insert any rows that are new - to get the backup tables to have the latest real-world data.
  • we cannot simply copy the backup-page to the live-page as all the table references used in the cross-docs will then be broken
  • so we construct a second button that will delete the rows of the live table (holding our breath) and copy over the rows from the backup tables (and breathe again).
  • and now we test everything again and let the users back onto the system

coding these recovery buttons is tedious because you must list every column by name and look out for the differences between the live tables and the backup tables. it's a punishment that fits the crime of having made changes that were incorrect in the first place ;o)
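
the two recovery buttons can be modeled in Python to show the shape of the logic. everything here is illustrative (the `id` key, the `COLUMNS` list, the row data); note how the columns must be listed explicitly, mirroring the tedium of the real button formulas:

```python
# Hedged model of the two recovery "buttons" described above.
# Button 1 folds the latest live edits into the backup (upsert);
# button 2 wipes the live table and copies the backup rows back over.

COLUMNS = ["id", "customer", "amount"]  # must name every column explicitly

def sync_backup(live: list[dict], backup: list[dict]) -> list[dict]:
    """Button 1: update backup rows from live, and insert any new live rows."""
    by_id = {r["id"]: dict(r) for r in backup}
    for row in live:
        by_id[row["id"]] = {c: row.get(c) for c in COLUMNS}  # update-or-insert
    return list(by_id.values())

def restore(live: list[dict], backup: list[dict]) -> None:
    """Button 2: delete the live rows (hold your breath), copy back (breathe)."""
    live.clear()
    live.extend(dict(r) for r in backup)

live = [{"id": 1, "customer": "A", "amount": 99},
        {"id": 2, "customer": "B", "amount": 5}]   # users kept editing
backup = [{"id": 1, "customer": "A", "amount": 10}]  # stale snapshot

backup = sync_backup(live, backup)  # backup now holds the latest real-world data
restore(live, backup)               # live table rebuilt from the backup
assert sorted(r["id"] for r in live) == [1, 2]
```

the reason live cannot simply be replaced by the backup page wholesale is in the post above: the cross-doc references point at the live tables, so the rows must be swapped inside the existing tables.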

the new advanced cross-doc pack from @Paul_Danyliuk will probably allow us to streamline all this in the future.

this level of complexity is only worthwhile if the system is too complex to allow ad-hoc changes and the transactional data is extremely valuable (i.e., it runs your business).

non-transactional tables used for lookups and referencing are not treated this way as they are less complex and can be changed in-situ more easily.

this process is especially important when your business depends on the data in these tables, or if you are in a regulated industry (like aerospace, finance, or pharma) where auditors with career life-or-death power can come around looking for risks and lawless practices.

respect
max

Awesome details, thanks for sharing :grin:
This is the kind of capability that platforms such as Coda should address natively. I like how you build processes using the tool itself, but it still feels clunky. I feel there should be a concept of meta-tables, where all pages/tables/fields/changes in the system are exposed automatically via the native UI concepts (pages/tables/fields).

Dear Max, it will never cease to amaze me how generously you share your time and insights with the community. This post, as any other, is a pleasure to learn from. Thanks so much!