Working with ChatGPT has fast-tracked my Coda skills enormously. I've been able to develop complex, effective schemas and formulas well beyond my own abilities, and I've learned a great deal in the process.
The only friction comes from having to describe the existing schema and relationships, and copy/paste the formulas, by hand. Screenshots of docs, tables, and column lists provide some extra help. The Doc Map, not so much.
I'd really like to be able to export a meaningful dataset containing all of this info, minus row data, to share with ChatGPT.
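For anyone wanting to try this themselves, here's a rough sketch of that kind of export using Coda's public REST API (the `/tables` and `/columns` endpoints), which returns metadata only, never row values. Assumes you have an API token from your Coda account settings; the exact fields on the column objects (e.g. `format`, `formula`) may vary by column type, so treat this as a starting point, not a polished tool:

```python
# Sketch: export a Coda doc's schema (tables, columns, column types, formulas)
# as plain text you can paste into a ChatGPT prompt. No row data is fetched --
# only the /tables and /columns metadata endpoints are called.
import json
import urllib.request

BASE = "https://coda.io/apis/v1"


def _get(path, token):
    """GET a Coda API path and parse the JSON response."""
    req = urllib.request.Request(
        f"{BASE}{path}", headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def fetch_schema(doc_id, token):
    """Return {table name: [column objects]} for one doc -- schema only, no rows."""
    tables = _get(f"/docs/{doc_id}/tables", token)["items"]
    return {
        t["name"]: _get(f"/docs/{doc_id}/tables/{t['id']}/columns", token)["items"]
        for t in tables
    }


def describe_schema(schema):
    """Render the schema dict as indented plain text for an LLM prompt."""
    lines = []
    for table, cols in schema.items():
        lines.append(f"Table: {table}")
        for c in cols:
            col_type = c.get("format", {}).get("type", "unknown")
            desc = f"  - {c['name']} ({col_type})"
            if "formula" in c:  # calculated columns expose their formula text
                desc += f" = {c['formula']}"
            lines.append(desc)
    return "\n".join(lines)
```

Usage would be something like `print(describe_schema(fetch_schema("your-doc-id", "your-token")))`, then pasting the output into a ChatGPT conversation as context.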
Yes! I'm offering a service to visualize how the columns in a doc's tables interact.
I usually only deliver pictures of the diagrams, but if you're planning to feed it to an LLM, I could attach the diagram code as well; it should know what to do with it =) It's an eraser.io "Entity Relationships" diagram.
Oh wow! Sorry I missed this until now. This looks great. Some docs contain sensitive data: does this access only the doc's infrastructure, or does it require duplicating and emptying the docs before analysis?
It would have access to everything, since Coda doesn't yet offer granular permissions on their API tokens. Of course it's not reading any rows, but I'm afraid that's hard to prove. Duplicating and emptying the docs sounds like the best current solution!