🚀 **Coda Doc Scraper**: Export Your Entire Coda Doc to JSON!

Hey Coda Makers! :wave:

I’m excited to share the Coda Doc Scraper, a tool that makes it super easy to fetch data from your Coda doc via the Coda API and export it to JSON.

I use this primarily to get feedback on my Coda Doc from LLMs :robot:

  • Privacy & Security: Your API token is stored locally and never shared.
  • Flexible: Fetch metadata only, specific rows, or entire tables.
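
Under the hood, the export boils down to two Coda API calls: listing the tables in a doc, then pulling each table’s rows. Here’s a rough TypeScript sketch of those calls (illustrative only, not the project’s actual code; the doc ID, token handling, and pagination are simplified):

```ts
// Minimal sketch of a Coda doc export via the Coda REST API.
// DOC_ID and CODA_API_TOKEN are placeholders you'd supply yourself.
const DOC_ID = "YOUR_DOC_ID";
const API_TOKEN = process.env.CODA_API_TOKEN ?? "";
const BASE = "https://coda.io/apis/v1";

async function codaGet(path: string): Promise<any> {
  const res = await fetch(`${BASE}${path}`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Coda API ${res.status}: ${await res.text()}`);
  return res.json();
}

async function exportDoc(): Promise<void> {
  // List every table in the doc, then pull its rows with readable column names.
  // (Large tables would need pagination via pageToken; omitted here.)
  const { items: tables } = await codaGet(`/docs/${DOC_ID}/tables`);
  const doc: Record<string, unknown[]> = {};
  for (const table of tables) {
    const { items: rows } = await codaGet(
      `/docs/${DOC_ID}/tables/${table.id}/rows?useColumnNames=true`
    );
    doc[table.name] = rows.map((r: any) => r.values); // keep just the cell values
  }
  console.log(JSON.stringify(doc, null, 2)); // paste-ready JSON for an LLM
}

exportDoc().catch(console.error);
```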

Get the Code

If you’re a developer or just curious about how it works, the project is open source! Feel free to check out the code, contribute, or customize it for your needs:
:link: GitHub Repository


Feedback Welcome!

I’d love to hear your thoughts, suggestions, or feature requests! Let me know in the comments or open an issue on GitHub.


Happy scraping! :rocket:

13 Likes

Interesting use case - what’s an example question you’re asking the LLM, and what sorts of answers does it give??

Haha, yeah. I build a lot of pretty complex low-code app MVPs using Coda, so having a quick way to get feedback from an LLM is great.

There are several common prompts I use:

  • Summarize what this app/doc does (usually for my clients).
  • Do you have any recommendations for how I can improve this doc/app?
  • How can I streamline [workflow]?
  • Write a workflow tracking the user’s journey of this doc/app.
  • Write Instructions/a User’s Manual for this app/doc.
  • Are there any security vulnerabilities?
  • Help me build a custom React front-end that interacts with this Doc via the Coda API
  • And my new personal favorite: Convert this Coda doc into a React app that I can host on Vercel. (This makes the transition from a Coda low-code MVP to a production-ready web app with a custom UI and OAuth much easier/faster.)

LLMs (Claude in particular) have gotten much better at writing Coda Formula Language rather than conflating it with JavaScript. But as of right now, despite Coda AI, there’s no built-in way to get an AI to give feedback on your Doc the way GitHub Copilot does for code.

4 Likes

Wow Jon, that sounds amazing. Thanks for sharing with the community!

Is it really able to write a manual / user journey just by looking at this data? How good is it?

Can it really grasp all the complexity of a non-trivial doc and translate it reliably to React?

1 Like

Hey Pablo - thanks, man. Short answer is yes, with AI-QA and manual review/editing.

I could probably update the whole app to:

  1. Minify: Store table data more tightly/efficiently (less unnecessary info reduces LLM confusion) - roughly the idea sketched just after this list.
  2. Prompt Bank: Add some of the prompts I use regularly to the interface, so they can be selected/deselected for inclusion in the copy/export (e.g. a prompt to determine the User’s Journey(s); a prompt to fact-check the previously generated User’s Journey(s)).
  3. Chatbot: Integrate an LLM so you can “chat” with your doc (maybe with a dropdown to select a specific model).
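
For the “Minify” idea, the gist is just stripping the rows payload down to the cell values before handing it to an LLM. Something roughly like this (illustrative only; `minifyRows` is a made-up helper, and the row shape follows what the rows endpoint returns with `useColumnNames=true`):

```ts
// Hypothetical "minify" pass: the Coda rows API returns a lot of metadata
// (ids, hrefs, timestamps, browser links) that an LLM doesn't need.
// Keeping only the column/value pairs shrinks the payload considerably.
interface CodaRow {
  name: string;
  values: Record<string, unknown>;
  // ...plus id, href, createdAt, updatedAt, browserLink, etc.
}

function minifyRows(rows: CodaRow[], columns?: string[]): Record<string, unknown>[] {
  return rows.map((row) => {
    const entries = Object.entries(row.values)
      // optionally drop columns you never ask the LLM about
      .filter(([col]) => !columns || columns.includes(col))
      // collapse empty cells so they don't waste tokens
      .filter(([, value]) => value !== "" && value !== null);
    return Object.fromEntries(entries);
  });
}
```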

Awesome! Now integrate an import function and you’ve basically created a backup service for Coda docs. Something a lot of people ask for. :smile:
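
For reference, the Coda API does expose a row insert/upsert endpoint, so an import/restore path could look roughly like this (a sketch only; the table ID, key columns, and row shape here are assumptions you’d adapt):

```ts
// Sketch of pushing previously exported rows back into a table via the
// Coda API's row upsert endpoint. Assumes `rows` is the minified
// { column: value } shape produced by an export; keyColumns controls
// which existing rows get updated instead of inserted.
async function restoreRows(
  docId: string,
  tableId: string,
  rows: Record<string, unknown>[],
  keyColumns: string[] = []
): Promise<void> {
  const body = {
    rows: rows.map((row) => ({
      cells: Object.entries(row).map(([column, value]) => ({ column, value })),
    })),
    keyColumns,
  };
  const res = await fetch(
    `https://coda.io/apis/v1/docs/${docId}/tables/${tableId}/rows`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.CODA_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    }
  );
  if (!res.ok) throw new Error(`Upsert failed: ${res.status}`);
}
```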

4 Likes

This is a thing of beauty! Thank you so much!

If you provide Replit with a few screenshots of a Coda app, Replit Assistant will generate the app in a language of your choice, debug it, and deploy it ready for testing and refinements/enhancements.

Yes. Coding models have advanced quite a bit in the past year to adequately infer underlying complexities with minimal guidance and UI screenshots.

A screenshot might give you some of the necessary context, but there can be complex actions/filters/formulas that aren’t obvious from the UI. I was wondering whether their understanding of all this underlying logic in CFL (which AFAIK the commercial models are not great at) is good enough to reliably translate it to React.

We shouldn’t underestimate the reasoning these recent LLMs can perform. I’ve seen this approach produce magical results, often better outcomes than the original app. However, you have to nudge them in most cases. They can infer a great deal, but they are not mind readers (yet).

But here’s the other side of this twisted coin - you can also describe an app instead of challenging the LLM to guess how something was made based on a picture. Solid requirements fed into Replit Assistant will almost always amaze you. And it can handle requirements made up of images, data, code, and even video.