That’s amazing! Just to confirm, we created our own custom GPT through ChatGPT. But we don’t want to send the link around internally and would rather have it embedded on a Coda page.
This is what I made and have been using for a few months…
Do you know if you’ll also be adding a comments endpoint that the MCP can use? Will we be able to create tables via the MCP? Those would be two really nice upgrades that we just can’t do right now with the available endpoints.
This particular update will not impact the UI itself, as it is purely a backend change: a new API that lets you connect to Coda from any agent or AI assistant.
That being said, we are looking into how you can build and deploy those agents directly in Coda and Superhuman Go as a future update. We would love to hear more about your use cases so we can keep them in mind as we design these features.
Hey Dustin! We’ve been playing with your MCP as well, and it has helped us think through our own design, so it’s great to speak with you directly.
Re creating tables - big yes! In our own testing this has been one of the biggest game changers.
Re comments - this is one we are still exploring and prototyping. What are the specific use cases you have in mind? Are they primarily read-only, or do you also need write access?
Thanks Bharat - here’s the original copy from my post a few weeks back:
We are in the process of publishing a custom GPT that will be used internally at my company. Ideally, we’d love to have it integrated directly into a Coda page instead of sharing the private link.
The main reasons:
Privacy concerns: if we share the private link, anyone who gets hold of it (whether they’re internal or not) could use it.
Workflow: we’d prefer our team to stay within Coda rather than opening the GPT in a separate browser window.
My question: Is it possible to embed or integrate a custom GPT directly into a Coda page (similar to how you can embed other apps or tools)? Or is there another approach you’d recommend for securely hosting it inside Coda?
Regarding the comments… for me it is both, but weighted more heavily toward reading. That said, once you allow reads, allowing writes as well should be straightforward.
This is great to hear! Table creation is going to be AWESOME! As for comments…
Here is one example:
A project I work on has users creating content in Coda, which is reviewed and iterated on. Users like the comment and suggested-edit functionality, and I want to enable them to open their LLM of choice, ask it about their work and any comments left by their counterparts or managers, and then address those comments from the LLM itself. So that would be read and write.
Make a Claude Project (similar to a custom GPT) and load it with system instructions, table schemas, and any other relevant docs. I also include entity IDs — think doc IDs, page IDs, and table IDs — so the Claude Project instance knows what is going on and where to take actions. This has proven quite nice: you can configure something like this to take a simple input, like just a web link, and it will kick off a whole workflow that gets logged in Coda, which can then set off the whole suite of automations or other Pack actions. One of my favorites logs a bunch of info in Coda, then clicks a button to trigger a workflow in Coda that connects to an LLM via Packs.
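To make the “log it in Coda, then let automations take over” step concrete, here is a minimal sketch of inserting a row via Coda’s public REST API (`POST /docs/{docId}/tables/{tableIdOrName}/rows`, authenticated with a Bearer token). The column names (“Link”, “Notes”), the doc/table IDs, and the `CODA_API_TOKEN` environment variable are all assumptions — substitute your own. A row insert like this is what can kick off button/automation chains inside the doc:

```python
import json
import os
import urllib.request

API_BASE = "https://coda.io/apis/v1"


def build_row_payload(link: str, notes: str) -> dict:
    """Build the body for Coda's insert-rows endpoint.

    The column names "Link" and "Notes" are hypothetical -- use the
    column names (or column IDs) from your own table.
    """
    return {
        "rows": [
            {
                "cells": [
                    {"column": "Link", "value": link},
                    {"column": "Notes", "value": notes},
                ]
            }
        ]
    }


def log_to_coda(doc_id: str, table_id: str, payload: dict) -> int:
    """POST the payload to Coda; the new row can trigger doc automations.

    Expects a Coda API token in the CODA_API_TOKEN environment variable.
    """
    req = urllib.request.Request(
        f"{API_BASE}/docs/{doc_id}/tables/{table_id}/rows",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['CODA_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Coda accepts row inserts asynchronously


# Example (network call commented out; fill in your own IDs):
# log_to_coda("YOUR_DOC_ID", "grid-YOUR_TABLE_ID",
#             build_row_payload("https://example.com", "Logged by my Claude workflow"))
```

From a Claude Project, the equivalent happens through the MCP tools rather than hand-written HTTP, but the payload shape is the same idea: one row, one set of cells, and Coda’s automations pick it up from there.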
A modern data platform should let you enter data in various languages and in different artifact types (images, docs, videos, audio, etc.) on any device, and equally retrieve it in any language and in different content types (docs, tables, audio, video).
This update is a big step forward, and if Codans listen carefully to the community, the above will be achieved soon.
Does this mean we’ll see improvements in the Coda API (table creation, access to comments, etc.), or will the MCP use a private API that isn’t available to the rest of us?
While I’m not personally interested in MCP, any improvements to the Coda public API would be greatly appreciated—it seems like it hasn’t seen active development in the past 3 years.
The short term goal is to get functionality into the MCP interface, as we think that will unlock a ton of value for users of all types. Since LLMs are very tolerant of change, it’s also a good place to experiment with how best to structure these new interfaces. My hope is that it will lay the groundwork needed to one day expose the same features in the REST APIs, but it would almost certainly be a separate effort.