So I’ve been thinking lately about the best way to interact with my Coda data using AI. The buzz right now is around MCP, and that seems like a reasonable way to go.
Coda doesn’t provide any native MCP server, but I do see one from Pipedream and plan to experiment with that.
However, my use case with Coda is not as a documentation platform, where it would be helpful for an LLM to simply read all my docs and answer questions about them. The stuff I’m doing is more app-like, with certain docs acting essentially as standalone web apps.
With this in mind, my instinct is that a generic Coda MCP server will not be opinionated enough to provide effective tools to an LLM. Rather than exposing a getRows tool, it seems to me that I should be exposing a getShoots tool for the doc I use to schedule video shoots, particularly because a “shoot” is far more complex than a single row: it pulls in related data about crew bookings, equipment allocations, etc. from perhaps a dozen other tables, some of them strung together in somewhat hacky ways to enable the UI constructs I needed for human users.
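To make that concrete, here’s a rough sketch of the kind of tool I’m picturing, using the official TypeScript MCP SDK. Everything doc-specific in it is a placeholder: the getShoots name, its date parameters, and the loadShoots helper all stand in for whatever my actual schema needs.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "shoot-scheduler", version: "0.1.0" });

// Domain-specific tool: "getShoots" rather than a generic "getRows".
// The handler is where I'd stitch the shoot row together with its related
// crew bookings, equipment allocations, etc. from the other tables.
server.tool(
  "getShoots",
  "List scheduled video shoots, including crew and equipment assignments",
  {
    from: z.string().describe("ISO date, start of range"),
    to: z.string().describe("ISO date, end of range"),
  },
  async ({ from, to }) => {
    const shoots = await loadShoots(from, to); // placeholder for the real aggregation
    return { content: [{ type: "text", text: JSON.stringify(shoots) }] };
  }
);

// Placeholder: the real version would query the dozen-odd Coda tables and
// denormalize them into one "shoot" object per row in the Shoots table.
async function loadShoots(from: string, to: string): Promise<unknown[]> {
  return [];
}

await server.connect(new StdioServerTransport());
```

The point is that the LLM only ever sees “getShoots with a date range”, not my table names, lookup columns, or the hacky cross-references underneath.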
Do you agree that it’s reasonable for me to roll my own custom API surface here, standing up an MCP server that has specific knowledge of the data structures in my Coda “apps”? Or am I thinking about this in too much of a pre-AI mindset, and should I just relax and trust that the AI can figure out all my schemas? (That seems like the long-term endgame, but I’m incredibly skeptical of it today.)
And if a custom MCP server is the way to go, what do folks like for that in terms of platform? I’m agnostic about the level of code involved; it could be no-code or very much from scratch. (Follow-up question: should my MCP server engage directly with the Coda API, or should I set up some kind of data replication to Supabase or something like that?)
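For what it’s worth, the “talk to the Coda API directly” option seems simple enough to sketch, and it also shows what a real loadShoots might start to look like. This assumes the v1 rows endpoint and Bearer-token auth as I understand them from the Coda API docs; the doc ID, table names, and the naive relation matching are all placeholders.

```ts
// Minimal sketch of hitting the Coda REST API directly from a tool handler.
// Assumes the v1 rows endpoint and a Bearer token; IDs below are placeholders.
const CODA_TOKEN = process.env.CODA_API_TOKEN!;
const DOC_ID = "your-doc-id"; // placeholder

async function getRows(tableIdOrName: string): Promise<any[]> {
  const url =
    `https://coda.io/apis/v1/docs/${DOC_ID}/tables/` +
    `${encodeURIComponent(tableIdOrName)}/rows?useColumnNames=true`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${CODA_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Coda API error ${res.status}`);
  return (await res.json()).items; // rows come back with id, name, and a values map
}

// A "shoot" is then a join across several tables, stitched together in code.
// (Matching related rows by name is naive; relation columns actually return
// reference values, so the real lookup would be a bit more involved.)
async function loadShoots(from: string, to: string) {
  const [shoots, crew, equipment] = await Promise.all([
    getRows("Shoots"),
    getRows("Crew Bookings"),
    getRows("Equipment Allocations"),
  ]);
  return shoots
    .filter((s) => s.values["Date"] >= from && s.values["Date"] <= to)
    .map((s) => ({
      ...s.values,
      crew: crew.filter((c) => c.values["Shoot"] === s.name),
      equipment: equipment.filter((e) => e.values["Shoot"] === s.name),
    }));
}
```

As I understand it, the main appeal of replicating to Supabase first would be doing these joins in SQL and not burning Coda API calls (and their rate limits) on every tool invocation, but that’s exactly the tradeoff I’m unsure about.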