Custom Coda MCP Servers for AI Interaction

So I’ve been thinking lately about the best way to interact with my Coda data using AI. The buzz right now is around MCP, and that seems like a reasonable way to go.

Coda doesn’t provide any native MCP server, but I do see one from Pipedream and plan to experiment with that.

However, my use case with Coda is not as a documentation platform, where it would be helpful for an LLM to simply read all my docs and respond with information. The stuff I’m doing is more app-like, with certain docs acting essentially as standalone web apps.

With this in mind, my instinct is that a generic Coda MCP server will not be opinionated enough to provide effective tools to an LLM. Rather than exposing a getRows tool, it seems to me that I should be exposing a getShoots tool for the doc I use to schedule video shoots. A “shoot” is far more complex than a single row: it pulls in related data about crew bookings, equipment allocations, and so on from perhaps a dozen other tables, some of them strung together in somewhat hacky ways to enable the UI constructs I needed for human users.
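To make this concrete, here’s a rough sketch of what that getShoots tool could look like, assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk) and the Coda REST API. The doc ID, table IDs, column names, and the way relations are matched are placeholders for illustration, not my actual schema.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const CODA_TOKEN = process.env.CODA_API_TOKEN!;
const DOC_ID = "YOUR_DOC_ID"; // placeholder

// List rows from one table via Coda's public REST API, keyed by column name.
async function listRows(tableId: string) {
  const url = new URL(`https://coda.io/apis/v1/docs/${DOC_ID}/tables/${tableId}/rows`);
  url.searchParams.set("useColumnNames", "true");
  const res = await fetch(url, { headers: { Authorization: `Bearer ${CODA_TOKEN}` } });
  if (!res.ok) throw new Error(`Coda API error: ${res.status}`);
  return (await res.json()).items as Array<{ id: string; values: Record<string, unknown> }>;
}

const server = new McpServer({ name: "shoot-scheduler", version: "0.1.0" });

// One domain-level tool instead of a generic getRows: it stitches the shoot row
// together with its crew bookings and equipment allocations, so the LLM never
// has to reverse-engineer the cross-table relationships itself.
server.tool(
  "getShoots",
  { dateFrom: z.string().describe("ISO date; return shoots on or after this date") },
  async ({ dateFrom }) => {
    // Hypothetical table IDs; the real doc has roughly a dozen related tables.
    const shoots = await listRows("grid-Shoots");
    const crew = await listRows("grid-CrewBookings");
    const gear = await listRows("grid-EquipmentAllocations");

    // Matching on a plain "Shoot" text column is a simplification; real Coda
    // relation columns come back as references and need a bit more unpacking.
    const result = shoots
      .filter((s) => String(s.values["Date"]) >= dateFrom)
      .map((s) => ({
        ...s.values,
        crew: crew.filter((c) => c.values["Shoot"] === s.values["Name"]).map((c) => c.values),
        equipment: gear.filter((g) => g.values["Shoot"] === s.values["Name"]).map((g) => g.values),
      }));

    return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
  }
);

await server.connect(new StdioServerTransport());
```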

Do you agree that it’s reasonable for me to try to roll my own custom API surface here, standing up an MCP server that has specific knowledge about the data structures of my Coda “apps”? Or am I thinking about this in too much of a pre-AI mindset, and I should just relax and trust that the AI can figure out all my schemas? (This seems like the long-term end game, but I’m incredibly skeptical of it currently.)

And if a custom MCP server is the way to go, what do folks like for that in terms of platform? I’m agnostic about the level of code involved; it could be no-code or very much from scratch. (Follow-up question: should my MCP server engage directly with the Coda API, or should I set up some kind of data replication to Supabase or something like that?)
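For reference, one version of the direct-to-API option I could imagine (a sketch only; the helper names and the 60-second TTL are arbitrary assumptions) is to call the Coda API directly but put a short-lived in-memory cache in front of the row fetches, and only reach for Supabase-style replication if rate limits or latency actually become a problem:

```typescript
// Sketch of the "call the Coda API directly" option: a tiny TTL cache in front
// of the row fetches so repeated tool calls within one chat session don't
// hammer the API. Names and the 60-second TTL are illustrative only.
const rowCache = new Map<string, { fetchedAt: number; rows: unknown[] }>();
const TTL_MS = 60_000;

async function cachedRows(
  tableId: string,
  fetchRows: (tableId: string) => Promise<unknown[]>,
): Promise<unknown[]> {
  const hit = rowCache.get(tableId);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) return hit.rows;
  const rows = await fetchRows(tableId);
  rowCache.set(tableId, { fetchedAt: Date.now(), rows });
  return rows;
}

// Usage inside a tool handler: await cachedRows("grid-Shoots", listRows)
```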

It can’t. At least not well. But you should try a generalized MCP server and provide the tools with your schema so that it will be well-informed about your expected outputs.
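For example, a minimal sketch of that idea (again assuming the TypeScript MCP SDK and made-up table and column names): keep the tool itself generic, but bake a hand-written description of the doc’s tables and relationships into the tool description so the model knows what the rows it gets back actually mean.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hand-written schema notes for the doc; this text is what makes a generic
// tool "well-informed": the model reads it as part of the tool description.
const SHOOT_DOC_SCHEMA = `
Tables in this doc:
- Shoots(Name, Date, Location, Status): one row per video shoot
- CrewBookings(Shoot -> Shoots.Name, Person, Role)
- EquipmentAllocations(Shoot -> Shoots.Name, Item, Quantity)
`;

const server = new McpServer({ name: "shoot-scheduler-generic", version: "0.1.0" });

// Generic row reader against the Coda REST API (same placeholder doc ID as above).
async function listRows(tableId: string): Promise<unknown[]> {
  const res = await fetch(
    `https://coda.io/apis/v1/docs/YOUR_DOC_ID/tables/${tableId}/rows?useColumnNames=true`,
    { headers: { Authorization: `Bearer ${process.env.CODA_API_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`Coda API error: ${res.status}`);
  return (await res.json()).items;
}

server.tool(
  "getRows",
  `Read rows from any table in the shoot-scheduling doc.\n${SHOOT_DOC_SCHEMA}`,
  { tableId: z.string().describe("Coda table ID or name") },
  async ({ tableId }) => ({
    content: [{ type: "text", text: JSON.stringify(await listRows(tableId), null, 2) }],
  }),
);
```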


Maybe give Fibery a test 🙂 until Coda hears us.

Just want to wade in here with a perspective from my own MCP experiments as I’m aware product teams read these threads.

I’m still learning about MCP and playing about with use cases, though it hasn’t gone great with my Coda tables. As Nick alluded to, he’s building his own server, and a lot of people in the Notion world have custom servers for specific docs too.

I have a tiny Pinecone vector store instance running for a project right now, and it’s made me think a lot that maybe we’re looking at a new paradigm shift in the balance between human readability, something Coda excels at, and machine readability.

If or when Coda introduces a native MCP integration (and I’d be tolerant of some compatibility changes if it meant a proper implementation rather than something rushed into the product haphazardly), the big question becomes: do we optimize for human collaboration or for AI consumption? Maybe the answer is both?
