I made a 'thing' that might be useful?

Agile Vibe Maker Pack

An AI agent for building Coda workflows from English* prompts.

From inside your Coda document you will be able to:

  • Provide a description of a workflow in English*
  • Generate a Coda doc implementation of that workflow
  • Include RAG AI Agents (in native Coda) if required

The production version will use the Superhuman Go Agent Builder SDK when that is generally available.

This experimental version requires access to a GitHub account, an OpenAI API account, and the Coda MCP Beta-test server, alas.

Another limitation, for now, is the inability of the Coda MCP server to generate buttons or automations.
(For which we have a workaround, using the CFL Actions Compiler we built for our Visual Basic pack some years ago.)

The pack will also allow one Coda document to create or modify other Coda documents using just CFL formulas.

It will also generate Superhuman Go Agent SDK pack code so we can create Grammarly-like agents using English* prompts.

Members of the current Coda MCP Server beta-test program might like to volunteer to help us test this pack?

You can sign up via the Coda form here.

Respect
:red_circle:➤𝖒𝖆𝖝

(* This pack uses the emerging “Markdown Specification Notation” to define the required functionality - a structured form of English)

Pack ID

5 Likes

You had me at “thing”. :blush:

Yeah, limitations like this are a real buzz-kill. @Bharat_Batra1 is hopefully watching.

I overcome these limitations in most use cases with two [agentic] rules baked into mcpOS:

  1. Authorize the agent to use a browser subagent.
  2. Instruct the subagent to verify and thoroughly test its approach in a document “sandbox” before implementing in the production document.

Here’s the general process I’ve defined in my agentic platforms, Antigravity and Claude Code/Cowork.

It’s important to note that browser subagents are not perfect, but in most cases they work well. Given their somewhat stochastic nature, ambiguity can also be a buzz-kill. This is why it’s so important to add a clarity layer to any agent that is writing content to a Coda document. Coda MCP alone is insufficient (and technically unable) to provide this layer, as I explained in this agentic strategy article.

You’ve always been very deliberate about ensuring that unlocking Coda’s power should not require deep knowledge of a traditional programming language like JavaScript. If I remember correctly, that was one of your main disappointments with the Packs infrastructure.

And as someone who knows JS deeply, you’ve always been clear about the tradeoff: a few lines of JavaScript can compress an enormous amount of hidden logic, which makes it impossible for non-JS experts to truly understand it.

To hear that you’ve now unlocked a way for makers to use AI while eliminating the requirement to rely on JS makes me think you’ve come full circle.

You’ve come up with a way that enables Coda makers to use constrained natural language (markdown-style specs), or in the future CFL directly, to build workflows, evaluate what was generated, change it, and maintain it.

What an accomplishment, and what a testament to your perseverance on bringing your vision to life. Can’t wait to see where you’ll take this! Congratulations!

4 Likes

I think this is pretty cool and clever to say the least. But let me say something that’s way more than the least. :wink:

These things cause me to pause a little. It’s why I haven’t jumped in to help test it.

  • Proprietary notation (“Markdown Specification Notation”) creates a dependency on Max’s framework, right? It may be well-designed, but we’ve seen this pattern before. It’s a good approach, yet it often ends with some heartburn [eventually] because there are so few resources to maintain and improve the notation variant as everything around it changes. Am I missing something?
  • Embedded agent logic in Coda means debugging requires understanding CFL, Pack internals, and the agent framework. This is a lot to embrace. While any good Maker will feel at home working with embedded logic, good Makers are not in abundance. Businesses may be reluctant to embrace this approach.
  • Version coupling — when Coda MCP, the OpenAI APIs, or the Superhuman SDK change, multiple surfaces break. I can’t see a way around this, or a way to label it as an unlikely event. If there’s one thing we can predict with three 9s of certainty, it’s that all of these platforms will change, and often.

Maybe I’m overly sensitized. Perhaps Max can help get me past these issues.

This is all leading-edge experimental stuff.

It will all be swept aside the day Superhuman Go Agent Builder is officially launched (and has access to the Coda MCP Server).

Then citizen developers will be able to create Coda Workflows using English specs.

Meantime, we must explore and experiment and boldly go where no intelligence has gone before.

The spec language is an emerging industry standard (like MCP itself) and not proprietary. Any detailed prompt will do; the notation just adds clarity.

See the end of my last video on the notation I use.

2 Likes

I suffer from several biases. Some of them are probably rational, at least, that’s what I tell myself. That belief is a bias in its own right. But I think I’ve been conditioned to digest future Coda/Superhuman roadmap claims with an element of skepticism that is reinforced by market and technical realities. I have some work to do to temper my skepticism and remain as open-minded as possible. I truly hope the agent builder isn’t another Coda Brain. So far, this vision is as elusive as the Brain.

Still, it’s a vision I hope for because it bridges a critical gap that enterprise-centric solutions require: security, determinism, and provability. Despite Anthropic’s deep enterprise penetration, that penetration was not achieved by Claude Code or Claude CoWork. These tools run completely antithetical to the primary business drivers that helped Anthropic rise to enterprise dominance.

Superhuman magically bridges this gap with its presence in the OS and its predilection for cloud integration, and soon, with Packs. It’s still unclear to me if something this simple will work in a Pack. It should, right? Thoughts?

// This is the "sprinkle of code" that probably matters most
pack.addMCPServer({
  name: "LocalFileSystem",
  endpointUrl: "http://localhost:3000", 
});

Ultimately, natural language and other human-guided intentions are how we will instruct computers. This is rarely debated anymore, and when the debate does arise, it’s a waste of time and energy. We want our computers to understand us within the biological framework we exist in. It’s not unlike self-driving cars: they need to operate on roads that people designed for people.

Your examples of pseudocode as prompts are very good. They exist in the grey area of discerning user intent and serve as crucial language examples that LLMs seem to prefer. As a sidebar observation, I’ve been experimenting with real-time voice as the basis for pseudocode generation. I say what I want the process to do, and it writes the workflow in a yaml/code-like format ready to run in a skill. Note: this all occurred in a Coda document, but it can occur in any app because VoiceInk, like Superhuman GO, is an OS-bound hook.
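To make that concrete, here is a purely illustrative sketch of the kind of yaml-style workflow spec a voice pass might emit; the field names and schema are my invention, not VoiceInk’s or Coda’s actual format:

```yaml
# Illustrative only: invented field names, not a real skill schema.
workflow:
  name: summarize-new-requests
  trigger: row_added              # spoken: "when a new row lands..."
  steps:
    - read:
        table: "New Requests"     # the table named in the voice prompt
        column: "Description"
    - llm:
        prompt: "Summarize this request in one sentence."
    - write:
        table: "New Requests"
        column: "Summary"
```

The point is not the schema itself but that a spoken sentence can be lowered into a structure an agent can execute and a human can still audit.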

Ideally, no human should have to write these code-like constructs, and today’s LLMs are capable of abstracting instructions into the LLM’s native tongue in real time. It’s good experience to do this manually, but you can also eliminate it entirely by using tools like VoiceInk, which provide the design components to accelerate this part of the process.

Yes, but with shields up, perhaps? :flushed_face:

2 Likes

UPDATE:

Thank you to those who signed up for early testing here.

We are delaying those tests as we make a significant change to the architecture.

Users have found the creation and deployment of the Vercel AI Client to be rather complicated and not very ‘no-code’ friendly.
So we are switching to a simpler architecture using ValTown HTTP vals instead.
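For anyone who hasn’t used Val Town: an HTTP val is a small serverless function whose default export receives a web-standard Request and returns a Response, which makes it an easy webhook target for a Coda button or automation. A minimal sketch, assuming a JSON payload of my own invention (this is not the pack’s actual bridge code):

```typescript
// Hypothetical sketch of a Val Town HTTP val acting as the "bridge":
// a Coda button or automation POSTs JSON to the val's URL, and the val
// relays it onward (the agent/Coda API call is omitted). On Val Town this
// function would be the file's default export; the payload shape here
// ({ task: ... }) is illustrative, not the pack's actual contract.
async function bridgeVal(req: Request): Promise<Response> {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }
  const body = await req.json();
  // ...forward body.task to an LLM or the Coda REST API here (omitted)...
  return new Response(
    JSON.stringify({ status: "ok", received: body.task ?? null }),
    { headers: { "Content-Type": "application/json" } },
  );
}
```

Because the val is just a URL, the Coda side needs nothing more than an HTTP request, which keeps the maker-facing surface genuinely no-code.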

Ultimately, we look forward to being able to dispense with the need for the “bridge” and instead be able to provide the Coda MCP Server tools directly to a Superhuman Go Pack, but there is no sign of this being available from Coda anytime soon.
In the meantime, we must find a more ‘no-code’ friendly architecture.

It may be that N8N, Make, or Zapier will soon be able to use the Coda MCP Server directly in their automations.
It seems strange that there remains this large chasm between HTTPS APIs and MCP Server Tools.
We are at an early stage in the evolution of the AI/no-code ecosystem, alas.

Respect
:red_circle:➤𝖒𝖆𝖝

1 Like

I love this project.

This totally makes sense to me, and ValTown is a pretty cool way to build an “External Nervous System” for Coda.

While Coda is world-class as both the data engine and the user interface (the “body”), true agentic processing—the kind that requires planning, looping, and using external tools—needs a dedicated space to “think” without being slowed by spreadsheet logic. Is that how ValTown is being used in your methodology?

Since Coda doesn’t natively host a full Agentic Engine yet, using Val Town as a lightweight, no-code-friendly bridge seems like a winner. Think of it as a “Cloud Function for the rest of us” that acts as the Orchestrator. If the Coda->Pack-as-Agent path is delayed or has limitations, your architectural change could be a promising long-term strategy.

I’m certain I don’t fully understand what goes into ValTown, but it appears it’s the place where the agent’s brain lives, pulling the strings of the Coda API to turn a static document into an active, thinking workflow. Is that the right way to think of this?

1 Like