Stop Building the Thing

This is one of the shortest posts I’ve ever written, but it could be the most impactful.

Stop Building the Thing

I’m sure this got your attention. What’s the “thing” and what should we build instead? The answer is:

The thing that builds the things…

At Tesla, the thing is the factory. The things are the cars. They invested billions to build the thing. Ergo … in a Coda context:

Stop building the Packs; build the thing that builds the Packs.

That “thing” is AGI. More specifically a framework for using AI to create code, test it, perfect it, and document it.

Coda has used AI thus far to focus on helping a small slice of users write faster, perhaps better. [yawn] Coda (in my view) should be helping all users extend Coda.

Stop building the Formulas; build the thing that builds the Formulas.

'nough said…


It’s a nice goal. Perhaps AI will get there sooner rather than later.

But at present, AI for coding is more like a great textbook of example code snippets than a tool capable of reliably creating full apps or full Packs. No doubt it is extremely helpful, just as a great book, website, or YouTube tutorial on coding is helpful. But in the end, all of them still require that the coder stay in the mindset of understanding the code, rather than just having AI spit out the full completed project.


Some of the most useful Packs are very simple. They tend to meet extensibility needs with relatively small amounts of code that is not especially complex. But even code like this stands in the way of domain experts achieving goals that even the most complicated formulas cannot reach.

What is your definition of a “full pack”?

Complex formulas with deep logic also represent challenges for Coda users who desperately want to be Makers, but cannot.

I think you are unaware of what’s happening in the Code Interpreter projects at OpenAI.

As I wait for access to Code Interpreter, I have also started actually doing it, but with PaLM 2. Bear in mind - I am a complete idiot with complex ideas like this, but I was able to make something work. Here’s how…

In everyday Google Apps Script code, I have built a very rudimentary “code interpreter”:

  • PaLM 2 takes in code-building instructions (in a simple chat UI)
  • It generates the code
  • It hands the code off to my code “runner,” which executes it
  • It returns the results, repairs the code as needed, and the loop repeats
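A simplified sketch of that loop, for the curious. Everything here is a stand-in: `askModel` fakes the LLM call (PaLM 2 in the setup above), and the “runner” uses `new Function` where a real setup would sandbox the generated code.

```typescript
// Sketch of a generate-run-repair loop. `askModel` stands in for a real
// LLM call; here it "repairs" the code once it sees an error message.
type ModelFn = (instructions: string, lastError?: string) => string;

function runGenerated(source: string): { ok: boolean; result?: unknown; error?: string } {
  try {
    // The "runner": compile and execute the generated code.
    const fn = new Function(`"use strict"; ${source}; return main();`);
    return { ok: true, result: fn() };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}

function buildWithRepairs(instructions: string, askModel: ModelFn, maxTries = 3): unknown {
  let lastError: string | undefined;
  for (let i = 0; i < maxTries; i++) {
    const source = askModel(instructions, lastError); // generate (or repair) code
    const outcome = runGenerated(source);             // execute it
    if (outcome.ok) return outcome.result;            // success: done
    lastError = outcome.error;                        // feed the error back; repeat
  }
  throw new Error(`gave up after ${maxTries} attempts: ${lastError}`);
}

// Fake model: the first attempt is broken, the second is "repaired".
const fakeModel: ModelFn = (_instructions, lastError) =>
  lastError === undefined
    ? "function main() { return totl; }"     // broken: undefined variable
    : "function main() { return 40 + 2; }";  // repaired version

console.log(buildWithRepairs("add 40 and 2", fakeModel));
```

Many requests converge like the fake model does here; the ones that “never execute” are simply the cases that exhaust `maxTries`.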

Some of my code requests run straightaway. Others take many iterations, and some never execute at all. However, did I mention I’m clueless about automating AI code development? I’m new at this, hardly a seasoned engineer, with no special training in AI, DevOps, or code generation. Yet, somehow, I have it working in some cases.

I will soon demonstrate how a non-coder can build new functions in Google Sheets through an AI model that understands JavaScript, the underlying language of Google Apps Script. Most domain experts cannot build new formulas that work like native spreadsheet formulas without a fair understanding of JavaScript, particularly Google’s Apps Script flavor. PaLM 2, however, understands it well enough to generate working code that users don’t want to understand; they just want the benefits of their designs. This is no different from a Coda Pack that delivers on a workflow process.
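For a sense of what “works like a native spreadsheet formula” means: in Apps Script, any plain function saved in a sheet’s script project becomes callable from a cell. The function below is a hypothetical example of the kind of output a model might generate, not something quoted from the thread (and it’s typed here for illustration; Apps Script itself is plain JavaScript).

```typescript
/**
 * Hypothetical example of a model-generated custom function.
 * Saved in a sheet's Apps Script project, it can be called from
 * a cell as =CELSIUS_TO_FAHRENHEIT(A1), just like a native formula.
 */
function CELSIUS_TO_FAHRENHEIT(celsius: number): number {
  return (celsius * 9) / 5 + 32;
}
```

The point is that the user never reads this code; they only type the formula into a cell.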

And this example is not unlike the challenges that Coda users face with new formula functionality, complex iterative formula logic, or NodeJS in Packs. All of these very small hurdles lie in the pathway to vastly greater extensibility.


That confirms my point. Presently, AI can create very simple code examples, but those examples often require tweaking. And even if they seem to “work,” they will later fail in edge cases.

AI is extremely helpful now as an interactive textbook. I love it personally. In my case, I was a developer 30 years ago and then went into a totally different field, so I understand the principles of coding, but syntax is very different now. AI has been very helpful in getting me back up to modern standards of programming, but I still need to think through the resulting code just like any developer would. AI is an amazing textbook of programming, not an autonomous programmer.


That point being: an unskilled, unprofessional coder with zero training or skills in AI can get it almost right, and therefore skilled professionals needn’t bother trying? :wink:

I have a hunch there are people at Coda who can do this really well.


The point is that an unskilled, unprofessional coder with zero training or skills, but a willingness to learn, can use AI to become a “citizen developer” quicker than with most traditional forms of learning.

If the goal is to remain a “non-coder” then AI code generation will not be very useful.

But if the goal is to find an efficient and effective way to learn basic coding skills, then AI is a great advance.


Okay - thx for the clarification. What you’re saying is that early experiments like this (which occurred less than 24 hours after OpenAI’s code interpreter was released) will not become mainstream approaches for non-coders to create solutions where AI generates and optionally executes the code?


The person got lucky, or maybe tried several requests and this is the one that worked.

Moreover, it’s a fun demo, not a practical app. And surely not as complex as a Coda Pack.

It’s hard to imagine AI creating a Coda Pack and getting it 100% right on the first try. But I can imagine it giving you a good first-pass attempt that you can tweak yourself.


I’m certain some are getting lucky. And even running code is often running on luck. But I guess many of the alpha testers must be getting lucky then.

Almost every practical app in the history of software emerged after a fun demo. :wink:

Yep, I agree. But there’s plenty of evidence that AI can be instrumented to self-heal through recursive dev-test processes. Much of the success and reliability will depend on the design of the “thing.” With adequate learner prompts, NodeJS code can be generated with excellent results. Packs, however, follow some very basic patterns that I predict will be straightforward to generate through natural language. The corpus for training is well-defined in the SDK documentation, and there’s a flood of public Pack code in the library.

We have memorialized our positions, and I look forward to seeing where it goes.


I’ve been building a few Packs for clients, and as I get used to it, I see that the structure is simple.
Using AI to code is great; I do it a lot with ChatGPT (which is not familiar with the Coda Pack library, by the way). Maybe we could get an AI assistant trained on Coda’s Pack library to help us out? That would be nice.
But I’ve been thinking that I could build a Coda doc, or even a Coda Pack, to help generate Coda Pack code. Simple API calls to get info out of a cloud service, for instance, are not that hard, so that’s a good enough scaffold for starters.
AI is here to help, but knowing how things work is what makes it more powerful: you present a prompt with all the logic, the parameters, and the expected results, using coding “jargon” rather than simple words. At least, that’s how I’ve gotten the best results so far, so there are still improvements to be made.
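For reference, the kind of “simple API call” Pack being described is only a few lines with the Packs SDK. This is a generic sketch under made-up assumptions: the domain `api.example.com`, the endpoint, and the `status` field are all hypothetical placeholders, not a real integration.

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Hypothetical service; replace with the real API's domain.
pack.addNetworkDomain("api.example.com");

pack.addFormula({
  name: "ItemStatus",
  description: "Fetch the status of an item from a cloud service.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "itemId",
      description: "The item's identifier.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([itemId], context) {
    const response = await context.fetcher.fetch({
      method: "GET",
      url: `https://api.example.com/items/${itemId}`,
    });
    // Assumes the API returns JSON like { "status": "..." }.
    return response.body.status;
  },
});
```

A scaffold like this is mostly boilerplate, which is exactly why it seems like a promising target for generation.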


I agree. It’s good enough to help someone who needs a little bit of guidance. Often, this guidance needn’t be perfect or pristine. But it still supports my belief that a universal law of productivity that we rarely benefit from is attainable through AI.

Let’s examine a few simple goals that AI has been fairly successful with. To demonstrate, imagine you want to build a “thing”, and that thing is a plan to go see the Cybertruck in Los Angeles. Having never been to the Petersen Museum (where Cybertruck is on display), I would assume the first step is to start Googling. And after a series of fifteen or twenty searches, we can begin to formulate a plan.

This is what I mean by “Building the Thing”

Alternatively, what if I built a thing that builds the plan?

When I see users thrashing away in ChatGPT or Google Search, I liken it to non-trivial human effort being sunk into building parts of a desired outcome. Over time and great effort, you begin to assemble a strategy to reach your objective which might be stated like this:

Weekend of Fun including the Petersen Museum in Los Angeles.

The subtext of this goal can be stated succinctly:

Plan three days of activities (Fri night through Sun late) that include a half-day at the museum, good restaurants, and sightseeing.

LangChain and AutoGPT exist to mitigate the chat- and search-level thrashing that users so commonly go through to reach specific objectives. These tools compress time by letting the LLM dynamically determine what needs to be done next, and then doing it for you.
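The “decide what to do next, then do it” loop those tools implement is conceptually small. Here is a stand-in sketch: the planner is a canned function where a real agent would call an LLM, and the trip-planning tasks are invented for illustration.

```typescript
// Minimal AutoGPT-style loop: the planner looks at the goal and what has
// been done so far, then either emits the next task or signals completion.
// `plan` is a stand-in for an LLM call; real agents put a model here.
type Planner = (goal: string, done: string[]) => string | null;

function runAgent(goal: string, plan: Planner, maxSteps = 10): string[] {
  const done: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const next = plan(goal, done); // the model decides what to do next
    if (next === null) break;      // the model says the goal is reached
    done.push(next);               // "execute" the task and record it
  }
  return done;
}

// Canned planner for a trip-planning goal like the museum weekend above.
const cannedPlanner: Planner = (_goal, done) => {
  const steps = [
    "find museum hours and ticket prices",
    "shortlist restaurants near the museum",
    "draft a Fri-Sun itinerary",
  ];
  return done.length < steps.length ? steps[done.length] : null;
};

console.log(runAgent("Weekend of Fun including the Petersen Museum", cannedPlanner));
```

The whole trick is in the planner; swap the canned list for a model call and you have the skeleton of the tools above.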

The trouble with this approach: you need to be a developer to leverage these tools. And despite a lot of noise about shaping them for everyday users, let’s just say that even that will be poorly implemented unless the UX comes in a form that business users appreciate. Coda is one such form; Google Workspace is another.

Here’s the thing I built in Coda to build the “research things” I need to accomplish.

Each row in the Coda table is an AI Agent. You provide it with just a goal and an initial task, and it sets off to complete your goal, generating the result into an outcome field. The report is embellished with linked locations, interesting places, and sightseeing ideas. It fully embraced my stated preferences as well. Bear in mind, I primed this process with just two sentences.

This agent also lays out a general timeline and accurately sensed the time frame for these activities to occur.

It’s Not Perfect

But neither are you. :wink:

These agents are thorough and fast. They can be easily created, modified, subclassed, and extended - all without rebuilding the “thing”.

This, BTW, is Solve for X (an experimental Pack), that attempts to combine the essence of LangChain and AutoGPT. It is surprisingly simple to use, but performs a gazillion little AI steps to achieve the stated goals. Best of all, there are no inferencing costs and it has the ability to use arbitrary LLMs. This example uses PaLM 2 exclusively.


anxiously waiting to hear more about Solve For X.

I’ve been leaning on GPT but would like to explore others.

That’s an amazing solution @Bill_French!

I think Coda’s AI Assistant will help build towards your vision as well.

But, let’s face it: these days demand speed, and people don’t have enough time to learn the fundamentals of things. Will AI help them out? Yes! But will it help them improve their fundamentals, think better about the logic of their data, or even think further to improve and develop more complex things? I don’t see it happening.

I’m 37 years old. I got to see a computer at a friend’s house and learned HTML by reading a book and taking notes on paper, to type into the Notepad app later on my friend’s computer… that’s the kind of thing that makes you curious about how things work, that forces you to take the time to investigate.

AI is a big revolution these days and will certainly push things further, but building things will still be a thing, a thing made by us, people who have been struggling to build things for a long time. And I’m sure there will be AI help for doing that too, so we can do our thing without as much struggle.


This is [partly] what Solve for (X) became.

Coming soon!

Edit: Sometimes, Coda AI dreams about formulas that don’t exist, like DateDiff in this case.

  • Aurora solves this problem: “Don’t write prompts; build the thing that writes prompts.”
  • LingAI addresses this concern: “Don’t write anything, including books; build the thing that writes it for you.”
  • And Copilot will solve this one: “Don’t write Coda formulas, or any code snippet; build the thing that does it for you.”

As we keep going up the ladder of hierarchy or abstraction, we reach this idea: Instead of just creating Coda templates, focus on building the tool that actually builds those templates. And if we continue pushing beyond that, we can imagine a world where we’re not just constructing the universe, but rather constructing the very mechanism that brings universes into existence. It’s mind-blowing to think about what lies ahead! The future holds incredible possibilities.

I totally understand that constantly focusing on these “awe-inspiring” concepts might not be the most practical way forward at this very moment; in fact, I think it can be somewhat counterproductive. It’s crucial to strike a balance between dreaming big and taking tangible action. So, for the time being, let’s channel our energy into building and refining what we already possess, steadily advancing toward our goals. By focusing on tangible tasks and making consistent progress, we’ll lay the foundation for even more remarkable accomplishments in the future.