Coda AI and the User Experience

Building systems that embrace Coda AI is made easier by Coda's well-designed integration of AI features. However, translating these capabilities into a user experience that harnesses AI for productive outcomes is still a big challenge.

The hosts of Latent Space recently tackled this exact question after Sam Altman, CEO of OpenAI, admitted that GPT Plugins were not panning out as the massively smart idea they had originally been hailed to be.

ChatGPT plugins aren’t really taking off because people want ChatGPT in their applications, not applications in ChatGPT. They wanted the UX more than the model.

Bingo. Finally, someone realized what I said the week after GPT Plugins were announced.

You can’t have a ten-minute latte break at Starbucks without overhearing a conversation about GPT Plugins and how they will revolutionize customer experiences with brands and real-time data. These grandiose AI visions in the web context represent extremely early, deeply flawed assertions.

In the shadow of this unintended reveal by Sam Altman, the Latent Space fellows carved up the topic in their latest Substack post and hypothesized:

If plugins are not the holy land, what is? The UX of every product that integrates AGI is.

Specifically, they followed this problem domain to the topic of prompts and that’s where Promptology is relevant.

Swyx and the Latent Space team determined that … prompts can often be split into three components:

  • Command: the part that is specific to the task
  • Constraints: the part that is specific to the user
  • Context: the part that is specific to the situation

For example: I have milk, eggs, bell peppers, flour, olives, mild salsa, and blueberries. Give a recipe step-by-step in metric units that I can make with a non-stick pan and blender.

Currently, users have to do all of the work to specify all three components, so how can we shift some of this work from the user to the app? The key is that it has to be intuitive. So, here are some ideas:

  • Command: let users create task presets and then be able to search them
  • Constraints: create user settings in the app that are automatically pulled in
  • Context: simplify the collection of data (e.g., take a picture of your fridge and run computer vision on it)
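The three-part split above can be sketched as app logic, where the app supplies the command and constraints and the user only supplies the situational context. A minimal Python sketch; every name here (`TASK_PRESETS`, `build_prompt`, the settings dict) is hypothetical, not part of Coda or any real API:

```python
# Hypothetical task presets the user could create once and search later.
TASK_PRESETS = {
    "recipe": "Give a step-by-step recipe I can make with these ingredients.",
}

def build_prompt(task: str, user_settings: dict, context: str) -> str:
    """Assemble a full prompt from a stored task preset (Command),
    stored user settings (Constraints), and situational data (Context)."""
    command = TASK_PRESETS[task]
    constraints = "; ".join(f"{k}: {v}" for k, v in user_settings.items())
    return f"{context}\n{command}\nConstraints: {constraints}"

prompt = build_prompt(
    "recipe",
    {"units": "metric", "equipment": "non-stick pan and blender"},
    "I have milk, eggs, bell peppers, flour, olives, mild salsa, and blueberries.",
)
```

With this shape, the user types only the context line; the command and constraints ride along automatically.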

Promptology handles command and constraints exceptionally well. Read more about it here. This is what a Promptology [prompt] template looks like:

Context - that’s on you, Makers, to sort out. :wink: But, Coda AI is pretty clever about making your data accessible in the context of a prompt. Dynamic infusion of data into AI prompts is possible because the prompt dialogs support @ and = references. You can easily ingest a page, a table, and even use a formula to transform large data sets into smaller sets for inferencing.
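The "transform large data sets into smaller sets for inferencing" step can be illustrated in plain Python. This is a hedged sketch only; in Coda you would do this with @/= references and a formula, and the function and field names below (`summarize_rows`, `sales`) are invented for the example:

```python
def summarize_rows(rows: list[dict], keep: int = 3, key: str = "sales") -> str:
    """Keep only the top `keep` rows by `key` and render them compactly,
    so the prompt stays small enough for the model to work with."""
    top = sorted(rows, key=lambda r: r[key], reverse=True)[:keep]
    return "\n".join(f"- {r['name']}: {r[key]}" for r in top)

# A large table reduced to its two biggest rows before prompting.
rows = [
    {"name": "North", "sales": 120},
    {"name": "South", "sales": 340},
    {"name": "East", "sales": 95},
    {"name": "West", "sales": 210},
]
prompt = "Summarize these regions:\n" + summarize_rows(rows, keep=2)
```

The design point is the same either way: shrink and shape the data before it enters the prompt, rather than pasting a whole table in.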


Coda and Coda AI are fully capable of being the UX that OpenAI wishes it could build.


I like your Promptology doc.

As with every new technology, AI is going through a fashionable phase. And lots of companies are throwing lots of money at it, in a FOMO frenzy.

I cringe every time Coda suggests I get the AI to “write a blog post.” Very soon we are going to reach the stage where half the activity on the internet consists of people getting AIs to write blog posts to be read by other AIs.

Never in the history of humankind has anything really worthwhile been easy and simple. Including AI - you need to THINK about what you want done.

And that is where Promptology is good: it helps by prompting you to think.

I saw a picture a while ago, saying something to the effect that: “our programming jobs are safe; not because AI is bad, but because most people are so bad at articulating what they want.”



Um, I think we’re well past that point. The Interwebs are vastly oversaturated with crap. It was long before AGI came on the scene. How it was generated no longer matters. If you generate 10x more, it’s still the same - mostly worthless. We’ve reached the point where exponentially bigger piles of badness are no worse. Unlike how we got here, AI possibly holds the key to making a very big pile of crap potentially useful. AI may give us the out we desperately need to nullify the crap.

Thanks for noticing! It also gives me an immediate sensation that I’m not [yet] on an ideal track, and when I find the ideal track I can preserve it for myself and others. Losing ground in prompt-making is like accidentally deleting code. It’s miserable.

Yes, it is a funny meme in the requirements management circles for sure. However, we’re about to see an AI model (perhaps as soon as this or next year) that invalidates this axiom. We think GPT is a new and difficult-to-top high water mark in the seventy-year history of AI, but there’s another much higher mark that we will probably see very soon. It will magnify the possibilities that we already regard as life-changing technology.

AGI is the equivalent of a suited ace-king in poker. We’re about to be dealt three more aces.

Buckle up!


I firmly believe that things will just pick up more speed.

It is really unnerving to think just how much AI is going to be able to do. Coupled with 3D printing, automated driving, robotic surgery, and a boatload of other areas that will surprise us.

Unnerving, yes. But what really has me concerned is how the economic proceeds of this are going to be distributed.

We (the USA specifically, but not limited to) have a really bad track record in sharing the proceeds of productivity improvements. Just look at the last fifty years.

On top of climate change and its economic impacts, the world is also going to have to work through earth-shattering economic changes. Human greed has always been a much bigger problem than technological progress.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.