How to make an AI Chatbot in Coda

You cannot normally have a chat with Coda AI inside your workflows. When you set a column in a table to use Coda AI, you can supply a formula that provides the prompt, but the AI does not remember any of the previous prompts or responses - so it has no context to help it understand the current prompt better.
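For instance, a stand-alone AI column might use a prompt formula like this one (illustrative only - the Query column name is my own, not from the original setup):

```
"Answer the following question in one sentence: " + thisRow.Query
```

Each row is answered in isolation; nothing from earlier rows reaches the model.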

When you click the AI Refresh button or execute the RefreshAssistant() formula, the AI has no ‘memory’ of the previous exchanges - and so loses all that context.

But it is possible to give that context to the AI engine. The convention used by LLMs is to include the previous prompt/response pairs in the current prompt. You label the queries with “USER:” and the responses with “BOT:”, and the LLM understands this to be the conversation so far, to be used as context for the current query.
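So the text actually sent to the model for each new question has this shape:

```
USER: <first question>
BOT: <first response>
USER: <second question>
BOT: <second response>
USER: <current question>
```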

In the example above, you will notice that each question assumes the AI remembers all the previous exchanges. So references to “this”, “there”, and “that monastery” are understood. When it is asked “And the economy?”, it understands from the context that it is the economy of Greece that is being referred to.

To achieve this, there is a hidden column called Prompt that holds the actual prompt given to Coda AI for each row. That column has a formula that gathers all the previous exchanges and labels them with “USER:” and “BOT:”, and this ‘reminds’ the LLM of the conversation that provides the context for the questions. A sketch of such a formula is shown below.
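This is a minimal sketch, assuming the table has a Query column for the user’s question and a Response column for the AI’s answer (the column names and the exact formula are my guesses, not the original):

```
Join(
  Character(10),
  thisTable
    .Filter(CurrentValue.RowId() < thisRow.RowId())
    .ForEach(
      "USER: " + CurrentValue.Query + Character(10) +
      "BOT: " + CurrentValue.Response
    )
) + Character(10) + "USER: " + thisRow.Query
```

Because the formula only gathers rows that come before the current one, each new row’s Prompt automatically carries the whole conversation up to that point.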

For the last question in the example above, the Prompt therefore contains every earlier question and answer, labelled as shown, followed by “USER: And the economy?”.

In the AI settings for the Response column, the response length is set to a single sentence. This produces a better conversation; longer responses would contain many repetitions of previously provided information.

Note how the queries are short, simple, and direct, but the responses are germane and to the point.

The button simply does RefreshAssistant(Response) and adds a new row if it is on the last row of the table.
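A plausible sketch of that button action, under the same assumptions as above (the exact formula is a guess; only the two steps are described):

```
RunActions(
  RefreshAssistant(thisRow.Response),
  If(
    thisRow.RowId() = thisTable.Last().RowId(),
    thisTable.AddRow(),
    ""
  )
)
```

Adding the empty row keeps the chat going: the new row’s Prompt formula immediately picks up the exchange that has just completed.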

This approach provides Coda users with a form of AI interaction they will be very familiar with from ChatGPT and other chat-based AI platforms.

Max


About a year ago I created a similar approach, but it was not performant at all. It was so slow that no one in my company would tolerate it.

How well does your version respond?

we have had no complaints regarding performance.
i have not done any measurements,
but it feels like under a second per row,
which is fast enough for our use-cases.

Ahh, @Xyzor_Max, time and time again your implementations amaze me. Thanks so much for taking all of us along in your discovery journey and for sharing your insights!


Wonderful. My approach was gated by all sorts of things, including Coda AI itself, which was only a vague idea when I first tried this in early 2023.

It’s nice to see it is more practical now, and the idea that conversations could exist in a data-centric framework is even more compelling.

Firebase also supports the idea that conversational AI solutions should begin and end inside the data. Firebase AI extensions make this possible. Simply store a prompt in a field and watch as the database becomes the agent, complete with history, multi-turn inferences, and even a baked-in ability to support unlimited conversational threads.


Can you have the sort of conversations you can within ChatGPT with prompts and responses being several thousand words long?

yes.

by changing the ai-settings of the response column.
i set its length to a single sentence, but it can be longer.

Nice one, @Xyzor_Max. How fast does this burn through credits?

i have not measured that because i use the unlimited credits option.
and i insist all my clients do the same.
i cannot afford to have an automation that uses ai suddenly stop working because its credits have expired.
i believe the unlimited credits option is good value when you start to exploit coda ai in your workflows.

simply doing the initial “prompt engineering” to get the ideal prompt for a use-case would burn through a lot of credits before you started to get any business benefits.