How to make an AI Chatbot in Coda

You cannot normally have a chat with Coda AI inside your workflows. When you set a column in a table to use Coda AI, you can supply a formula that generates the prompt, but the AI does not remember any of the previous prompts or responses - so it has no context to help it understand the current prompt better.

When you click the AI Refresh button or execute the RefreshAssistant() formula, the AI has no ‘memory’ of the previous exchanges - and so loses all that context.

But it is possible to give that context to the AI engine. The convention used by LLMs is to include the previous prompt/response pairs in the current prompt. You label the queries with “USER:” and the responses with “BOT:”, and the LLM understands this to be a complete conversation that provides the context for the current query.
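For illustration, the assembled prompt takes roughly this shape (this mini-conversation is invented for illustration; it is not the actual exchange from the screenshot):

```
USER: Tell me about Greece.
BOT: Greece is a country in southeastern Europe, known for its islands,
its ancient sites, and its clifftop monasteries.
USER: And the economy?
```

The LLM reads the labelled pairs as conversation history and answers only the final USER line, using everything above it as context.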

In the example above, you will notice that each question assumes the AI remembers all the previous exchanges. So references to “this”, “there”, “that monastery” are understood. When it is asked “And the economy?”, it understands from the context that it is the economy of Greece that is being referred to.

To achieve this, there is a hidden column called Prompt that is the actual prompt given to Coda AI for each row. That column has a formula that gathers all the previous exchanges and labels them with “USER:” and “BOT:”, and this ‘reminds’ the LLM of the conversation that provides context for the questions.
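A minimal sketch of what such a Prompt formula can look like (the column names Question and Response are my assumptions here, and this relies on rows being added in order, so that RowId reflects conversation order):

```
thisTable
  .Filter(CurrentValue.RowId() < thisRow.RowId())
  .FormulaMap(
    Concatenate("USER: ", CurrentValue.Question, LineBreak(),
                "BOT: ", CurrentValue.Response))
  .Join(LineBreak())
  .Concatenate(LineBreak(), "USER: ", thisRow.Question)
```

It filters to the rows above the current one, renders each as a labelled USER/BOT pair, joins the pairs with line breaks, and appends the current question as the final USER line.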

This results in this Prompt for the last question from the example above:

The AI settings for the Response column are set as follows. By setting the length to a single sentence, a better conversation ensues. Longer responses would contain many repetitions of previously provided information.

[Image: AI settings for the Response column]

Note how the queries are short, simple, and direct, but the responses are germane and to the point.

The button simply does RefreshAssistant(Response) and adds a row if it’s the last row of the table.
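A sketch of how that button can be wired (RunActions, the last-row test, and the hidden _Noop() no-op are my assumptions about the plumbing, not necessarily the original formula):

```
RunActions(
  RefreshAssistant(thisRow.Response),
  If(thisRow.RowId() = thisTable.Last().RowId(),
     AddRow(thisTable),
     _Noop()))
```

RefreshAssistant() re-runs the AI on the Response column, and a blank row is appended only when the button is pressed on the last row, ready for the next question.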

This approach provides Coda users with a form of AI interaction they will be very familiar with from ChatGPT and other chat-based AI platforms.

Max

12 Likes

About a year ago I created a similar approach, but it was not performant at all. It was so slow that no one in my company would tolerate it.

How well does your version respond?

we have had no complaints regarding performance.
i have not done any measurements,
but it feels like under a second per row,
which is fast enough for our use-cases

Ahh, @Xyzor_Max, time and time again your implementations amaze me. Thanks so much for taking all of us along in your discovery journey and for sharing your insights!

1 Like

Wonderful. My approach was gated by all sorts of things, including Coda AI itself, which was only a vague idea when I first tried this in early 2023.

It’s nice to see it is more practical now and the idea that conversations could exist in a data-centric framework is even more compelling.

Firebase also supports the idea that conversational AI solutions should begin and end inside the data. Firebase AI extensions make this possible: simply store a prompt in a field and watch as the database becomes the agent, complete with history, multi-turn inference, and even a baked-in ability to support unlimited conversational threads.

1 Like

Can you have the sort of conversations you can have within ChatGPT, with prompts and responses being several thousand words long?

yes.

by changing the ai-settings of the response column.
i set its length to a single sentence, but it can be longer.

Nice one, @Xyzor_Max. How fast does this burn through credits?

i have not measured that because i use the unlimited credits option.
and i insist all my clients do the same.
i cannot afford an automation that uses ai to suddenly stop working due to expired credits.
i believe the unlimited credits option is good value when you start to exploit coda ai in your workflows.

simply doing the initial “prompt engineering” to get the ideal prompt for a use-case would burn through a lot of credits before you started to get any business benefits.

@Xyzor_Max, will it be possible for the user to ask the LLM to generate a text block from a document knowledge base (i.e. using RAG on documents like txt or MS Word files)?

in principle, yes.

but it would be easier to just add your content (the text from your ms word docs and other texts) directly to the top of your prompt via a formula, as sketched below.
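a rough sketch of that idea ([Doc Text] is an assumed named formula holding the text extracted from your documents - your names and layout will differ):

```
Concatenate(
  "CONTEXT:", LineBreak(),
  [Doc Text], LineBreak(), LineBreak(),
  "USER: ", thisRow.Question)
```

coda ai then answers the question using the pasted context.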

however…

there are limits to how much of your prompt text will be used by coda ai.

i don’t know how much of the context window is consumed by coda’s own prompt content, and how much remains for you to use.

so you need to experiment. but i suspect the limits of the current version of coda ai will not allow much text from your word docs.

and you will need to pay for the unlimited ai credits or you will run out before you have even got your prompts figured out.

max

2 Likes

Do you think it would be easier to integrate a third-party LLM?

Honestly, for some months now I have not been using GPT for my LLM-related activities, having moved on to Mistral (various models) and Claude (2, 2.1 and now the 3 different options).

Having a complete LLM+RAG pipeline outside Coda (but integrated with the Coda UI) will probably be much more flexible than using the current Coda native AI features (limited to some use cases, in my understanding).

the purpose and scope of my original post was to simulate the conversational form of interaction with Coda AI, similar to platforms such as ChatGPT etc.

it is not intended to extend the paradigm to RAG or agents etc.

so you are most likely to need to use packs to progress in that direction.

max

1 Like

@Xyzor_Max, I have played with this method and find that Coda AI often returns a response to the first prompt in a chain, or sometimes it just repeats the prompts back to me. I made a sample doc with manually created prompts, where you can see these strange responses in rows 4 and 8.

Another strange thing is that I consistently get the same response to a particular prompt in my test doc, but when I use the exact same prompts in another doc with the same settings, the quality of the AI responses is much lower. (Attaching a screen dump)

Any idea if I may be doing something wrong? Would you recommend using a pack instead to get better results?

this looks more like an issue with my formula than a problem with coda ai.
i am travelling at sea a.t.m. with variable internet bandwidth.
so i will look into this early next week.

max