Launched: AI chat, your new virtual collaborator

Hmm. When I am writing in Coda, the diffs that Coda records in its history are more frequent than I would want for an expensive calculation. That may be because it can take me several hours and multiple drafts to write a single page.

Worry not. Coda specializes in smart systems that optimize recalculations. Have you ever noticed how precisely Packs tend to cache results?

Just got a chance to test this incredible new AI feature. It helped me find that I was missing an important column, and it helped me repair the issue.

Then I discovered that, asked properly, it could read the PDFs that I’ve uploaded as attachments for that system. Proper asking is tricky: I used the Lorem Ipsum Latin text file as one of the PDFs. FileNumber was the name of the Display Column. Coda AI worked with “Which FileNumber pdf contains the word consectetur”, but it could not find “Which FileNumber pdf contains the words consectetur adipiscing”. And it could not find the consectetur reference unless I used “contains the word consectetur”; just using “contains consectetur” did not work, and it usually did not like quotes around things.

But still, as Bill French has noted, an incredible time saver. Thank you very much!

This is the subtle line where inference ends and reasoning begins, a capability that LLMs have yet to deliver with predictable outcomes.

If - for example - Coda made it possible for us to configure chat components (as we do with AI blocks), it would be possible to establish a sustained prompt rule and persona. Such settings are often used to shape conversational AI features in ways that help users achieve positive outcomes even when they don’t specify “the word” as in your example.
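If such configuration existed, the sustained rule could look much like a standard system prompt prepended to every chat turn. A minimal sketch, assuming an OpenAI-style chat message schema; the persona text and function name here are hypothetical, not anything Coda exposes today:

```python
# Hypothetical sketch: a sustained prompt rule (persona) prepended to
# every chat turn, so phrasing variations like "contains X" and
# "contains the word X" are treated the same way.

PERSONA = (
    "You are a document assistant. Treat 'contains X', "
    "'contains the word X', and 'mentions X' as equivalent keyword "
    "searches, and ignore quotes around search terms."
)

def build_messages(history, user_question, persona=PERSONA):
    """Return an OpenAI-style message list with the persona first."""
    return (
        [{"role": "system", "content": persona}]
        + list(history)
        + [{"role": "user", "content": user_question}]
    )

msgs = build_messages([], "Which FileNumber pdf contains consectetur?")
```

Because the persona travels with every request, the user no longer has to remember the magic phrasing; the rule normalizes it for them.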

Chat is not, by itself, how we become more productive. It’s how we overcome barriers of access, awareness, and understanding. But each of these dimensions can become more productive, both for those who know how to be extremely precise and for those who don’t know what they don’t know.

Chat with your doc is a step up and also a step down for those who struggle to use it. Coda needs to shape this feature so that all of the steps lead upward.

1 Like

Indeed. For me, the reading of a PDF was huge. But you are right about complexity. Years as a programmer, plus a legal background, allowed me to figure out what to say and how to say it; they also let me feel that I could experiment properly and get a proper result, rather than feeling I couldn’t handle it. New users just think it didn’t work and give up. PS: Thanks again for Grammarly and the comments that have saved me so much time.

1 Like

Would love to see it as part of the UX for published docs (provided that it excludes the hidden pages) to assist those users too.

I can’t wait to have it enabled on our Workspace!
We have been thoroughly documenting all of our procedures and best practices in Coda - and there are now tons of them… AI Chat is the magic I was waiting for to make this knowledge base easily accessible and immediately pertinent!

THANK YOU CODA, YOU ROCK !!!

1 Like

Interesting thought! Would you expect the AI tokens to come from the author or from the viewer of the doc? If the latter, how would you expect AI to work for non-logged-in users?

Brian, by no means do I have enough knowledge of how to train an AI layer on top of an engine for a chatbot :blush: However, I imagine that the tokens would come from the author, who defines how the data is used and sets the data permissions (public or private) and user access rights.

I imagine that training the AI chatbot on a particular doc (or workspace) would be an awesome feature for Coda.io, one that would make it the ultimate tool for data organisation.

A few wild ideas about that:

  • Selecting which Coda docs in the workspace participate in AI training
  • Content gets pushed for data restructuring using Coda AI
  • Coda AI creates a new document with the strings it suggests as most appropriate for annotating the dataset, in a table format
  • The user would access this new Coda doc, see the drafted annotated dataset, and test it. It would be perfect if, just by changing values in some rows/cells/columns, the author could improve the annotated dataset
  • Set the document dataset as Ready to Train Chatbot and decide whether the dataset stays private or public
  • The custom chatbot would become available for testing, and once approved it could be attached to specific documents in the workspace
  • After publishing the document, the chatbot icon displays on the published document (ideally accessible to both Coda and non-Coda users who are in the document, whether to edit or view). Their feedback could help the author update the dataset

It’s likely that the above points make no sense to someone with AI expertise who knows more about how Coda AI works (I’ve only touched on it so far) :innocent:
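For what it’s worth, the restructuring/annotation steps in that list could be sketched roughly like this. This is only an illustration: the Coda API call is stubbed out as local data, and the record fields, labels, and function name are all made up for the sketch:

```python
# Hypothetical sketch of the "restructure and annotate" steps above.
# In practice the pages would come from the Coda REST API (e.g. listing
# pages/rows for a doc); here they are stubbed as local dicts.

def annotate(pages):
    """Turn raw page content into draft (source, text, label) records
    that an author could then review and correct row by row."""
    dataset = []
    for page in pages:
        for para in page["content"].split("\n\n"):
            dataset.append({
                "source": page["name"],     # which doc/page it came from
                "text": para.strip(),       # the candidate training text
                "label": "unreviewed",      # author flips to approved/rejected
            })
    return dataset

pages = [{"name": "Onboarding", "content": "Step one.\n\nStep two."}]
rows = annotate(pages)
```

The "improve by editing rows" idea then falls out naturally: each record is one editable table row, and the label column is the review state.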

This is precisely what I currently build for many clients. This technology brief (still in development) describes the capabilities and features I build around the CustomGPT platform. Incorporating Coda documents into this architecture is very difficult because there is no streamlined way to expose Coda document canvases to an external corpus-building workflow.

1 Like

I knew I was not the only one thinking about it :smiley:

My improvement-minded brain tells me that such a chatbot would be greatly beneficial for learning and knowledge exchange. Google has also been thinking about it, and it seems there will be an option for GDrive some time in the near future.

AI should be able to help build AI - one that thinks in tables and data structures can teach another that thinks in conversations :slight_smile:

Will follow your page for updates on the topic.

2 Likes

I believe there may already be one through the Vertex platform.

In case you missed it: very early experiments with GPT-3, from the day OpenAI’s beta cone of silence was lifted.

1 Like

I enjoy having Coda AI Chat open (on the right-hand side) as I work.
But when I need to capture parts of the conversation, I find that the insert button only inserts the RESPONSE into my canvas - which gives a response with no CONTEXT.

So it would be useful to have 2 buttons: one inserts the RESPONSE only, and the other inserts both the PROMPT and the RESPONSE.

It is particularly frustrating that I can only select the text of the response and NOT the text of the prompt from the side panel, so I cannot even copy-and-paste the conversation!

Otherwise I am delighted with the Coda AI implementation.

max

1 Like

And/or… a new AI block called AI Chat. You should be able to embed a conversational AI session in a block, and the feature should support the historical elements; it should be able to be cemented in time (by Makers), and it should even be searchable and accessible/addressable through CFL.

3 Likes

Finally was able to check out the AI Chat feature and unfortunately am getting an unexpected error. Anyone else seeing this?
[Screenshot of the error: 2023-09-20 at 9.11.37 AM]

Maybe I’m doing something wrong? I’m trying to use Coda AI to help me create the structure of my doc and its tables. It keeps giving me directions that include wrong information about how Coda itself works and looks. For example, it keeps telling me to click on settings and buttons in the Coda sidebar that don’t exist, and then follows up with an “apology for the confusion.” It also only gives me examples (for example, for how to structure a table) in markdown. Is that how it’s supposed to work?

You are asking an LLM about things it cannot possibly know.

Share your prompts here and maybe someone can help.

@Bill_French Hope you are doing well. With the new development from OpenAI, do you think Coda data could be linked to a custom GPT?

1 Like

Yes. I think there are two viable ways, (i) a custom GPT with a no-code pathway to Coda data (such as Zapier), and (ii) a custom chatbot based on the GPT Assistants architecture.

Similar to custom GPTs, Assistants also empower you to create custom chatbots using the contextual power of ChatGPT. However, unlike GPTs, Assistants cater specifically to developers and do not have a dedicated user interface. The former is the no-code approach; the latter requires code.

In both cases, these custom GPT chatbots must make a copy of the Coda content to function. That approach would have to offer a lot of benefits to justify it. I don’t think it’s wise to make copies of data and move them across the wire to achieve an inference that can occur much more easily and quickly adjacent to the data needed for those inferences. I’m describing Coda AI (of course) - it already does this - so one must ask:

> What are the requirements that are driving such a question?
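To make the copy-across-the-wire point concrete, pathway (ii) implies an export step something like the following. The snapshot format and function are illustrative only; a real version would page through the Coda REST API to fetch rows and then upload the resulting file to the assistant platform:

```python
# Sketch of pathway (ii): a code-based assistant needs its own copy of
# the doc content. The Coda fetch is stubbed as local data; the file it
# writes is the "copy across the wire" that Coda AI avoids by running
# inference next to the data.

import json

def export_doc_snapshot(doc_rows, path):
    """Serialize doc rows to a JSONL file for upload to an external
    chatbot platform. Every change to the doc would require re-running
    this export to keep the chatbot current."""
    with open(path, "w", encoding="utf-8") as f:
        for row in doc_rows:
            f.write(json.dumps(row) + "\n")
    return path

snapshot = export_doc_snapshot(
    [{"FileNumber": "A-17", "text": "consectetur adipiscing"}],
    "doc_snapshot.jsonl",
)
```

The staleness problem raised below follows directly from this design: the external copy only knows what the last export contained.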

Hi Bill, and huge thanks for responding so quickly and in such detail. My goal is to give users (not editors) the chance to communicate with the data in a published doc. If Coda AI can’t do that, then a custom GPT embedded as a layer in a website with an embedded published Coda doc would do the trick, although, as you said, not having up-to-date data is not at all great. What do you think?