Coda AI - It's not me, it's you

Hi All,

As I've been beta testing Coda AI, I have been left with a rather lackluster impression. Now, I completely admit that I may not be prompting well enough to get my desired results. I'm curious about others' impressions of it.

My use case for Coda AI is mainly filling in missing information in my tables. For example, I tried using Coda AI to help fill in missing data for one of my land development budgets, and it failed miserably. But as previously said, I may be prompting poorly or asking it to do things it's not capable of. However, even when I have gotten sufficient answers from Coda AI, the output is wordy and ruins any chance of relating its answer to other tables and expanding on it.

I'm starting to think that every software company is getting too horny with the idea of AI in their products. From my experience, Coda has a long way to go until its AI can understand context in a useful way and format its response in a way that corresponds with the user's table context.

I think Coda was very ambitious in their AI attempt, which raised my hopes. I wouldn't say that Coda AI is any worse than competitors like Notion, but I can find far better solutions in the form of extensions that use ChatGPT… or I can just use ChatGPT itself, which outputs 100000000x better answers.

In my opinion, Coda needs to find a way to OCR documents and format the results into a Coda table, keep that chat history in Coda AI's memory, and use it to dig deeper into the PDF (or whatever you just had it parse). Perhaps that context could also help Coda AI search the web for additional information. I'm just spitballing here.

I feel your pain. :wink: I think it's the latter. You are expecting behaviors from an LLM that is gated by two requirements: the live Interweb, and time.

This is where custom Pack interfaces come to your rescue. Coda AI is gated by the limitations of its own design; Packs are not.

You are letting the LLM act before it thinks. Promptology may help you get better results.

This is a typical sentiment, and in many cases the prompts are partly to blame. Another cause is the lack of access to embeddings, a common staple of AI solutions that work well.

This is why (for my own and consulting projects) I lean on PaLM 2. It is free, it is fast (roughly 2.7x faster than OpenAI), and it has access to the live web. Most importantly, it provides access to embedding vectors. I predict Coda AI will soon support interchangeable LLM selection and embeddings. They can't not do this. :wink:
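To make the embeddings point concrete: an embedding model turns a piece of text into a vector of numbers, and you compare meanings by comparing vectors, typically with cosine similarity. Here's a minimal sketch of that comparison step. The vectors below are hypothetical, hand-made stand-ins; a real provider's embedding endpoint would return vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranges from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings for a user query and two table rows.
query = [0.1, 0.9, 0.2, 0.0]
row_a = [0.1, 0.8, 0.3, 0.1]   # semantically close to the query
row_b = [0.9, 0.1, 0.0, 0.7]   # unrelated content

# The closer row scores higher, so it would be retrieved first.
print(cosine_similarity(query, row_a) > cosine_similarity(query, row_b))  # True
```

This is the mechanism behind "search my doc by meaning": embed every row once, embed the query at lookup time, and rank rows by similarity before handing the best matches to the LLM.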

Here are three examples of Coda AI (GPT), Bard, and PaLM 2 making inferences about a software company that did not exist before Jan 1, 2023.
