My name is David, and I’m leading the efforts to integrate AI with Coda. I’m excited to show you a sneak peek of where we’re headed with AI.
Many of you have already been exploring the OpenAI Pack to make your Coda docs even more magical, using it for workflows like idea generation, meeting note summarization, and data analysis.
Based on that, we’ve been hard at work exploring how we can craft a more integrated AI experience — one that just works out of the box. We’re so excited about where we’re headed, we wanted to show you an early glimpse of the possibilities. We’re also opening up a Coda AI alpha, where we’ll be gradually letting makers in to use AI while we iterate based on your feedback. Check it out, and let us know what you think!
It emotionally transports me to that moment when you press the dot on the keyboard and your brain overloads trying to remember what formula you were trying to write, ahahahaha. Mum, mum, can we keep it???
Nope, nope, and nope. I seriously hope this is “opt-in” or is in addition to your regular tools. I’m assuming it will be in addition (that is to say, I will still be able to type /table to add a table instead of going through a b~llsh~t-generator to add a table), but please do let us know if your plan is for this to replace the “regular” Coda commands completely.
Jokes aside, it’s cool. Gimmicky but cool. I probably won’t trust it with making something critical for me or my business(es), but I can totally imagine generating some sample data with it.
This is not actually a bad idea, but first - I’m with Paul; measured skepticism. What I don’t see here are…
A careful attention to the little detail of cost. How many tokens were used in just the demo? These large responses from GPT can get pricey.
A focus on fine-tuned models that include your Coda data. This is how smart ML users compress GPT costs while raising accuracy and value.
A maintenance model that supports updates to prompts and parameters. Given the way it appears this was designed, there are a number of engineered prompts and parameters being used to trigger effective responses. Are users able to refine these hidden prompts and parameters?
A word or two about the accuracy of this when GPT-4 arrives; it is anticipated to break most prompts.
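To make the cost concern concrete, here’s a rough back-of-envelope sketch. The per-token price below is an illustrative assumption, not a quoted OpenAI rate — the point is only that long engineered prompts plus large generated responses multiply quickly:

```python
# Rough cost estimate for a single GPT completion call.
# The rate below is a hypothetical davinci-class price for illustration;
# check the provider's current pricing before relying on it.
PRICE_PER_1K_TOKENS = 0.02  # USD, assumed

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Return the estimated USD cost of one API call."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * price_per_1k

# A long hidden prompt (~1500 tokens) plus a generated table (~1000 tokens):
cost_per_call = estimate_cost(prompt_tokens=1500, completion_tokens=1000)
daily_cost = cost_per_call * 500  # e.g. 500 such calls/day across a workspace
print(f"${cost_per_call:.2f} per call, ${daily_cost:.2f} per day")
```

At those assumed numbers that’s $0.05 a call and $25 a day for one workspace, which is exactly why the prompt engineering and model choice behind a feature like this matter.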
In my view, the true value of LLMs going forward will be realized on two fronts:
The workflows and integration of ML into bona fide business processes.
The integration of user data.
The demo shows a lot of thought was given to #1 and Coda really shines with this objective.
@Paul_Danyliuk mentions his own data, and I’m thinking specifically about his video transcripts. These are gold in an ML context. Paul’s customers could ask for a synopsis of a video and even the time-code locations where the much-demanded answers lie.
Shouldn’t every Coda document be used to train a variant of the LLM, so that queries blend what GPT knows with what the Coda documents contain and what their users have written? #2 is how AI becomes extremely valuable — I don’t see this in the roadmap.
I would highly encourage the folks in charge of Coda’s AI push, if they aren’t already familiar, to look into projects like GPT-Index and LangChain, and into the use of embeddings to enable more context awareness and the integration of workspace data.
I was able to create a simple app to query my own data and see similar notes/files. If I can do it, I know Coda can do it lol.
Done right, it’s not expensive. Supporting this functionality would be a killer feature over other apps. Query your company’s own docs? No problem. Intelligently resurface notes/people/tasks related to a meeting? You got it. Summarize info across multiple rows/tables? Yep.
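The embedding approach behind a "query your own data" app is simple enough to sketch. The documents, vectors, and the 3-dimensional embedding space below are all made up for illustration — in practice the vectors would come from an embedding model (via an API or a local model) and would have hundreds of dimensions — but the retrieval step is just cosine similarity:

```python
import math

# Toy in-memory index mapping each doc to its embedding vector.
# These 3-d vectors are fabricated for illustration; real embeddings
# come from a model and are much higher-dimensional.
DOCS = {
    "meeting-notes.md": [0.9, 0.1, 0.0],
    "video-transcript.txt": [0.2, 0.8, 0.1],
    "roadmap-doc": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, k=2):
    """Rank stored docs by similarity to the query embedding, return top k."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedded close to the "meeting notes" region surfaces that doc first:
print(most_similar([0.85, 0.15, 0.05]))
```

Resurfacing related notes for a meeting, or answering questions over a workspace, is this same loop with the user’s question embedded as the query and the top-k documents fed back into the LLM as context.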