Coda AI alpha: A sneak peek

This is actually not a bad idea, but first - I'm with Paul: measured skepticism. What I don't see here…

  • Careful attention to the little detail of cost. How many tokens were consumed in just the demo? These large GPT responses can get pricey.
  • A focus on fine-tuned models that include your Coda data. This is how smart ML users compress GPT costs while raising accuracy and value.
  • A maintenance model that supports updates to prompts and parameters. Judging by the way this appears to be designed, a number of engineered prompts and parameters are being used behind the scenes to trigger effective responses. Can users refine these hidden prompts and parameters?
  • A word or two about accuracy when GPT-4 arrives; it is anticipated to break most prompts.
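On the cost point, a back-of-envelope estimate is easy to sketch. This is a minimal illustration, not real pricing: the ~4-characters-per-token heuristic and the per-1K-token rates below are placeholder assumptions, and actual rates vary by model.

```python
# Rough cost estimate for an LLM-backed document feature.
# Assumptions (hypothetical): ~4 characters per token, and
# illustrative per-1K-token prices; real pricing varies by model.

def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost_usd(prompt: str, completion: str,
                      price_per_1k_prompt: float = 0.002,
                      price_per_1k_completion: float = 0.002) -> float:
    """Estimate a single call's cost from its prompt and completion text."""
    cost = (estimate_tokens(prompt) / 1000 * price_per_1k_prompt
            + estimate_tokens(completion) / 1000 * price_per_1k_completion)
    return round(cost, 6)

# A doc-sized prompt plus a long generated answer adds up fast
# when every user triggers it on every page load.
prompt = "Summarize this table of Q3 sales figures. " * 50
completion = "Q3 sales rose across all regions. " * 20
print(estimate_cost_usd(prompt, completion))
```

Multiply that per-call figure by calls per user per day and the bill stops being a little detail.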

In my view, the true value of LLMs going forward will be realized on two fronts:

  1. The workflows and integration of ML into bona fide business processes.
  2. The integration of user data.

The demo shows a lot of thought was given to #1, and Coda really shines with this objective.

@Paul_Danyliuk mentions his own data, and I'm thinking specifically about his video transcripts. These are gold in an ML context. Paul's customers could ask for a synopsis of a video - and even the time-code locations where the answer they're after lies.
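The transcript idea can be sketched in a few lines. A real system would use embeddings to match questions to transcript segments; here naive word overlap stands in, and the segments and timecodes are made up for illustration.

```python
# Minimal sketch: answer questions against timestamped video transcripts.
# A production system would rank segments by embedding similarity;
# naive word overlap is a stand-in. All data here is hypothetical.

from collections import Counter

transcript = [  # (timecode in seconds, segment text)
    (12, "Setting up a lookup column between two tables"),
    (95, "Using filter formulas to build a dashboard view"),
    (210, "Automations that email a weekly summary"),
]

def overlap(query: str, text: str) -> int:
    """Count shared words between query and segment (case-insensitive)."""
    q = Counter(query.lower().split())
    t = Counter(text.lower().split())
    return sum((q & t).values())

def best_segment(query: str):
    """Return the (timecode, text) of the most relevant segment."""
    return max(transcript, key=lambda seg: overlap(query, seg[1]))

print(best_segment("how do I set up a lookup column"))
# -> the segment at 12 seconds
```

Swap the overlap score for cosine similarity over embeddings of each segment and you have the synopsis-plus-timecode experience described above.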

Shouldn’t every Coda document be used to train a variant of the LLM, such that queries about what GPT knows are blended with and influenced by what Coda documents possess and what their users have written? #2 is how AI becomes extremely valuable - I don’t see this in the roadmap.