Passing data to Coda AI

Hey all!

I’m working on a project management system using Coda AI as the backbone. Here’s a bit of UX feedback: currently I can do the following:

Pass a table and all its contents using @[tableName] in the summary block.

However, when I pass a table that is filtered, it only grabs the display name.

I’d love to be able to pass all the filtered data just by doing table.filter and still have it give me a summary.

I understand a workaround is to create a filtered table and then pass that to the AI, but that adds an extra step and the creation of a table that I don’t think is necessary.

First, you can probably create a view and pass that.

But if anything, you can always construct the “plain text” input out of your filtered table within a formula, e.g.:

DailyStandups.Filter(...).ForEach(Format(
  "{1}: {2}",
  CurrentValue.Name,
  CurrentValue.StandupNotes
)).Join(LineBreak())

I was doing this (manually constructing what’s passed) when I wasn’t sure whether Coda AI could access the whole table or only the display columns.


That’s the thing… I was hoping I wouldn’t have to create a view, and could skip that step for speed. :smiley:

Your second suggestion is a nice fix! I guess it’s just how the underlying Coda formula language works, so perhaps I’ll never be able to simply pass tableName.filter(date=today) and get all of the data returned rather than just the display name.
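
For reference, a filled-in version of the workaround above might look something like this (the Date column and the Today() filter are hypothetical, reusing the table from the earlier example):

DailyStandups.Filter(Date = Today()).ForEach(Format(
  "{1}: {2}",
  CurrentValue.Name,
  CurrentValue.StandupNotes
)).Join(LineBreak())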

Manually constructing takes so long. :cry:

I see why you want to do this, but it’s risky because it won’t scale. Eventually you will swamp the prompt window, and only errors will occur thereafter. My tests confirm that Coda AI doesn’t guard against this by protecting inference requests from calls it cannot fulfill. I think this will change in the future: prompt windows will expand, and Coda AI will get smarter about filters and other aggregations.

For the foreseeable future, aggregations are what we should be throwing at Coda AI, not entire tables.
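
As a rough sketch of what I mean (the Tasks table and Status column here are hypothetical), an aggregation like this keeps the prompt tiny while still giving the model something meaningful to summarize:

Tasks.Status.Unique().ForEach(CurrentValue.WithName(theStatus,
  Format("{1}: {2} tasks", theStatus, Tasks.Filter(Status = theStatus).Count())
)).Join(LineBreak())

Passing that short summary string to Coda AI instead of the raw rows stays well clear of the prompt window.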

At the outset, tables are typically not stored in Parquet format, and as such the data itself is 90% bigger than it should be. Python’s analytical data structures are typically arrays of columns, and there’s plenty of evidence that they are massively more efficient for AI inferencing. Coda AI should do the same: regardless of the table format, it should transform the data into Parquet because it’s faster, smaller, and generative AI understands it better. Until then, you should be transforming all your data into compressed Parquet formats yourself.

How does one compress data into Parquet format in Coda?

I typically lean on a custom Pack, but I’m sure it can be done with a formula.
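
To be clear, a plain formula won’t give you true Parquet (that’s a binary, columnar file format), but it can get you much of the size saving with a compact delimited layout. A rough sketch, reusing the hypothetical standups table from the example above:

Concatenate(
  "Name,StandupNotes",
  LineBreak(),
  DailyStandups.Filter(Date = Today()).ForEach(Format(
    "{1},{2}",
    CurrentValue.Name,
    CurrentValue.StandupNotes
  )).Join(LineBreak())
)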

I wrote about it in this Google Apps Script context. It might be enlightening.

Thanks for sharing your article. You earned a sub from me.

I’ve used the prompt you suggested in the article to start getting the data formatted correctly.


That’s clever - does it actually work reliably?

Yes, but the results are a bit varied. Sometimes it likes to start off by writing some Python and then declaring the table.

I don’t actually recall suggesting this prompt, and as I read yours, I get the sense you are defeating the intent of Parquet. Here’s why…

Returning the table as Parquet and then converting it into JSON puts all of the bloat back into the table data, the very bloat you were trying to eliminate with a CSV or other compact table representation.

I would test the accuracy of the outputs without converting to JSON, just to see if it’s more reliable.

+1 on this. Compiling performance reviews and being able to filter lets me batch things together and save time.