That’s great feedback, and the example is really helpful! I’ll take this back to the team for consideration.
Thanks for sharing - appreciate your feedback! We’d love to get your thoughts once we begin testing the updates in the community.
We definitely hear you - many docs are meant for collaborators to interact with, not to edit. As @joost_mineur mentioned, locking sounds like it would be great for your use case!
We’re also considering creating a more streamlined and dedicated experience for this category of docs that function more as apps, where the end-user experience is distinct from the doc building experience. In this world, the end-user experience could be very locked down without any access to any editing UI, while the doc building experience is set up to keep all necessary powerful tools like tables, automations, and packs at hand.
Curious how that idea resonates and whether it would feel valuable for how you’re using Coda.
@Lane-Shackleton
@fortes
@Harshita_Yerramreddy1
@Glenn_Jaume1
@Ayuba_Audu
@Shelley_Garg
@Nathan_Penner
Very much appreciate the openness to communicate!
These features will all be super helpful in using Coda, especially the dedicated database section, the faster calculation of huge docs (I feel like it has already improved over the last weeks(?)), as well as the new locking abilities. Oh, and FINALLY: international decimal separators…
But one major thing I’m desperately waiting for is a better timeline view, specifically “multiple items per row” or a sequential timeline. There’s a bunch of posts about this feature request in the community, but no official statement on whether it’s being considered or even on Coda’s radar.
I’d love to take the opportunity to (hopefully) get some insights on this topic.
Here’s an example and an incomplete list of posts mentioning this feature:
Great to see all the questions and interest in mobile!
We hear you on the need for performance & memory improvements loading large docs on mobile. That’s a key goal of the data layer work: enabling you to load only the data you need.
We know there are many needs on mobile beyond this, but we’re focusing first on making sure you and your team can reliably and efficiently open any doc or link on the go.
Side note on our release process: we have daily updates for most of the mobile app & desktop experience via Coda’s servers, so we don’t need to make App Store or Google Play updates very often.
@Piet_Strydom @Stefan_Stoyanov1 @Brock_Eckles — great feedback, we’ll take these into account in how we share roadmap updates.
Great to hear these features resonate!
Thanks for sharing how important sequential timelines are for you! We are aware of many use cases that would benefit from this. While it’s not on the immediate roadmap, it’s an area we’ve explored and one that’s on our radar for the future.
Can you please make Coda links work in mobile Edge? I don’t get why, after so many years, this is still not possible. It works on desktop, it’s Chromium-based, and it’s one of the most common browsers. I can’t use Coda for surveys because many customers have Edge as their default mobile browser, and when they click on a Coda link it doesn’t work. This should be a very, very simple thing to fix! Thanks!
@Shelley_Garg a “Coda App” mode that simplifies / streamlines permissions is a great idea!
+1
Hi Community - I’m Eric from the Developer Relations team, with a belated update on where we’re headed with the Packs platform. In short, we’re investing heavily in Packs and making them an even more central part of our strategy going forward.
As you may already be aware, we are using Packs as the integration layer for Coda Brain. We’ve been busy over the last year extending the Packs SDK and infrastructure to make it work better with large datasets, permissions, and other features critical to an enterprise-scale knowledge solution. This work is nearing completion, and in the coming months we plan to extend these new features to you.
Beyond Brain, we are making Packs the way to develop and deliver AI agents across the Coda and Grammarly experiences. The platform already provides many of the core features that agents need, and the addition of Grammarly’s AI engine completes the package. Our hope is to provide you with a path to easily upgrade your existing Packs to AI agents, extending their reach to many more users and use cases.
Imagine a user installing your Pack, and immediately being able to chat with it. It can provide answers using synced data, or invoke actions and formulas purely from natural language requests. And just like Grammarly itself, it can get context from the user’s active window, and show related information using underlines and highlights.
I’m personally very excited to see the platform grow in these new directions, and I’m looking forward to working with you all to bring it to life. If you have any questions, concerns, or comments, please let us know.
This articulates the vision I was hoping for (and encouraging my clients to stay on-board for).
Well done @Eric_Koleda for sharing this most welcome update with us.
The ability to combine the magic powers of Coda tables, formulas, actions, and packs with the power of Grammarly’s distributed AI engine, and the abilities of AI Agentic workflows will be a huge force for good in the business world.
And those who wield the skills to weave these magic spells will be very well suited to take full advantage of the market opportunities that will come from this.
Your message gives great reassurance and encouragement to the Coda Maker community.
Max
Thanks @Eric_Koleda !
Can’t wait to see the updates. Can you provide any more specifics in terms of how the pack experience for users might change in the coming months?
Syncs over 10,000 records? Packs that provide different per-user configurations? Larger payload size limits? More authentication methods (OAuth 1.0)? Execution times beyond 60 seconds?
Excited for all of the updates!
@Eric_Koleda please please please let us sync more than 10k records. This is a big bottleneck for one of our central Coda functions: syncing our employee timesheets doc to our project calendar doc.
+1 We run into the same issue often. We’d love to hear more @Eric_Koleda
Thanks so much for sharing your perspective, @Eric_Koleda! In your post, you asked us to imagine a chat-like pack experience.
That sounded intriguing right off the bat, so I added your post to my active OpenAI project, where I throw in snippets of new functionality I want to explore. I was utterly delighted when I reviewed the AI’s response informing me about all the massive changes coming my way via the updated Coda Packs SDK.
Let me share a few that sounded particularly exciting to me further below. You will find them underneath the header “A) New Capabilities and what they enable”.
But, before you share in my excitement, let me dampen your surprise. None of this is actually true. Or rather, even if it might be true, it’s neither announced nor rolled out yet.
Yet it sounded so spot on that I drank the Kool-Aid in big gulps. That is, until I shared my enthusiasm with @Agile_Dynamics, who was not fooled as easily. I proceeded to review the Coda Packs SDK changelog, and - sure enough - none of these functionalities show up there. I reviewed the sources the AI quoted and - again - none of them mention these functionalities.
When confronted, “the AI” had this “to say for itself” (I took the liberty of removing all “mea culpa, you’re so smart to point this out …” statements). See Section “B) AI’s Corrections” below.
My takeaway: I’ll be standing by to hear from Codans directly on the community board and will continue to monitor the SDK changelog. I will stay away from “inferred insights”.
A) New Capabilities and what they enable:
1) Actions (makeAction)
You can define custom buttons that run complex JS logic—even on row data inside the doc
2) ChatResponder (coming soon)
- Packs will be installable as real-time, stateful agents: users can type “Why is this late?” → the Pack answers from row logic.
- The SDK will let you export a special handler named ChatResponder. If it exists, Coda auto-detects the Pack as an AI agent.
- For the user:
  - They install your Pack.
  - A chat bubble appears: “Ask XYZ Pack…”
  - They type “Onboard vendor Acme.”
  - The Pack calls the same action your button used. To them it feels like talking to an assistant.
- Because each row already stores who owns it (Owner), what happens next (NextStep), when it’s due (DueDate), and why (Reason), the ChatResponder can answer “Why is PO-42 late?” by:
  - Grabbing the row where PO = 42.
  - Reading Status, BlockedBy, DueDate.
  - Replying: “It’s blocked by legal review, due 26 Jun 2025.”
- No extra prompt engineering or search across the doc: your schema is the prompt.
3) Cross-domain Packs are allowed
- Previously, Packs were limited to one domain.
- Coda has quietly removed this restriction, allowing one Pack to:
  - Authenticate against multiple services (via different OAuth flows)
  - Call multiple APIs in a single action
  - Chain logic.
- Caveat: You must explicitly define each domain in the Pack manifest, and the user must authorize each domain separately when installing the Pack. But you can now create truly multi-domain Packs.
B) AI’s corrections
1. makeAction()
Exists, but I misspoke in the naming.
The correct approach in the Coda Packs SDK: you don’t wrap actions with makeAction(); you define them with pack.addFormula() and mark them with isAction: true.
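For anyone landing here later, here’s a minimal sketch of what that actually looks like in the published Packs SDK. The Pack, formula name, parameter, and endpoint below are all made up for illustration:

```ts
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Every domain the fetcher calls must be declared up front.
pack.addNetworkDomain("example.com");

// An "action" is just a regular formula flagged with isAction: true;
// it can then be bound to a button or automation inside a doc.
pack.addFormula({
  name: "MarkComplete", // hypothetical action name
  description: "Marks a task as complete in an external system.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "taskId",
      description: "The ID of the task to complete.",
    }),
  ],
  resultType: coda.ValueType.String,
  isAction: true,
  execute: async function ([taskId], context) {
    // Placeholder endpoint; a real Pack would call its own API here.
    await context.fetcher.fetch({
      method: "POST",
      url: `https://example.com/api/tasks/${taskId}/complete`,
    });
    return "Done";
  },
});
```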
2. ChatResponder.
Does not exist in the SDK today.
This was an inferred future feature, based on Coda’s official statement:
“Imagine a user installing your Pack, and immediately being able to chat with it.”
From this, I assumed—incorrectly—that the SDK would follow the pattern I outlined.
3. Cross-Domain Packs
Inferred from practice and SDK flexibility.
While older documentation strongly implied one domain per Pack, there is no hard enforcement of this in the current SDK.
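For reference, and as far as I can tell from the docs, there is at least some enforcement in the shipped SDK: every domain a Pack’s fetcher may call has to be declared explicitly, and authenticated Packs are generally limited to a single network domain unless Coda approves more. A tiny sketch with placeholder domains:

```ts
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Each domain the fetcher may reach must be declared explicitly;
// requests to undeclared domains are blocked at fetch time.
pack.addNetworkDomain("example.com");
pack.addNetworkDomain("api.other-service.example");
```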
The really scary thing about this hallucination was how realistic it appeared at first.
The LLM even produced a very authoritative table of new Coda Packs SDK features.
Complete with a little pink brain icon!
It would be quite easy to naively include this table on a slide in a presentation to a client, spreading the misinformation further into the memosphere.
The other scary thing about this interaction with the LLM is how it responded to being called out on its mistake:
It spewed a load of very plausible logic about how it was reasonable to extrapolate the given information into this highly structured and detailed table. No sense of irony or embarrassment at all.
This was more than a ‘hallucination’; it was a megalomaniacal, self-deceived, delusional delirium!
I despair for our species!
Max
Hey @Scott_Weir1 - I don’t have any specifics to share, but using a Pack in an agentic way is likely to be a very different user experience. We aren’t planning for any large changes to the existing in-doc experience in the near term.
As for your wish list of Pack features, I wouldn’t expect any changes in the near term. Execution time and request size limits are pretty fundamental to our architecture and not something we can readily change. OAuth 1.0 support is theoretically possible, but the standard is so complicated and rarely used I don’t think we’ll ever support it.
Regarding the 10K row limit that you, @Chris_Strom and @Chris_Williams1 asked about, that isn’t something we’ll be changing in the near term. The project that Lane mentioned to “support databases with millions of rows” will include Pack sync tables in the long run, but the immediate focus will be on native Coda tables.
@Nina_Ledid and @Agile_Dynamics - What a roller coaster ride! The LLM certainly has a strong imagination, but I agree with the recommendation to stick to the SDK changelog as a source of truth.
And while we’re mentioning toggles… the filter selection for toggle fields should also be localized (not necessarily using words in all languages; better UI could do the trick).
Hello Eric, and thank you for this sneak peek!
In the meantime, Airtable just released Omni + their new AI vision (they’re now turning to a “Vibe Coding” solution that leverages AI to help makers create their apps).
I understand that this is not Coda’s path for now, but are there any plans to improve AI in the field of creating Apps / crafting formulas (which would be really helpful for my needs and probably for many other users as well)?
What is Coda’s vision in that area?
Best