Coda AI can be used to help us understand people through the words they use. One aspect of this is understanding customers, colleagues, and users through conversations such as those that occur on Discourse communities. I spend a lot of time on communities trying to help people and also learning from experts.
This template exemplifies the power of Coda AI and almost every facet of a Coda solution. It uses a Pack to read and parse a Discourse conversation. It identifies influential participants, the key topic focus, and even attempts to see things that the participants may have overlooked. It collects data about the conversation and uses it to make inferences.
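The Pack's actual code isn't shown here, but the retrieval-and-parse step can be sketched in Python. Discourse conventionally returns a topic's full post stream as JSON when you append `.json` to the topic URL; the function name and the tiny sample payload below are illustrative only:

```python
from collections import Counter

def parse_topic(payload: dict) -> dict:
    """Extract title, posts, and participant activity from a Discourse
    topic payload (the JSON returned by appending .json to a topic URL)."""
    posts = payload["post_stream"]["posts"]
    participants = Counter(p["username"] for p in posts)
    return {
        "title": payload["title"],
        "posts": [{"author": p["username"], "html": p["cooked"]} for p in posts],
        "participants": participants.most_common(),  # most active first
    }

# A tiny hand-made payload in the Discourse shape, for illustration only.
sample = {
    "title": "Fireworks",
    "post_stream": {"posts": [
        {"username": "alice", "cooked": "<p>Love this feature!</p>"},
        {"username": "bob", "cooked": "<p>It broke my doc.</p>"},
        {"username": "alice", "cooked": "<p>Try refreshing.</p>"},
    ]},
}
topic = parse_topic(sample)
print(topic["participants"])  # [('alice', 2), ('bob', 1)]
```

Once the posts and participant counts are in hand, the downstream inferences (summaries, viewpoints, analytics) have structured data to work from.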
This is a tool that helps me understand people based on their interactions. Conversations contain valuable information, and this approach saves me a lot of time when I encounter a topic that interests me. It quickly gets me up to speed on the conversation by summarizing, organizing various perspectives, and visually displaying analytics about the contributors to the topic.
IMPORTANT: This tool can be shaped for use with any conversational content. This example is for one Discourse community. I use it with seven Discourse communities and two other conversational data sources including a survey system.
Just drop in a URL to a Coda Community topic, and watch as it harvests the content and begins to create a multitude of inferences. Let’s use the recent popular Fireworks topic as an example.
Drill down into a conversation and a new world of insight and analytics unfolds. This topic has many participants and a lot of comments and observations.
My favorite insight is the Viewpoints inference. Coda AI identifies the discrete viewpoints of each participant in the thread. In seconds, I have a pretty good sense of what each person is saying. This component also contrasts them and suggests the most plausible perspective when one exists.
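A rough sketch of how a per-participant viewpoint prompt might be assembled: group the thread's posts by author, then build one summarization prompt per participant. The function name and prompt wording are my own illustration, not the template's actual Coda AI formula:

```python
from collections import defaultdict

def viewpoint_prompts(posts: list[dict]) -> dict[str, str]:
    """Group a thread's posts by author and build one viewpoint-summary
    prompt per participant (prompt wording is illustrative)."""
    by_author: dict[str, list[str]] = defaultdict(list)
    for p in posts:
        by_author[p["author"]].append(p["text"])
    return {
        author: (
            f"Summarize the viewpoint {author} expresses in these posts, "
            "in one or two sentences:\n\n" + "\n---\n".join(texts)
        )
        for author, texts in by_author.items()
    }

posts = [
    {"author": "alice", "text": "Fireworks should be opt-in."},
    {"author": "bob", "text": "I like them as a default."},
    {"author": "alice", "text": "Defaults should be quiet."},
]
prompts = viewpoint_prompts(posts)
print(sorted(prompts))  # ['alice', 'bob']
```

Collecting all of a participant's posts into a single prompt is what lets the model characterize each person's stance rather than each post in isolation.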
P.S. Under closer examination, the Conversation Log turned out to have some amusing hallucinations, and was in fact a longer (and less helpful) read than just reading the topic itself. But the summaries, the personas, the viewpoints, etc. are all spot on! And a beautiful dashboard overall. Kudos!
This is typically the case when trying to understand a reply that assumes the reader has read every comment up to that point in the conversation. Summarizing responses in isolation is a dicey and tenuous path. I have some ideas to improve it.
Thanks! I’m just trying to use AI to be as lazy as I can possibly be.
Reading even closer into your screenshots, the synopsis is also a little bit made up. But you picked a difficult (and mostly visual) topic to feed to it. No wonder it got confused by all the gifs and static images.
Wondering what it would output for the topics on simple tables, page-level sharing, or Coda 2.0 Pricing (the original one).
It did a pretty good job on that one. Not stellar, but decent. Lengthy conversations are problematic until Coda AI supports LLMs with bigger prompt windows.
I’ve made some progress overcoming the massive number of links and images typically found in conversations. But it’s still an issue if you want the AI to understand the conversations well. Until Coda AI supports multi-modal prompts, we can’t use images to help the LLM understand what we’re talking about.
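One rough way to do that preprocessing before the text reaches the prompt is to drop images outright and keep only a link's visible text. This regex-based sketch is illustrative only; messy real-world HTML may warrant a proper parser:

```python
import re

def strip_media(cooked_html: str) -> str:
    """Remove <img> tags and unwrap <a> tags to their visible text,
    so images and raw URLs don't eat prompt space."""
    text = re.sub(r"<img\b[^>]*>", "", cooked_html)       # drop images
    text = re.sub(r"<a\b[^>]*>(.*?)</a>", r"\1", text)    # keep link text
    text = re.sub(r"<[^>]+>", "", text)                   # strip remaining tags
    return re.sub(r"\s+", " ", text).strip()              # collapse whitespace

html = '<p>See <a href="https://example.com">this demo</a> <img src="x.gif"></p>'
print(strip_media(html))  # See this demo
```

The LLM still can't "see" the images, but at least the prompt isn't cluttered with markup and URLs that add tokens without adding meaning.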
Bill, Thank you for cross posting about this. I enjoyed poking around the structure of your doc and looked at your design patterns. I don’t have much to say about the AI aspects of the doc, as I am still very much a novice at using and understanding AI content. I was impressed with the content generated by the AI and I am a little worried what the AI will say about this post.
I noticed a few things about the design of your doc that I’m curious about because they are different from how I do things. I took the liberty of copying your doc and refactoring a few things. However, I’m wondering if you had specific reasons for your design choices that I am unaware of.
You have users manually copy the “Conversion Insights Page Link” to set up the doc. I refactored the doc to remove this manual configuration by calculating the URL for the page using the thisDocument.objectLink() formula.
You delete and regenerate all the rows in the [Conversation Log] each time the user drills down to a different topic. This also means that the AI is called for each row in the table for every time the button is pressed, which can take a long time. This feels inefficient if someone wants to revisit a topic or flip back and forth between topics. It can also mean that a [Post Title] or [Post Synopsis] can change between viewings, which can be a bit disorienting. It also prevents two different people from using the doc to view different topics at the same time. I refactored the doc to use a relation control so that flipping back and forth between topics doesn’t require the rows to be regenerated every time, and so different people can view insights for different topics at the same time.
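The design trade-off here is essentially regenerate-on-every-visit versus cache-by-topic. As a language-agnostic analogy (illustrative Python, not Coda formulas or the doc's actual buttons), the refactor amounts to keying results by topic URL and only rerunning the expensive step on an explicit refresh:

```python
def make_insight_fetcher(analyze):
    """Cache analysis results per topic URL so flipping between topics
    doesn't rerun the (slow, nondeterministic) AI step."""
    cache: dict[str, object] = {}
    def get_insights(topic_url: str, refresh: bool = False):
        if refresh or topic_url not in cache:
            cache[topic_url] = analyze(topic_url)  # expensive AI call
        return cache[topic_url]
    return get_insights

# Stub "analyze" that records how often it is actually invoked.
calls = []
get = make_insight_fetcher(lambda url: calls.append(url) or f"insights:{url}")
get("https://community.coda.io/t/1")
get("https://community.coda.io/t/1")   # served from cache, no second call
get("https://community.coda.io/t/2")
print(len(calls))  # 2
```

Keeping rows per topic (rather than one shared, regenerated set) is also what lets two people view different topics simultaneously.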
Your “Drill Down” button has a lot of logic mapping JSON values to fields/records in various tables. I moved most of that logic to the individual fields/tables. For me, having the logic in the individual fields/tables seems easier to maintain than having the logic in the “Drill Down” button. My refactor also converts the [values] column in the [Topic Statistics] and [Productivity Factors] from editable values to calculated ones.
(Apologies for taking this thread in a direction away from its original AI focus. But refactoring this doc was a great learning experience for me.)
Ha ha! No. The design choices were all technical debt destined for version two (if anyone cared). But to be clear, the inefficiencies such as deleting and rebuilding rows were less of a worry because of the scale. Twenty or thirty row tables are not generally a concern.
The re-inferencing of the conversation log does bother me, but there are also some advantages to that inefficiency if we modify the algorithm. Those inferences are built in isolation (one by one), but they probably should be built in a cascading larger prompt having some knowledge of the history. Furthermore, I wanted them to use any edits or updates to the individual posts.
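That cascading approach might look something like this sketch, where each post's prompt carries a running history of earlier posts' summaries so no reply is interpreted in isolation. The summarizer is stubbed; a real version would call the LLM, and the prompt wording is illustrative:

```python
def cascading_prompts(posts, summarize):
    """Build each post's prompt with summaries of all earlier posts,
    so replies aren't summarized in isolation."""
    history: list[str] = []
    prompts = []
    for p in posts:
        context = "\n".join(history) if history else "(start of thread)"
        prompts.append(
            f"Thread so far:\n{context}\n\n"
            f"Summarize this reply by {p['author']}:\n{p['text']}"
        )
        history.append(f"{p['author']}: {summarize(p['text'])}")
    return prompts

# Stub summarizer for illustration: just truncate the post text.
prompts = cascading_prompts(
    [{"author": "alice", "text": "First post."},
     {"author": "bob", "text": "A reply."}],
    summarize=lambda t: t[:40],
)
print("(start of thread)" in prompts[0])  # True
print("alice: First post." in prompts[1])  # True
```

The cost is that later prompts grow with thread length, which circles back to the prompt-window limits mentioned earlier.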
Your changes are really good, and your mastery of Coda is already impressive. That’s the beauty of open source: you get to make a better version. I’m honored to have all this technical debt exposed by you.
Isn’t that what the refresh button is for? In my refactor, the refresh button deletes any existing post rows so any new edits or updates are retrieved and AI content on them is regenerated. But I didn’t see a need to regenerate without pressing the refresh button. (Of course, the manual regenerate option for each cell is always there.)
I also noticed that the AI did not generate the titles and synopses in row order—sometimes content was generated for later posts first.
But I suppose a way to get a better summary would be to include the entire thread history along with the post content in the prompt. That could be another field in the conversation log.
Thanks for letting me know. I was guessing you threw the doc together for just yourself and didn’t bother to polish some aspects. But I wanted to make sure I wasn’t missing something subtle.
I worry that I don’t have a good feel yet for how many rows is too much. My design has a row for every post for every topic. Given that the tool is most useful for longer, more complex threads, a few dozen topics could have several hundred posts. I’m guessing that will still be okay.
Let me know if you want me to share my refactor with you. I’m not going to publish my version since it is really your doc and I don’t want to cause confusion.
Here’s an idea. Let’s make it our template. I started it. You can finish it and make me look smart. You can also make it into something far greater. After the challenge is over, we can monetize it through pack plugins for many communities. Maybe even Khoros. LOL
I have a hunch I will be very busy over at CyberLandr now that CyberTruck is in production. This collaboration would ensure this idea doesn’t die on the vine.