I’m curious whether anyone has an idea of how to get the AI Assistant to slow down and ‘think’ more carefully. For some context, I’ve been using the AI Assistant to perform sentiment analysis on a column of reviews in a table. It generally works fine, but it seems to rush through every row, and in doing so it misclassifies some of them. I’d appreciate any thoughts on how to get the AI to prioritize accuracy over speed.
You may want to explore whether there are other packs, since sentiment analysis is more of a ‘solved’ use case than some of the newer AI patterns. I’m on my phone, otherwise I’d give it a quick search for you.
The new AI stuff is powerful, but it’s inconsistent. That’s not a Coda problem; it’s a foundational drawback of this next generation of AI. My general rule of thumb has been: if there’s another way to get something done, usually go that route. That said, there are a lot of things that just weren’t possible before that are now, which is great.
Search for the topics written by Bill French. He has some very good ideas. Try to write your instructions to the AI in a way that forces it to think through the problem.
Thank you for sharing your thoughts on this, Kameron. So would you say it’s better to stick to AI applications that have been trained for a specific task (in this instance, sentiment analysis) instead of using a generative model?
I’ve since checked Bill’s work, thanks for this reference Piet. I’m currently attempting to write better prompts by spacing them over a few columns, and hopefully this way the AI can follow some steps that let it think before giving a final answer.
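For anyone trying the same multi-column approach, here is a rough sketch of the idea in plain Python. The column names and prompt wording below are just illustrative assumptions, not anything Coda-specific: each helper column gets its own small prompt that builds on the previous column’s answer, so the model has to work through intermediate steps before committing to a label.

```python
# Sketch of splitting sentiment analysis into stepwise prompts,
# one per helper column, so the model reasons before answering.
# The prompts and column names here are illustrative assumptions.

STEP_PROMPTS = {
    "Key Phrases": "List the phrases in this review that carry emotional weight:\n{review}",
    "Tone": "Given these phrases:\n{key_phrases}\nDescribe the overall tone in one sentence.",
    "Sentiment": "Based on this tone summary:\n{tone}\nAnswer with exactly one word: Positive, Negative, or Neutral.",
}

def build_step_prompt(step, **filled_columns):
    """Fill a step's prompt template with values from earlier columns."""
    return STEP_PROMPTS[step].format(**filled_columns)

# Example: the final column's prompt, fed by the "Tone" column's output.
prompt = build_step_prompt("Sentiment", tone="The reviewer sounds frustrated.")
print(prompt)
```

The point of the last step’s wording is to constrain the final answer to a single word, which tends to make the classification easier to audit row by row.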
@faatimah_mansoor, generally yes. For the use cases they were built for, purpose-built models will usually outperform large language models. So for things like sentiment, image object detection, and other traditional ML/AI problems, you’re probably technically better off going with a traditional modeling approach. That being said, it’s not like everyone has the infrastructure or the skills to go pull something off of Hugging Face and write a quick custom pack in Coda to actually get it to work. It’s awesome that they’ve made these kinds of capabilities so accessible, but the trade-off is in the performance.
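One way to check this point on your own data: hand-label a handful of rows and score any classifier (purpose-built model, LLM column, whatever) against them. A minimal sketch of such a harness, where the keyword classifier is only a toy stand-in for a real model you would actually plug in:

```python
# Tiny evaluation harness: score any sentiment classifier against
# hand-labeled rows. The keyword classifier below is only a stand-in
# for a real purpose-built model (e.g. something from Hugging Face).

def toy_classifier(review):
    """Stand-in classifier; replace with a real model's predict call."""
    negative_cues = ("bad", "broken", "refund", "disappointed")
    return "Negative" if any(cue in review.lower() for cue in negative_cues) else "Positive"

def accuracy(classifier, labeled_rows):
    """labeled_rows: list of (review_text, true_label) pairs."""
    hits = sum(1 for text, label in labeled_rows if classifier(text) == label)
    return hits / len(labeled_rows)

sample = [
    ("Great product, works perfectly", "Positive"),
    ("Arrived broken, asking for a refund", "Negative"),
    ("Really disappointed with the battery", "Negative"),
    ("Love it, would buy again", "Positive"),
]
print(accuracy(toy_classifier, sample))  # → 1.0 on this toy sample
```

Running both the AI Assistant’s labels and a purpose-built model through the same `accuracy` check makes the speed-vs-accuracy trade-off concrete instead of anecdotal.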
There are still plenty of things these new AI tools can do that just couldn’t be done before with this speed and quality. I worked on a project doing some document-library auto-summarization back in 2018, and let me tell you, the AI of the time couldn’t hold a candle to how well today’s models handle that task. Personally, this is why I like tools like LangChain that let developers quilt together different techniques with some LLM glue, but alas, I have not seen a LangChain pack yet (note to self: start working on that).
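On the “quilting” idea: even without a LangChain pack, a little glue code gets you the same pattern — run a cheap first-pass classifier and send only the ambiguous rows to an LLM. A sketch under assumed names, where `call_llm` is a hypothetical placeholder for whatever LLM call you actually have available:

```python
# Route confident cases through a cheap classifier and send only
# ambiguous reviews to the (slower, pricier) LLM.
# `call_llm` is a hypothetical placeholder, not a real API.

def cheap_score(review):
    """Crude polarity score in [-1, 1]; a real version would use a small model."""
    pos = sum(w in review.lower() for w in ("great", "love", "perfect"))
    neg = sum(w in review.lower() for w in ("awful", "broken", "hate"))
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def call_llm(review):
    """Placeholder for an actual LLM call on the ambiguous row."""
    return "Neutral"

def classify(review, threshold=0.5):
    score = cheap_score(review)
    if score >= threshold:
        return "Positive"
    if score <= -threshold:
        return "Negative"
    return call_llm(review)  # ambiguous: fall back to the LLM
```

The nice property of this routing is that the expensive model only sees the rows the cheap pass couldn’t decide, which is usually a small fraction of the table.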
Disclaimer: This may not be entirely accurate as I’m basing this off my experience working with both kinds of models and the space is evolving rapidly.
@Kameron_Baumgardner, thanks for this insight. I think experience is super valuable because there is often a difference between how a model should work and how it actually works, and for an end user such as myself, those differences can get overlooked.
I’m currently curious about AI, and I use it as a supplementary resource, but I haven’t fully adopted any AI tool for a particular task. This is partly because I enjoy doing certain tasks (such as writing) myself, and partly because I don’t fully trust the tools. Are there any AI tools that you use regularly?