AGI is Too Quick

You don’t normally hear a developer saying such things, but it’s true with generative AI. Notwithstanding the oft-seen sluggishness of OpenAI’s APIs, large language models (LLMs) are quick out of the blocks to give you a response. Too quick.

LLMs speak before they think. You need to get them to think before they speak.

In today’s fast-paced business world, it is more important than ever to stay ahead of the curve when it comes to technology. One of the most exciting developments in recent years has been the emergence of AI solution architectures, which can help businesses automate routine tasks, identify patterns in data, and provide insights that can inform strategic decision-making.

In this article, I’ll show you how to use Coda AI to create something like an article with a fair degree of control over the response.

The Importance of AI Solution Prompts

As technology continues to advance, it has become increasingly important for businesses to adapt and incorporate AI solution architectures into their operations. The benefits of doing so are numerous, including increased efficiency, cost savings, and improved decision-making capabilities. However, implementing these systems can be complex and challenging.

Prompts are the pathway to getting it right. But your words are also your programming code. Often, there are hundreds of ways to prompt for answers that are poor, and only a few ways to get it right.

Understanding Coda AI

Coda AI is a powerful tool that can help businesses overcome these challenges. As you probably know by now, it provides a platform for building and deploying AI solutions quickly and easily. By leveraging machine learning and natural language processing, Coda AI can automate routine tasks, identify patterns in data, and provide insights that can inform strategic decision-making.

It’s vastly open to everyone’s interpretation of AI solutions and the ways they might deal with an AI feature. This is a blessing and a curse, because now you need to compose commands and queries using words. Words have meaning - use them wisely. :wink:

Productivity Boosting with Coda AI

One of the most significant benefits of Coda AI is its ability to streamline workflows and increase productivity. By automating repetitive tasks, employees can focus on more strategic work that requires their unique skills and expertise. Additionally, Coda AI can help identify areas where processes can be improved, leading to even greater efficiency gains.

At the core of the gold rush to leverage AI for greater productivity is getting it to write for us. If not the entire article, at least the basic material, so we can quickly add clarity or remove hallucinations.

Many users have already expressed dissatisfaction with Coda AI as they learn just how difficult it can be to get favorable results from its AI features. The simple stuff tends to work well, such as summarizing or extracting key terms. However, writing a brief or research paper is just dang difficult.

LLMs Require Smart Prompts

Coda AI is also well-suited for building complex prompts. By leveraging its natural language processing capabilities, Coda AI can help automate content creation with relatively good control. This can save businesses significant time and resources, as well as improve the quality of their documents. But… it needs to be carefully guided to deliver on this promise.

Most of this article was generated using Coda AI. This is the prompt I used.

Note the multiple [Output] tags. Each one is an inference deliverable, and they are often dependent on earlier outputs. This causes the LLM to perform each step in a serial fashion (i.e., we’re getting it to think before it writes some of the final passages of the article).

This is a simple way to create chained AI workflows.
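To make the chaining idea concrete, here’s a minimal sketch in Python. The `complete` function is a stub standing in for a real LLM call, and the step prompts are illustrative - this is not the actual prompt from my doc, just the serial [Output] pattern it follows:

```python
# Minimal sketch of a chained prompt workflow: each step is an inference
# whose [Output] is appended to the context so later steps can build on it.
# `complete` is a stand-in for a real LLM call.

def complete(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"<response to: {prompt.splitlines()[-1]}>"

def run_chain(steps: list[str]) -> list[str]:
    """Run prompt steps serially, feeding each [Output] into the next step."""
    context = ""
    outputs = []
    for step in steps:
        prompt = context + step
        result = complete(prompt)
        outputs.append(result)
        # Later steps see this [Output] as part of their context.
        context = prompt + "\n[Output]\n" + result + "\n"
    return outputs

steps = [
    "Propose an outline for an article about prompt chaining.",
    "Next, write an introduction based on the outline above.",
    "Next, write a conclusion consistent with the introduction.",
]
outputs = run_chain(steps)
```

The point isn’t the code - it’s the shape: one prompt, several serial deliverables, each allowed to depend on what came before.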

The bold items are abstracted references to table cells that contain additional text that the prompt uses to guide the AI to a successful response. I mention prompt abstractions in this article.
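As a rough illustration of that abstraction pattern, the prompt can be assembled from named fragments before the inference runs. The fragment names and contents below are made up for the example - in Coda, these would be the bold table-cell references:

```python
# Sketch of "prompt abstraction": the prompt references named fragments
# (like bold table-cell references) that are merged in before the LLM call.
from string import Template

FRAGMENTS = {
    "tone": "Write in a friendly, plain-spoken voice.",
    "audience": "Assume the reader is a Coda maker, not a programmer.",
}

PROMPT = Template(
    "$tone\n$audience\nNow draft a short introduction about prompt design."
)

# Substitute the fragments to produce the final prompt text.
prompt_text = PROMPT.substitute(FRAGMENTS)
```

Changing a fragment in one cell then reshapes every prompt that references it, without touching the prompt’s structure.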

The Future of Coda AI

Looking ahead, the potential applications for Coda AI are virtually limitless. As the technology continues to advance, it will become even more powerful and versatile, enabling businesses to achieve even greater efficiencies and insights. However, it is important to approach its implementation thoughtfully and strategically - every word counts. :wink:


Lovely reading Bill AI :slight_smile: I just came across it.

My greatest wish is that Coda AI starts understanding other languages as well as it does English, and then provides a chatbot-like experience for users (not creators) who can interact with the data in a doc. Looking forward to it.


Hmm I believe it speaks any language :sweat_smile:

I can only vouch for the Swedish translation being correct, but it seems fine ^^

But yeah I definitely agree that there is more potential for AI integration in Coda docs!


hi @Stefan_Stoyanov ,

It is an interesting observation.

In my opinion the AI translates rather well, but we should not forget that the training language is mainly US English. This has far-reaching consequences. I spoke to an AI expert working on speech-to-text AI applications, and they found that when you go from Swedish to Dutch, the results are better if you go through English first. That is a more expensive operation, but that is how it works right now.

I use AI for example in the context of expressing date time values in any language.

Cheers, Christiaan


I’ve had difficulties with one of my native languages, Bulgarian. I asked it to summarise a column, and it got nothing right. I created prompts - Coda AI didn’t get them at all. So basically, it’s kind of hard to use it for Bulgarian.

I am also trying to use Coda AI to trigger an automation, because I want it to fire 5 minutes after a row change, which is not possible by default in automations. So far, no luck with that either. Not sure why, but Coda AI is not cooperating at the moment.

Coda’s ability to summarize columns is not performant to begin with. Stream It, I must ask - did you isolate Bulgarian as the cause of poor performance by comparing the same inferences in other languages and seeing better outcomes?

Hey Bill, it’s far from perfect in English, but it’s definitely much worse in Bulgarian. It doesn’t understand the instructions; it doesn’t understand the words in the columns and what they mean (e.g., different statuses). Overall, it fails there.

My push today is to see if I can find a way to use it to trigger a toggle, which then triggers an automation - to cover for the missing “Run 5 min after the row change” automation trigger. Any ideas on that trick?

This suggests the approach is not ideal, and by extension, I mean that Coda AI cannot use column data effectively.

An inference based on columnar data is risky for many reasons. Would it be possible to generate a deterministic horizontal output?

Imagine a row-based inference that uses AI to examine the text and output values that fall into one of many predetermined classes. The drawback is token expenditure; however, there are ways to mitigate that if you can verify this works at all.
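Here’s a sketch of what that row-based classifier might look like, with the LLM call stubbed out. The class names and the keyword stub are mine, purely for illustration; the key point is constraining and validating the model’s answer against the allowed set:

```python
# Sketch of a row-based classification inference: constrain the model to
# one of a few predetermined classes and validate whatever comes back.
# `classify` is a stub standing in for a real LLM request.

CLASSES = ["bug", "feature", "question"]

def classify(text: str) -> str:
    # Stub: a real prompt would say something like
    # "Answer with exactly one of: bug, feature, question."
    lowered = text.lower()
    for label in CLASSES:
        if label in lowered:
            return label
    return "question"  # the stub's best guess

def classify_row(text: str) -> str:
    """Classify one row's text, forcing the answer into the allowed set."""
    answer = classify(text).strip().lower()
    # Never trust free-form output: reject anything outside the classes.
    return answer if answer in CLASSES else "unclassified"

rows = [
    "Found a bug when the toggle flips twice.",
    "Could you add a feature to delay automations?",
]
labels = [classify_row(r) for r in rows]
```

Because each row yields exactly one value from a closed set, the output is deterministic enough to drive downstream formulas or automations.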

Thanks again Bill.

For my scenario, I tried a few options with Coda AI, but it keeps changing the toggle in unexpected ways. In the end, I decided to go with two buttons and _Delay, which I already know works fine. I’ll likely give Coda AI another try for such a case when my brain is a bit fresher.

Hi @Bill_French , thanks for this!

I’m pretty confused about the [Output] keyword

  1. Is that some keyword that the LLM knows about already, or is it inferring it from the overall instructions?
  2. How do we know that it causes the LLM to perform each step as a different inference or input/output?
  3. Would this be conceptually equivalent to prompting the LLM multiple times in separate steps, feeding it the overall context/output from the previous step(s)? (although the LLM may produce different overall results).

Thanks

Both. Look at the prompt - “output” is directly referenced. The LLM infers what I intend the purpose of [Output] to mean, and this is a pattern throughout the prompt.

The word “Next, …” at the start of each step.

Yes, but doing so would chew up a lot of tokens.
