Promptology: Creating and Managing AI Prompts for Greater Productivity

AI at Work Challenge

This tool (Promptology) was not designed for Coda’s AI at Work Challenge. It existed long before the challenge was announced and has been delivering AI productivity gains for many months - even before Coda AI was in alpha testing. It originally utilized OpenAI APIs through a custom Pack.

This community article is partly responsible for its existence, along with earlier influence from Tesla and SpaceX. This version has been streamlined for open submission to the challenge, but it is no less powerful than the one my team and I use.

Quick-Start Video

To get a sense of Promptology in three minutes, watch this 6-minute video at 2x speed.

As Coda users, we are quick to focus on building “the thing”. Promptology’s relevance to the AI at Work challenge parallels the bumpy road of scaling AI productivity so you can build many things, not just solve a specific problem with AI. The Promptology Workbench is simply the “thing” that helps you build many other “things”. For this reason, it’s essential to think about the extended productivity this tool can produce. Promptology is perhaps to AI productivity what compound interest is to Bank of America.

I certainly want to win the challenge, but I’ve already won in a big way. The Codans have ensured that we are all winners.

With that preface out of the way, let’s first examine why Promptology exists.

Productivity in the Age of Artificial General Intelligence

At no time in modern history has a single skill gated our future productivity so profoundly. The ability to …

Create, Test, and Reuse Prompts

Promptology is about one thing - facing the many challenges of AI prompt development and getting through this gate as productively as possible. Remember how unproductive you were when you first tried to use Google Search? Over time, your search queries slowly improved and became second nature. Prompt development is somewhat like that, but more challenging to master, and you may have already experienced many misfires and frustrations with AI. Promptology is designed to avoid some of the misfires and, hopefully, some of the frustration.

This video walkthrough has a demo of the prompt workbench beginning at about 3:30.

Coda

In my view, no product is more perfectly suited for AI prompt development and testing than Coda. To that end, Promptology uses almost every essential feature of Coda. From automation actions to JSON parsing, this tool leans on many of the advanced capabilities of Coda. But there are key dependencies on simple ones too. Buttons, for example, are pervasively employed in AI prompt management processes.

Coda AI

As you become more familiar with Promptology, you may begin to understand how Coda AI is central to this tool. At first glance, the assumption is that Coda AI simply takes the prompt answers and hands them off to the AI Block for inferencing. One and done. However, this tool uses AI to support every aspect of the workflow and reduce your effort.

AI’s Productivity Promise

Coda AI brings with it the prospect of effortless content generation. LLMs (large language models) deliver on this promise when we ask them to create words. But they also have a well-deserved reputation for generating words that aren’t what you might expect from an intelligent system, however artificial it may be. When the AI decides to make stuff up, we think it’s hallucinating. It’s not; it’s behaving exactly as LLMs are designed to do — expound on a topic and embellish as needed.

At the heart of an AI solution is the prompt, which attempts to guide the LLM to a satisfactory output.

Ironically, we benefit greatly when LLMs exercise a degree of verbosity. But this also comes with the possibility that the AI may be too exuberant, resulting in long-windedness or the prospect of it abandoning reason altogether. This is the dark side of artificial general intelligence (AGI). Lacking specific guidance in carefully constructed prompts, LLMs are left to generalize on their own - it’s what they do well.

As so eloquently stated by Alberto Romero (Algorithmic Bridge), ‘boundless creativity’ is AI’s superpower.

The hallucination problem (also called confabulation, which is my preferred term, as well as fabrication or simply making stuff up) refers to the tendency of language models (LMs) to generate text that deviates from what’s objectively true (e.g., ChatGPT saying 2 + 2 = 5 or GPT-3 implying Steve Jobs was still alive in 2020).

Although confabulation is pervasive—and a no-go when factuality is required—it doesn’t matter in some cases. ChatGPT is great for tasks where truthfulness isn’t relevant (e.g., idea generation) or mistakes can be assessed and corrected (e.g., reformatting). When boundless creativity is central (e.g., world-building), confabulation is even welcome.

AI’s Productivity Paradox

As early adopters of AI, we have grand visions of escalating our work output. There’s no shortage of media outlets and multi-part Twitter threads that have convinced the masses that AI makes digital work a breeze. Reality check: it doesn’t.

ChatGPT and Coda AI users typically experience poor results because successful prompts are not as easy to create as you first imagine. How hard can it be? It’s just words. The reality is that it is both hard and complex, depending on the AI objective.

Two aspects of prompt development are working against us.

  1. Prompt Construction - most of us “wing” it when building prompts.
  2. Prompt Repeatability - most of us are inclined to build AI prompts from scratch every time.

Getting these two dimensions right for any Coda solution takes patience, new knowledge, and a little luck. I assert that …

The vast productivity benefits of AI are initially offset and possibly entirely overshadowed by the corrosive effects of learning how to construct prompts that work to your benefit.

The very nature of prompt development may have you running in circles in the early days of your AI adventure. You’ve probably experienced this frustration with ChatGPT or Bard. It’s debilitating and often frustrating — like playing a never-ending whack-a-mole game.

This is what you can expect to experience as you wade into AI. I created this visual based on tracking metrics I’m gathering in another Coda AI solution I built to manage my own content production workflows.

Prompt Development Frustration

  • You make a prompt; it kinda works.
  • You modify it; it works a little better.
  • Rinse and repeat many times. The output sometimes gets better, but often gets worse.
  • You’ve forgotten what worked, and this process continues as you probe for better results.
  • Eventually, you settle on whatever prompt was working when you ran out of tolerance for further development.
  • You have no record of the attempts or a methodology for testing and improving your prompt text.

Prompt Construction

In many ways, prompt engineering is not unlike software development. And while Coda itself possesses the underlying infrastructure needed to transform prompt construction into a science, this template does not begin to explore all of the possible remedies that may produce AI advantages and higher productivity. But there’s one prompt lesson that we should all learn right away.

LLMs speak before they think. The challenge is to get them to think before they speak.

It is well established that prompt-building is fraught with counterproductive issues. In Promptology, I provide a glimpse that may help you nudge the AI productivity lifecycle in your favor. Many examples demonstrate how to get LLMs to think before acting.

Developing good prompts depends on your AI objectives. However, one aspect of productive prompt construction, and indeed any AGI activity, is gated by a testing protocol. You need to frame your prompts so they can be tested quickly and measured, however subjective your tests may be.

Prompt Structure

Reliable prompts that produce relatively consistent outputs generally follow a pattern that includes these components (a brief hypothetical example follows the list).

  • Role - the persona of this AI.
  • Task - stated clearly and definitively, explaining what you want the AI to do.
  • Goal - a concise statement about the final output of this prompt.
  • Steps - the precise steps you want the AI to follow to achieve the goal.
  • Rules - any additional guidelines that you want the AI to consider.
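
For example, here is how a hypothetical blog-summary prompt (not one of the included examples) might answer each component:

  • Role - You are a senior technical editor.
  • Task - Summarize the article pasted below for a busy executive audience.
  • Goal - A 120-word summary that ends with one actionable takeaway.
  • Steps - First list the article’s three main claims, then draft the summary from that list.
  • Rules - Do not introduce facts that are not in the article.

Notice how the Steps component forces the model to think before it speaks, which is the prompt lesson described above.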

Promptology adheres to this prompt structure by guiding you to answer questions about each component. Coda Makers are, of course, free to re-engineer this pattern. However, this pattern is used by expert prompt engineers and has proven successful.

Prompt Repeatability

Building prompts for your personal and business activities is a big challenge. Capturing them in a way that supports reuse is just as important, and thanks to Coda it is straightforward. Prompt repeatability is, at its core, a function of basic database design, and Coda rises to this challenge. Saved prompts include the prompt constructed from the Role, Task, Goal, Steps, and Rules components and may be copied with a button click.

Saved prompts can also be pulled back into the Prompt Template with the Restore button in the Saved Prompts table. In the near future, this process will be increasingly valuable as live data becomes common in LLMs.

Prompt Workbench

Overview

The Prompt Workbench is not particularly magical. If you know how to use Coda, you will be delighted to know that this is a standard table with fields that provide the essential elements for constructing viable prompts, testing them, and measuring their performance. It is ideal when you have an idea for a prompt and need to run some quick tests while making subtle changes.

There are three basic parts on the workbench -

  1. Prompt Template - You provide the answers for each of the prompt components, and you’re ready to test.
  2. Prompt Outcome - The Prompt Outcome displays the results of the latest AI test. It also includes the elapsed time to generate the outcome and the estimated minutes saved for each use of the prompt (a quick worked example follows this list).
  3. Saved Prompts - The Saved Prompts table is a collection of all your saved prompts, their latest prompt outputs, and analytics.
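
To make the time-saved arithmetic concrete (the numbers here are hypothetical): a prompt estimated to save 15 minutes per use and run 40 times in a month represents 600 minutes, or roughly 10 hours, recovered by a single saved prompt.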

Workbench Examples

The workbench comes with about a dozen example prompts in the Saved Prompts table. These examples demonstrate basic prompt designs and allow you to start using the streamlined methodology for testing them on the workbench. Use the restore feature and test immediately to get a feel for the workflow.

These examples are not perfect and may actually be utterly irrelevant to your work. However, this is not about specific prompts but about creating good prompts that work well for you. They are a perfect selection for you to begin experimenting with changes. You can even rename them and save them as new prompts.

Create a Prompt

You can create a prompt from scratch with the New Prompt button. Note: This button is disabled when there are unsaved changes to the existing prompt. You can also import a prompt. This option is below the Prompt Template table and offers quick access to the Prompts Template Library, a collection of more than 120 examples culled from various sources. Imported prompts are parsed into Promptology’s prompt components, where you can quickly polish and test them.

Streamlined Workflow

Many iterations of the workbench were created and rejected over the past three months. This template version is providing me and my team with enhanced AI prompt development productivity. Even so, it is not perfect. You may see many ways to improve what I created, but that’s the promise of Coda itself; everything is extensible.

The workflow is simple:

Start with a Prompt Template → Add Your Insights → Generate an Outcome

You can be ready to test and hone your prompts by answering just seven questions. The questions represent a template success pattern that has worked well for me. If you answer these questions, there’s a good chance your prompt will work well almost immediately.

Prompt Reuse

Given a prompt that works well for a specific objective, reusing it is easy: give it a new name, then change the essential parameters. The bold pink words in this prompt are the only things that change to replicate the prompt for a different location.

Some prompts are extremely simple to reuse.
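
For example, a hypothetical saved prompt whose Task reads “Create a three-day, food-focused itinerary for Lisbon” can be reused for a completely different trip by changing only the city name; the Role, Goal, Steps, and Rules stay exactly as saved.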

UPDATE: 13-Jul-2023 – CODA PACK PROMPT

On a hunch, I decided today to test the hypothesis that a well-constructed prompt could generate a functional Coda Pack that would build and execute without modification. Within minutes of providing a reasonable Pack example and three prompt adjustments, the code was generated, built, and executed correctly.

This is just one very simple test, but it demonstrates that, with a suitable prompt, Coda AI can generate code in the flavor and style that the Coda Pack SDK requires.
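
The generated Pack itself isn’t reproduced here, but as a point of reference, below is a minimal sketch of the kind of self-contained Pack the SDK expects. The formula name and logic are illustrative only, not the code from my test.

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// A single, trivial formula. The structure (parameters, result type, and the
// execute callback) is what generated code must get right for the Pack to build.
pack.addFormula({
  name: "ReverseText",
  description: "Reverses the text passed in.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "text",
      description: "The text to reverse.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([text], context) {
    // Reverse the string and return it as the formula result.
    return text.split("").reverse().join("");
  },
});
```

The execute callback receives the parameter values as an array, which is exactly the kind of SDK-specific convention that a good example in the prompt helps the model get right.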

Happy prompting…

The description is quite a thesis :sweat_smile: but the template part (the workbench) is actually pretty simple and great. Love it! :raised_hands:

P.S. The “time saved” indicator is a brilliant detail.

I’m often accused of excessive wordiness. :wink: But, when it comes to the topic of productivity, I try to dive deep to uncover the hidden side of economic value.
