Thanks for noticing! I tend to think this approach is more about looking ahead into the abyss of search than it is about keywords per se. I recently wrote about the death of tags, a fixation not much different from our infatuation with keywords. Keywords may be how we map our mental models onto our relationship with LLMs, but I believe that preoccupation will soon end.
Bard already demonstrates the blend of semantic search with LLMs to deliver massively more value. Gemini promises to be explosive in this regard, and ChatGPT may ultimately become an interesting footnote in the AI march. Exciting and turbulent times lie ahead for sure.
In an early research attempt I labeled “solve for (x)”, built in Coda with PaLM 2, I hypothesized that we could rein in the LLMs’ inclination to confabulate while also enhancing productivity. The experiment began before Coda AI was released; it used two custom Packs that interfaced with both PaLM 2 and GPT to make them seem like a unified solution. It was clearly the right approach because it leaned on embedding vectors to better understand the cone of user intent. Promptology is an outgrowth of that work, which also helped me understand that chained inferences are necessary if we care deeply about bending AI to our objectives.
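To make the embedding idea concrete, here is a minimal sketch of intent routing with cosine similarity. The intent names, the three-dimensional vectors, and the `route_intent` helper are all hypothetical stand-ins; a real system would get its vectors from an embedding API.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical intent centroids (illustrative only — real embeddings
# would have hundreds of dimensions and come from a model).
INTENTS = {
    "summarize": [0.9, 0.1, 0.0],
    "extract":   [0.1, 0.9, 0.1],
    "rewrite":   [0.0, 0.2, 0.9],
}

def route_intent(query_vec, threshold=0.8):
    """Pick the intent whose vector falls inside the query's 'cone'."""
    best, score = max(
        ((name, cosine(query_vec, v)) for name, v in INTENTS.items()),
        key=lambda t: t[1],
    )
    # Below the threshold, ask a clarifying question instead of guessing —
    # one way to curb confabulation before the LLM ever runs.
    return best if score >= threshold else "clarify"

print(route_intent([0.85, 0.15, 0.05]))  # → summarize
```

The threshold is the interesting design choice: it converts "close enough to act" into an explicit, tunable boundary rather than letting the model improvise.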
LangChain and AutoGPT exist to mitigate the chat- and search-level thrashing so common among users trying to reach specific objectives. These tools compress time by letting the LLM dynamically determine what needs to be done next, and then doing it for you.
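The core loop behind that pattern is simple enough to sketch without either framework. Everything here is a stand-in: `fake_llm` plays the role of a real completion call (its "plan" is hard-coded for illustration), and `TOOLS` stands in for real search or summarization steps.

```python
def fake_llm(goal, history):
    """Pretend planner: returns the next action, or 'done'.
    A real agent would ask the LLM, passing the goal and prior results."""
    plan = ["search", "summarize", "done"]
    return plan[len(history)] if len(history) < len(plan) else "done"

# Hypothetical tool registry — each tool takes the goal, returns a result.
TOOLS = {
    "search": lambda goal: f"results for '{goal}'",
    "summarize": lambda goal: f"summary of results for '{goal}'",
}

def run_agent(goal, max_steps=5):
    """Let the 'model' decide each next step, execute it, feed it back."""
    history = []
    for _ in range(max_steps):
        action = fake_llm(goal, history)
        if action == "done":
            break
        history.append((action, TOOLS[action](goal)))
    return history

for step, result in run_agent("coda packs"):
    print(step, "->", result)
```

The time compression comes from that loop: the model, not the user, decides what happens next, so the thrashing moves out of the chat window and into code.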
The trouble with this approach: you need to be a developer to leverage these tools. And despite a lot of noise about shaping them for everyday users, let’s just say that even that will be poorly implemented unless the UX takes a form business users appreciate. Coda is one such form; Google Workspaces is another.
This is clearly the future. It is not unlike the long-in-the-tooth magic that happens behind Google Search. As users, we assume there’s a one-to-one relationship between our search query and the search engine. That is far from the case: Google has placed AI-like machinery between the query bar and the index for almost a decade.
And unlike ChatGPT, Bard is not a one-to-one relationship with the model. You can prove this by sending the same query to PaLM 2’s completion API and to Bard; the difference is stark. Google is doing in Bard what it does in search - leveraging its massive data set and 500,000 servers to generate responses far better than an LLM alone could produce.
- Aurora uses AI itself to generate prompt variations.
- Promptology asserts a structure to negate the need for prompt variations.
Both approaches carry subtle productivity friction. But this is also to say that soon, neither approach will matter: under the covers, the friction will be eliminated.
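The contrast between the two approaches can be sketched in a few lines. This is not Aurora's or Promptology's actual implementation — both functions, their parameters, and the field names are hypothetical illustrations of the two shapes.

```python
import itertools

def prompt_variations(task, styles, tones):
    """Aurora-style: fan one task out into many candidate prompts.
    (Aurora uses an LLM to generate these; itertools stands in here.)"""
    return [f"{style}, {tone}: {task}"
            for style, tone in itertools.product(styles, tones)]

def structured_prompt(role, context, task, constraints):
    """Promptology-style: assert one structure so variations aren't needed."""
    return "\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"CONSTRAINTS: {constraints}",
    ])

print(prompt_variations("summarize this doc",
                        ["concise", "detailed"], ["formal", "casual"]))
print(structured_prompt("editor", "a blog post",
                        "summarize it", "under 200 words"))
```

One approach spends inference cycles exploring the prompt space; the other spends design effort constraining it up front. The friction differs, but as noted above, both costs should soon disappear beneath the interface.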