Coda AI Live Internet

One of the biggest wish list items for generative AI projects is the ability to utilize current Internet content to shape inference results. No one wants to chat with a smarty pants who just came out of a two-year coma.

OpenAI has plugins for this, but that pathway is not supported by Coda AI.

Until recently, I typically used Google’s PaLM 2 LLM to gain access to live web content when building Coda apps that required it. That approach dried up when Google cut off PaLM 2’s access to the live Internet. PaLM 2 is still in beta and subject to these kinds of abrupt changes. Bard still supports live Internet access without charge, but there’s no Bard-specific API [yet].

Live AI, it seems, is a distant and fleeting mirage for almost everyone using Coda. Unless… you get a little creative.

What we need is a Coda AI plugin for live Internet access.

Live Internet Inferences

If you ponder the nature of live Internet-driven AI responses, the objective is relatively simple.

Given a specific query, blend the power of generative AI with the data from a search for that same query.

For example, a simple question like this should be possible in a generative AI context.

What are some of the newest EV cars?

The AI output should include links with hover previews, images, and videos that play in place. Most important, the content should be recent, since the query is literally about the newest EVs on the market.

Other types of queries that require up-to-the-minute knowledge should also be possible, like this one.

Show me the closing price of $TSLA on 25-Aug-2023.

The Coda AI responses in these examples underscore the limitation of an LLM that stopped learning a few years ago. Its lack of short-term memory wholly eliminates AI from thousands of use cases.

These comparative tests contrast Coda AI’s results with Coda AI Live results. The differences are as stark as they are exciting. In this test harness, I deliberately asked the AI fields to include dates so that you can see the modern-day references - some literally from less than 24 hours ago.

Here’s how I did it. Enjoy…

6 Likes

That is a huge help @Bill_French - thank you.

Can I ask your advice on how to parse out the items within the reply? I seem to have the ParseJSON syntax off.

For example with this query:

How do I get the first snippet? Or all snippets?

This does not work - I must have the formula syntax wrong:

Given a response like this …

The first title is:

The first result containing a snippet is:
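
In plain text, the general shape of these is something like the following, assuming the Pack’s raw JSON lands in a hypothetical column called SearchResults and that ParseJSON follows standard JSONPath conventions (zero-based array indexes):

```
thisRow.SearchResults.ParseJSON("$.organic_results[0].title")
thisRow.SearchResults.ParseJSON("$.organic_results[*].snippet")
```

The first line pulls the title of the first organic result; the second returns every snippet as a list.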

There’s a way to use a JSONPath filter to search for the first result that has a snippet, but I’d have to research that when I’m on my desktop.

As goofy and convoluted as this looks, it delivers the first result with a snippet. I’m just not an expert on JSONPath and there’s likely a better way to query that result set.
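
A somewhat tidier variant is to pull every snippet and keep the first non-blank one - again, just a sketch against the hypothetical SearchResults column:

```
thisRow.SearchResults
  .ParseJSON("$.organic_results[*].snippet")
  .Filter(CurrentValue.IsNotBlank())
  .First()
```

And if ParseJSON honors JSONPath filter expressions, a path like $.organic_results[?(@.snippet)] could do that selection inside the path itself.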

Thanks @Bill_French - that works. Really helpful!

1 Like

Very cool Bill.

I’m gonna say with a disclaimer that I haven’t familiarized myself with ChatGPT’s paid APIs and subscriptions at all, but I’ve heard a bit about plugins, which allow ChatGPT to look outside of itself.

How is your thing different from that?

I am very delighted at the chance to combine Coda, ChatGPT, and the Internet knowledge trove to create something cool for internal use.

1 Like

ChatGPT can access the Internet only via plugins. However, at present there is no OpenAI API that allows access to plugins; plugins can only be accessed via the ChatGPT web app.

1 Like

Hi Jake,

My “thing” is not actually that much of a thing. 🙂 Rather, it’s really more of an approach.

It’s a Pack that produces Google search results based on a query string. The results are shaped to minimize the size of the content for use as a prompt element (like a learner shot).

When search results are combined with Coda AI, you get the functional equivalent of a GPT plugin. But two things should be noted:

  1. This “plugin” cannot [yet] be used in Coda AI Chat. It can be used in all other Coda AI controls.
  2. This “plugin” retrieves only the top ten organic results from Google Search.

As to #2, it is quite easy to modify the API call into SerpAPI to use different search engines as well as larger result sets. But you need to be careful with large result sets for obvious reasons - prompt size and token limits.
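
For reference, those tweaks are just query parameters on SerpAPI’s search endpoint - roughly like this, with the api_key value being a placeholder:

```
https://serpapi.com/search.json?engine=google&q=newest+EV+cars&num=20&api_key=YOUR_KEY
```

engine selects the search engine and num raises the organic result count for Google above the default ten - subject to the prompt-size caution above.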

This is a good point and one reason Coda AI is unable to provide a pathway to OpenAI plugins for text completion AI processes. I think there are benefits to having live web access for both chat and completion inferencing.

1 Like

Oh cool - I recently watched this video just to kill time, but maybe you’ll find it helpful. I think his implementation compresses the character input to bypass the character limit: https://www.youtube.com/watch?v=c_nCjlSB1Zk

Ah okay, got it - very cool. So you’re setting up a summarization pipeline using APIs and using Coda as the analysis UI. Would you say that’s a good summary? Sorry in advance if I misunderstand, lol!

If you have a Coda page with the output of the Pack from @Bill_French, you can still use the “standard” Coda AI and give it the request to “summarize this page”.

Correct. Coda AI Live is an approach that simply blends in additional data from external sources - specifically Google Search. This approach could be used to capture a list of articles here in the community (which I actually do in this template). But it doesn’t change anything about Coda AI, which still works fine without injecting content from SerpAPI.

With Coda AI Live, I’m simply getting Google search results from SerpAPI and then condensing their JSON output (which is very large and has a lot of detail). The objective is bare bones - get actual live search results related to the AI prompt.
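
To make “condensing” concrete, one way to picture it is boiling each organic result down to a single compact line. Expressed as a doc-side formula purely for illustration (SearchResults is a placeholder column; the field names follow SerpAPI’s organic_results schema), the idea looks something like this:

```
Sequence(0, 9).FormulaMap(
  Format("{1}: {2} ({3})",
    thisRow.SearchResults.ParseJSON(Format("$.organic_results[{1}].title", CurrentValue)),
    thisRow.SearchResults.ParseJSON(Format("$.organic_results[{1}].snippet", CurrentValue)),
    thisRow.SearchResults.ParseJSON(Format("$.organic_results[{1}].link", CurrentValue))
  )
).Join(LineBreak())
```

Ten titles, snippets, and links collapse into ten short lines - small enough to ride along inside a prompt.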

For example -

Prompt: What is the most recent legislation signed into law by the current president of the United States?

The Coda AI prompt includes the call to the Pack:
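
That screenshot doesn’t reproduce well in text, but the prompt is shaped roughly like this, where Search Results is a hypothetical column whose formula calls the Pack with the same question:

```
Using only the search results below, answer this question and include the dates of any events you mention:

What is the most recent legislation signed into law by the current president of the United States?

Search results:
@[Search Results]
```

The @[Search Results] reference stands in for the column chip you’d insert in the AI prompt builder.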

The outcome for the column instrumented with the SerpAPI Pack (Coda AI Live) includes data from the live web, whereas the Coda AI column is attempting to respond to the identical prompt without the benefit of a learner shot comprising the true reality of the Internet. We know this because the Coda AI Live response references events that occurred more than a year after GPT-4 ceased learning.

Here’s a simple diagram.

Yep - all sorts of dancing can make it possible to jam more information into a prompt window. However, there are some issues with this approach. What if it’s still too big after it’s been compressed? Under the covers, Coda AI will truncate the content, and this could be bad if the important stuff is at the end of the body of information. Also, a sentence fragment is one of the worst things you can ship off to GPT.

The better strategy is to chunkify the content first, then perform a preamble process that shapes the information into something that will always fit into the token pipeline. If you don’t do this, your AI solution will not scale - it will eventually break, and probably when your boss is using it.

Chunkification can be done in a number of ways, almost all of which have little to do with AI - it’s just smart programming. 😉
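
As one example of the non-AI variety: a formula can take whole paragraphs from the top of a body of text until a character budget is reached, so nothing is ever cut mid-sentence. This is only a sketch - the Body column and the 4,000-character budget are invented for illustration:

```
thisRow.Body.Split(LineBreak()).WithName(paras,
  paras.Slice(1,
    Sequence(1, paras.Count())
      .Filter(paras.Slice(1, CurrentValue).Join(LineBreak()).Length() <= 4000)
      .Last()
  ).Join(LineBreak())
)
```

If even the first paragraph blows the budget, you’d still need a fallback (a hard Left() truncation, or a second pass that splits on sentences) - which is exactly the “smart programming” part.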

Newish user here - @Bill_French Sir, bar NONE, your content is the model of form and function. Every single post I’ve seen (I’ve read most of ’em) is a Masterclass in its own right.

Got a “tip jar”? Like a Ko-fi, perhaps?

Thank you for all that you do for the Community.

❤️💯🙏

2 Likes

I dropped $25 on your books from the GTC doc.
I’ll reach out over email when I’m done fighting this CORS issue on my site.
😊

1 Like

Kind words! Thank you!