Howdy Everybody!
I have been playing around with the GPT3 pack and having a lot of fun, but noticed it’s only using the completions API so far. Tables in Coda have the handy-dandy search function, but what if you wanted to sort a table based on similarity between a column and another string?
I put together a quick prototype GPT3 Similarity API modification to the base pack, along with a short demo! You’ll have to copy it and put in your own API key, but it seems to be working as expected!
I like where you’re going with this. If this got blended into search I wonder what the UX would need to be to let the user know whether you’re doing a ‘hard’ or ‘soft’ (similarity) search.
Hmm, I copied the doc and it only allows me to remove the pack.
Oh weird, sorry let me check the sharing settings on the pack copy.
Very useful approach for learning more about AI in Coda.
I’ve often thought about search and GPT3, but I keep hitting the practicality wall. To do this, I assume you need to contrast (in a GPT3 inference) each row against the term “royalty” to get back an inference score. If so, scale and cost begin to make this impractical for anything but small data sets. Am I interpreting the technical approach correctly?
I have also wondered whether GPT3 has a role in search when combined with an inverted index (Lunr, for example): the index would produce the “short list,” and that result set would then go to GPT3 to pick the most perfectly matched top three hits.
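Roughly the shape of what I’m imagining, purely as a sketch (Lunr for the short list; `similarity()` is a stand-in for whatever GPT3-based comparison you’d use):

```ts
import lunr from "lunr";

type Row = { id: string; text: string };

// Stage 1: a local inverted index (Lunr) gives a cheap short list.
// No GPT3 calls involved here.
function shortList(rows: Row[], query: string, k = 20): Row[] {
  const idx = lunr(function () {
    this.ref("id");
    this.field("text");
    rows.forEach((r) => this.add(r));
  });
  const byId = new Map(rows.map((r): [string, Row] => [r.id, r]));
  return idx.search(query).slice(0, k).map((hit) => byId.get(hit.ref)!);
}

// Stage 2: only the short list goes to GPT3. `similarity` stands in for
// an embedding-based comparison between the query and a row's text.
async function topThree(
  rows: Row[],
  query: string,
  similarity: (query: string, text: string) => Promise<number>
): Promise<Row[]> {
  const candidates = shortList(rows, query);
  const scored = await Promise.all(
    candidates.map(async (r) => ({ row: r, score: await similarity(query, r.text) }))
  );
  return scored.sort((a, b) => b.score - a.score).slice(0, 3).map((s) => s.row);
}
```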
Yeah @Bill_French, you’ve got the high-level outline correct. The API I use with this pack returns a 1000+ number array that’s a ‘vector’ representation of how GPT3 interprets the meaning of the input. I feed in the prompt (‘Royalty’) and the text for each row, and you can find similarity by doing a distance measurement between those two vectors just like you would with spatial coordinates. You can also do some interesting vector math with these kinds of embedding vectors: adding the vectors for ‘Royalty’ and ‘Male’ should (I haven’t tested this with GPT3) get you something closer to ‘King’ than to ‘Queen.’
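In case it helps to see it concretely, the distance part is just ordinary vector math. Here’s a rough sketch of the similarity calculation (plus the untested ‘Royalty’ + ‘Male’ idea) in TypeScript:

```ts
// Cosine similarity between two embedding vectors. Closer to 1 means
// GPT3 "thinks" the two pieces of text mean similar things.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Element-wise sum, for the (untested!) 'Royalty' + 'Male' ~ 'King' idea.
function addVectors(a: number[], b: number[]): number[] {
  return a.map((v, i) => v + b[i]);
}

// Given an embedding for the prompt and one per row, sorting is just:
// rows.sort((r1, r2) =>
//   cosineSimilarity(promptVec, r2.vec) - cosineSimilarity(promptVec, r1.vec));
```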
It does need to make a call per row, and the pricing is around $0.0004 per 750 words, so it really depends on how much text you’re feeding in (embedding 1,000 rows of ~75 words each, for example, would run about $0.04). Just like everything else, it’s a balance of cost and capability.
BUT, I was tinkering around with this for some things I’m prototyping for our team and wanted to share an extended use of the OpenAI API.
This is where the grinch in me says we need to think about caching GPT3 results associated with a specific domain, then using that information to perform better localized search, and possibly as future prompts, to reduce the load on the inferencing process. Coda documents are ideal micro “domains” of knowledge, and they have nearly the full set of data harvesting and organizational elements needed to create documents that could themselves feed future AI models.
As far as I can tell Coda is already doing a pretty good job of caching what can be cached (kudos, Codans), but I haven’t cracked open Wireshark to see exactly what the network activity looks like. I’m (very) new to the Coda-verse, but from experience working with other low-entry, high-ceiling tools, it probably all depends on what you’re doing and how you set up the formula.
For example, I’m using the GPT3 completions API to write boilerplate descriptions for ~800 rows of tasks, epics, etc., and using this embedding API to let some folks on our team quickly find what’s relevant to whatever a customer is talking to them about. For us that’s a negligible cost when we’re selling multi-year implementation contracts. Now, if we were a smaller team selling cheaper services, or if we ever hired an armada of junior salespeople to flood everyone’s inboxes with cold sales emails, that might change.
Love the conversation, and thanks for chiming in @Bill_French. If you have any other Q’s or curiosities regarding AI, I’m always game to see if I can throw together a quick prototype.
Hi, would the solution you’ve presented here run in roughly real time for small datasets? I have a similarity search like this, but I actually train my data beforehand (as a background daily job). The training has to go through every row at least once, so it’s slow, but the search is relatively quick.
Hi @Kameron_Baumgardner, I see you are quite familiar with the OpenAI GPT3 Pack, so I wanted to ask a question. I used it a bit a few days ago, and it’s great, but I don’t quite understand the difference between these functions:
Prompt() and QuestionAnswer() seem basically the same; I don’t understand how they would be used differently. And what does GPT3PromptExamples() do exactly? Can you give an example of how to use it, or link to a guide/tutorial?
This is fairly similar, @Jake_Nguyen: I send each row to the OpenAI API for embeddings, which returns a ~1000-number array. The ‘search’ is essentially treating those arrays like coordinates in ~1000-dimensional space and measuring the distance between them.
Luckily, the OpenAI API makes this a fairly quick process since you can submit requests in parallel.
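If you’re curious what the per-row calls look like, here’s a rough sketch using plain fetch and Promise.all. (The pack itself goes through Coda’s fetcher, and the model name below is just a placeholder, so treat this as illustrative rather than exactly what the pack does.)

```ts
// Fetch an embedding for one piece of text from the OpenAI embeddings
// endpoint. The API key and model name are placeholders.
async function embed(text: string, apiKey: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-ada-002", input: text }),
  });
  const json = await res.json();
  return json.data[0].embedding;
}

// All requests go out in parallel; each row still costs one API call.
async function embedRows(rows: string[], apiKey: string): Promise<number[][]> {
  return Promise.all(rows.map((r) => embed(r, apiKey)));
}
```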
Feel free to copy the doc and the pack and give it a go!
@Fran_Vidicek, looking at the code for the pack, it looks like the Prompt() formula uses the same API endpoint as the QuestionAnswer() formula, but your input for the latter is injected into a string of Q&As, which is probably intended to put GPT into a ‘question answering’ mode. You would probably get the same response if you asked a question in either formula; with the Prompt() formula, however, you can go beyond questions and give it commands or really whatever. As an example, I’ve used it to help generate boilerplate content by just telling it to do so.
With GPT3PromptExamples(), it looks like what you’re doing is setting up several example prompts and desired responses, then submitting a new prompt for GPT to complete given those examples. So, one example I can think of: if you fed it “Water, Fire, Snow” as the example prompts and “Blue, Orange, White” as the example responses, then provided “Dirt” as your prompt, GPT should ideally return something like “Brown.” That’s my best guess, at least.
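In other words, my guess is that it stitches the examples into a single completion prompt, something along these lines (purely illustrative; the pack’s actual formatting is probably different):

```ts
// Build a few-shot completion prompt from example inputs/outputs plus
// the new prompt. The "Input:/Output:" labels are just one possible format.
function buildFewShotPrompt(
  examplePrompts: string[],
  exampleResponses: string[],
  newPrompt: string
): string {
  const shots = examplePrompts
    .map((p, i) => `Input: ${p}\nOutput: ${exampleResponses[i]}`)
    .join("\n\n");
  return `${shots}\n\nInput: ${newPrompt}\nOutput:`;
}

// buildFewShotPrompt(["Water", "Fire", "Snow"], ["Blue", "Orange", "White"], "Dirt")
// -> GPT3 should ideally complete with something like "Brown".
```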
Sorry for the delayed response.
I wasn’t referring to caching as it relates to the app itself. Instead, I’m thinking about GPT3’s responses: making them sustainable without additional calls to GPT itself. It’s no different from managing costly reverse geo-encoding processes:
… given any lat/lng, transform it into a street address, but make sure you never do that more than once at a stated geo-precision.
This type of location caching makes it possible to create massive databases of specific mailing addresses by simply driving the streets with a real-time geo-tagging device. Want to know every address on the LA-transit route network? Just geo-encode the routes every two seconds. We did that in LA with our transit appliances and over 400 days, we had 500,000 addresses without paying a dime for geo-encoding fees (the free tier is 2500 encodings per day).
I have a hunch that similar caching approaches could create relatively large collections of shared (GPT3) knowledge that could be sustained and utilized by larger audiences without breaking the bank.
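Conceptually, something like this: cache keyed on the prompt (normalized to some stated ‘precision’), and only call GPT3 on a miss. `callGPT3` here is just a stand-in for the real API call, and in practice the cache would live in a shared store rather than in memory.

```ts
// A prompt-keyed cache: GPT3 is only consulted when a normalized prompt
// hasn't been seen before.
const cache = new Map<string, string>();

// Normalization is the "stated precision": how aggressively you collapse
// near-identical prompts into one cache key.
function normalize(prompt: string): string {
  return prompt.trim().toLowerCase();
}

async function cachedCompletion(
  prompt: string,
  callGPT3: (p: string) => Promise<string> // stand-in for the real API call
): Promise<string> {
  const key = normalize(prompt);
  const hit = cache.get(key);
  if (hit !== undefined) return hit;     // no API cost on a hit
  const result = await callGPT3(prompt); // only misses reach GPT3
  cache.set(key, result);
  return result;
}
```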
Yeah, I think I understood you correctly, @Bill_French. The Coda OpenAI pack seems to only submit when there’s a change, so in my case I’m getting those ~1000-number arrays per row, but it’s saving them and only resubmitting when I change the data for that row.
However, creating a shared record of these coordinates might be infeasible except for some subset or type of responses. Otherwise, you’d really only be able to save an API call if folks are putting in the exact same prompts.
I thought the definition of stale was time-based, not content based.
Perhaps I’m using the term a bit liberally; however, GPT3 is a fixed model (as far as I’m aware), so if you submit the same prompt it should give you the same response regardless of when. You could certainly set up an automation to resubmit prompts to keep embeddings or responses ‘fresh,’ but I’d be surprised if there was any difference. That would make for a good experiment, though!
Yes, it is. But I was referring to Coda’s definition of stale. As I understand it, Packs will refresh data even when the record or field referenced hasn’t changed. I thought it was based on some magic inside the Pack engine that determines when and if a cache hit will be passed over, thus triggering another request and response.
Um, not necessarily. It depends on the temperature and top_p settings. I’ve seen vastly different responses to identical prompts submitted back-to-back and even days later. The results aren’t changing because the model has changed; they’re changing because the temperature and top_p settings are encouraging a degree of random freshness.
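For instance, here’s a sketch against the raw completions endpoint (the model name is just a placeholder): with temperature set to 0 the output should be essentially repeatable, while the higher defaults deliberately introduce variety.

```ts
// Same prompt, different sampling settings. temperature 0 is essentially
// repeatable; around 0.7 (a common default) identical prompts can come
// back noticeably different.
async function complete(prompt: string, temperature: number, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "text-davinci-003", // placeholder model name
      prompt,
      max_tokens: 64,
      temperature, // 0 => pinned down; ~0.7 => varied
      top_p: 1,
    }),
  });
  const json = await res.json();
  return json.choices[0].text;
}

// complete("Describe royalty in one sentence.", 0, KEY)   -> repeatable
// complete("Describe royalty in one sentence.", 0.7, KEY) -> varies run to run
```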
Hi, thank you for your reply. This method wouldn’t be scalable for millions of rows, right?
Unless OpenAI lets you store and train a database on their servers so that you don’t need to give the same text input every time.
Wish I was smart enough to understand what this meant in detail. I will bookmark this and come back at another time.