“Scalable” is a deeply contextual term. Is OpenAI itself scalable? That depends on how many GPUs and millions of dollars you have stacked up, right?
To answer your question, we need to put a finer point on the definition of scale and the business value of using AI in a specific solution. But it’s clear that OpenAI does allow you to upload your data to serve as the basis for a new derivative model, and that derivative model is likely to provide lower-cost AI solutions.
In a broad sense, imagine you have 1,000 “objects” that each describe some knowledge facet in the context of a specific product. Let’s use CyberLandr as an example. There are about 1,000 [known] use cases for this product. But we can distill this example by focusing only on the urban use cases, which number ~350.
If we have a list of urban use cases, we can create a finely tuned model that includes all known urban use cases and easily generate questions about each use case. In fact, given a use case, we can ask GPT-3 to generate five questions about it. Armed with the question variants for each use case, we can build a training data set using three of the questions, holding back two questions as our test data set. The purpose of this work is to create a chat tool that can carry on a conversation with prospective CyberLandr buyers. We have another project that helps CyberLandr owners locate unique places to utilize Cybertruck and CyberLandr in new ways - i.e., wine country, farm tourism, deep overlanding where electricity may be scarce.
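As a rough sketch of that workflow, here is what the question-variant generation prompt and the 3/2 train/test split might look like. The function names and prompt wording are my own assumptions for illustration, not anything from an actual CyberLandr project, and the GPT-3 call itself is left out - only the data handling is shown:

```python
import random

def question_prompt(use_case: str) -> str:
    """Build the prompt we might send to GPT-3 to get five question
    variants for one use case. (Hypothetical wording.)"""
    return (
        "Write five distinct questions a prospective buyer might ask "
        f"about this CyberLandr use case:\n\n{use_case}\n\nQuestions:"
    )

def split_variants(questions: list[str], seed: int = 42) -> tuple[list[str], list[str]]:
    """Split five question variants into 3 for training, 2 held out for testing."""
    assert len(questions) == 5, "expecting five variants per use case"
    shuffled = questions[:]
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for repeatability
    return shuffled[:3], shuffled[3:]

# Placeholder variants standing in for GPT-3's output:
variants = [f"Question {i}?" for i in range(1, 6)]
train, test = split_variants(variants)
print(len(train), len(test))  # 3 2
```

In practice you would run `question_prompt` through the completions endpoint for each of the ~350 urban use cases, parse the five questions out of the response, and then apply the split.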
We submit the questions and answers to GPT-3 as the basis for our new model, and then we test that model using the test data set that we withheld from the training process. We can easily gauge the performance with confidence scores and develop the data we need to determine whether more training is required.
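To make that concrete: the legacy GPT-3 fine-tuning endpoint accepted training examples as JSONL prompt/completion pairs (with a separator suffix on the prompt and a leading space on the completion, per OpenAI’s old guidance). Below is a minimal sketch of preparing that file and a simple accuracy gauge over the held-out questions. The scoring helper is a stand-in of my own - the real test would call your fine-tuned model:

```python
import json
from typing import Callable

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs into the prompt/completion
    JSONL format the legacy GPT-3 fine-tune endpoint expected."""
    lines = []
    for question, answer in pairs:
        record = {
            "prompt": question + "\n\n###\n\n",  # separator marks end of prompt
            "completion": " " + answer,           # leading space per old guidance
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

def accuracy(test_pairs: list[tuple[str, str]],
             predict: Callable[[str], str]) -> float:
    """Fraction of held-out questions whose predicted answer matches the
    expected one; `predict` would wrap a call to the fine-tuned model."""
    hits = sum(1 for q, a in test_pairs if predict(q).strip() == a.strip())
    return hits / len(test_pairs)

# Illustrative pair (fabricated content, just to show the shape):
pairs = [("Does CyberLandr work for city camping?", "Yes, it is designed for it.")]
print(to_jsonl(pairs))
```

Exact-match accuracy is a crude gauge for free-form answers; in practice you would likely score with a similarity measure or human review, but the train-upload-then-evaluate loop is the same.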
Everything in GPT has a cost, but a fine-tuning approach is almost universally less costly than prompt-engineering your solution with preamble answers stuffed into every request. This is why I believe fine-tuning is likely one of the best ways to scale up GPT projects to a level of financial practicality.
IMPORTANT: AI models need data, and lots of it. The more data, the more valuable your fine-tuned model will become. A fine-tuned model is the ideal way to build ever-increasing value in AI, and it serves us well as a framework for improving the model.
Almost every integration with OpenAI [thus far] has been tactical; everyone wants an AI checkmark on their product. If your AI project is strategic, you will create workflows that harvest user experience in ways that improve the model - ergo, it needs an element of ML as much as it begins with AI.
p.s. - If Coda were smart (and there’s plenty of evidence to support this), everything in this community would be harvested in real-time into a fine-tuned model that could carry on exceedingly helpful ChatGPT-like conversations. Of course, almost everything CODAChat might say would likely mention @Paul_Danyliuk in some way. LOL