If true, we’d have to explain how I was able to get this working.
When you use fine-tuning, you upload your own training data (prompt-and-answer pairs) so the model can learn your preferred responses. In that process, OpenAI hands you back a pointer to a new model that is a derivative of the base model the fine-tuning was run against. You can access your derivative model as you would any other base model through the API. Ergo, your training data and new model exist inside OpenAI in joint tenancy and are accessible through, and only through, your API token.
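As a rough sketch of that flow, here is what it looks like with the official `openai` Python client. The model name and file path are illustrative, not prescriptive; the API functions shown require a live API key, so they are wrapped in helpers rather than executed directly.

```python
import json


def build_training_record(prompt: str, answer: str) -> str:
    # One JSONL line in the chat-format training data OpenAI expects
    # for fine-tuning: a user message paired with the desired reply.
    return json.dumps({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]
    })


def start_fine_tune(client, training_path: str) -> str:
    # Upload your training file, then kick off the fine-tuning job.
    # The job eventually yields a derivative model name (ft:...),
    # which only your API token can call. Not executed here --
    # it needs a real OpenAI client and credentials.
    uploaded = client.files.create(
        file=open(training_path, "rb"),
        purpose="fine-tune",
    )
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id,
        model="gpt-4o-mini-2024-07-18",  # illustrative base model
    )
    return job.id
```

Once the job completes, you query the derivative model by its returned `ft:...` name through the same completions endpoint you would use for any base model, authenticated with your own API token.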
I suspect this is all achieved virtually inside the OpenAI platform; the base model isn't actually duplicated. Rather, your training data is likely applied as a lightweight wrapper layer that is simply associated with your new model name.
One must certainly wonder: why would OpenAI let you host your training data and model derivatives on their servers? The short answer is money; they want to charge you a fee to query your own data.