As your wording implies, fine-tuning is restricted to the smaller models, e.g. babbage, curie etc.
You can generate the training data for this with 3.5 and 4 and tune the smaller models on the resulting data. For lots of tasks this yields robust results, and the fine-tuned models are btw also faster than 3.5 turbo.
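A minimal sketch of that workflow, assuming the legacy (pre-1.0) openai Python SDK and the legacy prompt/completion fine-tuning endpoint; the ticket-classification task, the example tickets, and the labels are made-up placeholders:

```python
import json
import openai  # legacy SDK (< 1.0); reads OPENAI_API_KEY from the environment

# Hypothetical task: label support tickets. Replace with your own inputs.
tickets = ["My invoice is wrong.", "The app crashes on startup."]

# 1. Use GPT-4 (or gpt-3.5-turbo) to generate labeled training examples.
examples = []
for ticket in tickets:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Classify the ticket as 'billing' or 'bug'. Answer with one word."},
            {"role": "user", "content": ticket},
        ],
    )
    label = resp["choices"][0]["message"]["content"].strip()
    # Legacy fine-tuning expects prompt/completion pairs in JSONL,
    # with a fixed separator at the end of the prompt.
    examples.append({"prompt": ticket + "\n\n###\n\n", "completion": " " + label})

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. Upload the file and fine-tune one of the smaller base models.
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="curie")
print(job["id"])
```

Once the job finishes, you query the resulting model via the legacy Completions endpoint with the same `\n\n###\n\n` suffix you trained on.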