@llama_index
A core use case for @OpenAI’s new fine-tuning API is fine-tuning gpt-3.5-turbo on gpt-4 outputs. Our new `OpenAIFineTuningHandler` makes data collection for this effortless. When running RAG w/ GPT-4, automatically get a dataset you can fine-tune a cheaper model on 👇 https://t.co/nmTaO3S6tJ
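Roughly, the idea is to log each (query, GPT-4 response) pair from your RAG runs in the chat-format JSONL that OpenAI's fine-tuning endpoint expects. A minimal sketch of that dataset format (the helper name and sample pair here are illustrative, not the handler's actual internals):

```python
import json

def to_finetuning_event(query: str, response: str) -> dict:
    """Format one (query, GPT-4 response) pair as an OpenAI
    chat-format fine-tuning example."""
    return {
        "messages": [
            {"role": "user", "content": query},
            {"role": "assistant", "content": response},
        ]
    }

# Hypothetical pairs standing in for logged GPT-4 RAG outputs.
pairs = [
    ("What is a vector index?",
     "A vector index stores embeddings so queries can retrieve similar documents."),
]

# Write one JSON object per line — the shape expected by OpenAI fine-tuning.
with open("finetuning_events.jsonl", "w") as f:
    for query, response in pairs:
        f.write(json.dumps(to_finetuning_event(query, response)) + "\n")
```

The resulting `finetuning_events.jsonl` can then be uploaded to OpenAI's fine-tuning API to train gpt-3.5-turbo on the GPT-4 answers.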