OpenAI has launched fine-tuning for GPT-4o, a highly requested feature from developers. This new capability allows developers to enhance GPT-4o’s performance and reduce costs by training it with custom datasets tailored to specific needs. To support this, OpenAI is offering 1 million free training tokens daily until September 23.
Fine-tuning allows developers to customize GPT-4o’s response structure, tone, and domain-specific instructions, delivering high-quality results with minimal training data. This feature is expected to greatly impact various domains, from coding to creative writing. OpenAI plans to continue expanding customization options for developers.
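Training data for fine-tuning uses OpenAI's chat-format JSONL: one JSON object per line, each holding a list of system/user/assistant messages. A minimal sketch of preparing such a file (the assistant persona and example pairs below are hypothetical placeholders, not from the announcement):

```python
import json

# Each training example is one JSON object per line, in chat format:
# a "messages" list with "system", "user", and "assistant" roles.
# The persona and Q&A pairs here are illustrative only.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse SQL assistant."},
            {"role": "user", "content": "Count rows in the users table."},
            {"role": "assistant", "content": "SELECT COUNT(*) FROM users;"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a terse SQL assistant."},
            {"role": "user", "content": "List distinct countries of users."},
            {"role": "assistant", "content": "SELECT DISTINCT country FROM users;"},
        ]
    },
]

# Write one JSON object per line -- the JSONL layout the
# fine-tuning endpoint expects for uploaded training files.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Keeping the same system message across examples helps the model learn a consistent tone, which is the kind of response-structure customization described above.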
GPT-4o fine-tuning is now available to all developers on paid tiers. To start, visit the fine-tuning dashboard, select “create,” and choose gpt-4o-2024-08-06. Training costs $25 per million tokens, with inference at $3.75 per million input tokens and $15 per million output tokens.
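At those published rates, cost scales linearly with token counts. A quick sketch of the arithmetic (the dataset sizes are illustrative, and the per-epoch multiplier reflects the common billing convention that each pass over the data is charged; check your account's billing details):

```python
# Published GPT-4o fine-tuning rates, USD per million tokens.
TRAINING_PER_M = 25.00
INPUT_PER_M = 3.75
OUTPUT_PER_M = 15.00

def fine_tune_cost(training_tokens: int, epochs: int = 1) -> float:
    """Training cost: tokens in the dataset times epochs, at $25/M."""
    return training_tokens * epochs / 1_000_000 * TRAINING_PER_M

def inference_cost(input_tokens: int, output_tokens: int) -> float:
    """Inference cost for the fine-tuned model at $3.75/M in, $15/M out."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# Illustrative numbers: a 500k-token dataset trained for 3 epochs,
# then 2M input / 400k output tokens served.
train_cost = fine_tune_cost(500_000, epochs=3)   # 1.5M tokens -> $37.50
serve_cost = inference_cost(2_000_000, 400_000)  # $7.50 + $6.00 = $13.50
```

The free daily allotment of 1 million training tokens would cover the first two epochs of that example dataset at no charge.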
Fine-tuning is also available for GPT-4o mini, which comes with 2 million free daily training tokens until September 23. Choose gpt-4o-mini-2024-07-18 from the base model options.
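Besides the dashboard flow, jobs can be launched programmatically. A minimal sketch using the official `openai` Python SDK (v1.x), assuming a prepared training file named `train.jsonl` (a hypothetical path) and an `OPENAI_API_KEY` in the environment:

```python
import os

# Base model identifiers from the announcement.
GPT_4O = "gpt-4o-2024-08-06"
GPT_4O_MINI = "gpt-4o-mini-2024-07-18"

def launch_fine_tune(train_path: str, model: str = GPT_4O_MINI) -> str:
    """Upload a chat-format JSONL file and start a fine-tuning job.

    Returns the job ID. Requires OPENAI_API_KEY to be set and the
    `openai` package installed (pip install openai).
    """
    from openai import OpenAI

    client = OpenAI()
    # Upload the training file, then reference it by ID in the job.
    uploaded = client.files.create(
        file=open(train_path, "rb"), purpose="fine-tune"
    )
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id, model=model
    )
    return job.id

# Only attempt the network call when credentials are present.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(launch_fine_tune("train.jsonl"))
```

Once the job completes, the returned fine-tuned model name can be used in chat completion requests in place of the base model.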
Success Stories
OpenAI has partnered with select companies to test GPT-4o fine-tuning with impressive results. Cosine’s AI assistant, Genie, achieved state-of-the-art (SOTA) scores on the SWE-bench benchmark, while Distyl’s fine-tuned GPT-4o topped the BIRD-SQL benchmark.
Data Privacy and Safety
Fine-tuned models remain fully under developers’ control, with complete ownership of business data. OpenAI has implemented safety measures, including automated evaluations and usage monitoring, to prevent misuse and ensure compliance with policies.