OpenAI GPT-3.5 Turbo unveils fine-tuning updates & enhanced API for developers

Developers can now supply their own data to GPT-3.5 Turbo and customise the model for their own use cases.

GPT-3.5 fine-tuning update

Highlights

  • OpenAI's GPT-3.5 Turbo model now allows developers to integrate custom data and instil specific behaviours
  • This strategic move comes with plans to introduce a user-friendly interface for the fine-tuning process

In a recent development, OpenAI has opened up fine-tuning for its GPT-3.5 Turbo model, giving developers the ability to train it on custom data to improve its text generation and instil specific behaviours.

This step aims to bolster the reliability of AI while enabling the creation of distinct experiences for users, as indicated by OpenAI.

This strategic move comes with the assertion that fine-tuned versions of GPT-3.5 Turbo can match, or even surpass, base GPT-4-level capabilities on certain narrow tasks.

OpenAI's customisation updates

By allowing developers and businesses to customise the model, OpenAI has opened avenues for tailoring the AI to better adhere to instructions, maintain language consistency, refine response formatting, and even align the output's tone with a particular brand identity.

Moreover, customisation not only improves performance but also shortens API calls and reduces costs. Early testers have reported cutting prompt size by up to 90 percent by fine-tuning instructions directly into the model itself.
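As a rough illustration of how instructions get "baked in", chat fine-tuning data is supplied as JSONL, one short conversation per line. The sketch below writes such a file in Python; the brand voice, dialogue, and file name are invented for illustration, not taken from OpenAI's announcement.

```python
import json

# A minimal sketch of chat-format fine-tuning data (JSONL, one example per line).
# The system prompt, dialogue, and file name below are illustrative assumptions.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's support assistant. Reply in British English, politely and briefly."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Certainly! Open Settings, choose Security, then select Reset password."},
        ]
    },
    # ...more examples in the same shape...
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Once behaviour like this is trained into the model, the lengthy system instructions no longer need to be sent with every request, which is where the reported prompt-size savings come from.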

While this fine-tuning process currently involves preparing data, uploading requisite files, and initiating a fine-tuning job via OpenAI's API, the company has plans to roll out a user-friendly interface in the future. This interface will feature a dashboard to monitor ongoing fine-tuning tasks and manage their progress efficiently.
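In practice, those steps map onto a couple of API calls. The sketch below uses the pre-1.0 openai Python library that was current at the time of the announcement; the API key placeholder is an assumption, and training_data.jsonl is the illustrative file from the earlier sketch.

```python
import openai

openai.api_key = "sk-..."  # placeholder; substitute your own key

# 1. Upload the prepared JSONL training file
uploaded = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against GPT-3.5 Turbo
job = openai.FineTuningJob.create(
    training_file=uploaded.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll the job until it reports completion
```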

The costs of fine-tuning are as follows (a rough worked estimate appears after the list):

• Training: $0.008/1K tokens

• Usage input: $0.012/1K tokens

• Usage output: $0.016/1K tokens
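To see how these rates combine, here is a quick back-of-the-envelope calculation. The token counts and epoch count are illustrative assumptions, and it assumes training is billed per training-file token for each epoch.

```python
# Published per-1K-token rates converted to per-token rates
TRAIN_RATE = 0.008 / 1000    # $ per training token
INPUT_RATE = 0.012 / 1000    # $ per input token at inference
OUTPUT_RATE = 0.016 / 1000   # $ per output token at inference

training_tokens = 100_000    # tokens in the training file (assumed)
epochs = 3                   # passes over the training data (assumed)
training_cost = TRAIN_RATE * training_tokens * epochs        # ≈ $2.40

input_tokens, output_tokens = 500, 200                       # one sample request (assumed)
request_cost = INPUT_RATE * input_tokens + OUTPUT_RATE * output_tokens  # ≈ $0.0092

print(f"Training ≈ ${training_cost:.2f}, one request ≈ ${request_cost:.4f}")
```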

In parallel, OpenAI has unveiled updated GPT-3 base models (babbage-002 and davinci-002) that can also be fine-tuned, through a new fine-tuning API endpoint that supports pagination and improved extensibility. However, it's worth noting that the original GPT-3 base models are scheduled for retirement on January 4, 2024, as per OpenAI's announcement.

Steps for Fine-tuning

• Prepare your data

• Upload your files

• Create a fine-tuning job

• Use your fine-tuned model (a minimal usage sketch follows)
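Once the job completes, the fine-tuned model is called by name like any other chat model. The sketch below again assumes the pre-1.0 openai Python library; the "ft:" model name and prompt are placeholders, since the real name is returned by the finished fine-tuning job.

```python
import openai

openai.api_key = "sk-..."  # placeholder; substitute your own key

# The model name below is a made-up placeholder; the real one (prefixed "ft:")
# is reported by the fine-tuning job once it finishes.
completion = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo:acme::abc123",
    messages=[
        {"role": "user", "content": "How do I reset my password?"},
    ],
)

print(completion.choices[0].message.content)
```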

Looking ahead, OpenAI has revealed that fine-tuning support for GPT-4, which boasts the ability to comprehend both images and text, will be introduced later in the fall. While specifics remain undisclosed, this move indicates OpenAI's continued commitment to advancing and expanding the potential of AI technology.