Fine-tuning is the process of taking a pre-trained neural network and adapting it to a new but related task. This typically involves freezing the weights of the earlier layers (which capture general features) and training only the later layers (which learn task-specific patterns).
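A minimal sketch of this freezing step in PyTorch, assuming torchvision's resnet18 as the pre-trained backbone and 10 output classes for the new task (both are assumptions; any pre-trained model follows the same pattern):

```python
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze all existing layers: their weights keep the general features
# learned during pre-training and receive no gradient updates.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new one for the target task.
# Its parameters are trainable by default, so only this layer will be updated.
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 classes is an assumed example
```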
Fine-tuning is especially useful when limited data is available for the new task. By leveraging the knowledge already learned by the pre-trained model, it enables faster convergence and better performance than training from scratch. For instance, a model pre-trained on ImageNet can be fine-tuned for a specific task such as medical image classification.
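Continuing the sketch above, the training loop below updates only the new classification head; the random tensors stand in for a small task-specific dataset (e.g. medical images) and are purely illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for a small labeled dataset for the new task.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 10, (32,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

criterion = nn.CrossEntropyLoss()
# Optimize only the parameters that are still trainable (the new head).
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

model.train()
for epoch in range(3):  # a few epochs often suffice when the backbone is frozen
    for batch_images, batch_labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
```

Because gradients flow only through the small new head, each step is cheap and the model is far less likely to overfit the limited training data.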
This makes fine-tuning a popular approach to transfer learning, in which a model trained on one task is reused for a similar task, saving both time and computational resources.