Underfitting occurs when a neural network fails to capture the underlying patterns in the data, leading to poor performance on both the training and test sets. One common remedy is to increase the model's capacity by adding more layers or more neurons per layer, allowing the network to represent more intricate patterns.
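As a concrete illustration, the sketch below contrasts a small network prone to underfitting with a higher-capacity variant. PyTorch is assumed here (the text names no framework), and the layer sizes and the 784-input, 10-class task are illustrative choices, not values from the text.

```python
import torch.nn as nn

# A small network that may underfit: a single narrow hidden layer.
shallow = nn.Sequential(
    nn.Linear(784, 16),   # 784-dim inputs, only 16 hidden units
    nn.ReLU(),
    nn.Linear(16, 10),    # 10 output classes
)

# A higher-capacity variant: more layers and more neurons per layer,
# so the network can fit more intricate patterns in the data.
deeper = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
```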
Ensuring sufficient and high-quality training data is another important factor. If the dataset is too small or unrepresentative, the network may struggle to learn effectively. Data augmentation techniques, such as flipping or rotating images, can artificially expand the dataset and improve learning.
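For image data, such an augmentation pipeline might look like the following sketch. torchvision is assumed, and the flip probability, rotation range, and choice of CIFAR-10 are arbitrary illustrative values.

```python
from torchvision import datasets, transforms

# Each epoch, the model sees a randomly flipped and rotated variant
# of every training image, effectively enlarging the dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # mirror half the images
    transforms.RandomRotation(degrees=15),   # rotate up to +/-15 degrees
    transforms.ToTensor(),
])

# Example usage with a standard dataset (CIFAR-10 chosen arbitrarily).
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=augment)
```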
Adjusting hyperparameters such as the learning rate or batch size can also mitigate underfitting. A higher learning rate may speed up convergence when training stalls, while a smaller batch size produces more frequent (if noisier) parameter updates, exposing the model to a varied stream of examples over each epoch. Tuning these settings can significantly enhance the model's performance and reduce underfitting.
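A minimal sketch of these two adjustments, again assuming PyTorch; the synthetic data, the batch size of 32, and the learning rate of 0.1 (raised from a more conservative 0.01) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data: 1024 samples of 784 features, 10 classes.
data = TensorDataset(torch.randn(1024, 784),
                     torch.randint(0, 10, (1024,)))

# Smaller batches -> more frequent parameter updates per epoch.
loader = DataLoader(data, batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# A higher learning rate (0.1 instead of, say, 0.01) can accelerate
# convergence when the model is underfitting.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
```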