Class imbalance is typically addressed by rebalancing how much each class contributes to training. Resampling techniques such as oversampling the minority class or undersampling the majority class adjust the training set toward a more even class distribution. Synthetic data generation methods such as SMOTE go further, creating new minority-class samples by interpolating between existing ones rather than simply duplicating them.
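A minimal sketch of SMOTE using the imbalanced-learn library (the toy dataset, its roughly 90/10 class split, and the random seeds below are illustrative assumptions, not from the original text):

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Toy binary dataset where class 1 makes up roughly 10% of samples.
X, y = make_classification(
    n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42
)
print("Before resampling:", Counter(y))

# SMOTE synthesizes new minority samples by interpolating between a
# minority point and one of its nearest minority-class neighbors.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("After resampling: ", Counter(y_res))  # classes now equal in size
```

In practice, resampling is applied only to the training split, never the evaluation data, so no synthetic points leak into the test set.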
Weighted loss functions take a different route: they assign a higher penalty to misclassified minority-class examples. In binary classification, for instance, weighting minority-class errors more heavily pushes the model to prioritize classifying those examples correctly rather than defaulting to the majority class.
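One way to realize this in scikit-learn is the `class_weight` parameter (a sketch; the explicit 1:9 weighting below is an illustrative choice, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(
    n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42
)

# "balanced" sets each class weight to n_samples / (n_classes * class_count),
# so errors on the rare class cost proportionally more in the log loss.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Explicit weights work too, e.g. penalizing minority-class errors 9x:
clf_manual = LogisticRegression(
    class_weight={0: 1.0, 1: 9.0}, max_iter=1000
).fit(X, y)
```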
Ensemble methods such as a class-weighted Random Forest, or loss modifications such as focal loss, can further improve performance on imbalanced data. When evaluating, metrics such as AUC-ROC or F1-score give a clearer picture than accuracy alone, since a model can achieve high accuracy simply by always predicting the majority class.
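A sketch tying these together, assuming a class-weighted Random Forest and scikit-learn's metric functions (the dataset and seeds are again illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = RandomForestClassifier(
    class_weight="balanced", random_state=42
).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # scores for the positive class

# Accuracy can look high just because the majority class dominates;
# F1 and AUC-ROC expose how well the rare class is actually handled.
print(f"accuracy: {accuracy_score(y_te, pred):.3f}")
print(f"F1:       {f1_score(y_te, pred):.3f}")
print(f"AUC-ROC:  {roc_auc_score(y_te, proba):.3f}")
```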