Zero-shot learning (ZSL) and few-shot learning (FSL) are two approaches in machine learning that aim to recognize or classify new categories of data with few or no labeled examples. In zero-shot learning, the model is trained on a set of classes and is then expected to generalize to completely unseen classes based on auxiliary information, such as attributes or descriptions of those classes. For instance, if a model learns to identify animals like cats and dogs, it could be asked to identify a horse if it knows semantic characteristics like "has four legs, mane, and hooves," even if it has never seen a horse in training data.
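A minimal sketch of this attribute-matching idea, assuming hand-made binary attribute vectors and a hypothetical attribute predictor (in a real system, the attribute scores would come from a trained model):

```python
import numpy as np

# Hypothetical attribute vectors describing each class:
# [has_four_legs, has_mane, has_hooves, has_whiskers]
class_attributes = {
    "cat":   np.array([1.0, 0.0, 0.0, 1.0]),
    "dog":   np.array([1.0, 0.0, 0.0, 0.0]),
    "horse": np.array([1.0, 1.0, 1.0, 0.0]),  # never seen during training
}

def zero_shot_classify(predicted_attributes: np.ndarray) -> str:
    """Assign the class whose attribute vector is most similar (cosine)."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(class_attributes,
               key=lambda c: cosine(predicted_attributes, class_attributes[c]))

# Suppose an attribute predictor (trained only on seen classes) outputs
# noisy attribute scores for an image of a horse:
predicted = np.array([0.9, 0.8, 0.7, 0.1])
print(zero_shot_classify(predicted))  # prints: horse
```

The key point is that "horse" is recognized purely from its attribute description, with no horse images in the training data.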
Few-shot learning, on the other hand, trains a model to recognize or classify new categories from only a few labeled examples per category. Instead of relying on descriptive attributes, a few-shot system generalizes directly from that handful of samples: given just five images of a fox, for instance, it should classify new fox images correctly. This approach often employs metric learning, in which the model distinguishes classes by comparing learned feature representations rather than requiring vast amounts of training data.
Though both methods aim to tackle the scarcity of labeled data, they differ in their fundamental approach. Zero-shot learning requires an understanding of the relationships between classes through auxiliary data and does not use any examples of the target classes during training. In contrast, few-shot learning makes use of a limited number of examples for each new class to improve performance. Both techniques are valuable in real-world applications where obtaining large datasets is challenging, such as in medical imaging or wildlife monitoring, where labeled data can be scarce.