Embeddings are called "dense representations" because the vectors used to represent data points (such as words, images, or documents) are compact and carry meaningful information in every dimension. Unlike sparse representations, where only a few dimensions are non-zero (as in one-hot encoding, where exactly one is), dense embeddings spread non-zero values across all dimensions, allowing them to capture richer relationships between data points.
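To make the contrast concrete, here is a minimal sketch using NumPy and an invented four-word vocabulary: a sparse one-hot vector next to a dense embedding. The dense values are illustrative, not learned from data.

```python
import numpy as np

vocab = ["cat", "dog", "car", "tree"]

# Sparse one-hot representation: one dimension per vocabulary word,
# exactly one non-zero entry per vector.
one_hot_cat = np.zeros(len(vocab))
one_hot_cat[vocab.index("cat")] = 1.0
print(one_hot_cat)        # [1. 0. 0. 0.]

# Dense representation: a small, fixed number of dimensions,
# with every entry carrying some information (values are made up).
dense_cat = np.array([0.21, -0.47, 0.88, 0.05])
print(dense_cat)          # [ 0.21 -0.47  0.88  0.05]
```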
For example, in word embeddings, the dimensions of the vector jointly encode aspects of the word's meaning, such as its syntactic and semantic properties. As a result, dense embeddings can capture nuanced relationships like synonyms, antonyms, and analogies in a compact format.
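As a toy illustration of analogy arithmetic, the sketch below uses four hand-made 4-dimensional vectors, constructed so that the classic "king - man + woman ≈ queen" relationship holds; real embeddings are learned and typically have hundreds of dimensions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings, invented so the analogy works out exactly.
king  = np.array([0.8, 0.9, 0.1, 0.3])
man   = np.array([0.7, 0.1, 0.1, 0.2])
woman = np.array([0.7, 0.1, 0.9, 0.2])
queen = np.array([0.8, 0.9, 0.9, 0.3])

# The classic analogy: king - man + woman should land near queen.
candidate = king - man + woman
print(cosine(candidate, queen))   # approximately 1.0 for these toy vectors
```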
Dense representations are also efficient in practice: because the vectors live in a much lower-dimensional space than sparse encodings such as one-hot vectors, downstream models need fewer parameters, the representations themselves take less memory, and they can be processed with fast dense linear algebra on modern hardware. The ability to pack complex information into a low-dimensional space is a key reason why embeddings are widely used in modern AI systems.
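For a rough sense of scale, the sketch below compares the storage needed to hold one-hot vectors for an assumed 50,000-word vocabulary as a dense float32 matrix with a 300-dimensional embedding table for the same vocabulary; both figures are illustrative, not taken from any particular model.

```python
import numpy as np

vocab_size, embed_dim = 50_000, 300            # assumed, typical-scale figures
bytes_per_float = np.dtype(np.float32).itemsize

# Storing a one-hot vector for every word as a dense float32 matrix:
one_hot_bytes = vocab_size * vocab_size * bytes_per_float
# Storing a 300-dimensional embedding table for the same vocabulary:
dense_bytes = vocab_size * embed_dim * bytes_per_float

print(f"one-hot matrix:  {one_hot_bytes / 1e9:.1f} GB")   # 10.0 GB
print(f"embedding table: {dense_bytes / 1e6:.1f} MB")     # 60.0 MB
```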