3D Vector Map of Word Embeddings

How Transformers See Words

This visualization simulates how models like GPT "see" words. Each point is a word placed in 3D space according to its meaning, so words with similar meanings cluster together. Keep in mind this is a 3D simplification: real embedding spaces have hundreds or even thousands of dimensions.

Click and drag to rotate the view. Scroll to zoom.

The "Aha!" Moment

Transformers are pre-trained on vast amounts of text. During this process, they learn to convert words into numerical vectors called embeddings. The key takeaway is that the distance and direction between these vectors encode semantic relationships.

For example, the vector math `King - Man + Woman` results in a vector very close to `Queen`. This map shows that concept visually. The clusters you see are "neighborhoods" of meaning.
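The analogy above can be sketched in a few lines of code. The embedding values below are made up purely for illustration (real models learn them during pre-training, in far more dimensions), but they show the mechanics: do the vector arithmetic, then find the nearest word by cosine similarity.

```python
import numpy as np

# Hypothetical 3D embeddings, hand-picked for illustration only.
# Real models use learned vectors with hundreds or thousands of dimensions.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.5, 0.8, 0.0]),
    "woman": np.array([0.5, 0.2, 0.0]),
    "apple": np.array([0.1, 0.5, 0.9]),
}

def nearest(vec, exclude=()):
    """Return the word whose embedding is most similar to vec (cosine similarity)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# King - Man + Woman lands closest to Queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```

Excluding the query words themselves is standard practice in these analogy tests, since the result vector usually stays closest to the words it was built from.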

Legend

  • Fruits
  • Animals
  • Technology
  • Actions
  • Concepts