
Giving LLMs internal representation
via embeddings

An embedding is a numerical representation of the meaning of a token (word).

What if we gave users an interface to adjust the embedding of each word to match their own unique internal representation?
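To make the idea concrete, here is a toy sketch of what adjusting a word's embedding could look like. Everything here is hypothetical: the tiny 4-dimensional vectors, the vocabulary, and the `adjust_embedding` interpolation are illustrative stand-ins, not a real model's embeddings.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real models use
# hundreds or thousands of dimensions.
embeddings = {
    "home": np.array([0.8, 0.1, 0.3, 0.0]),
    "warm": np.array([0.7, 0.2, 0.1, 0.1]),
    "building": np.array([0.2, 0.9, 0.4, 0.0]),
}

def adjust_embedding(word, target_word, strength=0.5):
    """Nudge `word`'s vector toward `target_word`'s vector,
    modeling a user pulling a word's meaning toward their own
    internal representation (0 = no change, 1 = full replace)."""
    e, t = embeddings[word], embeddings[target_word]
    embeddings[word] = (1 - strength) * e + strength * t

def similarity(a, b):
    """Cosine similarity between two words' embeddings."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

before = similarity("home", "warm")
adjust_embedding("home", "warm", strength=0.5)
after = similarity("home", "warm")
# "home" now sits closer to "warm" for this particular user
```

A physical interface could map each slider or dial to one such adjustment, so that turning it literally moves a word's meaning.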

Models could start to actually understand us as individual humans. If they are small enough to run locally, this also makes for a more personal, intimate human-AI relationship.

the interface also shows something physically performing computations, revealing the tangible aspect of AI as opposed to an invisible, digital process, and making the interaction more intentional.
