Supported Embedding Models
Hugging Face
You can use embedding models from Hugging Face to run Mem0 locally.
Usage
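A minimal sketch of selecting the Hugging Face embedder in a Mem0 config. The model name and dimension value are illustrative; any sentence-transformers model from the Hub should work:

```python
# Illustrative Mem0 config selecting the Hugging Face embedder.
config = {
    "embedder": {
        "provider": "huggingface",
        "config": {
            "model": "multi-qa-MiniLM-L6-cos-v1",
            # multi-qa-MiniLM-L6-cos-v1 produces 384-dimensional vectors
            "embedding_dims": 384,
        },
    }
}

# To use it, pass the config to Mem0 (requires `pip install mem0ai`):
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I like gardening", user_id="alice")
```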
Using Text Embeddings Inference (TEI)
You can also use Hugging Face’s Text Embeddings Inference service for faster and more efficient embeddings:
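To route embedding calls through TEI instead of loading the model in-process, point `huggingface_base_url` at the running service. A sketch, assuming a TEI container listening on port 3000 (adjust host and port to your deployment):

```python
# Illustrative config: embed via a local TEI endpoint rather than
# loading the model locally. The URL is an example value.
config = {
    "embedder": {
        "provider": "huggingface",
        "config": {
            "huggingface_base_url": "http://localhost:3000/v1",
        },
    }
}
```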
To run the TEI service, you can use Docker:
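A sketch of launching TEI with Docker. The image tag and model id are examples; check the Text Embeddings Inference repository for current tags and supported models:

```shell
# Run TEI on CPU, serving an example embedding model on host port 3000.
# Image tag (cpu-latest) and model id are illustrative.
docker run -d -p 3000:80 --name tei \
  ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id BAAI/bge-small-en-v1.5
```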
Config
Here are the parameters available for configuring the Hugging Face embedder:
| Parameter | Description | Default Value |
|---|---|---|
| `model` | The name of the model to use | `multi-qa-MiniLM-L6-cos-v1` |
| `embedding_dims` | Dimensions of the embedding model | Dimensions of the selected model |
| `model_kwargs` | Additional keyword arguments passed to the model | `None` |
| `huggingface_base_url` | URL of a Text Embeddings Inference (TEI) API endpoint | `None` |
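The parameters above can be combined in a single config. A sketch with example values; the `model_kwargs` entry assumes the underlying model accepts a `device` argument, as sentence-transformers models do:

```python
# Illustrative config exercising the local-model parameters above.
# All values are examples, not required settings.
config = {
    "embedder": {
        "provider": "huggingface",
        "config": {
            "model": "multi-qa-MiniLM-L6-cos-v1",
            "embedding_dims": 384,
            # Forwarded to the underlying model; "device" is an
            # example kwarg accepted by sentence-transformers models.
            "model_kwargs": {"device": "cpu"},
        },
    }
}
```

Note that `huggingface_base_url` is omitted here: it is only needed when embedding through a TEI endpoint, in which case the local-model parameters are not used.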