# Supported Vector Databases

## Azure AI Search
Azure AI Search (formerly known as “Azure Cognitive Search”) provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications.
### Usage
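A minimal sketch of pointing mem0 at Azure AI Search, assuming mem0's `Memory.from_config` pattern and an `azure_ai_search` provider key; the service name, environment variable, and sample memories are placeholders, and the remaining keys are described in the parameter table below.

```python
import os

from mem0 import Memory

config = {
    "vector_store": {
        "provider": "azure_ai_search",  # assumed provider key
        "config": {
            "service_name": "my-search-service",                # placeholder service name
            "api_key": os.environ["AZURE_AI_SEARCH_API_KEY"],   # assumed environment variable
            "collection_name": "mem0",
            "embedding_model_dims": 1536,
        },
    }
}

m = Memory.from_config(config)
m.add("I prefer oat milk in my coffee", user_id="alice")
results = m.search("what milk does alice prefer?", user_id="alice")
```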
#### Using binary compression for large vector collections
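One way this could look, reusing the config above and enabling the compression options from the parameter table; the specific values are illustrative rather than a recommendation.

```python
import os

from mem0 import Memory

config = {
    "vector_store": {
        "provider": "azure_ai_search",  # assumed provider key
        "config": {
            "service_name": "my-search-service",
            "api_key": os.environ["AZURE_AI_SEARCH_API_KEY"],
            "collection_name": "mem0",
            "embedding_model_dims": 1536,
            # Binary quantization gives maximum compression at some accuracy cost;
            # half-precision (Edm.Half) storage further shrinks the index.
            "compression_type": "binary",
            "use_float16": True,
        },
    }
}

m = Memory.from_config(config)
```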
#### Using hybrid search
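A hedged sketch of enabling hybrid (keyword plus vector) search together with post-filtering; the parameter names come from the table below, and everything else is a placeholder.

```python
import os

from mem0 import Memory

config = {
    "vector_store": {
        "provider": "azure_ai_search",  # assumed provider key
        "config": {
            "service_name": "my-search-service",
            "api_key": os.environ["AZURE_AI_SEARCH_API_KEY"],
            "collection_name": "mem0",
            "embedding_model_dims": 1536,
            # Combine keyword and vector scoring; apply filters after the
            # vector phase, which may improve relevance at some cost in speed.
            "hybrid_search": True,
            "vector_filter_mode": "postFilter",
        },
    }
}

m = Memory.from_config(config)
results = m.search("notes about the quarterly report", user_id="alice")
```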
### Configuration Parameters
| Parameter | Description | Default Value | Options |
|---|---|---|---|
| `service_name` | Azure AI Search service name | Required | - |
| `api_key` | API key of the Azure AI Search service | Required | - |
| `collection_name` | The name of the collection/index to store vectors | `mem0` | Any valid index name |
| `embedding_model_dims` | Dimensions of the embedding model | `1536` | Any integer value |
| `compression_type` | Type of vector compression to use | `none` | `none`, `scalar`, `binary` |
| `use_float16` | Store vectors in half precision (`Edm.Half`) | `False` | `True`, `False` |
| `vector_filter_mode` | Vector filter mode to use | `preFilter` | `preFilter`, `postFilter` |
| `hybrid_search` | Use hybrid search | `False` | `True`, `False` |
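For reference, a sketch of a `vector_store` block that sets every parameter above explicitly; the non-required values shown are the documented defaults, while the provider key and credentials are assumptions.

```python
vector_store_config = {
    "provider": "azure_ai_search",            # assumed provider key
    "config": {
        "service_name": "my-search-service",  # required
        "api_key": "<your-api-key>",          # required
        "collection_name": "mem0",            # default index name
        "embedding_model_dims": 1536,         # match your embedding model
        "compression_type": "none",           # "none" | "scalar" | "binary"
        "use_float16": False,                 # store vectors as Edm.Half when True
        "vector_filter_mode": "preFilter",    # or "postFilter"
        "hybrid_search": False,               # keyword + vector search when True
    },
}
```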
### Notes on Configuration Options
- **compression_type**:
  - `none`: No compression, uses full vector precision
  - `scalar`: Scalar quantization with a reasonable balance of speed and accuracy
  - `binary`: Binary quantization for maximum compression with some accuracy trade-off
- **vector_filter_mode**:
  - `preFilter`: Applies filters before vector search (faster)
  - `postFilter`: Applies filters after vector search (may provide better relevance)
- **use_float16**: Using half precision (float16) reduces storage requirements but may slightly impact accuracy. Useful for very large vector collections.
- **Filterable Fields**: The implementation automatically extracts `user_id`, `run_id`, and `agent_id` fields from payloads for filtering.
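To illustrate the filterable fields, a brief sketch against a `Memory` instance `m` configured as in the usage examples above; the identifiers and text are placeholders.

```python
# Memories written with these identifiers become filterable on the same fields.
m.add("Prefers the dark UI theme", user_id="alice", agent_id="ui-assistant")

# Reads scoped by user_id/agent_id; the filter is applied according to the
# configured vector_filter_mode (preFilter by default).
alice_memories = m.get_all(user_id="alice")
hits = m.search("UI preferences", user_id="alice", agent_id="ui-assistant")
```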