
qdrant_client.qdrant_fastembed module

class QdrantFastembedMixin(**kwargs: Any)[source]

Bases: QdrantBase

add(collection_name: str, documents: Iterable[str], metadata: Optional[Iterable[Dict[str, Any]]] = None, ids: Optional[Iterable[Union[int, str]]] = None, batch_size: int = 32, parallel: Optional[int] = None, **kwargs: Any) List[Union[str, int]][source]

Adds text documents into the qdrant collection. If the collection does not exist, it will be created with default parameters. Metadata, in combination with documents, will be added as payload. Documents will be embedded using the specified embedding model.

If you want to use your own vectors, use the upsert method instead.

Parameters
  • collection_name (str) – Name of the collection to add documents to.

  • documents (Iterable[str]) – List of documents to embed and add to the collection.

  • metadata (Iterable[Dict[str, Any]], optional) – List of metadata dicts. Defaults to None.

  • ids (Iterable[models.ExtendedPointId], optional) – List of ids to assign to documents. If not specified, UUIDs will be generated. Defaults to None.

  • batch_size (int, optional) – How many documents to embed and upload in a single request. Defaults to 32.

  • parallel (Optional[int], optional) – How many parallel workers to use for embedding. Defaults to None. If a number is specified, a data-parallel process will be used.

Raises

ImportError – If fastembed is not installed.

Returns

List of IDs of the added documents. If no ids are provided, UUIDs will be randomly generated on the client side.
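
Example (a minimal sketch; requires fastembed to be installed, and the collection name "demo", the documents, and the metadata are illustrative):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")  # local in-process instance for a quick trial

    # "demo" is created with default parameters if it does not exist yet
    ids = client.add(
        collection_name="demo",
        documents=[
            "Qdrant is a vector search engine",
            "FastEmbed computes embeddings locally",
        ],
        metadata=[{"source": "docs"}, {"source": "blog"}],
    )
    print(ids)  # UUIDs generated on the client side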

get_fastembed_sparse_vector_params(on_disk: Optional[bool] = None, modifier: Optional[Modifier] = None) Optional[Dict[str, SparseVectorParams]][source]

Generates a vector configuration compatible with fastembed sparse models.

Parameters
  • on_disk – If True, vectors will be stored on disk. If None, the default value will be used.

  • modifier – Sparse vector queries modifier. E.g. Modifier.IDF for idf-based rescoring. Default: None.

Returns

Configuration for the sparse_vectors_config argument in the create_collection method.
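
Example of setting up a collection for hybrid search (a minimal sketch; the collection name "hybrid-demo" is illustrative, and pairing the Qdrant/bm25 sparse model with Modifier.IDF assumes that model is available in the installed fastembed version):

    from qdrant_client import QdrantClient, models

    client = QdrantClient(":memory:")
    client.set_model("BAAI/bge-small-en")
    client.set_sparse_model("Qdrant/bm25")

    client.create_collection(
        collection_name="hybrid-demo",
        vectors_config=client.get_fastembed_vector_params(),
        sparse_vectors_config=client.get_fastembed_sparse_vector_params(
            modifier=models.Modifier.IDF,  # server-side idf-based rescoring
        ),
    )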

get_fastembed_vector_params(on_disk: Optional[bool] = None, quantization_config: Optional[Union[ScalarQuantization, ProductQuantization, BinaryQuantization]] = None, hnsw_config: Optional[HnswConfigDiff] = None) Dict[str, VectorParams][source]

Generates a vector configuration compatible with fastembed models.

Parameters
  • on_disk – If True, vectors will be stored on disk. If None, the default value will be used.

  • quantization_config – Quantization configuration. If None, quantization will be disabled.

  • hnsw_config – HNSW configuration. If None, default configuration will be used.

Returns

Configuration for the vectors_config argument in the create_collection method.
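
Example (a minimal sketch; the collection name "demo" is illustrative, and int8 scalar quantization is just one possible choice):

    from qdrant_client import QdrantClient, models

    client = QdrantClient(":memory:")
    client.set_model("BAAI/bge-small-en")

    client.create_collection(
        collection_name="demo",
        vectors_config=client.get_fastembed_vector_params(
            on_disk=True,
            quantization_config=models.ScalarQuantization(
                scalar=models.ScalarQuantizationConfig(type=models.ScalarType.INT8),
            ),
        ),
    )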

get_sparse_vector_field_name() Optional[str][source]

Returns the name of the sparse vector field in the qdrant collection used by the current fastembed sparse model.

Returns

Name of the sparse vector field, or None if no sparse model is set.

get_vector_field_name() str[source]

Returns the name of the vector field in the qdrant collection used by the current fastembed model.

Returns

Name of the vector field.
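
Example (a minimal sketch; the field name is derived from the currently selected model):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")
    client.set_model("BAAI/bge-small-en")
    print(client.get_vector_field_name())  # e.g. "fast-bge-small-en"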

query(collection_name: str, query_text: str, query_filter: Optional[Filter] = None, limit: int = 10, **kwargs: Any) List[QueryResponse][source]

Search for documents in a collection. This method automatically embeds the query text using the specified embedding model. If you want to use your own query vector, use the search method instead.

Parameters
  • collection_name – Collection to search in

  • query_text – Text to search for. This text will be embedded using the specified embedding model and then used as a query vector.

  • query_filter

    • Exclude vectors which don't fit the given conditions.

    • If None, search among all vectors

  • limit – How many results to return

  • **kwargs – Additional search parameters. See qdrant_client.models.SearchRequest for details.

Returns

List[QueryResponse] – List of query responses, one per scored point.
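
Example (a minimal sketch; the collection name "demo" and the texts are illustrative):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")
    client.add(collection_name="demo", documents=["Qdrant is a vector search engine"])

    hits = client.query(
        collection_name="demo",
        query_text="vector search",
        limit=3,
    )
    for hit in hits:
        print(hit.id, hit.score, hit.document)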

query_batch(collection_name: str, query_texts: List[str], query_filter: Optional[Filter] = None, limit: int = 10, **kwargs: Any) List[List[QueryResponse]][source]

Search for documents in a collection with batched queries. This method automatically embeds the query texts using the specified embedding model.

Parameters
  • collection_name – Collection to search in

  • query_texts – A list of texts to search for. Each text will be embedded using the specified embedding model and then used as a query vector for a separate search request.

  • query_filter

    • Exclude vectors which don't fit the given conditions.

    • If None, search among all vectors

    This filter will be applied to all search requests.

  • limit – How many results to return

  • **kwargs – Additional search parameters. See qdrant_client.models.SearchRequest for details.

Returns

List[List[QueryResponse]] – List of lists of responses for each query text.
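
Example (a minimal sketch; the collection name "demo" and the texts are illustrative):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")
    client.add(
        collection_name="demo",
        documents=[
            "Qdrant is a vector search engine",
            "FastEmbed computes embeddings locally",
        ],
    )

    queries = ["vector search", "local embeddings"]
    batch = client.query_batch(collection_name="demo", query_texts=queries, limit=2)
    for text, hits in zip(queries, batch):
        print(text, [hit.document for hit in hits])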

set_model(embedding_model_name: str, max_length: Optional[int] = None, cache_dir: Optional[str] = None, threads: Optional[int] = None, providers: Optional[Sequence[OnnxProvider]] = None, **kwargs: Any) None[source]

Set embedding model to use for encoding documents and queries.

Parameters
  • embedding_model_name – One of the supported embedding models. See SUPPORTED_EMBEDDING_MODELS for details.

  • max_length (int, optional) – Deprecated. Defaults to None.

  • cache_dir (str, optional) – The path to the cache directory. Can be set using the FASTEMBED_CACHE_PATH env variable. Defaults to fastembed_cache in the system’s temp directory.

  • threads (int, optional) – The number of threads a single onnxruntime session can use. Defaults to None.

  • providers – The list of onnx providers (with or without options) to use. Defaults to None. Example configuration: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#configuration-options

Raises
  • ValueError – If embedding model is not supported.

  • ImportError – If fastembed is not installed.

Returns

None
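
Example (a minimal sketch; the CUDA provider assumes onnxruntime-gpu is installed, otherwise omit providers to use the defaults):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")
    client.set_model(
        "BAAI/bge-small-en",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )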

set_sparse_model(embedding_model_name: Optional[str], cache_dir: Optional[str] = None, threads: Optional[int] = None, providers: Optional[Sequence[OnnxProvider]] = None, **kwargs: Any) None[source]

Set sparse embedding model to use for hybrid search over documents in combination with dense embeddings.

Parameters
  • embedding_model_name – One of the supported sparse embedding models. See SUPPORTED_SPARSE_EMBEDDING_MODELS for details. If None, sparse embeddings will not be used.

  • cache_dir (str, optional) – The path to the cache directory. Can be set using the FASTEMBED_CACHE_PATH env variable. Defaults to fastembed_cache in the system’s temp directory.

  • threads (int, optional) – The number of threads a single onnxruntime session can use. Defaults to None.

  • providers – The list of onnx providers (with or without options) to use. Defaults to None. Example configuration: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#configuration-options

Raises
  • ValueError – If embedding model is not supported.

  • ImportError – If fastembed is not installed.

Returns

None
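
Example (a minimal sketch; the collection name "demo" is illustrative, and Qdrant/bm25 assumes that sparse model is available in the installed fastembed version):

    from qdrant_client import QdrantClient

    client = QdrantClient(":memory:")
    client.set_model("BAAI/bge-small-en")
    client.set_sparse_model("Qdrant/bm25")

    # add() now stores both dense and sparse vectors,
    # and query() combines them for hybrid search
    client.add(collection_name="demo", documents=["hybrid search example"])
    hits = client.query(collection_name="demo", query_text="hybrid search")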

DEFAULT_EMBEDDING_MODEL = 'BAAI/bge-small-en'
property embedding_model_name: str
embedding_models: Dict[str, "TextEmbedding"] = {}
property sparse_embedding_model_name: Optional[str]
sparse_embedding_models: Dict[str, "SparseTextEmbedding"] = {}
