
qdrant_client.async_qdrant_client module

class AsyncQdrantClient(location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[int] = None, host: Optional[str] = None, path: Optional[str] = None, force_disable_check_same_thread: bool = False, grpc_options: Optional[Dict[str, Any]] = None, **kwargs: Any)[source]

Bases: AsyncQdrantFastembedMixin

Entry point to communicate with Qdrant service via REST or gRPC API.

It combines interface classes and endpoint implementation. Additionally, it provides custom implementations for frequently used methods like initial collection upload.

All methods in QdrantClient accept both gRPC and REST structures as an input. Conversion will be performed automatically.

Note

The methods in this module are wrappers around generated client code for gRPC and REST methods. If you need lower-level access to the generated clients, use the following properties:

  • QdrantClient.grpc_points

  • QdrantClient.grpc_collections

  • QdrantClient.rest

Note

This class is the asynchronous counterpart of QdrantClient; for synchronous usage, use QdrantClient instead.

Parameters
  • location – If :memory: - use in-memory Qdrant instance. If str - use it as a url parameter. If None - use default values for host and port.

  • url – URL of the Qdrant service, in the form “[scheme://]host[:port][/prefix]”. Default: None

  • port – Port of the REST API interface. Default: 6333

  • grpc_port – Port of the gRPC interface. Default: 6334

  • prefer_grpc – If true - use gRPC interface whenever possible in custom methods.

  • https – If true - use HTTPS(SSL) protocol. Default: None

  • api_key – API key for authentication in Qdrant Cloud. Default: None

  • prefix – If not None - add prefix to the REST URL path. Example: service/v1 will result in http://localhost:6333/service/v1/{qdrant-endpoint} for REST API. Default: None

  • timeout – Timeout for REST and gRPC API requests. Default: 5 seconds for REST and unlimited for gRPC

  • host – Host name of Qdrant service. If url and host are None, set to ‘localhost’. Default: None

  • path – Persistence path for QdrantLocal. Default: None

  • force_disable_check_same_thread – For QdrantLocal, force disable check_same_thread. Default: False. Only use this if you can guarantee thread safety outside of QdrantClient.

  • **kwargs – Additional arguments passed directly into REST client initialization

async batch_update_points(collection_name: str, update_operations: Sequence[Union[UpsertOperation, DeleteOperation, SetPayloadOperation, OverwritePayloadOperation, DeletePayloadOperation, ClearPayloadOperation, UpdateVectorsOperation, DeleteVectorsOperation]], wait: bool = True, ordering: Optional[WriteOrdering] = None, **kwargs: Any) List[UpdateResult][source]

Batch update points in the collection.

Parameters
  • collection_name – Name of the collection

  • update_operations – List of update operations

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

Returns

Operation results

async clear_payload(collection_name: str, points_selector: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Delete all payload for selected points

Parameters
  • collection_name – Name of the collection

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • points_selector

    List of affected points, filter or points selector. Example

    • points=[1, 2, 3, “cd3b53f0-11a7-449f-bc50-d06310e7ed90”]

    • points=Filter(must=[FieldCondition(key=’rand_number’, range=Range(gte=0.7))])

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async close(grpc_grace: Optional[float] = None, **kwargs: Any) None[source]

Closes the connection to Qdrant

Parameters

grpc_grace – Grace period for gRPC connection close. Default: None

async count(collection_name: str, count_filter: Optional[Union[Filter, Filter]] = None, exact: bool = True, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) CountResult[source]

Count points in the collection.

Count points in the collection matching the given filter.

Parameters
  • collection_name – name of the collection to count points in

  • count_filter – filtering conditions

  • exact – If True - provide the exact count of points matching the filter. If False - provide the approximate count of points matching the filter. Works faster.

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with custom sharding method.

Returns

Amount of points in the collection matching the filter.

async create_collection(collection_name: str, vectors_config: Union[VectorParams, Mapping[str, VectorParams]], sparse_vectors_config: Optional[Mapping[str, SparseVectorParams]] = None, shard_number: Optional[int] = None, sharding_method: Optional[ShardingMethod] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[Union[HnswConfigDiff, HnswConfigDiff]] = None, optimizers_config: Optional[Union[OptimizersConfigDiff, OptimizersConfigDiff]] = None, wal_config: Optional[Union[WalConfigDiff, WalConfigDiff]] = None, quantization_config: Optional[Union[ScalarQuantization, ProductQuantization, BinaryQuantization, QuantizationConfig]] = None, init_from: Optional[Union[InitFrom, str]] = None, timeout: Optional[int] = None, **kwargs: Any) bool[source]

Create empty collection with given parameters

Parameters
  • collection_name – Name of the collection to create

  • vectors_config – Configuration of the vector storage. Vector params contains size and distance for the vector storage. If dict is passed, service will create a vector storage for each key in the dict. If single VectorParams is passed, service will create a single anonymous vector storage.

  • sparse_vectors_config – Configuration of the sparse vector storage. The service will create a sparse vector storage for each key in the dict.

  • shard_number – Number of shards in collection. Default is 1, minimum is 1.

  • sharding_method – Defines the strategy for shard creation. Option auto (default) creates the defined number of shards automatically. Data will be distributed between shards automatically. After creation, shards can be additionally replicated, but new shards cannot be created. Option custom allows creating shards manually; each shard should be created with an assigned unique shard_key. Data will be distributed between shards based on the shard_key value.

  • replication_factor – Replication factor for collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Has effect only in distributed mode.

  • write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number makes the collection more resilient to inconsistencies, but will also cause failures if not enough replicas are available. Does not have any performance impact. Has effect only in distributed mode.

  • on_disk_payload – If true - point’s payload will not be stored in memory. It will be read from disk every time it is requested. This setting saves RAM by (slightly) increasing response time. Note: payload values that are involved in filtering and are indexed remain in RAM.

  • hnsw_config – Params for HNSW index

  • optimizers_config – Params for optimizer

  • wal_config – Params for Write-Ahead-Log

  • quantization_config – Params for quantization, if None - quantization will be disabled

  • init_from – Use data stored in another collection to initialize this collection

  • timeout – Wait for operation commit timeout in seconds. If timeout is reached - request will return with service error.

Returns

Operation result

async create_full_snapshot(wait: bool = True, **kwargs: Any) Optional[SnapshotDescription][source]

Create snapshot for a whole storage.

Parameters

wait

Await for the snapshot to be created.

  • If true, result will be returned only when the snapshot is created

  • If false, result will be returned immediately after the confirmation of receiving.

Returns

Snapshot description

async create_payload_index(collection_name: str, field_name: str, field_schema: Optional[Union[PayloadSchemaType, TextIndexParams, int, PayloadIndexParams]] = None, field_type: Optional[Union[PayloadSchemaType, TextIndexParams, int, PayloadIndexParams]] = None, wait: bool = True, ordering: Optional[WriteOrdering] = None, **kwargs: Any) UpdateResult[source]

Creates index for a given payload field. Indexed fields make filtered search operations faster.

Parameters
  • collection_name – Name of the collection

  • field_name – Name of the payload field

  • field_schema – Type of data to index

  • field_type – Same as field_schema, but deprecated

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

Returns

Operation Result

async create_shard_key(collection_name: str, shard_key: Union[int, str], shards_number: Optional[int] = None, replication_factor: Optional[int] = None, placement: Optional[List[int]] = None, **kwargs: Any) bool[source]

Create shard key for collection.

Only works for collections with custom sharding method.

Parameters
  • collection_name – Name of the collection

  • shard_key – Shard key to create

  • shards_number – How many shards to create for this key

  • replication_factor – Replication factor for this key

  • placement – List of peers to place shards on. If None - place on all peers.

Returns

Operation result

async create_shard_snapshot(collection_name: str, shard_id: int, wait: bool = True, **kwargs: Any) Optional[SnapshotDescription][source]

Create snapshot for a given shard.

Parameters
  • collection_name – Name of the collection

  • shard_id – Index of the shard

  • wait

    Await for the snapshot to be created.

    • If true, result will be returned only when the snapshot is created.

    • If false, result will be returned immediately after the confirmation of receiving.

Returns

Snapshot description

async create_snapshot(collection_name: str, wait: bool = True, **kwargs: Any) Optional[SnapshotDescription][source]

Create snapshot for a given collection.

Parameters
  • collection_name – Name of the collection

  • wait

    Await for the snapshot to be created.

    • If true, result will be returned only when a snapshot is created

    • If false, result will be returned immediately after the confirmation of receiving.

Returns

Snapshot description

async delete(collection_name: str, points_selector: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Deletes selected points from collection

Parameters
  • collection_name – Name of the collection

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • points_selector

    Selects points based on list of IDs or filter. Examples

    • points=[1, 2, 3, “cd3b53f0-11a7-449f-bc50-d06310e7ed90”]

    • points=Filter(must=[FieldCondition(key=’rand_number’, range=Range(gte=0.7))])

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async delete_collection(collection_name: str, timeout: Optional[int] = None, **kwargs: Any) bool[source]

Removes a collection and all of its data

Parameters
  • collection_name – Name of the collection to delete

  • timeout – Wait for operation commit timeout in seconds. If timeout is reached - request will return with service error.

Returns

Operation result

async delete_full_snapshot(snapshot_name: str, wait: bool = True, **kwargs: Any) Optional[bool][source]

Delete snapshot for a whole storage.

Parameters
  • snapshot_name – Snapshot name

  • wait

    Await for the snapshot to be deleted.

    • If true, result will be returned only when the snapshot is deleted

    • If false, result will be returned immediately after the confirmation of receiving.

Returns

True if snapshot was deleted

async delete_payload(collection_name: str, keys: Sequence[str], points: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Remove values from point’s payload

Parameters
  • collection_name – Name of the collection

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • keys – List of payload keys to remove

  • points

    List of affected points, filter or points selector. Example

    • points=[1, 2, 3, “cd3b53f0-11a7-449f-bc50-d06310e7ed90”]

    • points=Filter(must=[FieldCondition(key=’rand_number’, range=Range(gte=0.7))])

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async delete_payload_index(collection_name: str, field_name: str, wait: bool = True, ordering: Optional[WriteOrdering] = None, **kwargs: Any) UpdateResult[source]

Removes index for a given payload field.

Parameters
  • collection_name – Name of the collection

  • field_name – Name of the payload field

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

Returns

Operation Result

async delete_shard_key(collection_name: str, shard_key: Union[int, str], **kwargs: Any) bool[source]

Delete shard key for collection.

Only works for collections with custom sharding method.

Parameters
  • collection_name – Name of the collection

  • shard_key – Shard key to delete

Returns

Operation result

async delete_shard_snapshot(collection_name: str, shard_id: int, snapshot_name: str, wait: bool = True, **kwargs: Any) Optional[bool][source]

Delete snapshot for a given shard.

Parameters
  • collection_name – Name of the collection

  • shard_id – Index of the shard

  • snapshot_name – Snapshot name

  • wait

    Await for the snapshot to be deleted.

    • If true, result will be returned only when the snapshot is deleted

    • If false, result will be returned immediately after the confirmation of receiving.

Returns

True if snapshot was deleted

async delete_snapshot(collection_name: str, snapshot_name: str, wait: bool = True, **kwargs: Any) Optional[bool][source]

Delete snapshot for a given collection.

Parameters
  • collection_name – Name of the collection

  • snapshot_name – Snapshot name

  • wait

    Await for the snapshot to be deleted.

    • If true, result will be returned only when the snapshot is deleted

    • If false, result will be returned immediately after the confirmation of receiving.

Returns

True if snapshot was deleted

async delete_vectors(collection_name: str, vectors: Sequence[str], points: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Delete specified vectors from the collection. Does not affect payload.

Parameters
  • collection_name (str) – Name of the collection to delete vector from

  • vectors – List of names of the vectors to delete. Use “” to delete the default vector. At least one vector should be specified.

  • points (Point) –

    Selects points based on list of IDs or filter Examples

    • points=[1, 2, 3, “cd3b53f0-11a7-449f-bc50-d06310e7ed90”]

    • points=Filter(must=[FieldCondition(key=’rand_number’, range=Range(gte=0.7))])

  • wait (bool) –

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async discover(collection_name: str, target: Optional[Union[int, str, SparseVector, List[float], TargetVector]] = None, context: Optional[Sequence[Union[ContextExamplePair, ContextExamplePair]]] = None, query_filter: Optional[Union[Filter, Filter]] = None, search_params: Optional[Union[SearchParams, SearchParams]] = None, limit: int = 10, offset: int = 0, with_payload: Union[bool, List[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, List[str]] = False, using: Optional[str] = None, lookup_from: Optional[Union[LookupLocation, LookupLocation]] = None, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, timeout: Optional[int] = None, **kwargs: Any) List[ScoredPoint][source]

Use context and a target to find the most similar points, constrained by the context.

Parameters
  • collection_name – Collection to discover in

  • target

    Look for vectors closest to this.

    When using the target (with or without context), the integer part of the score represents the rank with respect to the context, while the decimal part of the score relates to the distance to the target.

  • context

    Pairs of { positive, negative } examples to constrain the search.

    When using only the context (without a target), a special search - called context search - is performed where pairs of points are used to generate a loss that guides the search towards the zone where most positive examples overlap. This means that the score minimizes the scenario of finding a point closer to a negative than to a positive part of a pair.

    Since the score of a context relates to loss, the maximum score a point can get is 0.0, and it becomes normal that many points can have a score of 0.0.

    For discovery search (when including a target), the context part of the score for each pair is calculated +1 if the point is closer to a positive than to a negative part of a pair, and -1 otherwise.

  • query_filter – Look only for points which satisfy these conditions

  • search_params – Additional search params

  • limit – Max number of results to return

  • offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues.

  • with_payload – Select which payload to return with the response. Default: True

  • with_vectors – Whether to return the point vector with the result. Default: False

  • using – Define which vector to use for recommendation, if not specified - try to use default vector.

  • lookup_from – The location used to lookup vectors. If not specified - use current collection. Note: the other collection should have the same vector size as the current collection.

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • ’majority’ - query all replicas, but return values present in the majority of replicas

    • ’quorum’ - query the majority of replicas, return values present in all of them

    • ’all’ - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with custom sharding method.

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of discovered points with discovery or context scores, accordingly.

async discover_batch(collection_name: str, requests: Sequence[Union[DiscoverRequest, DiscoverPoints]], consistency: Optional[Union[ReadConsistencyType, int]] = None, timeout: Optional[int] = None, **kwargs: Any) List[List[ScoredPoint]][source]

Perform multiple discover requests in batch mode.

async get_aliases(**kwargs: Any) CollectionsAliasesResponse[source]

Get all aliases

Returns

All aliases of all collections

async get_collection(collection_name: str, **kwargs: Any) CollectionInfo[source]

Get detailed information about specified existing collection

Parameters

collection_name – Name of the collection

Returns

Detailed information about the collection

async get_collection_aliases(collection_name: str, **kwargs: Any) CollectionsAliasesResponse[source]

Get collection aliases

Parameters

collection_name – Name of the collection

Returns

Collection aliases

async get_collections(**kwargs: Any) CollectionsResponse[source]

Get names of all existing collections

Returns

List of the collections

async get_locks(**kwargs: Any) LocksOption[source]

Get current locks state.

async list_full_snapshots(**kwargs: Any) List[SnapshotDescription][source]

List all snapshots for a whole storage

Returns

List of snapshots

async list_shard_snapshots(collection_name: str, shard_id: int, **kwargs: Any) List[SnapshotDescription][source]

List all snapshots of a given shard

Parameters
  • collection_name – Name of the collection

  • shard_id – Index of the shard

Returns

List of snapshots

async list_snapshots(collection_name: str, **kwargs: Any) List[SnapshotDescription][source]

List all snapshots for a given collection.

Parameters

collection_name – Name of the collection

Returns

List of snapshots

async lock_storage(reason: str, **kwargs: Any) LocksOption[source]

Lock storage for writing.

async overwrite_payload(collection_name: str, payload: Dict[str, Any], points: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Overwrites payload of the specified points. After this operation is applied, only the specified payload will be present in the point. The existing payload is removed entirely, including keys not present in the new payload.

Examples:

Set payload:

# Overwrite payload value with key `"key"` to points 1, 2, 3.
# If any other valid payload value exists - it will be deleted
qdrant_client.overwrite_payload(
    collection_name="test_collection",
    wait=True,
    payload={
        "key": "value"
    },
    points=[1,2,3]
)
Parameters
  • collection_name – Name of the collection

  • wait

    Await for the results to be processed.

    • If true, result will be returned only when all changes are applied

    • If false, result will be returned immediately after the confirmation of receiving.

  • payload – Key-value pairs of payload to assign

  • points

    List of affected points, filter or points selector. Example

    • points=[1, 2, 3, “cd3b53f0-11a7-449f-bc50-d06310e7ed90”]

    • points=Filter(must=[FieldCondition(key=’rand_number’, range=Range(gte=0.7))])

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - Write operations go through the permanent leader, consistent, but may be unavailable if leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async recommend(collection_name: str, positive: Optional[Sequence[Union[int, str, SparseVector, List[float]]]] = None, negative: Optional[Sequence[Union[int, str, SparseVector, List[float]]]] = None, query_filter: Optional[Union[Filter, Filter]] = None, search_params: Optional[Union[SearchParams, SearchParams]] = None, limit: int = 10, offset: int = 0, with_payload: Union[bool, List[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, List[str]] = False, score_threshold: Optional[float] = None, using: Optional[str] = None, lookup_from: Optional[Union[LookupLocation, LookupLocation]] = None, strategy: Optional[RecommendStrategy] = None, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, timeout: Optional[int] = None, **kwargs: Any) List[ScoredPoint][source]

Recommend points: search for similar points based on examples already stored in Qdrant.

Provide IDs of stored points, and Qdrant will perform the search based on their already existing vectors. This functionality is especially useful for recommendation over an existing collection of points.

Parameters
  • collection_name – Collection to search in

  • positive – List of stored point IDs or vectors which should be used as a reference for similarity search. If there is only one example - this request is equivalent to a regular search with the vector of that point. If there is more than one example, Qdrant will attempt to search for points similar to all of them. Recommendation for multiple vectors is experimental. Its behaviour may change depending on the selected strategy.

  • negative – List of stored point IDs or vectors which should be dissimilar to the search result. Negative examples are experimental functionality. Their behaviour may change depending on the selected strategy.

  • query_filter

    • Exclude vectors which don’t fit the given conditions.

    • If None - search among all vectors

  • search_params – Additional search params

  • limit – How many results to return

  • offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues.

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False - Do not attach vector.

    • If List of string - include only specified fields

    • Default: False

  • score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. The score of a returned result might be higher or lower than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.

  • using – Name of the vectors to use for recommendations. If None - use default vectors.

  • lookup_from – Defines a location (collection and vector field name) used to look up vectors for recommendations. If None - the current collection will be used.

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

  • strategy

    Strategy to use for recommendation. Strategy defines how to combine multiple examples into a recommendation query. Possible values:

    • ’average_vector’ - calculates average vector of all examples and uses it for search

    • ’best_score’ - finds the result which is closer to positive examples and further from negative

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of recommended points with similarity scores.
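The 'average_vector' strategy can be pictured with a few lines of plain Python. This is a client-free sketch, not Qdrant's implementation: it averages only the positive examples into a single query vector and ranks stored vectors by cosine similarity, while the real strategy also folds negative examples into the query.

```python
# Client-free sketch of the 'average_vector' idea: average the positive
# examples into one query vector, then rank stored vectors by cosine
# similarity. Negative examples (which the real strategy also uses) are
# ignored here for brevity.
from math import sqrt

def average_vector(examples):
    """Element-wise mean of the example vectors."""
    n = len(examples)
    return [sum(dims) / n for dims in zip(*examples)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

positives = [[1.0, 0.0], [0.0, 1.0]]
query = average_vector(positives)     # element-wise mean -> [0.5, 0.5]
stored = {"a": [1.0, 1.0], "b": [-1.0, 0.0]}
ranked = sorted(stored, key=lambda k: cosine(query, stored[k]), reverse=True)
```

With a real client, the same effect is obtained server-side by passing the examples in positive and selecting strategy='average_vector'.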

async recommend_batch(collection_name: str, requests: Sequence[Union[RecommendRequest, RecommendPoints]], consistency: Optional[Union[ReadConsistencyType, int]] = None, timeout: Optional[int] = None, **kwargs: Any) List[List[ScoredPoint]][source]

Perform multiple recommend requests in batch mode

Parameters
  • collection_name – Name of the collection

  • requests – List of recommend requests

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of recommend responses
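The consistency values described above can be illustrated with a toy merge over per-replica result sets. This is only a sketch of the documented semantics, not the server's actual read path:

```python
# Toy illustration of the read-consistency modes, NOT the server's read
# path: given per-replica result sets, keep the values each mode would
# return.
from collections import Counter

def merge_reads(replica_results, consistency):
    total = len(replica_results)
    if consistency == "all":          # query all, keep values on every replica
        counts = Counter(v for res in replica_results for v in res)
        need = total
    elif consistency == "majority":   # query all, keep values on most replicas
        counts = Counter(v for res in replica_results for v in res)
        need = total // 2 + 1
    elif consistency == "quorum":     # query a majority, keep values on all of them
        queried = replica_results[: total // 2 + 1]
        counts = Counter(v for res in queried for v in res)
        need = len(queried)
    else:                             # int: query that many replicas
        queried = replica_results[:consistency]
        counts = Counter(v for res in queried for v in res)
        need = len(queried)
    return {v for v, c in counts.items() if c >= need}

replicas = [{1, 2, 3}, {1, 2}, {1, 3}]
```

Stricter modes return fewer, better-confirmed values; an int value of 1 simply trusts a single replica.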

async recommend_groups(collection_name: str, group_by: str, positive: Optional[Sequence[Union[int, str, SparseVector, List[float]]]] = None, negative: Optional[Sequence[Union[int, str, SparseVector, List[float]]]] = None, query_filter: Optional[Union[Filter, Filter]] = None, search_params: Optional[Union[SearchParams, SearchParams]] = None, limit: int = 10, group_size: int = 1, score_threshold: Optional[float] = None, with_payload: Union[bool, Sequence[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, Sequence[str]] = False, using: Optional[str] = None, lookup_from: Optional[Union[LookupLocation, LookupLocation]] = None, with_lookup: Optional[Union[WithLookup, str]] = None, strategy: Optional[RecommendStrategy] = None, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, timeout: Optional[int] = None, **kwargs: Any) GroupsResult[source]

Recommend point groups: search for points similar to already stored examples, grouped by a payload field.

Recommends the best matches for the given stored examples, grouped by the value of a payload field. Useful to obtain the most relevant results for each category, deduplicate results, or find the best representation vector for the same entity.

Parameters
  • collection_name – Collection to search in

  • positive – List of stored point IDs or vectors which should be used as references for the similarity search. If there is only one example, the request is equivalent to a regular search with the vector of that point. If more than one example is given, Qdrant will search for points similar to all of them. Recommendation with multiple examples is experimental; its behaviour may change depending on the selected strategy.

  • negative – List of stored point IDs or vectors which the search results should be dissimilar to. Negative examples are an experimental feature; their behaviour may change depending on the selected strategy.

  • group_by – Name of the payload field to group by. Field must be of type “keyword” or “integer”. Nested fields are specified using dot notation, e.g. “nested_field.subfield”.

  • query_filter

    • Exclude vectors which don't fit the given conditions.

    • If None - search among all vectors

  • search_params – Additional search params

  • limit – How many groups to return

  • group_size – How many results to return for each group

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False - Do not attach vector.

    • If List of string - attach only specified vectors

    • Default: False

  • score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. The score of a returned result may be higher or lower than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.

  • using – Name of the vectors to use for recommendations. If None - use default vectors.

  • lookup_from – Defines a location (collection and vector field name) used to look up vectors for recommendations. If None - the current collection will be used.

  • with_lookup – Look for points in another collection using the group ids. If specified, each group will contain a record from the specified collection with the same id as the group id. In addition, the parameter allows you to specify which parts of the record should be returned, as in the with_payload and with_vectors parameters.

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

  • strategy

    Strategy to use for recommendation. Strategy defines how to combine multiple examples into a recommendation query. Possible values:

    • ’average_vector’ - calculates average vector of all examples and uses it for search

    • ’best_score’ - finds the result which is closer to positive examples and further from negative

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of groups with no more than group_size hits in each group. Each group also contains the id of the group, which is the value of the payload field.
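The grouping behaviour can be sketched without a client: bucket scored hits by a payload field and keep at most group_size hits per group. A simplified illustration, not the server implementation:

```python
# Simplified sketch of group formation: bucket scored hits by a payload
# field, keep at most `group_size` hits per group and at most `limit`
# groups, best-scoring hits first.
def group_hits(hits, group_by, limit=10, group_size=1):
    groups = {}
    for hit in sorted(hits, key=lambda h: h["score"], reverse=True):
        key = hit["payload"][group_by]
        bucket = groups.setdefault(key, [])
        if len(bucket) < group_size:
            bucket.append(hit)
    return dict(list(groups.items())[:limit])

hits = [
    {"id": 1, "score": 0.9, "payload": {"city": "Berlin"}},
    {"id": 2, "score": 0.8, "payload": {"city": "Berlin"}},
    {"id": 3, "score": 0.7, "payload": {"city": "Paris"}},
]
grouped = group_hits(hits, "city", group_size=1)   # one best hit per city
```

With group_size=1 the second Berlin hit is dropped, which is exactly the deduplication use case mentioned above.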

async recover_shard_snapshot(collection_name: str, shard_id: int, location: str, priority: Optional[SnapshotPriority] = None, wait: bool = True, **kwargs: Any) Optional[bool][source]

Recover shard from snapshot.

Parameters
  • collection_name – Name of the collection

  • shard_id – Index of the shard

  • location – URL of the snapshot. Example: http://localhost:8080/collections/my_collection/snapshots/my_snapshot

  • priority

    Defines source of truth for snapshot recovery

    • replica (default) - prefer existing data over the snapshot

    • no_sync - do not sync the shard with other shards

    • snapshot - prefer snapshot data over the current state

  • wait

    Whether to wait for the recovery to finish.

    • If true, the result is returned only when the recovery is done.

    • If false, the result is returned immediately after confirmation that the request was received.

Returns

True if snapshot was recovered

async recover_snapshot(collection_name: str, location: str, priority: Optional[SnapshotPriority] = None, wait: bool = True, **kwargs: Any) Optional[bool][source]

Recover collection from snapshot.

Parameters
  • collection_name – Name of the collection

  • location

    URL or local path of the snapshot. Examples:

    • URL: http://localhost:8080/collections/my_collection/snapshots/my_snapshot

    • Local path: file:///qdrant/snapshots/test_collection/test_collection-6194298859870377-2023-11-09-15-17-51.snapshot

  • priority

    Defines source of truth for snapshot recovery

    • replica (default) - prefer existing data over the snapshot

    • no_sync - do not sync the shard with other shards

    • snapshot - prefer snapshot data over the current state

  • wait

    Whether to wait for the recovery to finish.

    • If true, the result is returned only when the recovery is done.

    • If false, the result is returned immediately after confirmation that the request was received.

Returns

True if snapshot was recovered

async recreate_collection(collection_name: str, vectors_config: Union[VectorParams, Mapping[str, VectorParams]], sparse_vectors_config: Optional[Mapping[str, SparseVectorParams]] = None, shard_number: Optional[int] = None, sharding_method: Optional[ShardingMethod] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[Union[HnswConfigDiff, HnswConfigDiff]] = None, optimizers_config: Optional[Union[OptimizersConfigDiff, OptimizersConfigDiff]] = None, wal_config: Optional[Union[WalConfigDiff, WalConfigDiff]] = None, quantization_config: Optional[Union[ScalarQuantization, ProductQuantization, BinaryQuantization, QuantizationConfig]] = None, init_from: Optional[Union[InitFrom, str]] = None, timeout: Optional[int] = None, **kwargs: Any) bool[source]

Delete and create an empty collection with the given parameters

Parameters
  • collection_name – Name of the collection to recreate

  • vectors_config – Configuration of the vector storage. Vector params contains size and distance for the vector storage. If dict is passed, service will create a vector storage for each key in the dict. If single VectorParams is passed, service will create a single anonymous vector storage.

  • sparse_vectors_config – Configuration of the sparse vector storage. The service will create a sparse vector storage for each key in the dict.

  • shard_number – Number of shards in collection. Default is 1, minimum is 1.

  • sharding_method – Defines the strategy for shard creation. Option auto (default) creates the defined number of shards automatically; data will be distributed between shards automatically. After creation, shards can be additionally replicated, but new shards cannot be created. Option custom allows creating shards manually; each shard should be created with an assigned unique shard_key. Data will be distributed between shards based on the shard_key value.

  • replication_factor – Replication factor for the collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Has effect only in distributed mode.

  • write_consistency_factor – Write consistency factor for the collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number makes the collection more resilient to inconsistencies, but will also make it fail if not enough replicas are available. Does not have any performance impact. Has effect only in distributed mode.

  • on_disk_payload – If true - the points' payload will not be stored in memory. It will be read from disk every time it is requested. This setting saves RAM by (slightly) increasing response time. Note: payload values that are involved in filtering and are indexed remain in RAM.

  • hnsw_config – Params for HNSW index

  • optimizers_config – Params for optimizer

  • wal_config – Params for Write-Ahead-Log

  • quantization_config – Params for quantization, if None - quantization will be disabled

  • init_from – Use data stored in another collection to initialize this collection

  • timeout – Wait for operation commit timeout in seconds. If timeout is reached - request will return with service error.

Returns

Operation result

async retrieve(collection_name: str, ids: Sequence[Union[int, str, PointId]], with_payload: Union[bool, Sequence[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, Sequence[str]] = False, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) List[Record][source]

Retrieve stored points by IDs

Parameters
  • collection_name – Name of the collection to lookup in

  • ids – list of IDs to lookup

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False - Do not attach vector.

    • If List of string - Attach only specified vectors.

    • Default: False

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

Returns

List of points
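The with_payload selection rules above can be sketched as a small function. This illustration covers only the bool and list-of-fields cases; the real client additionally accepts PayloadSelector objects:

```python
# Sketch of the `with_payload` selection rules: True -> all payload,
# False -> no payload, list of keys -> only the named fields. The real
# client additionally accepts PayloadSelector objects (not shown).
def select_payload(payload, with_payload):
    if with_payload is True:
        return payload
    if with_payload is False:
        return None
    # sequence of field names: include only the named fields
    return {k: v for k, v in payload.items() if k in with_payload}

record = {"color": "red", "price": 10, "city": "Berlin"}
```

The same rules apply wherever with_payload appears in this module, including search, scroll, and the recommend methods.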

async scroll(collection_name: str, scroll_filter: Optional[Union[Filter, Filter]] = None, limit: int = 10, offset: Optional[Union[int, str, PointId]] = None, with_payload: Union[bool, Sequence[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, Sequence[str]] = False, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) Tuple[List[Record], Optional[Union[int, str, PointId]]][source]

Scroll over all (matching) points in the collection.

This method provides a way to iterate over all stored points with an optional filtering condition. Scroll does not apply any similarity estimation; it returns points sorted by id in ascending order.

Parameters
  • collection_name – Name of the collection

  • scroll_filter – If provided - only returns points matching filtering conditions

  • limit – How many points to return

  • offset – If provided - skip points with ids less than given offset

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False (default) - Do not attach vector.

    • If List of string - attach only specified vectors

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

Returns

A pair of (list of points, optional offset for the next scroll request). If the next page offset is None - there are no more points in the collection to scroll.
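The scroll contract (each call returns a page plus the offset for the next call; None signals the end) can be exercised against a stub. The stub below only mimics the (records, next_offset) shape; with a real client the loop body would be an awaited scroll call with the same offset handling:

```python
# The scroll contract in miniature: each call returns (records, next_offset)
# and next_offset becomes None once the collection is exhausted. `fake_scroll`
# is a stand-in for the real request; with a real client the loop body would
# be `records, offset = await client.scroll(..., offset=offset)`.
def fake_scroll(point_ids, limit, offset=None):
    ids = sorted(point_ids)                       # scroll returns ids ascending
    start = 0 if offset is None else ids.index(offset)
    page = ids[start:start + limit]
    next_offset = ids[start + limit] if start + limit < len(ids) else None
    return page, next_offset

seen, offset = [], None
while True:
    page, offset = fake_scroll({1, 2, 3, 4, 5}, limit=2, offset=offset)
    seen.extend(page)
    if offset is None:                            # no more points to scroll
        break
```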

async search(collection_name: str, query_vector: Union[ndarray[Any, dtype[Union[bool_, int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32, float64, float128]]], Sequence[float], Tuple[str, List[float]], NamedVector, NamedSparseVector], query_filter: Optional[Union[Filter, Filter]] = None, search_params: Optional[Union[SearchParams, SearchParams]] = None, limit: int = 10, offset: Optional[int] = None, with_payload: Union[bool, Sequence[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, Sequence[str]] = False, score_threshold: Optional[float] = None, append_payload: bool = True, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, timeout: Optional[int] = None, **kwargs: Any) List[ScoredPoint][source]

Search for closest vectors in collection taking into account filtering conditions

Parameters
  • collection_name – Collection to search in

  • query_vector – Search for vectors closest to this. Can be either a vector itself, or a named vector, or a named sparse vector, or a tuple of vector name and vector itself

  • query_filter

    • Exclude vectors which don't fit the given conditions.

    • If None - search among all vectors

  • search_params – Additional search params

  • limit – How many results to return

  • offset – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues.

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False - Do not attach vector.

    • If List of string - attach only specified vectors

    • Default: False

  • score_threshold – Define a minimal score threshold for the result. If defined, less similar results will not be returned. The score of a returned result may be higher or lower than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.

  • append_payload – Same as with_payload. Deprecated.

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

  • timeout – Overrides global timeout for this search. Unit is seconds.

Examples:

Search with filter:

await qdrant.search(
    collection_name="test_collection",
    query_vector=[1.0, 0.1, 0.2, 0.7],
    query_filter=Filter(
        must=[
            FieldCondition(
                key="color",
                match=MatchValue(value="red"),
            )
        ]
    ),
)
Returns

List of found close points with similarity scores.

async search_batch(collection_name: str, requests: Sequence[Union[SearchRequest, SearchPoints]], timeout: Optional[int] = None, consistency: Optional[Union[ReadConsistencyType, int]] = None, **kwargs: Any) List[List[ScoredPoint]][source]

Perform multiple search requests in batch mode

Parameters
  • collection_name – Name of the collection

  • requests – List of search requests

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of search responses

async search_groups(collection_name: str, query_vector: Union[ndarray[Any, dtype[Union[bool_, int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32, float64, float128]]], Sequence[float], Tuple[str, List[float]], NamedVector, NamedSparseVector], group_by: str, query_filter: Optional[Union[Filter, Filter]] = None, search_params: Optional[Union[SearchParams, SearchParams]] = None, limit: int = 10, group_size: int = 1, with_payload: Union[bool, Sequence[str], PayloadSelectorInclude, PayloadSelectorExclude, WithPayloadSelector] = True, with_vectors: Union[bool, Sequence[str]] = False, score_threshold: Optional[float] = None, with_lookup: Optional[Union[WithLookup, str]] = None, consistency: Optional[Union[ReadConsistencyType, int]] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, timeout: Optional[int] = None, **kwargs: Any) GroupsResult[source]

Search for closest vectors grouped by payload field.

Searches for the best matches for the query vector, grouped by the value of a payload field. Useful to obtain the most relevant results for each category, deduplicate results, or find the best representation vector for the same entity.

Parameters
  • collection_name – Collection to search in

  • query_vector – Search for vectors closest to this. Can be either a vector itself, or a named vector, or a named sparse vector, or a tuple of vector name and vector itself

  • group_by – Name of the payload field to group by. Field must be of type “keyword” or “integer”. Nested fields are specified using dot notation, e.g. “nested_field.subfield”.

  • query_filter

    • Exclude vectors which don't fit the given conditions.

    • If None - search among all vectors

  • search_params – Additional search params

  • limit – How many groups to return

  • group_size – How many results to return for each group

  • with_payload

    • Specify which stored payload should be attached to the result.

    • If True - attach all payload

    • If False - do not attach any payload

    • If List of string - include only specified fields

    • If PayloadSelector - use explicit rules

  • with_vectors

    • If True - Attach stored vector to the search result.

    • If False - Do not attach vector.

    • If List of string - attach only specified vectors

    • Default: False

  • score_threshold – Minimal score threshold for the result. If defined, less similar results will not be returned. The score of a returned result may be higher or lower than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.

  • with_lookup – Look for points in another collection using the group ids. If specified, each group will contain a record from the specified collection with the same id as the group id. In addition, the parameter allows you to specify which parts of the record should be returned, as in the with_payload and with_vectors parameters.

  • consistency

    Read consistency of the search. Defines how many replicas should be queried before returning the result. Values:

    • int - number of replicas to query, values should be present in all queried replicas

    • 'majority' - query all replicas, but return values present in the majority of replicas

    • 'quorum' - query the majority of replicas, return values present in all of them

    • 'all' - query all replicas, and return values present in all replicas

  • shard_key_selector – Specifies which shards should be queried. If None - query all shards. Only works for collections with a custom sharding method.

  • timeout – Overrides global timeout for this search. Unit is seconds.

Returns

List of groups with no more than group_size hits in each group. Each group also contains the id of the group, which is the value of the payload field.

async set_payload(collection_name: str, payload: Dict[str, Any], points: Union[List[Union[int, str, PointId]], Filter, Filter, PointIdsList, FilterSelector, PointsSelector], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Modifies payload of the specified points

Examples:

Set payload:

# Assign payload value with key `"key"` to points 1, 2, 3.
# If payload value with specified key already exists - it will be overwritten
await qdrant_client.set_payload(
    collection_name="test_collection",
    wait=True,
    payload={
        "key": "value"
    },
    points=[1, 2, 3],
)
Parameters
  • collection_name – Name of the collection

  • wait

    Whether to wait for the results to be processed.

    • If true, the result is returned only when all changes are applied.

    • If false, the result is returned immediately after confirmation that the request was received.

  • payload – Key-value pairs of payload to assign

  • points

    List of affected points, filter, or points selector. Examples:

    • points=[1, 2, 3, "cd3b53f0-11a7-449f-bc50-d06310e7ed90"]

    • points=Filter(must=[FieldCondition(key="rand_number", range=Range(gte=0.7))])

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through a dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - write operations go through the permanent leader; consistent, but may be unavailable if the leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result

async unlock_storage(**kwargs: Any) LocksOption[source]

Unlock storage for writing.

async update_collection(collection_name: str, optimizers_config: Optional[Union[OptimizersConfigDiff, OptimizersConfigDiff]] = None, collection_params: Optional[Union[CollectionParamsDiff, CollectionParamsDiff]] = None, vectors_config: Optional[Union[Dict[str, VectorParamsDiff], VectorsConfigDiff]] = None, hnsw_config: Optional[Union[HnswConfigDiff, HnswConfigDiff]] = None, quantization_config: Optional[Union[ScalarQuantization, ProductQuantization, BinaryQuantization, Disabled, QuantizationConfigDiff]] = None, timeout: Optional[int] = None, sparse_vectors_config: Optional[Mapping[str, SparseVectorParams]] = None, **kwargs: Any) bool[source]

Update parameters of the collection

Parameters
  • collection_name – Name of the collection

  • optimizers_config – Override for optimizer configuration

  • collection_params – Override for collection parameters

  • vectors_config – Override for vector-specific configuration

  • hnsw_config – Override for HNSW index params

  • quantization_config – Override for quantization params

  • timeout – Wait for operation commit timeout in seconds. If timeout is reached - request will return with service error.

  • sparse_vectors_config – Override for sparse vector-specific configuration

Returns

Operation result

async update_collection_aliases(change_aliases_operations: Sequence[Union[CreateAliasOperation, RenameAliasOperation, DeleteAliasOperation, AliasOperations]], timeout: Optional[int] = None, **kwargs: Any) bool[source]

Operation for performing changes of collection aliases.

Alias changes are atomic, meaning that no collection modifications can happen between alias operations.

Parameters
  • change_aliases_operations – List of operations to perform

  • timeout – Wait for operation commit timeout in seconds. If timeout is reached - request will return with service error.

Returns

Operation result

async update_vectors(collection_name: str, points: Sequence[PointVectors], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) UpdateResult[source]

Update specified vectors in the collection. Keeps payload and unspecified vectors unchanged.

Parameters
  • collection_name (str) – Name of the collection to update vectors in

  • points (Point) –

    List of (id, vector) pairs to update. The vector may be a list of numbers or a dict of named vectors. Examples:

    • PointVectors(id=1, vector=[1, 2, 3])

    • PointVectors(id=2, vector={"vector_1": [1, 2, 3], "vector_2": [4, 5, 6]})

  • wait (bool) –

    Whether to wait for the results to be processed.

    • If true, the result is returned only when all changes are applied.

    • If false, the result is returned immediately after confirmation that the request was received.

  • ordering (Optional[WriteOrdering]) –

    Define strategy for ordering of the points. Possible values:

    • weak (default) - write operations may be reordered, works faster

    • medium - write operations go through a dynamically selected leader, may be inconsistent for a short period of time in case of leader change

    • strong - write operations go through the permanent leader; consistent, but may be unavailable if the leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

Returns

Operation result (UpdateResult)
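The "keeps unspecified vectors unchanged" semantics for named vectors amounts to a per-key merge. A pure-dict illustration of the observable behaviour, not the client's storage layer:

```python
# Pure-dict illustration of the partial-update semantics: only the named
# vectors are replaced; other named vectors (and the payload) stay intact.
stored = {"vector_1": [1.0, 2.0], "vector_2": [3.0, 4.0]}
update = {"vector_2": [9.0, 9.0]}   # as in PointVectors(id=..., vector=update)
stored.update(update)               # "vector_1" is left unchanged
```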

upload_collection(collection_name: str, vectors: Union[Dict[str, ndarray[Any, dtype[Union[bool_, int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32, float64, float128]]]], ndarray[Any, dtype[Union[bool_, int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32, float64, float128]]], Iterable[Union[List[float], Dict[str, Union[SparseVector, List[float]]]]]], payload: Optional[Iterable[Dict[Any, Any]]] = None, ids: Optional[Iterable[Union[int, str, PointId]]] = None, batch_size: int = 64, parallel: int = 1, method: Optional[str] = None, max_retries: int = 3, wait: bool = False, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) None[source]

Upload vectors and payload to the collection. This method performs automatic batching of the data. If you need to perform a single update, use the upsert method. Note: use the upload_records method if you want to upload multiple vectors with a single payload.

Parameters
  • collection_name – Name of the collection to upload to

  • vectors – np.ndarray or an iterable over vectors to upload. May be memory-mapped.

  • payload – Iterable of vectors payload, Optional, Default: None

  • ids – Iterable of custom vectors ids, Optional, Default: None

  • batch_size – How many vectors upload per-request, Default: 64

  • parallel – Number of parallel processes of upload

  • method – Start method for parallel processes, Default: forkserver

  • max_retries – maximum number of retries in case of a failure during the upload of a batch

  • wait – Await for the results to be applied on the server side. If true, each update request will explicitly wait for the confirmation of completion. Might be slower. If false, each update request will return immediately after the confirmation of receiving. Default: false

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with custom sharding method.

async upload_points(collection_name: str, points: Iterable[PointStruct], batch_size: int = 64, parallel: int = 1, method: Optional[str] = None, max_retries: int = 3, wait: bool = False, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) → None[source]

Upload points to the collection

Similar to the upload_collection method, but operates on points rather than on vectors and payloads individually.

Parameters
  • collection_name – Name of the collection to upload to

  • points – Iterator over points to upload

  • batch_size – How many vectors to upload per request, Default: 64

  • parallel – Number of parallel upload processes

  • method – Start method for parallel processes, Default: forkserver

  • max_retries – Maximum number of retries in case of a failure during the upload of a batch

  • wait – Wait for the results to be applied on the server side. If true, each update request will explicitly wait for the confirmation of completion. Might be slower. If false, each update request will return immediately after receipt of the request is confirmed. Default: false

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with a custom sharding method. This parameter overwrites shard keys written in the points.

upload_records(collection_name: str, records: Iterable[Record], batch_size: int = 64, parallel: int = 1, method: Optional[str] = None, max_retries: int = 3, wait: bool = False, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) → None[source]

Upload records to the collection

Similar to the upload_collection method, but operates on records rather than on vectors and payloads individually.

Parameters
  • collection_name – Name of the collection to upload to

  • records – Iterator over records to upload

  • batch_size – How many vectors to upload per request, Default: 64

  • parallel – Number of parallel upload processes

  • method – Start method for parallel processes, Default: forkserver

  • max_retries – Maximum number of retries in case of a failure during the upload of a batch

  • wait – Wait for the results to be applied on the server side. If true, each update request will explicitly wait for the confirmation of completion. Might be slower. If false, each update request will return immediately after receipt of the request is confirmed. Default: false

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with a custom sharding method. This parameter overwrites shard keys written in the records.

async upsert(collection_name: str, points: Union[Batch, List[PointStruct]], wait: bool = True, ordering: Optional[WriteOrdering] = None, shard_key_selector: Optional[Union[int, str, List[Union[int, str]]]] = None, **kwargs: Any) → UpdateResult[source]

Update or insert a new point into the collection.

If a point with the given ID already exists, it will be overwritten.

Parameters
  • collection_name (str) – Name of the collection to insert into

  • points (Point) – Batch or list of points to insert

  • wait (bool) –

    Wait for the results to be processed.

    • If true, the result is returned only after all changes are applied

    • If false, the result is returned immediately after receipt of the request is confirmed.

  • ordering (Optional[WriteOrdering]) –

    Defines the ordering strategy for write operations. Possible values:

    • weak (default) - write operations may be reordered; works faster

    • medium - write operations go through a dynamically selected leader; may be inconsistent for a short period in case of a leader change

    • strong - write operations go through the permanent leader; consistent, but may be unavailable if the leader is down

  • shard_key_selector – Defines the shard groups that should be used to write updates into. If multiple shard_keys are provided, the update will be written to each of them. Only works for collections with a custom sharding method.

Returns

Operation result (UpdateResult)

property grpc_collections: qdrant_client.grpc.collections_service_pb2_grpc.CollectionsStub

gRPC client for collections methods

Returns

An instance of raw gRPC client, generated from Protobuf

property grpc_points: qdrant_client.grpc.points_service_pb2_grpc.PointsStub

gRPC client for points methods

Returns

An instance of raw gRPC client, generated from Protobuf

property http: AsyncApis[AsyncApiClient]

REST Client

Returns

An instance of raw REST API client, generated from OpenAPI schema

property rest: AsyncApis[AsyncApiClient]

REST Client

Returns

An instance of raw REST API client, generated from OpenAPI schema
