Releases: milvus-io/milvus
milvus-2.3.4
2.3.4
Release date: Jan 2, 2024
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.4 | 2.3.5 | 2.3.3 | 2.3.4 | 2.3.5 |
Milvus 2.3.4 brings significant enhancements, focusing on availability and usability. The update introduces access logs for better monitoring and integrates Parquet for efficient bulk imports. A key feature is the binlog index on growing segments for faster searches. Major improvements include support for up to 10,000 collections/partitions, reduced memory usage, clearer error messages, quicker loading, and better query shard balance. It addresses critical issues like resource leakage, load/release failures, and concurrency challenges. However, it discontinues regular expression searches in partitions to save resources, with an option to re-enable this feature in the configuration.
Features
- Access Logs:
  - Milvus now supports access logs for monitoring external interfaces. These logs record method names, user requests, response times, and error codes.
  - Note: Currently, this feature supports only gRPC; RESTful requests are not included.
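As a rough sketch of how this feature is typically switched on, access logging is configured in `milvus.yaml` under the proxy section. The key names below are assumptions and should be verified against the configuration reference for your Milvus version:

```yaml
# Hedged sketch of an access-log configuration in milvus.yaml.
# Key names are assumptions; check your version's configuration reference.
proxy:
  accessLog:
    enable: true                  # turn on access logging for gRPC interfaces
    filename: milvus_access.log   # log file name; an empty value logs to stdout
    localPath: /var/log/milvus    # directory for local access-log files
```

With a configuration like this, each gRPC call is appended to the log with its method name, user, response time, and error code.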
- Parquet File Import:
  - This update introduces support for Parquet file imports, enhancing performance and memory efficiency. It also broadens data type support, including arrays and JSON.
  - This lifts the previous restriction to JSON and NumPy formats.
- Binlog Index on Growing Segments:
  - Milvus now employs a binlog index on growing segments to enhance search efficiency, allowing for advanced indexes such as IVF or ScaNN.
  - This improvement can increase search speeds in growing segments by up to tenfold.
Improvements
- Expanded Collection/Partition Support:
  - Milvus now supports up to 10,000 collections/partitions in a cluster, benefiting multi-tenant environments.
  - This improvement comes from refinements to the timetick mechanism, goroutine management, and memory usage.
  - Note: Exceeding the recommended limit may affect failure recovery and resource usage. The recommended limit is 10,000, counted as Collection * Shard * Partition.
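Since the limit is a product rather than a per-resource cap, a small illustrative calculation (numbers chosen arbitrarily) shows how quickly shards and partitions consume the budget:

```python
# The recommended ceiling applies to the product
# collections * shards * partitions, not to collections alone.
collections = 10
shards_per_collection = 2
partitions_per_collection = 500

total = collections * shards_per_collection * partitions_per_collection
print(total)  # 10000 -- exactly at the recommended limit
```

So a deployment with only 10 collections can already sit at the limit if each collection is heavily sharded and partitioned.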
- Reduced Memory Usage:
  - Enhancements have been made to improve memory efficiency during various operations, including data retrieval and variable-length data handling.
- Refined Error Messaging:
  - Error messages have been split into summaries and details for clearer understanding.
- Accelerated Loading Speed:
  - Various optimizations have been implemented to increase loading speeds, particularly in scenarios with frequent flushes and deletions.
- Improved Query Shard Balance:
  - Implemented channel balancing in `querycoord`, along with other improvements for efficient shard management.
- Other Enhancements:
  - Includes security improvements, MMap support for index loading, partition-level privileges, and more.
Critical Bug Fixes
- Resource Leakage Fixes:
  - Addressed critical memory leaks in the Pulsar producer/consumer and improved garbage collection of meta snapshots.
- Load/Release Failure Fixes:
  - Resolved issues causing load/release operations to stall, especially in clusters with many segments.
- Concurrency Issues:
  - Fixed problems related to concurrent insertions, deletions, and queries.
- Other Critical Fixes:
  - Fixed an issue where upgrades from version 2.2 failed due to missing `CollectionLoadInfo`.
  - Fixed an issue where deletions might be lost because of errors in parsing compacted file logpaths (#29276).
  - Fixed an issue where flush and compaction processes could become stuck under heavy insert/delete traffic.
  - Fixed the inability to perform compact operations on the array type (#29505) (#29504).
  - Fixed an issue where collections with more than 128 partitions failed to be released (#28567).
  - Fixed an issue related to parsing expressions that include quotation marks (#28418).
  - Addressed a failure in Azure Blob Storage's `ListObjects` operation causing garbage collection failures (#27931) (#28894).
  - Fixed an issue with missing target database names in `RenameCollection` operations (#28911).
  - Fixed an issue where iterators lost data in cases of duplicated results (#29406) (#29446).
  - Corrected the bulk insert binlog process to consider timestamp order when processing delta data (#29176).
  - Fixed an issue to exclude insert data before a growing checkpoint (#29559).
  - Addressed a problem where frequent flushing caused rate limits in MinIO (#28625).
  - Fixed an issue where creating growing segments could introduce an excessive number of threads (#29314).
  - Fixed an issue in retrieving binary vectors from chunk cache (#28866) (#28884).
  - Fixed an issue where checkpoints were incorrectly updated after dropping a collection (#29221).
Breaking Change
- Discontinued Regular Expression Search in Partitions:
- To reduce resource consumption, regular expression searches in partitions have been discontinued. However, this feature can be re-enabled through configuration (see #29154 for details).
milvus-2.2.16
2.2.16
Release date: Nov 27, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.16 | 2.2.17 | 2.2.15 | 2.2.8 | 2.2.24 |
Milvus 2.2.16 represents a minor patch release following Milvus 2.2.15. This update primarily concentrates on bolstering system stability, enhancing fault recovery speed, and addressing various identified issues. Notably, the Knowhere version has been updated in this release, leading to quicker loading of DiskAnn indexes.
For an optimal experience, we highly recommend all users currently on the 2.2.0 series to upgrade to this version before considering a move to 2.3.
Bug Fixes
- Corrected the docker-compose etcd health check command (#27980).
- Completed the cleanup of remaining meta information after dropping a collection (#28500).
- Rectified the issue causing panic during the execution of stop logic in query coordination (#28543).
- Resolved the problem of the cmux server failing to gracefully shut down (#28384).
- Eliminated the reference counting logic related to the query shard service to prevent potential leaks (#28547).
- Removed the logic of polling collection information from RootCoord during the restart process of QueryCoord to prevent startup failures (#28607).
- Fixed parsing errors in expressions containing mixed single and double quotations (#28417).
- Addressed DataNode panic during flushing of the delete buffer (#28710).
Enhancements
milvus-2.2.15
2.2.15
Release date: Nov 10, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.15 | 2.2.17 | 2.2.15 | 2.2.8 | 2.2.24 |
Milvus 2.2.15, a bugfix version of the Milvus 2.2.x series, has introduced significant improvements and bug fixes. This version enhanced the `bulkinsert` functionality to support `partitionkey` and the new JSON list format. Additionally, 2.2.15 has substantially improved the rolling upgrade process to 2.3.3 and resolved many critical issues. We strongly recommend all 2.2.x users upgrade to this version before moving to 2.3.
Incompatible Update
- Removed MySQL metastore support (#26634).
Features
- Enabled `bulkinsert` of binlog data with `partitionkey` (#27336).
- Added support for `bulkinsert` with pure list JSON (#28127).
Improvements
- Added `-g` flag for compiling with debug information (#26698).
- Implemented a workaround to fix `ChannelManager` holding mutex for too long (#26870, #26874).
- Reduced the number of goroutines resulting from `GetIndexInfos` (#27547).
- Eliminated the recollection of segment stats during `datacoord` startup (#27562).
- Removed `flush` from the DDL queue (#27691).
- Decreased the write lock scope in channel manager (#27824).
- Reduced the number of parallel tasks for compaction (#27900).
- Refined RPC call in `unwatch drop channel` (#27884).
- Enhanced `bulkinsert` to read `varchar` in batches (#26198).
- Optimized Milvus rolling upgrade process, including:
- Refined standalone components' stop order (#26742, #26778).
- Improved RPC client retry mechanism (#26797).
- Handled errors from new `RootCoord` for `DescribeCollection` (#27029).
- Added a stop hook for session cleanup (#27565).
- Accelerated shard leader cache update frequency (#27641).
- Disabled retryable error logic in search/query operations (#27661).
- Supported signal reception from parent process (#27755).
- Checked data sync service number during graceful stop (#27789).
- Fixed query shard service leak (#27848).
- Refined Proxy stop process (#27910).
- Fixed deletion of session key with prefix (#28261).
- Addressed unretryable errors (#27955).
- Refined stop order for components (#28017).
- Added timeout for graceful stop (#27326, #28226).
- Implemented fast fail when querynode is not ready (#28204).
Bug Fixes
- Resolved `CollectionNotFound` error during `describe rg` (#26569).
- Fixed issue where timeout tasks never released the queue (#26594).
- Refined signal handler for the entire Milvus role lifetime (#26642, #26702).
- Addressed panic caused by non-nil component pointer to `component` interface (#27079).
- Enhanced garbage collector to fetch meta after listing from storage (#27205).
- Fixed Kafka consumer connection leak (#27223).
- Reduced RPC size for `GetRecoveryInfoV2` (#27484).
- Resolved concurrent parsing expression issues with strings (#26721, #27539).
- Fixed query shard `inUse` leak (#27765).
- Corrected `rootPath` issue when querynode cleaned local directory (#28314).
- Ensured compatibility with sync target version (#28290).
- Fixed release of query shard when releasing growing segment (#28040).
- Addressed slow response in `flushManager.isFull` (#28141, #28149).
- Implemented check for length before comparing strings (#28111).
- Resolved panic during close delete flow graph (#28202).
- Fixed `bulkinsert` bug where segments were compacted after import (#28200).
- Solved data node panic during save binlog path (#28243).
- Updated collection target after observer start (#27962).
milvus-2.3.3
2.3.3
Release date: Nov 10, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.3 | 2.3.4 | 2.3.3 | 2.3.3 | 2.3.3 |
Features
- Supported pure list JSON in bulk insert (#28126)
Improvements
- Constructed a plan directly when searching with vector output (#27963)
- Removed binlog/delta log from getRecoveryInfoV2 (#27895) (#28090)
- Refined code for fixed-length types array (#28109)
- Reduced unserviceable time during rolling upgrades
- Refined stop order (#28016) (#28089)
- Set qcv2 index task priority to Low (#28117) (#28134)
- Removed retry in getShards (#28011) (#28091)
- Fixed load index for stopping node (#28047) (#28137)
- Fixed retry on offline node (#28079) (#28139)
- Fixed QueryNode panic while upgrading (#28034) (#28114)
- Fixed coordinator fast restart by deleting old session (#28205)
- Fixed check grpc error logic (#28182) (#28218)
- Delayed the cancellation of ctx when stopping the node (#28249)
- Disabled auto balance when an old node exists (#28191) (#28224)
- Fixed auto balance block channel reassign after datanode restart (#28276)
- Fixed retry when proxy stopped (#28263)
- Reduced useless ObjectExists in AzureBlobManager (#28157)
- Got vector concurrently (#28119)
- Forced Aliyun `use_virtual_host` to true for all deployments (#28237)
- Fixed delete session key with prefix causing multiple QueryNode crashes (#28267)
Bug Fixes
- Fixed script stop unable to find Milvus process (#27958)
- Fixed timestamp reordering issue with delete records (#27941) (#28113)
- Fixed prefix query with longer subarray potentially causing a crash (#28112)
- Limited max thread num for pool (#28018) (#28115)
- Fixed sync distribution with the wrong version (#28130) (#28170)
- Added a custom HTTP header: Accept-Type-Allow-Int64 for JS client (#28125)
- Fixed bug for constructing ArrayView with fixed-length type (#28186)
- Fixed bug for setting index state when IndexNode connecting failed (#28221)
- Fixed bulk insert bug that segments are compacted after import (#28227)
- Fixed the target updated before version updated to cause data missing (#28257)
- Handled exceptions while loading (#28306)
milvus-2.3.2
v2.3.2
Release date: Oct 26, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.2 | 2.3.4 | 2.3.2 | 2.3.2 | 2.3.3 |
We're thrilled to unveil Milvus 2.3.2, enriched with an array of novel features. Experience support for array data types, delve into intricate delete expressions, and celebrate the return of binary metric types such as SUBSTRUCTURE/SUPERSTRUCTURE.
This release promises enhanced performance through minimized data copying during loading and better bulk insertions. Coupled with heightened error messaging and handling, you're in for a smoother experience. Notably, our commitment to rolling upgrade stability ensures minimized service disruptions during updates.
Breaking Changes
New Features
- Array datatype now supported (#26369)
- Introduced complex delete expressions (#25752)
- Reintroduced binary metric types SUBSTRUCTURE/SUPERSTRUCTURE (#26766)
- Vector index mmap now available (#26750)
- CDC: Added capability to replicate mq messages (#27240)
- Facilitated renaming of database names within collections (#26543)
- Activated bulk insert of binlog data with partition keys (#27241)
- Enhanced support for multiple index engines (#27178)
- Introduced chunk cache to fetch raw vectors:
  - Newly added ChunkCache facilitates vector retrieval from storage (#26142)
- Implemented TiKV as a distributed meta solution:
  - Integrated TiKV (#26246)
- Rolled out float16 vector support (#25852)
  - Note: Index for float16 vectors is coming in the next version
- RESTful updates
Performance Enhancements
- Optimized data loading by minimizing data copy operations (#26746)
- Streamlined bulk inserts with batched varchar reading (#26199)
- Improved handling of large structs using pointer receivers (#26668)
- Removed unnecessary offset checks during data fills (#26666)
- Addressed high CPU consumption linked to proto.size (#27054)
- Optimized scalar column data with MADV_WILLNEED (#27170)
Additional Enhancements
- Robust rolling upgrade capabilities:
- Significant improvement in system availability during rolling upgrades, ensuring minimal service interruptions.
- Upgraded error messaging and handling for a seamless experience.
- Optimized flushing processes:
- Addressed issues where delete commands weren't being saved during flush operations.
- Resolved slow flush-related issues.
- Segregated task queues for Flush and DDL to prevent mutual blockages.
- Improved RocksMQ seek speeds (#27646) and standalone recovery times.
- Streamlined compact tasks (#27899)
- Added a channel manager in DataNode (#27308)
- Refined chunk management:
- Integrated grpc compression (#27894)
- Decoupled client-server API interfaces (#27186)
- Transitioned etcd watch-related code to event manager (#27192)
- Displayed index details during GetSegmentInfo (#26981)
Bug Fixes
- Resolved concurrent string parsing expression issues (#26721)
- Fixed connection issues with Kafka under SASL_SSL (#26617)
- Implemented error responses for yet-to-be-implemented APIs, replacing panic reactions (#26589)
- Addressed data race concerns:
- Mended partition garbage collection issues (#27816).
- Rectified SIGSEGV errors encountered when operating within gdb (#27736).
- Addressed thread safety issues in glog for standalone mode (#27703).
- Fixed instances where segments were inadvertently retained post-task cancellations (#26685).
- Resolved loading failures for collections exceeding 128 partitions (#26763).
- Ensured compatibility with scalar index types such as marisa-trie and Ascending (#27638).
- Corrected issues causing retrieval to sometimes exceed specified result limits (#26670).
- Solved startup failures in rootcoord due to role number limits (#27361).
- Patched Kafka consumer connection leaks (#27224).
- Disabled the enlarging of indices for flat structures (#27309).
- Updated garbage collector to fetch metadata post-storage listing (#27203).
- Fixed instances of datanode crashes stemming from simultaneous compaction and delete processes (#27167).
- Ironed out issues related to concurrent load logic in querynodev2 (#26959).
milvus-2.3.1
v2.3.1
Release date: Sep 22, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.1 | 2.3.1 | 2.3.1 | 2.3.1 | 2.3.2 |
We are excited to introduce Milvus 2.3.1, a patch release that includes several enhancements and bug fixes. These improvements are designed to enhance system stability and performance.
Features
- Restored support for SUBSTRUCTURE/SUPERSTRUCTURE binary metric types (#26766).
- Displayed index information during GetSegmentInfo (#26981).
Performance Improvement
- Improved loading mechanism (#26746): Unnecessary data copies have been reduced, resulting in enhanced overall load performance.
- Optimized MMap performance (#26750): The efficiency and capacity of MMap have been enhanced.
- Refactored storage merge insert data (#26839): The merging process has been optimized, leading to improved data node performance.
- Enhanced VARCHAR bulk insert speed (#26199): Batch processing reads have greatly improved the speed of VARCHAR bulk inserts.
- Utilized a pointer receiver for large structures (#26668): Memory copy has been enhanced by utilizing a pointer receiver.
Enhancements
- Enhanced error handling in QueryNode (#26910, #26940, #26918, #27013, #26904, #26521, #26773, #26676): Error messages have been made more descriptive and informative, improving the user experience.
- Enhanced Flush All API operations (#26802, #26769, #26859): The Flush, FlushAll, and GetFlushAllState API operations have undergone several improvements for better data syncing with object storage.
- Improved resilience of the RPC client with retry mechanism (#26795): The RPC client now has an enhanced retry mechanism, improving its resilience.
- Removed invalid offset check during data filling (#26666).
- Delayed connection reset for `Canceled` or `DeadlineExceeded` gRPC code (#27014).
- Achieved cleaner and more efficient error code management through miscellaneous code management and control enhancements (#26881, #26725, #26713, #26732).
Bug Fixes
- Fixed the index task retry issue (#26878): Canceled tasks are no longer marked as failed without retrying.
- Addressed load stability issues (#26763, #26959, #26931, #26813, #26685, #26630, #27027): Several stability issues related to load have been resolved.
- Resolved the segment retrieval issue (#26670): Retrieving now returns the correct number of results based on the specified limit.
- Fixed memory leak when putting duplicated segments (#26693).
- Fixed the bug for concurrent parsing expressions with strings (#26721).
- Fixed the panic caused by empty traceID (#26754) (#26808).
- Fixed the issue where timeout tasks never release the queue, leading to stuck compactions (#26593).
milvus-2.3.0
v2.3.0
Release date: Aug 23, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 |
After months of meticulous refinement, we are pleased to announce the official release of Milvus 2.3.0. This highly anticipated release contains a wealth of exciting new features and enhancements, including GPU support, improved query architecture, enhanced load balancing capabilities, integrated message queues, Arm-compatible images, improved observability, and improved O&M tools. This represents a major leap forward in the maturity, reliability and usability of the Milvus 2.x series. We cordially invite community users to be among the first to explore them and request that any feedback or issues be submitted on GitHub. Let's work together to further refine and stabilize this exceptional 2.3.0 release.
Breaking changes
Deprecated Time Travel Feature
Due to its inactivity and the challenges it poses to the architecture design of Milvus, the time-travel feature has been deprecated in this release.
Discontinued CentOS Support
As CentOS 7 is about to reach its end of service (EOS) and official images based on CentOS 8 and CentOS 9 are not available, Milvus no longer supports CentOS. Instead, starting from this release, Milvus will provide images using the Amazon Linux distribution. It's worth noting that Ubuntu-based images remain the well-tested and recommended option.
Removed Index and Metrics Algorithms
The following algorithms have been removed in this release:
- ANNOY and RHNSW for index-building of float vectors
- TANIMOTO for index-building of binary vectors
- Superstructure and Substructure metrics
These changes have been made to streamline and optimize the functionality of Milvus.
Upgraded Architecture
GPU Support
Milvus had GPU support in its earlier versions (v1.x), but it was temporarily unavailable when Milvus transitioned to a distributed architecture in v2.x. Thanks to the contributions of NVIDIA engineers and their implementation of the RAFT algorithm for Milvus Knowhere, GPU support is once again available in Milvus. This latest update not only brings back GPU capabilities but also incorporates cutting-edge industry algorithms. In benchmark tests, Milvus with GPU support has demonstrated impressive performance improvements, achieving a three-fold increase in query per second (QPS) and even up to a ten-fold increase for certain datasets.
Arm64 Support
With the growing popularity of Arm CPUs among cloud providers and developers, Milvus has recognized the importance of catering to the needs of both x86 and Arm architectures. To accommodate this demand, Milvus now offers images for both platforms. Additionally, the release of Arm images aims to provide MacOS users with a seamless experience when working with Milvus on their projects.
Refactored QueryNode
QueryNode plays a vital role in data retrieval within Milvus, making its availability, performance, and extensibility essential. However, the legacy QueryNode had several reported issues, including complex status management, duplicate message queues, unclear code structure, and unintuitive error messages. To address these concerns, we have undertaken a significant refactoring effort. This involved transforming QueryNode into a stateless component and eliminating data-deletion-related message queues. These changes aim to enhance the overall functionality and usability of QueryNode within the Milvus system.
Merged IndexCoord and DataCoord
We have merged IndexCoord and DataCoord into a single component, simplifying the deployment of Milvus. This consolidation reduces complexity and streamlines operations. Moving forward, subsequent releases will also witness the integration of certain functions of IndexNode and DataNode to align with this unified approach. These updates ensure a more efficient and seamless experience when utilizing Milvus.
NATS-based Message Queue (Experimental)
The stability, extensibility, and performance of the message queue are of utmost importance to Milvus, given its log-based architecture. To expedite the development of Milvus 2.x, we have introduced support for Pulsar and Kafka as the core log brokers. However, these external log brokers have their limitations. They can exhibit instability when handling multiple topics simultaneously, complexity in managing duplicate messages, and resource management challenges when there are no messages to process. Additionally, their GO SDKs may have inactive communities.
To address these issues, we have made the decision to develop our own log broker based on NATS and BookKeeper. This custom message queue is currently undergoing experimentation, and we welcome feedback and comments from the community. Our aim is to create a robust and efficient solution that addresses the unique requirements of Milvus.
New features
Upsert
Users can now use the upsert API in Milvus to update or insert data. Note that the upsert API combines search, delete, and insert operations, which may degrade performance. It is therefore recommended to use the insert APIs for specific and definitive insertions, while reserving the upsert API for more ambiguous scenarios.
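Because upsert internally combines delete and insert, its observable effect can be sketched with a toy primary-key map. This is purely illustrative pseudologic, not PyMilvus code:

```python
# Toy illustration of upsert semantics: an upsert on an existing primary key
# replaces the entity (delete + insert); a new key behaves as a plain insert.
# This models the behavior only; it is not the Milvus implementation.
def upsert(table, rows):
    for row in rows:
        pk = row["id"]
        table.pop(pk, None)  # delete any existing entity with this key
        table[pk] = row      # insert the new version
    return table

table = {1: {"id": 1, "vec": [0.1, 0.2]}}
upsert(table, [{"id": 1, "vec": [0.9, 0.8]},   # replaces entity 1
               {"id": 2, "vec": [0.3, 0.4]}])  # inserts entity 2
```

The delete step is why an upsert costs more than a plain insert: existing entities must first be located and removed.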
Range Search
Users now have the option to set a distance range using arguments to retrieve specific results within that range in Milvus.
```python
# add radius and range_filter to params in search_params
search_params = {"params": {"nprobe": 10, "radius": 20, "range_filter": 10}, "metric_type": "L2"}
res = collection.search(
    vectors, "float_vector", search_params, topK,
    "int64 > 100", output_fields=["int64", "float"]
)
```
In the above example, the returned vectors will have distances ranging from 10 to 20 with respect to the query vector. It is important to note that the method of distance measurement varies depending on the chosen metric type. Therefore, it is recommended to familiarize yourself with each metric type before applying a range search. Additionally, please be aware that the maximum number of vectors returned is limited to 16384.
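For an L2-style metric the two parameters act as an outer and inner bound on distance. The toy filter below illustrates that relationship only (the exact boundary inclusivity is an assumption, not Milvus internals):

```python
# Toy sketch: with an L2 metric, radius is the outer bound and range_filter
# the inner bound, so kept distances d satisfy range_filter <= d < radius.
# (For similarity metrics such as IP, where larger is better, the roles of
# the two bounds invert.)
def in_range(distance, radius, range_filter):
    return range_filter <= distance < radius

candidates = [5.0, 10.0, 15.0, 20.0, 25.0]
hits = [d for d in candidates if in_range(d, radius=20, range_filter=10)]
```

With radius=20 and range_filter=10, only the candidates at distances 10.0 and 15.0 survive the filter.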
Count
In previous releases, users would often use the num_entities API to retrieve the total number of entities in a collection. However, it is important to note that the num_entities API only applies to entities within sealed segments. Making frequent calls to the flush API can result in the creation of numerous small segments, which can negatively impact the stability of the system and the performance of data retrieval in Milvus.
In this release, Milvus introduces the count statement as an alternative solution for users to obtain the number of entities in a collection without relying on the flush API.
Please be aware that the count statement consumes system resources, and it is advisable to avoid calling it frequently to prevent unnecessary resource consumption.
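The difference between the two approaches can be sketched with a toy segment model (illustrative only, not Milvus internals): a num_entities-style count sees only sealed segments, while a count()-style statement also scans rows still buffered in growing segments.

```python
# Toy model: entities live either in sealed segments (already flushed) or in
# a growing segment (still buffered). num_entities-style counting reads only
# sealed rows, so it undercounts until a flush seals the growing segment;
# a count()-style statement scans both.
sealed = [[1, 2, 3], [4, 5]]  # rows in sealed segments
growing = [6, 7]              # rows still in a growing segment

num_entities_style = sum(len(seg) for seg in sealed)
count_style = num_entities_style + len(growing)
```

This is why flushing just to make num_entities accurate is wasteful: it seals many tiny segments, whereas count obtains the full total without forcing a flush.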
Cosine Metrics
Cosine distance is widely regarded as the standard method for measuring the distance between vectors, particularly in Large Language Model (LLM) applications. With the release of Milvus 2.3.0, cosine metrics are now natively supported. As a result, users no longer need to normalize vectors in order to use IP (Inner Product) metrics.
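The earlier workaround rested on a simple identity: cosine similarity equals the inner product of L2-normalized vectors. A small numeric check of that identity:

```python
import math

# Cosine similarity of two vectors equals the inner product of their
# L2-normalized forms, which is why, before native COSINE support, users
# normalized their vectors and searched with the IP metric instead.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [3.0, 4.0], [4.0, 3.0]
# Identity: cosine(a, b) == IP(normalize(a), normalize(b))
assert abs(cosine(a, b) - dot(normalize(a), normalize(b))) < 1e-12
```

Native cosine support simply moves this normalization inside the engine, so raw vectors can be stored and searched directly.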
Raw Vectors in Search Returns
Starting from Milvus 2.3.0, the capability to include raw vectors in search results is introduced for certain metrics. However, please note that including raw vectors in search results necessitates secondary searches, which can potentially impact performance. In scenarios where performance is critical, it is recommended to use indexes such as HNSW and IVF_FLAT, which inherently support the inclusion of vectors in their search results. It's important to mention that this feature currently does not apply to quantization-related indexes like IVF_PQ and IVF_SQ8. For more detailed information, please refer to https://github.com/zilliztech/knowhere/releases.
ScaNN
Milvus now includes support for FAISS' FastScan, which has demonstrated a 20% performance improvement compared to HNSW and a 7-fold increase compared to IVF-FLAT in multiple benchmark tests. ScaNN, an index-building algorithm similar to IVF-PQ, offers a faster index-building process. However, it's important to note that using ScaNN may result in a potential loss of precision and therefore requires refinement using the raw vectors.
The table below presents performance comparison results obtained using VectorDBBench. It evaluates the performance of ScaNN, HNSW, and IVF-FLAT in handling data retrieval from a 768-dimensional vector dataset sourced from Cohere.
Index | Case | QPS | Latency (P99) ms | Recall |
---|---|---|---|---|
ScaNN | 99% filtered | 626 | 0.0069 | 0.9532 |
1% filtered | 750 | 0.0063 | 0.9493 | |
0% filtered | 883 | 0.0051 | 0.9491 | |
IVF-FLAT | 99% filtered | 722 | 0.0061 | 0.9532 |
1% filtered | 122 | 0.0161 | 0.9493 | |
0% filtered | 123 | 0.0154 | 0.9494 | |
HNSW | 99% filtered | 773 | 0.0066 | 1.0 |
1% filtered | 355 | 0.0081 | 0.9839 | |
0% filtered | 696 | 0.0054 | 0.9528 |
Iterator
PyMilvus now includes support for iterators, enabling users to retrieve more than 16,384 entities in a search or range search operation. The iterator functionality operates similarly to ElasticSearch's scroll API and the cursor co...
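The scroll-style behavior can be sketched as repeated bounded fetches. The pager below is a toy model of the idea (not the PyMilvus implementation): each round asks the source for at most one batch, so the caller can stream past a per-request result cap.

```python
# Toy scroll-style iterator: fetch fixed-size pages from an offset until the
# source is exhausted. Each individual request stays under the batch size,
# but the caller still sees every matching row.
def iterate(fetch_page, batch_size):
    offset = 0
    while True:
        page = fetch_page(offset, batch_size)
        if not page:
            break
        yield from page
        offset += len(page)

data = list(range(10))  # stand-in for entities matching a search
fetch = lambda off, n: data[off:off + n]
results = list(iterate(fetch, batch_size=4))  # three fetches: 4 + 4 + 2 rows
```

Real iterators typically resume from a cursor (e.g. the last primary key seen) rather than a numeric offset, but the looping structure is the same.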
milvus-2.2.14
2.2.14
Release date: Aug 23, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.14 | 2.2.15 | 2.2.11 | 2.2.7 | 2.2.24 |
Milvus 2.2.14 is a minor bug-fix release that mainly addresses cluster unavailability issues during rolling upgrades. With this new release, Milvus deployed with Kubernetes operator can be upgraded with almost zero downtime.
Bug Fixes
This update addresses the following issues:
- Fixed the issues that caused rolling upgrades to take longer than expected:
  - Changed the default `gracefulStopTimeout` and now only displays a warning when there is a failure to refresh the policy cache. (#26443)
  - Refined gRPC retries. (#26464)
  - Checked and reset the gRPC client server ID if it mismatches with the session. (#26473)
  - Added a server ID validation interceptor. (#26395) (#26424)
  - Improved the performance of the server ID interceptor validation. (#26468) (#26496)
- Fixed the expression incompatibility issue between the parser and the executor. (#26493) (#26495)
- Fixed failures in serializing string index when its size exceeds 2 GB. (#26393)
- Fixed issues where enormous duplicate collections were being re-dropped during restore. (#26030)
- Fixed the issue where the leader view returns a loading shard cluster. (#26263)
- Fixed the Liveness check block in SessionUtil to watch forever. (#26250)
- Fixed issues related to logical expressions. (#26513) (#26515)
- Fixed issues related to continuous restart of DataNode/DataCoord. (#26470) (#26506)
- Fixed issues related to being stuck in channel checkpoint. (#26544)
- Fixed an issue so that Milvus considers the balance task with a released source segment as stale. (#26498)
Enhancement
- Refined error messages for fields that do not exist (#26331).
- Fixed unclear error messages of the proto parser (#26365) (#26366).
- Prohibited setting a partition name for a collection that already has a partition key (#26128).
- Added disk metric information (#25678).
- Fixed the CollectionNotExists error during vector search and retrieval (#26532).
- Added a default `MALLOC_CONF` environment variable to release memory after dropping a collection (#26353).
- Made pulsar request timeout configurable (#26526).
milvus-2.2.13
2.2.13
Release date: Aug 9, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.13 | 2.2.15 | 2.2.11 | 2.2.7 | 2.2.23 |
Milvus 2.2.13 is a minor bugfix release that fixes several performance degrading issues, including excessive disk usage when TTL is enabled, and the failure to import dynamic fields via bulk load. In addition, Milvus 2.2.13 also extends object storage support beyond S3 and MinIO.
Bugfixes
- Resolved a crash bug in bulk-insert for dynamic fields. (#25980)
- Reduced excessive MinIO storage usage by saving metadata (timestampFrom, timestampTo) during compaction. (#26210)
- Corrected lock usage in DataCoord compaction. (#26032) (#26042)
- Incorporated session util fixes through cherry-picking. (#26101)
- Removed user-role mapping information along with a user. (#25988) (#26048)
- Improved the RBAC cache update process. (#26150) (#26151)
- Fixed the MsgPack timestamp from the mq msgstream not being set. (#25924)
- Fixed the issue of `sc.distribution` being nil. (#25904)
- Fixed incorrect results while retrieving int8 data. (#26171)
Enhancements
- Upgraded MinIO-go and added region and virtual host config for the segcore chunk manager (#25811)
- Reduced log volumes of DC&DN (#26060) (#26094)
- Added a new configuration item: proxy.http.port (#25923)
- Forced the use of DNS for AliyunOSS due to an SDK bug (#26176)
- Fixed the indexnode and datanode count metrics (#25920)
- Denied writes when the growing segment size exceeds the watermark (#26163) (#26208)
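A watermark check of this kind can be sketched as a simple threshold test; the function name, parameters, and the 0.6 default ratio below are illustrative assumptions, not Milvus configuration keys:

```python
def should_deny_write(growing_segment_bytes: int, total_memory_bytes: int,
                      watermark_ratio: float = 0.6) -> bool:
    """Deny new writes once growing segments exceed the memory watermark.

    Hypothetical helper for illustration: once growing (not-yet-sealed)
    segments hold more than watermark_ratio of total memory, back-pressure
    is applied by rejecting further inserts until segments are flushed.
    """
    return growing_segment_bytes > total_memory_bytes * watermark_ratio

gib = 1024 ** 3
print(should_deny_write(7 * gib, 10 * gib))  # True: 70% used > 60% watermark
print(should_deny_write(3 * gib, 10 * gib))  # False: comfortably below
```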
Performance-related issues
- Fixed the performance degradation in version 2.2.12 by adding back the segment CGO pool and separating sq/dm operations (#26035).
milvus-2.2.12
2.2.12
Release date: Jul 24, 2023
Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version |
---|---|---|---|---|
2.2.12 | 2.2.14 | 2.2.9 | 2.2.7 | 2.2.20 |
This minor release is the last one in Milvus 2.2.x that comes with new features. Future minor releases of Milvus 2.2.x will focus on essential bug fixes.
New features in this release include:
- A new set of RESTful APIs that simplify user-side operations. Note that, for now, you must set a token even if authentication is disabled in Milvus. For details, see #25873.
- Improved ability to retrieve vectors during ANN searches, along with better vector-retrieving performance during queries. Users can now set the vector field as one of the output fields in ANN searches and queries against HNSW-, DiskANN-, or IVF-FLAT-indexed collections.
- Better search performance with reduced overhead, even when dealing with large top-K values; improved write performance in partition-key-enabled or multi-partition scenarios; and more efficient CPU usage on large machines.
Additionally, a large number of issues have been fixed, including excessive disk usage, stuck compaction, infrequent data deletions, object storage access failures when using the AWS S3 SDK, and bulk-insertion failures.
New Features
- Added support for a high-level RESTful API that listens on the same port as gRPC (#24761).
- Added support for getting vectors by IDs (#23450) (#25090).
- Added support for `json_contains` (#25724).
- Enabled bulk-insert to support partition keys (#24995).
- Enabled the chunk manager to use GCS and OSS with an access key (#25241).
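The semantics of a JSON-array containment filter like `json_contains` can be emulated in a few lines. This is a behavioral sketch only; Milvus evaluates the expression inside its own engine, not in Python:

```python
import json

def json_contains(field_value, candidate) -> bool:
    """Return True when `candidate` is an element of the JSON array
    stored in the field. A Python emulation of the filter's semantics,
    not Milvus's implementation.
    """
    if not isinstance(field_value, list):
        return False
    return candidate in field_value

row = {"tags": json.loads('["vector", "database", 2023]')}
print(json_contains(row["tags"], "vector"))   # True
print(json_contains(row["tags"], "milvus"))   # False
```

In an actual query the check is written as a filter string, e.g. an expression of the form `json_contains(tags, "vector")` passed to a search or query call.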
Bugfixes
- Fixed an issue where Milvus was using too much extra MinIO/local disk space
- Fixed delete-related issues
- Fixed blob-storage-related issues
- Fixed etcd failure causing Milvus to crash (#25463) (#25111)
- Fixed bulk-load issues
- Fixed indexnode memory leakage when updating an index fails (#25460) (#25478)
- Fixed Kafka panic when sending a message to a closed channel (#25116)
- Fixed insert returning success but not storing dynamic fields (#25494)
- Refined sync_cp_lag_too_behind_policy to avoid submitting sync tasks too frequently (#25441) (#25442)
- Fixed bug of missing JSON type when sorting retrieve results (#25412)
- Fixed possible deadlock when syncing segments to datanode (#25196) (#25211)
- Added a write lock for `lru_cache.Get` (#25010)
- Fixed expression evaluation in integer overflow cases (#25320, #25372)
- Fixed data race in waitgroup for graceful stop (#25224)
- Fixed drop index with large txn exceeding etcd limit (#25623)
- Fixed incorrect IP distance (#25527) (#25528)
- Prevented the `exclusive consumer` exception in Pulsar (#25376) (#25378)
- Made queries set the guarantee timestamp based on the default consistency level (#25579)
- Fixed rootcoord restoration missing gcConfirmStep (#25280)
- Fixed missing db parameter (#25759)
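The relation between consistency level and guarantee timestamp can be sketched as follows; the sentinel value 1 and the staleness window are assumptions for illustration, not Milvus internals:

```python
def guarantee_ts(level: str, latest_ts: int, staleness: int = 5) -> int:
    """Hypothetical sketch: derive the timestamp a query must wait for
    from its consistency level.
    """
    if level == "Strong":
        return latest_ts                       # see every write up to now
    if level == "Bounded":
        return max(latest_ts - staleness, 1)   # tolerate a small lag
    return 1                                   # "Eventually": no waiting

print(guarantee_ts("Strong", 100))      # 100
print(guarantee_ts("Bounded", 100))     # 95
print(guarantee_ts("Eventually", 100))  # 1
```

The fix in #25579 means that when a query does not specify a level explicitly, the guarantee timestamp falls back to the collection's default consistency level rather than an arbitrary value.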
Enhancements
- Improved monitoring metrics
- Reduced Standalone CPU usage
- Used zstd compression for RocksMQ's RocksDB levels above level 2 (#25238)
- Made compaction RPC timeout and parallel maximum configurable (#25654)
- Accelerated compiling third-party libraries for AWS and Google SDK (#25408)
- Removed the DataNode time-tick MQ and used RPC reporting instead (#24011)
- Changed default log level to info (#25278)
- Added token refunding to the limiter (#25660)
- Made Milvus write the cache file to the `cacheStorage.rootpath` directory (#25714)
- Fixed an inconsistency between the catalog and in-memory segment metadata (#25799) (#25801)
Performance-related issues
- Added a PK index for the string data type (#25402)
- Improved write performance with partition keys; removed syncing segmentLastExpire on every assignment (#25271) (#25316)
- Avoided an unnecessary reduce phase during search (#25166) (#25192)
- Updated default nb to 2000 (#25169)
- Added the `minCPUParallelTaskNumRatio` config to enable better parallelism when the estimated CPU usage of a single task exceeds the total CPU capacity (#25772)
- Fixed copying segment offsets twice (#25729) (#25730)
- Added limits on the number of go routines (#25171)
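One plausible reading of a ratio-based parallelism config like `minCPUParallelTaskNumRatio` is sketched below; the function name, default ratio, and splitting rule are assumptions for illustration, not the actual Milvus logic behind #25772:

```python
import math

def parallel_task_num(estimated_cpu_usage: float, total_cpu: float,
                      min_ratio: float = 0.5) -> int:
    """Hypothetical sketch: when a single task's estimated CPU usage
    exceeds the machine's total CPU, split it into at least
    min_ratio * total_cpu parallel sub-tasks so no one task dominates.
    """
    if estimated_cpu_usage <= total_cpu:
        return 1                                # the task fits; run it whole
    return max(1, math.ceil(min_ratio * total_cpu))

print(parallel_task_num(32.0, 16.0))  # 8: oversized task split across cores
print(parallel_task_num(4.0, 16.0))   # 1: task fits without splitting
```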