FAISS HNSW #746

Merged: 21 commits merged from upgrade_faiss_hnsw_4 into zilliztech:main on Aug 23, 2024

Conversation

@alexanderguzhva (Collaborator) commented Aug 1, 2024

Introducing a modified FAISS HNSW version that outperforms the current hnswlib HNSW implementation.

  • HNSW,Flat for fp32.
  • HNSW,SQ + optional refine.
    • SQ6, SQ8, BF16, FP16 for SQ.
    • SQ8, FP16, BF16, FP32/FLAT for refine.
  • HNSW,PQ + optional refine.
    • TODO: bring speedup code.
    • SQ8, FP16, BF16, FP32/FLAT for refine.
  • HNSW,PRQ + optional refine.
    • TODO: bring speedup code.
    • SQ8, FP16, BF16, FP32/FLAT for refine.
    • The open question is whether PRQ is worth keeping: it improves recall somewhat, at the cost of training speed, while index size and QPS stay the same.
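The "SQ + optional refine" combination above can be illustrated with a toy sketch (hypothetical code, not the Knowhere implementation; `ToySQ8` and `refine` are made-up names): an 8-bit scalar quantizer compresses each component, and a refine step re-ranks coarse candidates using the original fp32 vectors.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy 8-bit scalar quantizer (illustration only): per-value min/max
// scaling to a uint8 code, with a lossy decode.
struct ToySQ8 {
    float vmin = 0.f;
    float scale = 1.f;
    void train(const std::vector<float>& x) {
        auto [lo, hi] = std::minmax_element(x.begin(), x.end());
        vmin = *lo;
        scale = (*hi - *lo) / 255.f;
    }
    uint8_t encode(float v) const {
        return static_cast<uint8_t>(std::lround((v - vmin) / scale));
    }
    float decode(uint8_t c) const { return c * scale + vmin; }
};

// The "refine" step: re-rank candidate ids by exact fp32 L2 distance.
inline void refine(const std::vector<float>& data, size_t d, const float* q,
                   std::vector<size_t>& cand) {
    auto dist = [&](size_t id) {
        float s = 0.f;
        for (size_t i = 0; i < d; i++) {
            const float diff = data[id * d + i] - q[i];
            s += diff * diff;
        }
        return s;
    };
    std::sort(cand.begin(), cand.end(),
              [&](size_t a, size_t b) { return dist(a) < dist(b); });
}
```

Here the quantized codes stand in for what the HNSW graph layer would actually search; the refine layer only touches the short candidate list, so its extra cost stays small relative to graph traversal.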

The code is somewhat dirty and is subject to further possibly significant changes.

/kind improvement


codecov bot commented Aug 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 61.79%. Comparing base (3c46f4c) to head (924049a).
Report is 148 commits behind head on main.


@@            Coverage Diff            @@
##           main     #746       +/-   ##
=========================================
+ Coverage      0   61.79%   +61.79%     
=========================================
  Files         0       84       +84     
  Lines         0     6123     +6123     
=========================================
+ Hits          0     3784     +3784     
- Misses        0     2339     +2339     

see 84 files with indirect coverage changes

@alexanderguzhva alexanderguzhva force-pushed the upgrade_faiss_hnsw_4 branch 4 times, most recently from edc1cbd to dec20d0 Compare August 1, 2024 23:00
@liliu-z (Collaborator) left a comment
Just a first-glance comment; a more detailed review will follow.

Could you give a short summary of the code structure to speed up reviewing?

  1. What are the Wrappers, and why do we need them?
  2. What is the relationship between the Wrappers in Knowhere and in Faiss?
  3. How does the overridden HNSW differ from Faiss'?

Thanks

@alexanderguzhva alexanderguzhva force-pushed the upgrade_faiss_hnsw_4 branch 10 times, most recently from 2eaf4bd to 551e435 Compare August 8, 2024 20:51
@mergify mergify bot added the ci-passed label Aug 8, 2024
Signed-off-by: Alexandr Guzhva <alexanderguzhva@gmail.com>
@mergify mergify bot removed the ci-passed label Aug 20, 2024
@alexanderguzhva alexanderguzhva force-pushed the upgrade_faiss_hnsw_4 branch 4 times, most recently from 069bdcd to bc6b6a8 Compare August 21, 2024 01:01
…tion

Signed-off-by: Alexandr Guzhva <alexanderguzhva@gmail.com>
@liorf95 commented Aug 21, 2024

How can I run the benchmark / UT for this PR?

@alexanderguzhva (Collaborator, Author) commented Aug 21, 2024

@liorf95 We have benchmarks done in https://github.com/zilliztech/vdbbench; they will be published as soon as the code is reviewed. Basically, at this moment:

  • Faiss HNSW Flat outperforms our current hnswlib Flat by up to 20%, depending on the filtering rate.
  • Faiss HNSW SQ8 is slower than hnswlib SQ8, but its recall is higher because of the difference in SDC / ADC operating modes.
  • Faiss HNSW PQ and especially Faiss HNSW PRQ are not at their peak performance yet, because they require some additional components that are still in beta, plus a few other tiny changes. This will be done as well. Overall, though, the QPS for HNSW PQ and HNSW PRQ is reasonably close to HNSW Flat / HNSW SQ.

Alternatively, if you'd like to test the code yourself, please use vdbbench, though I'm not sure whether it requires some manual changes in vdbbench (I don't remember, to be honest). I'm also not sure whether we've added support to Milvus yet.

hnswlib flat for 768 dim 1M dataset (current)


2024-08-20 20:37:45,815 | INFO |DB       | db_label case            label      | load_dur    qps          latency(p99)    recall        max_load_count | label (models.py:234)
2024-08-20 20:37:45,815 | INFO |-------- | -------- --------------- ---------- | ----------- ------------ --------------- ------------- -------------- | ----- (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-None     2024082020 | 665.2817    4665.1736    0.0             0.9214        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-0p   2024082020 | 665.2817    4655.2296    0.0             0.9214        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-1p   2024082020 | 665.2817    4512.0473    0.0             0.9208        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-5p   2024082020 | 665.2817    4595.7376    0.0             0.9152        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-10p  2024082020 | 665.2817    4683.7024    0.0             0.9105        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-20p  2024082020 | 665.2817    4778.249     0.0             0.8996        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-30p  2024082020 | 665.2817    4779.8595    0.0             0.8886        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-40p  2024082020 | 665.2817    4639.7362    0.0             0.8801        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-50p  2024082020 | 665.2817    4296.0571    0.0             0.8759        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-60p  2024082020 | 665.2817    3753.4411    0.0             0.878         0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-70p  2024082020 | 665.2817    2985.7408    0.0             0.8844        0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-80p  2024082020 | 665.2817    2078.155     0.0             0.901         0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-90p  2024082020 | 665.2817    1354.3858    0.0             0.891         0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-95p  2024082020 | 665.2817    479.1046     0.0             1.0           0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-98p  2024082020 | 665.2817    2098.4618    0.0             1.0           0              | :)    (models.py:234)
2024-08-20 20:37:45,815 | INFO |Knowhere |          Cohere-int-99p  2024082020 | 665.2817    3511.0782    0.0             1.0           0              | :)    (models.py:234)

faiss hnsw flat for 768 dim 1M dataset (candidate)

2024-08-20 20:16:55,630 | INFO |DB       | db_label case            label      | load_dur    qps          latency(p99)    recall        max_load_count | label (models.py:234)
2024-08-20 20:16:55,637 | INFO |-------- | -------- --------------- ---------- | ----------- ------------ --------------- ------------- -------------- | ----- (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-None     2024082019 | 627.703     5017.7925    0.0             0.9265        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-0p   2024082019 | 627.703     5012.7896    0.0             0.9265        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-1p   2024082019 | 627.703     4955.4562    0.0             0.9253        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-5p   2024082019 | 627.703     5085.4225    0.0             0.9198        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-10p  2024082019 | 627.703     5265.8881    0.0             0.9139        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-20p  2024082019 | 627.703     5485.4166    0.0             0.9028        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-30p  2024082019 | 627.703     5575.8401    0.0             0.8906        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-40p  2024082019 | 627.703     5440.1608    0.0             0.883         0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-50p  2024082019 | 627.703     5054.3116    0.0             0.8799        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-60p  2024082019 | 627.703     4426.2809    0.0             0.8803        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-70p  2024082019 | 627.703     3552.6454    0.0             0.8896        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-80p  2024082019 | 627.703     2510.4212    0.0             0.9021        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-90p  2024082019 | 627.703     1665.417     0.0             0.8952        0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-95p  2024082019 | 627.703     474.7159     0.0             1.0           0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-98p  2024082019 | 627.703     2242.8977    0.0             1.0           0              | :)    (models.py:234)
2024-08-20 20:16:55,637 | INFO |Knowhere |          Cohere-int-99p  2024082019 | 627.703     3969.9828    0.0             1.0           0              | :)    (models.py:234)
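The SDC / ADC distinction mentioned above can be sketched as a toy (illustration only, not Knowhere's SQ8 code; `sq_encode`, `adc`, and `sdc` are made-up names): with ADC (asymmetric distance computation) the raw fp32 query is compared against decoded database codes, so only the database side carries quantization error; with SDC (symmetric) the query is quantized too, adding a second error source, which typically lowers recall.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Toy 8-bit scalar quantizer over [0, 1] (illustration only).
inline uint8_t sq_encode(float v) {
    return static_cast<uint8_t>(std::lround(v * 255.f));
}
inline float sq_decode(uint8_t c) { return c / 255.f; }

// ADC: raw query component vs decoded database code.
inline float adc(float q, uint8_t db_code) {
    const float d = q - sq_decode(db_code);
    return d * d;
}

// SDC: the query component is quantized first, adding extra error.
inline float sdc(float q, uint8_t db_code) {
    const float d = sq_decode(sq_encode(q)) - sq_decode(db_code);
    return d * d;
}
```

When the query already lies on the quantizer grid the two modes coincide; for a general query they diverge, which is where the recall difference between the two SQ8 implementations comes from.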

@liorf95 commented Aug 21, 2024

(quoting @alexanderguzhva's benchmark reply above)

Wow! Thanks a lot for all that helpful information.
I did not see any additional UTs for this PR in tests/ut/knowhere_tests - have I missed anything here?

@alexanderguzhva (Collaborator, Author)
@liorf95 See benchmark/hdf5/benchmark_faiss_hnsw.cpp. It is done this way because the PR introduces far too many options to test individually. As of now, this benchmark generates 2.7 GB of temporary data and tests all new possible combinations of indices. You may wish to quickly tweak it for your own needs.

for (int64_t i = 0; i < rows; i++) {
    const int64_t id = ids[i];
    assert(id >= 0 && id < index->ntotal);
    index->reconstruct(id, tmp.get());
Collaborator

What if the data type is FP16, but the index type is HNSWSQ4 without refinement or with FP16 refinement?

Collaborator Author

If HNSWSQ is used and no refinement is available, then the reconstruction will be inaccurate.
Please let me know the requirements for this particular method: what exactly is it expected to return?

Collaborator

GetVectorByIds needs to return an error code if there is no raw data available (if the data type is FP16, then the raw data needs to be FP16).

So we need a raw data existence check there.
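The check being asked for could look roughly like this (a hypothetical sketch of the contract, not the actual Knowhere code; `ToyIndexInfo` and `check_raw_data` are made-up names): reconstruction is only allowed when raw data of the requested type is actually present, either in an exact base index or in a refine layer of the same type, and the call fails fast otherwise instead of silently returning lossy values.

```cpp
#include <cassert>

// Hypothetical sketch of the GetVectorByIds precondition discussed above.
enum class DataFormat { FP32, FP16, BF16 };
enum class Status { OK, RAW_DATA_NOT_AVAILABLE };

struct ToyIndexInfo {
    bool has_refine;           // a refine layer keeps high-precision vectors
    DataFormat refine_format;  // format stored by the refine layer
    bool base_is_flat;         // HNSW,Flat keeps exact fp32 data
};

// Fail fast if reconstruction would silently return quantized values.
inline Status check_raw_data(const ToyIndexInfo& idx, DataFormat requested) {
    if (idx.base_is_flat && requested == DataFormat::FP32) return Status::OK;
    if (idx.has_refine && idx.refine_format == requested) return Status::OK;
    return Status::RAW_DATA_NOT_AVAILABLE;
}
```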

Collaborator

Created issue #778 to track this and unblock the PR.

Comment on lines +31 to +32
// the following structure is a hack, because GCC cannot properly
// de-virtualize a plain BitsetViewIDSelector.
Collaborator

https://github.com/liliu-z/knowhere/blob/2006b77c5b005a7cacdaaa1d567be363d44ef55e/include/knowhere/bitsetview_idselector.h#L20

This reminds me that Knowhere's IVF-series indexes use the virtual selector. Will performance be a concern there as well?

Collaborator

Also, in faiss_hnsw.cc, we use BitsetViewIdSelector to do the search.

Collaborator Author

The selector's performance impact may matter for brute-force search, because that implies 90+% filtering. In other cases, the cost of calling a virtual method is negligible compared to the cost of distance computations.

Collaborator

That said, it is also a concern here:

  1. IVF can have a super-high filtering rate but still uses the virtual method.
  2. In my experience, the HNSW validation is not negligible: we noticed a 10% performance regression in Cardinal when we tried to do one more dereference there.
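The devirtualization pattern under discussion can be sketched in isolation (illustrative code mirroring the idea, not copied from Knowhere; `BitsetSelector` and `count_candidates` are made-up names): instead of calling the selector through a virtual `is_member` on a base-class pointer, the hot loop is templated on the concrete selector type, so the compiler can resolve and inline the membership test.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Virtual interface, as in faiss::IDSelector.
struct IDSelector {
    virtual bool is_member(int64_t id) const = 0;
    virtual ~IDSelector() = default;
};

// Bitset-backed selector; through an IDSelector* the call stays virtual
// and cannot be inlined.
struct BitsetSelector final : IDSelector {
    const std::vector<bool>* bits;
    explicit BitsetSelector(const std::vector<bool>* b) : bits(b) {}
    bool is_member(int64_t id) const override { return !(*bits)[id]; }
};

// The "hack": template the hot loop on the concrete selector type, so
// is_member resolves statically and can be inlined.
template <typename SelectorT>
size_t count_candidates(const SelectorT& sel, int64_t n) {
    size_t cnt = 0;
    for (int64_t i = 0; i < n; i++)
        if (sel.is_member(i)) cnt++;  // direct call when SelectorT is concrete
    return cnt;
}
```

Calling `count_candidates` with the concrete `BitsetSelector` type gives the de-virtualized path; calling it through an `IDSelector&` falls back to virtual dispatch, which is the cost being debated for high-filtering workloads.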

Collaborator

Created an issue #779 to track this and unblock this PR

namespace knowhere {

// a wrapper that overrides a distance computer
IndexWrapperCosine::IndexWrapperCosine(faiss::Index* index, const float* inverse_l2_norms_in)
Collaborator

Why do we need a wrapper instead of putting the logic in IndexHNSWxxxxCosine?

Collaborator Author

Ultimately, it was needed for refine, and to enable/disable refine for the search with a config parameter.
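The cosine-wrapper idea can be sketched like this (illustrative toy, not IndexWrapperCosine itself; `CosineDistanceComputer` is a made-up name): the index keeps unnormalized vectors plus precomputed inverse L2 norms, and the distance computer rescales inner products on the fly, turning an inner-product index into a cosine index without rewriting the stored data.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy cosine distance computer over flat row-major fp32 storage.
// cosine(q, x_i) = <q, x_i> / (||q|| * ||x_i||), with 1/||x_i|| precomputed.
struct CosineDistanceComputer {
    const std::vector<float>* data;       // flat row-major vectors
    const std::vector<float>* inv_norms;  // precomputed 1 / ||x_i||
    size_t dim;
    float q_inv_norm = 1.f;
    std::vector<float> q;

    void set_query(const std::vector<float>& query) {
        q = query;
        float n = 0.f;
        for (float v : q) n += v * v;
        q_inv_norm = 1.f / std::sqrt(n);
    }
    // Inner product rescaled by both inverse norms = cosine similarity.
    float operator()(size_t i) const {
        float ip = 0.f;
        for (size_t d = 0; d < dim; d++) ip += q[d] * (*data)[i * dim + d];
        return ip * q_inv_norm * (*inv_norms)[i];
    }
};
```

The design point is that only the distance computer changes; the graph, the codes, and the refine storage all stay metric-agnostic.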

namespace cppcontrib {
namespace knowhere {

IndexBruteForceWrapper::IndexBruteForceWrapper(Index* underlying_index) :
Collaborator

Why do we have two BruteForce wrappers, one in Knowhere and one in Faiss? Also, it looks like nothing calls this one.

Collaborator Author

I expect to merge one of them, the generic one, into Faiss, if they accept it. The other is a specialized version for our needs. The generic one might be removed if the Faiss guys reject it.

@liliu-z (Collaborator) left a comment

/lgtm

@sre-ci-robot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alexanderguzhva, liliu-z

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@liliu-z (Collaborator) commented Aug 23, 2024

Let's get this big change checked in first to unblock the follow-up work.

@sre-ci-robot sre-ci-robot merged commit d20907f into zilliztech:main Aug 23, 2024
13 checks passed