Offline IVF powered by faiss big batch search #3175

52 changes: 52 additions & 0 deletions demos/offline_ivf/README.md
@@ -0,0 +1,52 @@

# Offline IVF

This folder contains the code for the offline IVF algorithm, powered by the Faiss big-batch search.

Create a conda env:

`conda create --name oivf python=3.10`

`conda activate oivf`

`conda install -c pytorch/label/nightly -c nvidia faiss-gpu=1.7.4`

`conda install tqdm`

`conda install pyyaml`

`conda install -c conda-forge submitit`


## Run book

1. Optionally shard your dataset (see `create_sharded_ssnpp_files.py`) and create the corresponding YAML file `config_ssnpp.yaml`. You can use `generate_config.py`, specifying the root directory of your dataset and the files containing the data shards

`python generate_config.py`

2. Run the train index command

`python run.py --command train_index --config config_ssnpp.yaml --xb ssnpp_1B`


3. Run the index-shard command to produce the sharded indexes required for the search step

`python run.py --command index_shard --config config_ssnpp.yaml --xb ssnpp_1B`


4. Send jobs to the cluster to run the search

`python run.py --command search --config config_ssnpp.yaml --xb ssnpp_1B --cluster_run --partition <PARTITION-NAME>`


Remarks about the `search` command: it is assumed that the database vectors are also the query vectors when performing the search step.
a. If the query vectors are different from the database vectors, they should be passed via the `--xq` argument
b. A new dataset needs to be prepared (see step 1) before it can be passed as the query dataset with `--xq`

`python run.py --command search --config config_ssnpp.yaml --xb ssnpp_1B --xq <QUERIES_DATASET_NAME>`


5. We can always run the consistency check for sanity checks! (A minimal conceptual sketch of the Faiss calls behind these steps is shown after this run book.)

`python run.py --command consistency_check --config config_ssnpp.yaml --xb ssnpp_1B`
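
For intuition, the train, index-shard and search steps roughly correspond to the following plain Faiss calls. This is a minimal, self-contained sketch with toy random data and a scaled-down index string; it is not the demo's actual implementation, which trains on a sample of the configured dataset, shards the index, and batches the queries across cluster jobs:

```python
import faiss
import numpy as np

d = 256  # dimensionality, as in config_ssnpp.yaml
rng = np.random.default_rng(0)
xt = rng.random((20_000, d), dtype="float32")  # toy training sample
xb = rng.random((50_000, d), dtype="float32")  # toy database vectors
xq = xb[:1_000]                                # by default the database doubles as queries

# "train_index": build an IVF,PQ index from a factory string and train it
# (scaled down from the config's 'IVF8192,PQ128' so the toy example runs quickly)
index = faiss.index_factory(d, "IVF256,PQ32")
index.train(xt)

# "index_shard": the demo adds the database shard by shard; here it is added in one go
index.add(xb)

# "search": probe a number of inverted lists and retrieve the k nearest neighbours
index.nprobe = 16
k = 50
distances, ids = index.search(xq, k)
print(ids.shape)  # (1000, 50)
```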

Empty file added demos/offline_ivf/__init__.py
109 changes: 109 additions & 0 deletions demos/offline_ivf/config_ssnpp.yaml
@@ -0,0 +1,109 @@
d: 256
output: /checkpoint/marialomeli/offline_faiss/ssnpp
index:
  prod:
    - 'IVF8192,PQ128'
  non-prod:
    - 'IVF16384,PQ128'
    - 'IVF32768,PQ128'
nprobe:
  prod:
    - 512
  non-prod:
    - 256
    - 128
    - 1024
    - 2048
    - 4096
    - 8192

k: 50
index_shard_size: 50000000
query_batch_size: 50000000
evaluation_sample: 10000
training_sample: 1572864
datasets:
  ssnpp_1B:
    root: /checkpoint/marialomeli/ssnpp_data
    size: 1000000000
    files:
      - dtype: uint8
        format: npy
        name: ssnpp_0000000000.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000001.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000002.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000003.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000004.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000005.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000006.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000007.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000008.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000009.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000010.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000011.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000012.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000013.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000014.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000015.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000016.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000017.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000018.npy
        size: 50000000
      - dtype: uint8
        format: npy
        name: ssnpp_0000000019.npy
        size: 50000000
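
Since the config is plain YAML, it can be loaded and inspected directly. As a quick illustration (not part of the demo code), the snippet below reads the file with PyYAML and checks that the per-shard sizes add up to the declared dataset size, here 20 shards of 50M vectors, i.e. 1B in total:

```python
import yaml

with open("config_ssnpp.yaml") as f:
    cfg = yaml.safe_load(f)

# The declared shard sizes should sum to the declared dataset size.
dataset = cfg["datasets"]["ssnpp_1B"]
total = sum(shard["size"] for shard in dataset["files"])
assert total == dataset["size"], f"shards sum to {total}, expected {dataset['size']}"

print(cfg["d"], cfg["index"]["prod"], cfg["nprobe"]["prod"], cfg["k"])
```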
63 changes: 63 additions & 0 deletions demos/offline_ivf/create_sharded_ssnpp_files.py
@@ -0,0 +1,63 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import numpy as np
import argparse
import os


def xbin_mmap(fname, dtype, maxn=-1):
    """
    Code from
    https://github.com/harsha-simhadri/big-ann-benchmarks/blob/main/benchmark/dataset_io.py#L94
    mmap the competition file format for a given type of items
    """
    n, d = map(int, np.fromfile(fname, dtype="uint32", count=2))
    assert os.stat(fname).st_size == 8 + n * d * np.dtype(dtype).itemsize
    if maxn > 0:
        n = min(n, maxn)
    return np.memmap(fname, dtype=dtype, mode="r", offset=8, shape=(n, d))


def main(args: argparse.Namespace):
    ssnpp_data = xbin_mmap(fname=args.filepath, dtype="uint8")
    num_batches = ssnpp_data.shape[0] // args.data_batch
    assert (
        ssnpp_data.shape[0] % args.data_batch == 0
    ), "num of embeddings per file should divide total num of embeddings"
    for i in range(num_batches):
        xb_batch = ssnpp_data[
            i * args.data_batch : (i + 1) * args.data_batch, :
        ]
        filename = args.output_dir + f"/ssnpp_{(i):010}.npy"
        np.save(filename, xb_batch)
        print(f"File {filename} is saved!")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--data_batch",
        dest="data_batch",
        type=int,
        default=50000000,
        help="Number of embeddings per file, should be a divisor of 1B",
    )
    parser.add_argument(
        "--filepath",
        dest="filepath",
        type=str,
        default="/datasets01/big-ann-challenge-data/FB_ssnpp/FB_ssnpp_database.u8bin",
        help="path of the original file with the 1B SSNPP database vectors",
    )
    parser.add_argument(
        "--output_dir",
        dest="output_dir",
        type=str,
        default="/checkpoint/marialomeli/ssnpp_data",
        help="directory where the sharded .npy files are written",
    )

    args = parser.parse_args()
    main(args)
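
To try the script without the 1B-vector source file, one can write a tiny synthetic file in the layout `xbin_mmap` expects: a header of two `uint32` values (`n`, `d`) followed by `n * d` `uint8` values. The sketch below uses made-up paths and sizes and is only a smoke test, not part of the demo:

```python
import numpy as np

from create_sharded_ssnpp_files import xbin_mmap

# Write a toy .u8bin file: header = (n, d) as uint32, then the n * d uint8 payload.
n, d = 1000, 256
rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(n, d), dtype=np.uint8)
with open("/tmp/toy_ssnpp.u8bin", "wb") as f:
    np.array([n, d], dtype="uint32").tofile(f)
    data.tofile(f)

# Reading it back through the mmap helper recovers the toy matrix.
xb = xbin_mmap("/tmp/toy_ssnpp.u8bin", dtype="uint8")
assert xb.shape == (n, d)

# The sharding script can then be pointed at the toy file, e.g.:
#   mkdir -p /tmp/toy_shards
#   python create_sharded_ssnpp_files.py --filepath /tmp/toy_ssnpp.u8bin \
#       --output_dir /tmp/toy_shards --data_batch 100
```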