enhance: update float16/bfloat16 examples (#2388)
In the Python ecosystem, users typically process float16/bfloat16 vectors with libraries such as numpy, pandas, TensorFlow, or PyTorch. However, users often hold float32 vectors, and it is not obvious how to handle float16/bfloat16 vectors in pymilvus. pymilvus currently accepts numpy arrays as embedding vector inputs, but numpy itself has no bfloat16 dtype. This PR demonstrates how to convert float arrays for the insert/search APIs.

**insert (accepts numpy arrays as input)**:

- float32 vector (owned by the user) -> float16 vector (input of the insert API): numpy alone is enough, no extra dependency.
- float32 vector (owned by the user) -> bfloat16 vector (input of the insert API): depends on `tf.bfloat16`, since PyTorch cannot convert a `torch.bfloat16` tensor to a numpy array. (See the first sketch below.)

**search (the API returns float16/bfloat16 vectors as bytes)**:

- float16 vector (bytes): the user can convert it into a numpy array, a PyTorch tensor, or a TensorFlow tensor.
- bfloat16 vector (bytes): the user can convert it into a PyTorch tensor or a TensorFlow tensor. (See the second sketch below.)

There are many deep learning frameworks available in Python, and we cannot determine which ecosystem users prefer. Therefore, this PR does not add float vector conversion helpers to pymilvus itself.

References:
- numpy/numpy#19808
- pytorch/pytorch#90574

issue: milvus-io/milvus#37448

Signed-off-by: Yinzuo Jiang <yinzuo.jiang@zilliz.com>
Signed-off-by: Yinzuo Jiang <jiangyinzuo@foxmail.com>
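A minimal sketch of the insert-side conversions. Names such as `fp32_vec` and the vector dimension are hypothetical; the TensorFlow round-trip is one way to obtain a bfloat16 ndarray, given that numpy has no native bfloat16 dtype:

```python
import numpy as np
import tensorflow as tf

# a user-owned float32 vector (hypothetical name and dimension)
fp32_vec = np.random.random(8).astype(np.float32)

# float32 -> float16: plain numpy is enough, no extra dependency
fp16_vec = fp32_vec.astype(np.float16)

# float32 -> bfloat16: numpy has no bfloat16 dtype, so cast through a
# tf.bfloat16 tensor; .numpy() yields an ndarray usable as insert() input
bf16_vec = tf.cast(fp32_vec, dtype=tf.bfloat16).numpy()
```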
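And a sketch of the search-side conversions. Here `fp16_bytes` and `bf16_bytes` are hypothetical stand-ins for the raw little-endian bytes carried by a search hit; in real code they come from the result's vector field:

```python
import numpy as np
import torch

# hypothetical stand-ins for the bytes a search result would return:
# float16 bytes via numpy, bfloat16 bytes by truncating float32 to its
# top 16 bits (the bfloat16 bit layout)
fp16_bytes = np.array([0.1, 0.2], dtype=np.float16).tobytes()
fp32 = np.array([0.1, 0.2], dtype=np.float32)
bf16_bytes = (fp32.view(np.uint32) >> 16).astype(np.uint16).tobytes()

# float16 bytes -> numpy array; a torch/tf tensor is then one call away
fp16_vec = np.frombuffer(fp16_bytes, dtype=np.float16)
fp16_tensor = torch.from_numpy(fp16_vec.copy())  # copy: frombuffer is read-only

# bfloat16 bytes -> torch tensor (numpy has no bfloat16 dtype);
# bytearray() makes the buffer writable, as torch.frombuffer expects
bf16_tensor = torch.frombuffer(bytearray(bf16_bytes), dtype=torch.bfloat16)

print(fp16_tensor)  # tensor([0.1000, 0.2000], dtype=torch.float16)
print(bf16_tensor)  # approx. tensor([0.0996, 0.1992], dtype=torch.bfloat16)
```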