Version 1.6
Pre-release
Here is the list of changes for version 1.6 (the manual isn't updated yet):
We especially thank the following people for the fixes:
- Bileg Naidan (@bileg)
- Bob Poekert (@bobpoekert)
- @orgoro
- We simplified the build by excluding the code that requires 3rd-party libraries from the core library. In other words, the core library does not have any 3rd-party dependencies (not even boost). To build the full version of the library, you have to run cmake as follows: `cmake . -DWITH_EXTRAS=1`
- It should now be possible to build on a Mac.
- We improved Python bindings (thanks to @bileg) and their installation process (thanks to @bobpoekert); see the sketch after this list:
  - We merged our generic and vector bindings into a single module. We upgraded to a more standard installation process via `distutils`. You can run `python setup.py build` and then `sudo python setup.py install`.
  - We improved our support for sparse spaces: you can pass data in the form of a SciPy sparse matrix!
  - There is now batch multi-threaded querying and addition of data.
  - `addDataPoint*` functions return the position of an inserted entry. This can be useful if you use the function `getDataPoint`.
  - For examples of using the Python API, please see the `*.py` files in the folder `python_bindings`.
  - Note that to execute the unit tests you need: python-numpy, python-scipy, and python-pandas.
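For instance, indexing and batch querying from Python look roughly like this (a minimal sketch: the space and function names come from these notes, but the exact call signatures of the merged module may differ slightly in this release):

```python
import numpy
import nmslib
from scipy.sparse import csr_matrix

# Dense data: build an HNSW index for L2 and run a multi-threaded batch query.
dense_data = numpy.random.randn(1000, 32).astype(numpy.float32)
index = nmslib.init(method='hnsw', space='l2')
index.addDataPointBatch(dense_data)  # addDataPoint* report positions of inserted entries
index.createIndex({'M': 16, 'efConstruction': 100})
neighbors = index.knnQueryBatch(dense_data[:10], k=5, num_threads=4)

# Sparse data: the new sparse spaces accept SciPy sparse matrices directly.
sparse_data = csr_matrix(numpy.random.binomial(1, 0.01, (1000, 5000)).astype(numpy.float32))
sparse_index = nmslib.init(method='hnsw', space='negdotprod_sparse',
                           data_type=nmslib.DataType.SPARSE_VECTOR)
sparse_index.addDataPointBatch(sparse_data)
sparse_index.createIndex({'M': 16, 'efConstruction': 100})
```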
- Because we got rid of boost, we, unfortunately, no longer support command-line options WITHOUT arguments. Instead, you have to pass the value 0 or 1 explicitly.
  - However, the utility `experiment` (`experiment.exe`) now accepts the option `recallOnly`. If this option has the argument 1, the only effectiveness metric computed is recall. This is useful for evaluating HNSW, because (for efficiency reasons) HNSW does not return proper distance values (e.g., for L2 it returns the squared distance rather than the original one). This makes it impossible to compute effectiveness metrics other than recall (returning wrong distance values would also cause `experiment` to terminate with an error message).
- Additional spaces (see the reference formulas after this list):
  - `negdotprod_sparse`: negative inner (dot) product. This is a sparse space.
  - `querynorm_negdotprod_sparse`: query-normalized inner (dot) product, which is the dot product divided by the query norm.
  - `renyi_diverg`: Rényi divergence. It has the parameter `alpha`.
  - `ab_diverg`: α-β-divergence. It has two parameters: `alpha` and `beta`.
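For reference, these distances are commonly defined as follows (a sketch: the first two follow directly from the space names, while the Rényi and α-β-divergence formulas use standard literature definitions that NMSLIB's parameterization is assumed to match):

```latex
% Negative inner (dot) product and its query-normalized variant:
d(q, x) = -\sum_i q_i x_i,
\qquad
d_{\mathrm{qnorm}}(q, x) = -\frac{\sum_i q_i x_i}{\lVert q \rVert_2}

% Renyi divergence with parameter \alpha (\alpha > 0, \alpha \neq 1):
D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha - 1} \log \sum_i p_i^{\alpha} q_i^{1 - \alpha}

% One standard alpha-beta-divergence (\alpha, \beta, \alpha + \beta \neq 0):
D_{\alpha, \beta}(p \,\|\, q) = -\frac{1}{\alpha \beta} \sum_i \Bigl( p_i^{\alpha} q_i^{\beta}
  - \frac{\alpha}{\alpha + \beta}\, p_i^{\alpha + \beta}
  - \frac{\beta}{\alpha + \beta}\, q_i^{\alpha + \beta} \Bigr)
```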
- Additional search methods (see the Python sketch after this list):
  - `simple_invindx`: a classical inverted index with document-at-a-time processing (via a priority queue). It doesn't have parameters, but it works only with the sparse space `negdotprod_sparse`.
  - `falconn`: we ported (created a wrapper for) a June 2016 version of the FALCONN library.
    - Unlike the original implementation, our wrapper works directly with sparse vector spaces as well as with dense vector spaces.
    - However, our wrapper has to duplicate the data twice, so this method is useful mostly as a benchmark.
    - Our wrapper directly supports a data-centering trick, which can sometimes boost performance.
    - Most parameters (`hash_family`, `cross_polytope`, `hyperplane`, `storage_hash_table`, `num_hash_bits`, `num_hash_tables`, `num_probes`, `num_rotations`, `seed`, `feature_hashing_dimension`) merely map to FALCONN parameters.
    - Setting the additional parameters `norm_data` and `center_data` tells us to center and normalize the data. Our implementation of centering for sparse data (which is, unfortunately, done before the hashing trick is applied) is horribly inefficient, so we wouldn't recommend using it. Besides, it doesn't seem to improve results. Just in case, the number of sparse dimensions used for centering is controlled by the parameter `max_sparse_dim_to_center`.
    - Our FALCONN wrapper would normally use the distance function provided by NMSLIB, but you can force using FALCONN's own distance implementation by setting `use_falconn_dist` to 1.
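To illustrate, the new methods plug into the Python bindings like any other method (again a hedged sketch: the method names and index-time parameters come from these notes, while the surrounding API calls and the concrete parameter values are assumptions):

```python
import numpy
import nmslib
from scipy.sparse import rand as sparse_rand

data = numpy.random.randn(1000, 32).astype(numpy.float32)

# FALCONN wrapper: these index-time parameters map onto FALCONN's own parameters.
falconn_index = nmslib.init(method='falconn', space='cosinesimil')
falconn_index.addDataPointBatch(data)
falconn_index.createIndex({
    'hash_family': 'cross_polytope',
    'num_hash_tables': 50,
    'num_hash_bits': 16,
    'num_probes': 50,
    'norm_data': 1,         # normalize the data
    'center_data': 1,       # enable the data-centering trick
    'use_falconn_dist': 0,  # set to 1 to force FALCONN's own distance implementation
})
ids, dists = falconn_index.knnQuery(data[0], k=10)

# simple_invindx takes no parameters and works only with negdotprod_sparse.
sparse_data = sparse_rand(1000, 5000, density=0.01, format='csr', dtype=numpy.float32)
invindx = nmslib.init(method='simple_invindx', space='negdotprod_sparse',
                      data_type=nmslib.DataType.SPARSE_VECTOR)
invindx.addDataPointBatch(sparse_data)
invindx.createIndex({})
```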