Releases · pavlin-policar/openTSNE
v1.0.2
v1.0.1
v1.0.0
v0.7.1
v0.7.0
Changes
- By default, we now add jitter to non-random initialization schemes. This has almost no effect on the resulting visualizations, but helps avoid potential problems when points are initialized at identical positions (#225)
- By default, the learning rate is now calculated as `N/exaggeration`. This speeds up convergence of the resulting embedding. Note that the learning rate during the early exaggeration (EE) phase will therefore differ from the learning rate during the standard phase. Additionally, we now set `momentum=0.8` in both phases; before, it was 0.5 during EE and 0.8 during the standard phase. This, again, speeds up convergence. (#220)
- Add `PrecomputedAffinities` to wrap square affinity matrices (#217); see the sketch after this list.
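A rough sketch of how the new affinity wrapper slots in (the exact `PrecomputedAffinities` constructor arguments are assumed from these notes, and the hand-built Gaussian kernel is purely illustrative):

```python
import numpy as np
from openTSNE import TSNE
from openTSNE.affinity import PrecomputedAffinities

x = np.random.randn(100, 5)  # toy data, N = 100

# With the new defaults, the EE phase runs with lr = N / 12 (the usual
# early-exaggeration factor) and the standard phase with lr = N / 1,
# both with momentum=0.8.

# Build a square affinity matrix by hand (Gaussian kernel, purely
# for illustration) and wrap it:
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
p = np.exp(-d2 / 2.0)
np.fill_diagonal(p, 0.0)
p /= p.sum()

affinities = PrecomputedAffinities(p)
init = np.random.randn(100, 2) * 1e-4  # explicit initialization
embedding = TSNE().fit(affinities=affinities, initialization=init)
```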
Build changes
- Build `universal2` macOS wheels, enabling ARM support (#226)
Bug Fixes
- Fix BH collapse for smaller data sets (#235)
- Fix `updates` in the optimizer not being stored correctly between optimization calls (#229)
- Fix `inplace=True` optimization changing the initializations themselves in some rare use-cases (#225)
As usual, a special thanks to @dkobak for helping with practically all of these bugs/changes.
v0.6.2
Changes
- By default, we now use the `MultiscaleMixture` affinity model, enabling us to pass in a list of perplexities instead of a single perplexity value (see the sketch after this list). This is fully backwards compatible.
- Previously, perplexity values would be adjusted to the dataset: e.g., passing in `perplexity=100` with N=150 meant `TSNE.perplexity` would be set to 50. Instead, we now keep this value as-is and add an `effective_perplexity_` attribute (following the convention from scikit-learn) which holds the corrected perplexity value.
- Fix bug where the interpolation grid was being prepared even when using BH optimization during transform.
- Enable calling `.transform` with precomputed distances. In this case, the data matrix is assumed to be a distance matrix.
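A small sketch of both user-facing changes; where exactly `effective_perplexity_` lives after fitting is assumed here from the scikit-learn convention mentioned above:

```python
import numpy as np
from openTSNE import TSNE

x = np.random.randn(150, 10)  # N = 150

# A list of perplexities now selects the MultiscaleMixture affinity
# model; a single value behaves exactly as before.
embedding = TSNE(perplexity=[5, 30]).fit(x)

# Too-large perplexities are no longer silently overwritten:
tsne = TSNE(perplexity=100)        # 3 * 100 > N, so 100 cannot be honored
tsne.fit(x)
print(tsne.perplexity)             # still 100
print(tsne.effective_perplexity_)  # the corrected value actually used
```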
Build changes
- Build with `oldest-supported-numpy`
- Build Linux wheels on `manylinux2014` instead of `manylinux2010`, following numpy's example
- Build macOS wheels on the `macOS-10.15` Azure VM instead of `macos-10.14`
- Fix a potential problem with clang-13, which actually performs optimizations involving infinities under the `-ffast-math` flag
v0.6.0
Changes:
- Remove `affinities` from `TSNE` construction; allow custom affinities and initialization in the `.fit` method (see the sketch after this list). This improves the API when dealing with non-tabular data. This is not backwards compatible.
- Add `metric="precomputed"`. This includes the addition of `openTSNE.nearest_neighbors.PrecomputedDistanceMatrix` and `openTSNE.nearest_neighbors.PrecomputedNeighbors`.
- Add `knn_index` parameter to `openTSNE.affinity` classes.
- Add (less-than-ideal) workaround for pickling Annoy objects.
- Extend the range of recommended FFTW boxes up to 1000.
- Remove deprecated `openTSNE.nearest_neighbors.BallTree`.
- Remove deprecated `openTSNE.callbacks.ErrorLogger`.
- Remove deprecated `TSNE.neighbors_method` property.
- Add and set as default `negative_gradient_method="auto"`.
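A minimal sketch of the reworked API, assuming `.fit` accepts `affinities=` and `initialization=` keywords as described above (`PerplexityBasedNN` and `openTSNE.initialization.pca` are existing openTSNE helpers):

```python
import numpy as np
from openTSNE import TSNE
from openTSNE.affinity import PerplexityBasedNN
from openTSNE.initialization import pca

x = np.random.randn(500, 20)

# Affinities and initialization now go into .fit, not the TSNE
# constructor (not backwards compatible):
affinities = PerplexityBasedNN(x, perplexity=30)
init = pca(x)
embedding = TSNE().fit(affinities=affinities, initialization=init)

# Distance matrices are now accepted directly:
d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
embedding2 = TSNE(metric="precomputed", initialization="random").fit(d)
```

Random initialization is used with the precomputed metric, since PCA initialization on a distance matrix would not be meaningful.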
v0.5.0
Main changes:
- Build wheels for macOS target 10.6
- Update to annoy v1.17.0; this should result in much faster multi-threaded performance
v0.4.0
Major changes:
- Remove numba dependency, switch over to using Annoy nearest neighbor search. Pynndescent is now optional and can be used if installed manually.
- Massively speed up transform by keeping the reference interpolation grid fixed. New points are limited to a circle centered around the reference embedding.
- Implement variable degrees of freedom.
Minor changes:
- Add spectral initialization using diffusion maps.
- Replace the cumbersome `ErrorLogger` callback with the `verbose` flag.
- Change the default number of iterations to 750.
- Add `learning_rate="auto"` option (see the sketch after this list).
- Remove the `min_grad_norm` parameter.
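For instance, a minimal sketch combining the options named above:

```python
from openTSNE import TSNE

# The verbose flag replaces the old ErrorLogger callback, and
# learning_rate="auto" picks a rate based on the data size.
tsne = TSNE(n_iter=750, learning_rate="auto", verbose=True)
```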
Bugfixes:
- Fix case where KL divergence was sometimes reported as NaN.
Replace FFTW with numpy's FFT
In order to make usage as simple as possible and remove any external dependency on FFTW (which previously needed to be installed locally), this update replaces FFTW with numpy's FFT.
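The FFT is used to accelerate the convolutions at the heart of the interpolation-based gradient approximation; a toy sketch of the primitive that numpy's FFT now provides (illustrative only, not the library's internal code):

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Linear convolution via FFT, the primitive previously backed by FFTW."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

# Matches the direct computation:
a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.5])
assert np.allclose(fft_convolve(a, b), np.convolve(a, b))
```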