Replace custom epsilons with numpy equivalent in LdaModel #2308

Merged (3 commits) on Jan 9, 2019.
gensim/models/ldamodel.py (22 changes: 5 additions & 17 deletions)

@@ -106,13 +106,6 @@

logger = logging.getLogger(__name__)

-# Epsilon (very small) values used by each expected data type instead of 0, to avoid Arithmetic Errors.
-DTYPE_TO_EPS = {
-    np.float16: 1e-5,
-    np.float32: 1e-35,
-    np.float64: 1e-100,
-}
-

def update_dir_prior(prior, N, logphat, rho):
    """Update a given prior using Newton's method, described in
@@ -426,12 +419,7 @@ def __init__(self, corpus=None, num_topics=100, id2word=None,
            Data-type to use during calculations inside model. All inputs are also converted.

        """
-        if dtype not in DTYPE_TO_EPS:
-            raise ValueError(
-                "Incorrect 'dtype', please choose one of {}".format(
-                    ", ".join("numpy.{}".format(tp.__name__) for tp in sorted(DTYPE_TO_EPS))))
-
-        self.dtype = dtype
+        self.dtype = np.finfo(dtype).dtype

        # store user-supplied parameters
        self.id2word = id2word
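
A note on the replacement line above: np.finfo both canonicalizes the dtype and rejects unsupported (non-float) types, covering the validation that the removed DTYPE_TO_EPS membership check used to provide. A minimal illustration (not part of the diff):

import numpy as np

# np.finfo canonicalizes dtype aliases and rejects integer dtypes,
# standing in for the old explicit check against DTYPE_TO_EPS.
print(np.finfo(np.float32).dtype)   # dtype('float32')
print(np.finfo(float).dtype)        # dtype('float64'): aliases resolve to a canonical numpy dtype

try:
    np.finfo(np.int32)              # integer dtypes are rejected
except ValueError as err:
    print(err)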
@@ -668,6 +656,7 @@ def inference(self, chunk, collect_sstats=False):
        # Lee&Seung trick which speeds things up by an order of magnitude, compared
        # to Blei's original LDA-C code, cool!).
        integer_types = six.integer_types + (np.integer,)
+        epsilon = np.finfo(self.dtype).eps
@piskvorky (Owner) commented on Dec 24, 2018:
I'm not sure this is a good idea. What are the guarantees for such an epsilon?

If the epsilon is too close to the underflow edge, it might be silently ignored in some cases. I'd prefer an epsilon that is less ambiguous. I don't think we really care about getting the smallest possible number here.

In fact, do we need epsilon at all? It hints at some instability in the algorithm if it needs to be avoiding singularities in this way. Identifying when such singularities happen as soon as possible (is it a function of the input corpus? empty documents? something else?), and raising an exception, might be a preferable solution.
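
The concern about an epsilon being "silently ignored" is easy to demonstrate in float32 arithmetic (an illustrative snippet, not part of the thread):

import numpy as np

# The old float32 epsilon (1e-35) is representable on its own, but it is absorbed
# as soon as it is added to anything of ordinary magnitude:
print(np.float32(1.0) + np.float32(1e-35) == np.float32(1.0))          # True: the addition had no effect
# np.finfo's eps is by definition the smallest increment that still registers at 1.0:
print(np.float32(1.0) + np.finfo(np.float32).eps == np.float32(1.0))   # False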

Contributor:
The new epsilon is already better than what we have right now (it's bigger; we could even use 3 * eps). I agree this is not the best solution (the root cause is instability in the algorithm), but it's a good workaround to avoid NaN values in models (at least they will happen less often).

LGTM (it improves overall model stability, though it's not a perfect solution of course), wdyt @piskvorky?
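
For the magnitudes being compared here, a quick side-by-side of np.finfo's machine epsilons and the old hard-coded DTYPE_TO_EPS values (illustrative snippet, not part of the PR):

import numpy as np

# Machine epsilon reported by np.finfo vs. the old hard-coded values:
old_eps = {np.float16: 1e-5, np.float32: 1e-35, np.float64: 1e-100}
for dt, old in old_eps.items():
    print(dt.__name__, float(np.finfo(dt).eps), old)
# float16 0.0009765625 1e-05
# float32 1.1920928955078125e-07 1e-35
# float64 2.220446049250313e-16 1e-100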

@piskvorky (Owner) commented on Jan 8, 2019:
Well, if it's an improvement we should merge it. But I'm still wary of the implications of this. Isn't it better to just raise an exception, rather than work around x / 0.0 by doing x / eps? Isn't the user screwed anyway (no exception, but nonsense results)?

Unfortunately I no longer remember why this code needs to be there :(

Contributor:

> Isn't it better to just raise an exception, rather than work around x / 0.0 by doing x / eps?

No, because the exception could be raised at any moment. For example, I train a model for 10 hours and it raises an exception just before the end: the time is already spent and there is no model.

> Isn't the user screwed anyway (no exception, but nonsense results)?

Usually not: if there are no NaNs in the matrices, the model behaves adequately.

        for d, doc in enumerate(chunk):
            if len(doc) > 0 and not isinstance(doc[0][0], integer_types):
                # make sure the term IDs are ints, otherwise np will get upset

@@ -683,8 +672,7 @@ def inference(self, chunk, collect_sstats=False):
            # The optimal phi_{dwk} is proportional to expElogthetad_k * expElogbetad_w.
            # phinorm is the normalizer.
            # TODO treat zeros explicitly, instead of adding epsilon?
-            eps = DTYPE_TO_EPS[self.dtype]
-            phinorm = np.dot(expElogthetad, expElogbetad) + eps
+            phinorm = np.dot(expElogthetad, expElogbetad) + epsilon

            # Iterate between gamma and phi until convergence
            for _ in range(self.iterations):

@@ -695,7 +683,7 @@ def inference(self, chunk, collect_sstats=False):
                gammad = self.alpha + expElogthetad * np.dot(cts / phinorm, expElogbetad.T)
                Elogthetad = dirichlet_expectation(gammad)
                expElogthetad = np.exp(Elogthetad)
-                phinorm = np.dot(expElogthetad, expElogbetad) + eps
+                phinorm = np.dot(expElogthetad, expElogbetad) + epsilon
                # If gamma hasn't changed much, we're done.
                meanchange = mean_absolute_difference(gammad, lastgamma)
                if meanchange < self.gamma_threshold:
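
For context on why these phinorm lines need an epsilon at all, a minimal sketch of the failure mode with made-up degenerate inputs (the array values here are hypothetical, not gensim internals):

import numpy as np

# A (hypothetical) document whose words get exactly zero mass under every topic:
expElogthetad = np.zeros(4, dtype=np.float32)
expElogbetad = np.zeros((4, 3), dtype=np.float32)
cts = np.array([1.0, 2.0, 1.0], dtype=np.float32)
epsilon = np.finfo(np.float32).eps

phinorm = np.dot(expElogthetad, expElogbetad)   # exact zeros
print(cts / phinorm)                            # RuntimeWarning, [inf inf inf]: these then poison later updates
print(cts / (phinorm + epsilon))                # large but finite values instead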
@@ -1289,7 +1277,7 @@ def get_document_topics(self, bow, minimum_probability=None, minimum_phi_value=N
        minimum_probability : float
            Topics with an assigned probability lower than this threshold will be discarded.
        minimum_phi_value : float
-            f `per_word_topics` is True, this represents a lower bound on the term probabilities that are included.
+            If `per_word_topics` is True, this represents a lower bound on the term probabilities that are included.
            If set to None, a value of 1e-8 is used to prevent 0s.
        per_word_topics : bool
            If True, this function will also return two extra lists as explained in the "Returns" section.