
Add BijectiveSimplexLink in the docs #66

Open · wants to merge 8 commits into base: master

Conversation

@theogf (Member) commented Jan 31, 2022

No description provided.

codecov bot commented Jan 31, 2022

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (fccaf89) at 96.40% vs. head (84178bf) at 96.40%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master      #66   +/-   ##
=======================================
  Coverage   96.40%   96.40%           
=======================================
  Files          11       11           
  Lines         139      139           
=======================================
  Hits          134      134           
  Misses          5        5           


src/links.jl (outdated review thread, resolved)
src/links.jl (outdated review thread):
For example, with the [`SoftMaxLink`](@ref), obtaining an `(n-1)`-simplex (and hence
`n` categories) for the [`CategoricalLikelihood`](@ref)
requires passing `n` latent GPs.
However, by wrapping the link in a `BijectiveSimplexLink`, only `n-1` latent GPs are needed.
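
As an illustration of the quoted docstring, here is a minimal sketch. It assumes the callable `SoftMaxLink` and `BijectiveSimplexLink` objects exported by GPLikelihoods.jl, and that the wrapper appends a 0 to the latent vector before applying the inner link (the "0 added at the end" discussed below); the latent values are made-up numbers.

```julia
using GPLikelihoods

n = 3                                # number of categories (illustrative)

# Plain softmax link: one latent GP value per category is required.
plain_link = SoftMaxLink()
fs_full = [0.3, -1.2, 0.5]           # n latent values (made-up numbers)
p_full = plain_link(fs_full)         # probability vector of length n

# Wrapped link: a zero is appended to the latent vector before the softmax,
# so only n - 1 latent GP values are needed for the same n categories.
bij_link = BijectiveSimplexLink(SoftMaxLink())
fs_reduced = [0.3, -1.2]             # n - 1 latent values
p_reduced = bij_link(fs_reduced)     # also a probability vector of length n

# Either link can then be passed to the CategoricalLikelihood, e.g.
# CategoricalLikelihood(bij_link), which then only needs n - 1 latent GPs.
```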
Member:

What effect does this reparametrisation have on the model? Does it break the symmetry amongst classes (i.e., would changing the order of classes change the resulting fit)? Would it be useful to mention the 0 added at the end in the docstring, or is that irrelevant?

Also, a harder question that you may not have the answer to, but I would be curious whether this makes it easier to fit the model (because there is no more redundancy through the overall level, and hence it becomes identifiable)...

@theogf (Member Author):

In some quick 1-D experiments with a few classes that I did for my thesis, I generated data via the [fs, 0] process and tried to recover the latent GPs with both the bijective and the non-bijective link.
I consistently observed that the bijective link produced more accurate probability distributions (compared with the true generating probabilities), but that the non-bijective likelihood achieved a better log-likelihood on the training data.

@theogf (Member Author):

This was with the logistic-softmax link. Also, the augmentation for the non-bijective link creates improper priors, whereas in the bijective case everything is well defined.
In terms of speed there does not seem to be a difference, but I did not check thoroughly.

theogf and others added 2 commits March 22, 2022 15:14
Co-authored-by: st-- <st--@users.noreply.github.com>
@theogf (Member Author) commented Mar 22, 2022

@st-- Did I address your comments?

@theogf (Member Author) commented Jul 8, 2022

Bumpity bump

@theogf (Member Author) commented Feb 17, 2024

Since this is just a minor doc change, I think it can safely be merged?
